content (string, length 86 to 88.9k) | title (string, length 0 to 150) | question (string, length 1 to 35.8k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, length 30 to 130)
---|---|---|---|---|---|---|---|---
Q:
foreach access the index or an associative array
I have the following code snippet.
$items['A'] = "Test";
$items['B'] = "Test";
$items['C'] = "Test";
$items['D'] = "Test";
$index = 0;
foreach($items as $key => $value)
{
echo "$index is a $key containing $value\n";
$index++;
}
Expected output:
0 is a A containing Test
1 is a B containing Test
2 is a C containing Test
3 is a D containing Test
Is there a way to leave out the $index variable?
A:
Your $index variable there is kind of misleading. That number isn't the index; your "A", "B", "C", "D" keys are. You can still keep a numbered index alongside the keys, but that's really not the point. If you really want to keep the numbered index, I'd almost restructure the data:
$items[] = array("A", "Test");
$items[] = array("B", "Test");
$items[] = array("C", "Test");
$items[] = array("D", "Test");
foreach($items as $key => $value) {
echo $key.' is a '.$value[0].' containing '.$value[1];
}
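For comparison, here is a minimal sketch (not part of the original answer) that keeps the associative array from the question and derives the position from array_keys() instead of a hand-maintained counter:
$items = array('A' => 'Test', 'B' => 'Test', 'C' => 'Test', 'D' => 'Test');

// array_keys($items) is array(0 => 'A', 1 => 'B', ...), so the foreach key is
// the numeric position and the foreach value is the original string key.
foreach (array_keys($items) as $index => $key) {
    echo "$index is a $key containing {$items[$key]}\n";
}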
A:
You can do this:
$items[A] = "Test";
$items[B] = "Test";
$items[C] = "Test";
$items[D] = "Test";
for($i=0;$i<count($items);$i++)
{
list($key,$value) = each($items[$i]);
echo "$i $key contains $value";
}
I haven't done that before, but in theory it should work.
A:
Be careful how you're defining your keys there. While your example works, it might not always:
$myArr = array();
$myArr[A] = "a"; // "A" is assumed.
echo $myArr['A']; // "a" - this is expected.
define ('A', 'aye');
$myArr2 = array();
$myArr2[A] = "a"; // A is a constant
echo $myArr2['A']; // notice: undefined index, there is no 'A' key.
print_r($myArr2);
// Array
// (
// [aye] => a
// )
Answer scores: 15, 4, 1 | Tags: foreach, php | Source: stackoverflow_0000068583_foreach_php.txt
Q:
What's the best way to hash a url in ruby?
I'm writing a web app that points to external links. I'm looking to create a non-sequential, non-guessable id for each document that I can use in the URL. I did the obvious thing: treating the url as a string and str#crypt on it, but that seems to choke on any non-alphanumeric characters, like the slashes, dots and underscores.
Any suggestions on the best way to solve this problem?
Thanks!
A:
Depending on how long a string you would like, you can use a few alternatives:
require 'digest'
Digest.hexencode('http://foo-bar.com/yay/?foo=bar&a=22')
# "687474703a2f2f666f6f2d6261722e636f6d2f7961792f3f666f6f3d62617226613d3232"
require 'digest/md5'
Digest::MD5.hexdigest('http://foo-bar.com/yay/?foo=bar&a=22')
# "43facc5eb5ce09fd41a6b55dba3fe2fe"
require 'digest/sha1'
Digest::SHA1.hexdigest('http://foo-bar.com/yay/?foo=bar&a=22')
# "2aba83b05dc9c2d9db7e5d34e69787d0a5e28fc5"
require 'digest/sha2'
Digest::SHA2.hexdigest('http://foo-bar.com/yay/?foo=bar&a=22')
# "e78f3d17c1c0f8d8c4f6bd91f175287516ecf78a4027d627ebcacfca822574b2"
Note that this won't be unguessable; you may have to combine it with some other (secret but static) data to salt the string:
salt = 'foobar'
Digest::SHA1.hexdigest(salt + 'http://foo-bar.com/yay/?foo=bar&a=22')
# "dbf43aff5e808ae471aa1893c6ec992088219bbb"
Now it becomes much harder to generate this hash for someone who doesn't know the original content and has no access to your source.
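A keyed alternative to concatenating the salt by hand is an HMAC from Ruby's openssl standard library. This is only an illustrative sketch, not something the answer above proposes, and the secret value is an assumption:
require 'openssl'

secret = 'some-long-random-value-kept-out-of-source-control'  # assumption
url    = 'http://foo-bar.com/yay/?foo=bar&a=22'

# Without the secret, the digest cannot be reproduced from the URL alone.
OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA1'), secret, url)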
A:
I would also suggest looking at the different algorithms in the digest namespace. To make it harder to guess, rather than (or in addition to) salting with a secret passphrase, you can also use a precise dump of the time:
require 'digest/md5'
def hash_url(url)
Digest::MD5.hexdigest("#{Time.now.to_f}--#{url}")
end
Since the result of any hashing algorithm is not guaranteed to be unique, don't forget to check for the uniqueness of your result against previously generated hashes before assuming that your hash is usable. The use of Time.now makes the retry trivial to implement, since you only have to call until a unique hash is generated.
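A minimal sketch of that retry idea follows; the in-memory Set standing in for whatever store of previously issued hashes the application really uses is an assumption:
require 'digest/md5'
require 'set'

ISSUED = Set.new  # assumption: in practice this would be a database lookup

def unique_hash_for(url)
  loop do
    candidate = Digest::MD5.hexdigest("#{Time.now.to_f}--#{url}")
    # Set#add? returns nil when the value is already present, so we simply retry.
    return candidate if ISSUED.add?(candidate)
  end
end

unique_hash_for('http://foo-bar.com/yay/?foo=bar&a=22')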
A:
Use Digest::MD5 from Ruby's standard library:
Digest::MD5.hexdigest(my_url)
Answer scores: 35, 3, 0 | Tags: ruby | Source: stackoverflow_0000067890_ruby.txt
Q:
Best way to represent a parameterized enum in C#?
Are there any good solutions to represent a parameterized enum in C# 3.0? I am looking for something like OCaml or Haxe has. I can only think of class hierarchy with a simple enum field for easy switching for now, maybe there are better ideas?
See Ocaml example below in one of the replies, a Haxe code follows:
enum Tree {
Node(left: Tree, right: Tree);
Leaf(val: Int);
}
A:
Not being familiar with OCaml or Haxe, and not being clever enough to understand the other explanations, I went and looked up the Haxe enum documentation - the 'Enum Type Parameters' bit at the bottom seems to be the relevant part.
My understanding based on that is as follows:
A 'normal' enum is basically a value which is restricted to the things that you have defined in your enum definition. C# Example:
enum Color{ Red, Green, Yellow, Blue };
Color c = Color.Red;
c can either be Red, Green, Yellow, or Blue, but nothing else.
In Haxe, you can add complex types to enums, Contrived example from their page:
enum Cell<T>{
empty;
cons( item : T, next : Cell<T> )
}
Cell<int> c = <I don't know>;
What this appears to mean is that c is restricted to either being the literal value empty (like our old fashioned C# enums), or it can also be a complex type cons(item, next), where item is a T and next is a Cell<T>.
Not having ever used this, it looks like it is probably generating some anonymous types (like how the C# compiler does when you do new { Name = "Joe" }).
Whenever you 'access' the enum value, you have to declare item and next when you do so, and it looks like they get bound to temporary local variables.
Haxe example - You can see 'next' being used as a temporary local variable to pull data out of the anonymous cons structure:
switch( c ) {
case empty : 0;
case cons(item,next): 1 + cell_length(next);
}
To be honest, this blew my mind when I 'clicked' onto what it seemed to be doing. It seems incredibly powerful, and I can see why you'd be looking for a similar feature in C#.
C# enums are pretty much the same as C/++ enums from which they were originally copied. It's basically a nice way of saying #define Red 1 so the compiler can do comparisons and storage with integers instead of strings when you are passing Color objects around.
My stab at doing this in C# would be to use generics and interfaces. Something like this:
public interface ICell<T> {
    T Item { get; set; }
    ICell<T> Next { get; set; }
}

class Cons<T> : ICell<T> {
    public T Item { get; set; } /* C#3 auto-backed property */
    public ICell<T> Next { get; set; }
}

class EmptyCell<T> : ICell<T> {
    public T Item { get { return default(T); } set { /* do nothing */ } }
    public ICell<T> Next { get { return null; } set { /* do nothing */ } }
}
Then you could have a List<ICell<T>> which would contain items and next cell, and you could insert EmptyCell at the end (or just have the Next reference explicitly set to null).
The advantages would be that because EmptyCell contains no member variables, it wouldn't require any storage space (like the empty in Haxe), whereas a Cons cell would.
The compiler may also inline / optimize out the methods in EmptyCell as they do nothing, so there may be a speed increase over just having a Cons with its member data set to null.
I don't really know. I'd welcome any other possible solutions as I'm not particularly proud of my one :-)
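As a rough illustration (a sketch, not from the original answer) of how the Haxe-style switch might look against those types, assuming Cons<T> and EmptyCell<T> are the only two implementations of ICell<T>:
static class CellOps
{
    public static int Length<T>(ICell<T> cell)
    {
        if (cell is EmptyCell<T>)      // the 'empty' case
            return 0;
        var cons = (Cons<T>)cell;      // the 'cons(item, next)' case
        return 1 + Length(cons.Next);
    }
}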
A:
Use a class with static properties to represent the enumeration values. You can, optionally, use a private constructor to force all references to the class to go through a static property.
Take a look at the System.Drawing.Color class. It uses this approach.
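A small sketch of that pattern (the Status type and its values are made up purely for illustration; System.Drawing.Color is the real-world example to study):
public sealed class Status
{
    private static readonly Status active = new Status("Active");
    private static readonly Status retired = new Status("Retired");

    public static Status Active { get { return active; } }
    public static Status Retired { get { return retired; } }

    public string Name { get; private set; }

    // Private constructor: the only way to get an instance is via the statics above.
    private Status(string name)
    {
        Name = name;
    }
}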
A:
C# (the .NET framework in general, as far as I know) doesn't support parametrized enums like Java does. That being said, you might want to look at Attributes. Some of the features that Java enums are capable of are somewhat doable through Attributes.
A:
What's wrong with just using a class for this? It's ugly, but that's how the Java people did it until they had language-integrated Enum support!
Answer scores: 7, 1, 0, 0 | Tags: c#, enums, haxe, ocaml | Source: stackoverflow_0000066819_c#_enums_haxe_ocaml.txt
Q:
What Causes Flash Error #2012 (Can't instantiate class)?
I am new to ActionScript 3 and have run into a problem:
Using Flex Builder 3, I have created a project with a few simple classes. If code in class A instantiates an object of class B (class B is in its own source file) then the code compiles fine, but I get the following run time error:
ArgumentError: Error #2012: B class cannot be instantiated.
Can someone explain what I'm doing wrong?
Update: Please see my own answer below (I could not vote it to the top since I'm not yet registered).
A:
I finally realized what was wrong: Class B was subclassing from DisplayObject which I now see is an abstract class. Class B did not implement the abstract members, thus the error. I'll probably change class B to subclass from Sprite instead.
This seems like a problem that should have been caught by the compiler. Does the fact that it wasn't mean implementation of abstract members can wait until run time? Even if so, it would be nice to at least get a compiler warning.
Thanks for everyone's answers, hopefully they will help others who run into error 2012.
A:
This usually means that the class information was not included in the SWF.
Make sure that you are importing the class, and that there is a reference to it somewhere (so the compiler will include it in the SWF).
btw, here are the runtime error codes:
http://livedocs.adobe.com/flex/201/langref/runtimeErrors.html
(not much useful info though)
mike chambers
[email protected]
A:
It's worth noting that if you're including classes that someone else built, and they used Flash CS3 and you're using Flex, or vice versa, that the core libraries of each are different and some things are not included in both. Check out the two reference docs to be sure:
CS3: http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/
Flex: http://livedocs.adobe.com/flex/2/langref/
Answer scores: 21, 2, 0 | Tags: actionscript_3, apache_flex, flash, flexbuilder | Source: stackoverflow_0000068244_actionscript_3_apache_flex_flash_flexbuilder.txt
Q:
Building an auditing system; MS Access frontend on SQL Server backend
So basically I'm building an app for my company and it NEEDS to be built using MS Access and it needs to be built on SQL Server.
I've drawn up most of the plans but am having a hard time figuring out a way to handle the auditing system.
Since it is being used internally only and you won't even be able to touch the db from outside the building we are not using a login system as the program will only be used once a user has already logged in to our internal network via Active Directory. Knowing this, we're using a system to detect automatically the name of the Active Directory user and with their permissions in one of the DB tables, deciding what they can or cannot do.
So the actual audit table will have 3 columns (this design may change but for this question it doesn't matter); who (Active Directory User), when (time of addition/deletion/edit), what (what was changed)
My question is how should I be handling this. Ideally I know I should be using a trigger so that it is impossible for the database to be updated without an audit being logged, however I don't know how I could grab the Active Directory User that way. An alternate would be to code it directly into the Access source so that whenever something changes I run an INSERT statement. Obviously that is flawed because if something happens to Access or the database is touched by something else then it will not log the audit.
Any advice, examples or articles that may help me would be greatly appreciated!
A:
Does this work for you?
select user_name(),suser_sname()
Doh! I forgot to escape my code.
A:
Ok, it's working here. I'm seeing my windows credentials when I update my tables. So, I bet we missed a step. Let me put together a 1,2,3 sequence of what I did and maybe we can track down where this is breaking for you.
1. Create a new MSAccess database (empty)
2. Click on the tables section
3. Select external data
4. Pick ODBC database
5. Pick Link to the datasource by creating a linked table
6. Select Machine datasource
7. Pick New...
8. System Datasource
9. Pick SQL Server from the list and click Next, Finish.
10. Give the new datasource a name and description, and select (local) for the server. Click Next.
11. Pick "With Windows NT authentication using the network login ID". Click Next.
12. Check Change the default database to, and pick the DB. Click Next. Click Finish.
13. Test the datasource.
14. Pick the table that the Trigger is associated with and click OK.
15. Open the table in Access and modify one of the entries (the trigger doesn't fire on Insert, just Update)
16. Select * from your audit table
A:
If you specify SSPI in your connection string to Sql, I think your Windows credentials are provided.
A:
I tried playing with Access a bit to see if I could find a way for you. I think you can specify a new datasource to your SQL table, and select Windows NT Authentication as your connection type.
A:
Sure :)
There should be a section in Access called "External Data" (I'm running a new version of Access, so the menu choice might be different).
From this there should be an option to specify an ODBC connection.
I get an option to Link to the datasource by creating a linked table.
I then created a Machine datasource. I selected SqlServer from the drop down list. Then when I click Next, I'm prompted for how I want to authenticate.
A:
CREATE TRIGGER testtrigger1
ON testdatatable
AFTER update
AS
BEGIN
INSERT INTO testtable (datecol,usercol1,usercol2) VALUES (getdate(),user_name(),suser_sname());
END
GO
A:
We also have a database system that is used exclusively within the organisation and use Window NT logins. This function returns the current users login name:
CREATE FUNCTION dbo.UserName() RETURNS varchar(50)
AS
BEGIN
RETURN (SELECT nt_username FROM master.dbo.sysprocesses WHERE spid = @@SPID)
END
You can use this function in your triggers.
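Putting the two answers above together with the who/when/what audit table described in the question might look roughly like this; the table and column names are assumptions:
CREATE TRIGGER trg_Audit_Items ON dbo.Items
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    -- WhoChanged / WhenChanged / WhatChanged mirror the three audit columns
    -- the question describes.
    INSERT INTO AuditLog (WhoChanged, WhenChanged, WhatChanged)
    SELECT dbo.UserName(), GETDATE(), 'dbo.Items';
END
GO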
A:
How many users of the app will there be? Is there possibility of using windows integrated authentication for SQL authentication?
Updated: If you can give each user a SQL login (windows integrated) then you can pickup the logged on user using the SYSTEM_USER function.
A:
It should be
select user name(),suser sname()
replace spaces with underscores
A:
You need to connect with integrated security, aka a trusted connection; see http://www.connectionstrings.com/?carrier=sqlserver
A:
My solution would be not to let Access modify the data with linked tables.
I would only create the UI in Access and create an ADO connection to the server using Windows authentication in the connection string. Compile your Access application as an MDE to protect the VBA code.
I would not issue SQL statement, but I would call stored procedures to perform the changes in the database, and create the audit log entry in an atomic transaction.
The UI (Access) does not need to know the inner works on the server. All it needs to do is request and update/insert/delete using the stored procedures you would create for this purpose. The server should handle the work.
Retrieve a record set with ADO using a view with the hint NOLOCK implemented in the server and cache this data in Access for local display. Or retrieve a single record and lock only that row for editing.
Using linked tables your users will be locking each other.
With ADO connections you will not have the trouble to set ODBCs on every single client.
Create a table to set the server status. Your application will check it before any action. You can use it to close the server to the application in case you need to perform changes or maintenance.
Access is a great tool. But it should only handle its local data and not be allowed to mess with the precious server.
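A sketch of what that ADO call from Access might look like; the server, database, and procedure names are assumptions, and the code presumes a reference to the Microsoft ActiveX Data Objects library:
Dim cn As ADODB.Connection
Set cn = New ADODB.Connection
' Trusted connection: SQL Server sees the Active Directory user, so a stored
' procedure or trigger can log who made the change.
cn.Open "Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;"

Dim cmd As ADODB.Command
Set cmd = New ADODB.Command
With cmd
    .ActiveConnection = cn
    .CommandType = adCmdStoredProc
    .CommandText = "dbo.usp_UpdateItem"   ' hypothetical stored procedure
    .Parameters.Append .CreateParameter("@ItemID", adInteger, adParamInput, , 42)
    .Parameters.Append .CreateParameter("@NewValue", adVarChar, adParamInput, 50, "Test")
    .Execute
End With

cn.Close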
Answer scores: 2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0 | Tags: ms_access, sql, sql_server, triggers | Source: stackoverflow_0000012638_ms_access_sql_sql_server_triggers.txt
Q:
Fluent NHibernate Architecture Question
I have a question that I may be over thinking at this point but here goes...
I have 2 classes, Users and Groups. Users and groups have a many-to-many relationship, and I was thinking that I wanted the join table group_users to have an IsAuthorized property (because some groups are private -- users will need authorization).
Would you recommend creating a class for the join table as well as the User and Groups table? Currently my classes look like this.
public class Groups
{
public Groups()
{
members = new List<Person>();
}
...
public virtual IList<Person> members { get; set; }
}
public class User
{
public User()
{
groups = new List<Groups>();
}
...
public virtual IList<Groups> groups{ get; set; }
}
My mapping is like the following in both classes (I'm only showing the one in the users mapping but they are very similar):
HasManyToMany<Groups>(x => x.Groups)
.WithTableName("GroupMembers")
.WithParentKeyColumn("UserID")
.WithChildKeyColumn("GroupID")
.Cascade.SaveUpdate();
Should I write a class for the join table that looks like this?
public class GroupMembers
{
public virtual string GroupID { get; set; }
public virtual string PersonID { get; set; }
public virtual bool WaitingForAccept { get; set; }
}
I would really like to be able to adjust the group membership status and I guess I'm trying to think of the best way to go about this.
A:
Yes, sure you need another class like UserGroupBridge. Another good side-effect is that you can modify user membership and group members without loading potentially heavy User/Group objects to NHibernate session.
Cheers.
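A rough sketch of what that bridge entity could look like on the domain side, reusing the question's class names (the Id property and the exact shape are assumptions, and the corresponding Fluent NHibernate mapping is left out):
public class GroupMember
{
    public virtual int Id { get; set; }
    public virtual User User { get; set; }
    public virtual Groups Group { get; set; }
    public virtual bool IsAuthorized { get; set; }
}

Groups and User would then each hold an IList<GroupMember> collection instead of referencing one another directly.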
A:
I generally only like to create classes that represent actual business entities. In this case I don't think 'groupmembers' represents anything of value in your code. To me the ORM should map the database to your business objects. This means that your classes don't have to exactly mirror the database layout.
Also I suspect that by implementing GroupMembers, you will end up with some nasty collections in both your user and group classes, i.e. the group class will have the list of users and also a list of GroupMembers which references a user, and vice versa for the user class. To me this isn't that clean and will make it harder to maintain and propagate changes to the tables.
I would suggest keeping the join table in the database as you have suggested, and add a List of groups called waitingtoaccept in users and (if it makes sense too) add List of users called waitingtoaccept in groups.
These would then pull their values from your join-table in the database based on the waitingtoaccept flag.
Answer scores: 1, 1 | Tags: architecture, fluent, nhibernate | Source: stackoverflow_0000068572_architecture_fluent_nhibernate.txt
Q:
c++ Having multiple graphics options
Currently my app uses just Direct3D9 for graphics, however in the future I'm planning to extend this to D3D10 and possibly OpenGL. The question is how can I do this in a tidy way?
At present there are various Render methods in my code
void Render(boost::function<void()> &Call)
{
D3dDevice->BeginScene();
Call();
D3dDevice->EndScene();
D3dDevice->Present(0,0,0,0);
}
The function passed then depends on the exact state, eg MainMenu->Render, Loading->Render, etc. These will then often call the methods of other objects.
void RenderGame()
{
for(entity::iterator it = entity::instances.begin();it != entity::instances.end(); ++it)
(*it)->Render();
UI->Render();
}
And a sample class derived from entity::Base
class Sprite: public Base
{
IDirect3DTexture9 *Tex;
Point2 Pos;
Size2 Size;
public:
Sprite(IDirect3DTexture9 *Tex, const Point2 &Pos, const Size2 &Size);
virtual void Render();
};
Each method then takes care of how best to render given the more detailed settings (eg are pixel shaders supported or not).
The problem is I'm really not sure how to extend this to be able to use one of what may be somewhat different (D3D vs OpenGL) render modes...
A:
Define an interface that is sufficient for your application's graphic output demands. Then implement this interface for every renderer you want to support.
class IRenderer {
public:
virtual ~IRenderer() {}
virtual void RenderModel(CModel* model) = 0;
virtual void DrawScreenQuad(int x1, int y1, int x2, int y2) = 0;
// ...etc...
};
class COpenGLRenderer : public IRenderer {
public:
virtual void RenderModel(CModel* model) {
// render model using OpenGL
}
virtual void DrawScreenQuad(int x1, int y1, int x2, int y2) {
// draw screen aligned quad using OpenGL
}
};
class CDirect3DRenderer : public IRenderer {
// similar, but render using Direct3D
};
Properly designing and maintaining these interfaces can be very challenging though.
In case you also operate with render driver dependent objects like textures, you can use a factory pattern to have the separate renderers each create their own implementation of e.g. ITexture using a factory method in IRenderer:
class IRenderer {
//...
virtual ITexture* CreateTexture(const char* filename) = 0;
//...
};
class COpenGLRenderer : public IRenderer {
//...
virtual ITexture* CreateTexture(const char* filename) {
// COpenGLTexture is the OpenGL specific ITexture implementation
return new COpenGLTexture(filename);
}
//...
};
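Tying that back to the Sprite class from the question, the entity could stop holding an IDirect3DTexture9 and depend only on the interfaces. This is a sketch: DrawSprite is an assumed addition to IRenderer, and passing the renderer into Render() would mean changing entity::Base to match.
class Sprite : public Base
{
    ITexture *Tex;   // obtained via IRenderer::CreateTexture(), not from D3D directly
    Point2 Pos;
    Size2 Size;
public:
    Sprite(ITexture *Tex, const Point2 &Pos, const Size2 &Size)
        : Tex(Tex), Pos(Pos), Size(Size) {}

    virtual void Render(IRenderer &renderer)
    {
        renderer.DrawSprite(Tex, Pos, Size);  // assumed IRenderer extension
    }
};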
Isn't it an idea to look at existing (3d) engines though? In my experience designing this kind of interfaces really distracts from what you actually want to make :)
A:
I'd say if you want a really complete answer, go look at the source code for Ogre3D. They have both D3D and OpenGL back ends. Look at: http://www.ogre3d.org
Basically their API kind of forces you into working in a D3D-ish way, creating buffer objects and stuffing them with data, then issuing draw calls on those buffers. That's the way the hardware likes it anyway, so it's not a bad way to go.
And then once you see how they do things, you might as well just go ahead and use it and save yourself the trouble of having to re-implement all that it already provides. :-)
Answer scores: 6, 1 | Tags: c++, direct3d, graphics, opengl | Source: stackoverflow_0000060751_c++_direct3d_graphics_opengl.txt
Q:
Best Practice for Creating Data Tables Without Controls in ASP.net
So, I am kinda new to ASP.net development still, and I already don't like the stock ASP.net controls for displaying my database query results in table format. (I.e. I would much rather handle the HTML myself and so would the designer!)
So my question is: What is the best and most secure practice for doing this without using ASP.net controls? So far my only idea involves populating my query result during the Page_Load event and then exposing a DataTable through a getter to the *.aspx page. From there I think I could just iterate with a foreach loop and craft my table as I see fit.
A:
I believe you're looking for a <Repeater> control. It contains some functionality similar to the GridView, but allows you to hand-craft all of the HTML for the Header, Item, and Footer yourself. Simply call the databinding code as you would for a GridView, and change the ASPX page to suit your exact HTML needs.
http://msdn.microsoft.com/en-us/magazine/cc163780.aspx
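A bare-bones sketch of that approach (the control ID, the field names, and the DataTable they come from are assumptions):
<table>
  <tr><th>Name</th><th>Price</th></tr>
  <asp:Repeater ID="ResultsRepeater" runat="server">
    <ItemTemplate>
      <tr>
        <td><%# Eval("Name") %></td>
        <td><%# Eval("Price") %></td>
      </tr>
    </ItemTemplate>
  </asp:Repeater>
</table>

// Code-behind, e.g. in Page_Load:
ResultsRepeater.DataSource = resultsDataTable;  // the DataTable from your query
ResultsRepeater.DataBind();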
A:
The successor to the repeater is the ListView, which will also help you handcraft your HTML.
A:
If you're more interested in hand-coding your HTML, it might be worth looking at the ASP.NET MVC project. You get a little more control over things than standard WebForms.
As an aside, plugging data access code in the Page_Load is never a good idea. It ties your presentation too much to your data code. Have a look at either the MVC as suggested above which applies a standard design pattern to separate concerns, or do a google search for "ASP.NET nTier" or something similar. It might take a little longer to get a site up and running, but it will save you pain in the long run.
A:
three options:
learn to use the existing controls like GridView; with proper CSS they can look quite nice, since what the client receives is just generated HTML
generate your HTML using templates or StringBuilder and put it in a Literal control on your web page
buy a third-party library that has controls that you do like
number 1 is the 'best' option for asp.net in the long term because you will actually master the controls that everyone else uses; option 2 is tedious; option 3 is a quick fix and may be expensive
Answer scores: 4, 3, 1, 1 | Tags: .net, asp.net | Source: stackoverflow_0000068543_.net_asp.net.txt
Q:
What is the best way to migrate an existing messy webapp to elegant MVC?
I joined a new company about a month ago. The company is rather small in size and has a pretty strong "start-up" feel to it. I'm working as a Java developer on a team of 3 others. The company primarily sells a service for businesses/business-type people to use in communicating with each other.
One of the main things I have been, and will be working on, is the main website for the company - from which the service is sold, existing users log in to check their service and pay their bills, new users can sign up for a trial, etc. Currently this is a JSP application deployed on Tomcat, with access to a database done through a persistence layer written by the company itself.
A repeated and growing frustration I am having here (and I'm pretty happy with the job overall, so this isn't an "oh no I don't like my job"-type post) is the lack of any larger design or architecture for this web application. The app is made up of several dozen JSP pages, with almost no logic existing in Servlets or Beans or any other sort of framework. Many of the JSP pages are thousands of lines of code, they jsp:include other JSP pages, business logic is mixed in with the HTML, frequently used snippets of code (such as obtaining a web service connection) are cut and pasted rather than reused, etc. In other words, the application is a mess.
There have been some rumblings within the company of trying to re-architect this site so that it fits MVC better; I think that the developers and higher-ups are beginning to realize that this current pattern of spaghetti code isn't sustainable or very easily scalable to add more features for the users. The higher-ups and developers are wary of completely re-writing the thing (with good reason, since this would mean several weeks or months of work re-writing existing functionality), but we've had some discussions of (slowly) re-writing certain areas of the site into a new framework.
What are some of the best strategies to enable moving the application and codebase into this direction? How can I as a developer really help move this along, and quickly, without seeming like the jerk-y new guy who comes into a job and tells everyone that what they've written is crap? Are there any proven strategies or experiences that you've used in your own job experience when you've encountered this sort of thing?
A:
Your best bet is probably to refactor it slowly as you go along. Few of us have the resources that would be required to completely start from scratch with something that has so many business rules buried in it. Management really hates it when you spend months on developing an app that has more bugs than the one you replaced.
If you have the opportunity to build any separate apps from scratch, use all of the best practices there and use it to demonstrate how effective they are. When you can, incorporate those ideas gradually into the old application.
A:
First pick up a copy of Michael Feathers' Working Effectively with Legacy Code. Then identify how best to test the existing code. The worst case is that you are stuck with just some high level regression tests (or nothing at all), and if you are lucky there will be unit tests. Then it is a case of slow steady refactoring, hopefully while adding new business functionality at the same time.
A:
In my experience, the "elegance" of an app typically has more to do with the database design than anything. If you have a great database design, including a well-defined stored procedure interface, good application code tends to follow no matter what platform you use. If you have poor database design, no matter what platform you use you'll have a very difficult time building elegant application code, as you'll be constantly compensating for the database.
Of course, there's plenty of room between great and poor, but my point is that if you want good application code, start by making sure your database is up to snuff.
A:
This is harder to do in applications that are only in maintenance mode because it is hard to convince management that rewriting something that is already 'working' is worth doing. I would start by applying the principles of MVC to any new code that you are able to work on (ie. move business logic to something akin to a model, put all your layout/view code in one place)
As you gain experience with new code in MVC you can start seeing opportunities to change the existing code subtly so that it also falls in line. It can be a very slow process, but if you can show the benefits of this way of doing it then you will be able to convince others and get the entire team on board.
A:
I would agree with the slow refactoring approach; for example, take that copy-and-pasted code and extract it into whatever the appropriate Java paradigm is (a class, perhaps? or better yet, use an existing library?). When your code is really clean and terse, yet still lacking in overall architectural strategy, then you'll be able to make things fit into an overall architecture much more easily.
A:
Iteratively refactor. Also look for new features that can be done completely in the new framework as a way to show the value of the new framework.
A:
My suggestion would be to find the rare pages that do not require an import of other JSPs, or have very few imports. Treat each JSP imported as a black box, and refactor these pages around them (iteratively, testing each change and making sure it works before continuing). Once these are cleaned up, you can continue finding pages with more and more imports, until finally you have refactored the imports.
When refactoring, note the parts that try to access resources not given to the page and try to take this out to a controller. For instance, anything that accesses the database should be inside a controller, let the JSP handle the display of the information the controller gives to it via a forward. In this way you will develop several servlets, or things like servlets, for each page. I would suggest using a front-controller based framework for this refactoring (from personal experience I recommend Spring and its Controller interface), so that each controller isn't a separate Servlet but is rather delegated to from a single servlet that is mapped appropriately.
For the controller, it is better to do database hits all at once rather than attempt them piecemeal. Users can and generally do tolerate a page load, but the page output will be much faster if all the database data is given to the rendering code rather than the rendering code hanging and not giving data to the client while it is trying to read yet another piece of data from the database.
I feel your pain and wish you luck in this endeavor. Now, when you have to maintain an application that abuses Spring Webflow, that's another story :)
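As a concrete sketch of that shape (every class, path, and attribute name here is made up; the DAO stands in for the company's existing persistence layer):
import java.io.IOException;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AccountSummaryController extends HttpServlet {

    private final InvoiceDao invoiceDao = new JdbcInvoiceDao(); // hypothetical persistence-layer classes

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // All database access happens here, in one place, before rendering starts.
        List<Invoice> invoices = invoiceDao.findForUser(request.getRemoteUser());
        request.setAttribute("invoices", invoices);

        // The JSP only reads request attributes and renders HTML.
        request.getRequestDispatcher("/WEB-INF/views/accountSummary.jsp")
               .forward(request, response);
    }
}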
A:
The best way is to print out the code, crumple it up, and throw it out. Don't even recycle the paper.
You've got an application that is written in 1,000+ line long JSPs. It probably has a god-awful Domain Model (if it even has one at all) and doesn't just MIX presentation with business logic, it BLENDS it and sits there and keeps stirring for hours. There is no way to take out the code that is crappy and move into an MVC Controller class and still be doing the right thing, you'll just end up with a MVC app with an anemic domain model or one that has stuff like Database calls in Controller code, you're still failing.
You could try a new app that does the right thing and then have the two apps talk to each other, but that's new complexity in itself. Also you're likely going to be doing the same amount of work that you were going to do if you just started from scratch, but you might have an easier time trying to convince your bosses that this is a better approach.
|
What is the best way to migrate an existing messy webapp to elegant MVC?
|
I joined a new company about a month ago. The company is rather small in size and has pretty strong "start-up" feel to it. I'm working as a Java developer on a team of 3 others. The company primarily sells a service to for businesses/business-type people to use in communicating with each other.
One of the main things I have been, and will be working on, is the main website for the company - from which the service is sold, existing users login to check their service and pay their bills, new users can sign up for a trial, etc. Currently this is a JSP application deployed on Tomcat, with access to a database done thru a persistence layer written by the company itself.
A repeated and growing frustration I am having here (and I'm pretty happy with the job overall, so this isn't an "oh no I don't like my job"-type post) is the lack of any larger design or architecture for this web application. The app is made up of several dozen JSP pages, with almost no logic existing in Servlets or Beans or any other sort of framework. Many of the JSP pages are thousands of lines of code, they jsp:include other JSP pages, business logic is mixed in with the HTML, frequently used snippets of code (such as obtaining a web service connection) are cut and pasted rather than reused, etc. In other words, the application is a mess.
There have been some rumblings within the company of trying to re-architect this site so that it fits MVC better; I think that the developers and higher-ups are beginning to realize that this current pattern of spaghetti code isn't sustainable or very easily scalable to add more features for the users. The higher-ups and developers are wary of completely re-writing the thing (with good reason, since this would mean several weeks or months of work re-writing existing functionality), but we've had some discussions of (slowly) re-writing certain areas of the site into a new framework.
What are some of the best strategies to enable moving the application and codebase into this direction? How can I as a developer really help move this along, and quickly, without seeming like the jerk-y new guy who comes into a job and tells everyone that what they've written is crap? Are there any proven strategies or experiences that you've used in your own job experience when you've encountered this sort of thing?
|
[
"Your best bet is probably to refactor it slowly as you go along. Few us of have the resources that would be required to completely start from scratch with something that has so many business rules buried in it. Management really hates it when you spend months on developing an app that has more bugs than the one you replaced.\nIf you have the opportunity to build any separate apps from scratch, use all of the best practices there and use it to demonstrate how effective they are. When you can, incorporate those ideas gradually into the old application.\n",
"First pick up a copy of Michael Feather's Working Effectively with Legacy Code. Then identify how best to test the existing code. The worst case is that you are stuck with just some high level regression tests (or nothing at all) and If you are lucky there will be unit tests. Then it is a case of slow steady refactoring hopefully while adding new business functionality at the same time.\n",
"In my experience, the \"elegance\" of an app typically has more to do with the database design than anything. If you have a great database design, including a well-defined stored procedure interface, good application code tends to follow no matter what platform you use. If you have poor database design, no matter what platform you use you'll have a very difficult time building elegant application code, as you'll be constantly compensating for the database.\nOf course, there's plenty of room between great and poor, but my point is that if you want good application code, start by making sure your database is up to snuff.\n",
"This is harder to do in applications that are only in maintenance mode because it is hard to convince management that rewriting something that is already 'working' is worth doing. I would start by applying the principles of MVC to any new code that you are able to work on (ie. move business logic to something akin to a model, put all your layout/view code in one place)\nAs you gain experience with new code in MVC you can start seeing opportunities to change the existing code subtly so that it also falls in line. It can be a very slow process, but if you can show the benefits of this way of doing it then you will be able to convince others and get the entire team on board.\n",
"I would agree with the slow refactoring approach; for example, take that copy-and-pasted code and extract it into whatever the appropriate Java paradigm is (a class, perhaps? or better yet, use an existing library?). When your code is really clean and terse, yet still lacking in overall architectural strategy, then you'll be able to make things fit into an overall architecture much more easily.\n",
"Iteratively refactor. Also look for new features that can be done completely in the new framework as a way to show the value of the new framework.\n",
"My suggestion would be to find the rare pages that do not require an import of other JSPs, or have very few imports. Treat each JSP imported as a black box, and refactor these pages around them (iteratively, testing each change and making sure it works before continuing). Once these are cleaned up, you can continue finding pages with more and more imports, until finally you have refactored the imports.\nWhen refactoring, note the parts that try to access resources not given to the page and try to take this out to a controller. For instance, anything that accesses the database should be inside a controller, let the JSP handle the display of the information the controller gives to it via a forward. In this way you will develop several servlets, or things like servlets, for each page. I would suggest using a front-controller based framework for this refactoring (from personal experience I recommend Spring and its Controller interface), so that each controller isn't a separate Servlet but is rather delegated to from a single servlet that is mapped appropriately.\nFor the controller, it is better to do database hits all at once rather than attempt them piecemeal. Users can and generally do tolerate a page load, but the page output will be much faster if all the database data is given to the rendering code rather than the rendering code hanging and not giving data to the client while it is trying to read yet another piece of data from the database.\nI feel your pain and wish you luck in this endeavor. Now, when you have to maintain an application that abuses Spring Webflow, that's another story :)\n",
"The best way is to print out the code, crumple it up, and throw it out. Don't even recycle the paper.\nYou've got an application that is written in 1,000+ line long JSPs. It probably has a god-awful Domain Model (if it even has one at all) and doesn't just MIX presentation with business logic, it BLENDS it and sits there and keeps stirring for hours. There is no way to take out the code that is crappy and move into an MVC Controller class and still be doing the right thing, you'll just end up with a MVC app with an anemic domain model or one that has stuff like Database calls in Controller code, you're still failing. \nYou could try a new app that does the right thing and then have the two apps talk to each other, but that's new complexity in itself. Also you're likely going to be doing the same amount of work that you were going to do if you just started from scratch, but you might have an easier time trying to convince your bosses that this is a better approach. \n"
] |
[
12,
4,
2,
2,
2,
1,
1,
0
] |
[] |
[] |
[
"architecture",
"java",
"jsp",
"model_view_controller"
] |
stackoverflow_0000040242_architecture_java_jsp_model_view_controller.txt
|
Q:
Pass Silverlight type to Microsoft AJAX and pass parameter validation
I'm working on a Silverlight application where I want to take advantage of the Microsoft ASP.NET AJAX Client library. I'm calling the library using the HTML Bridge that is part of Silverlight 2. Silverlight has great support for passing types between JavaScript and managed code, but now I've bumped up against a problem.
The Microsoft ASP.NET AJAX Client Libraries include a "type system", and one of the things the framework does is validate that the parameters are of the correct type. The specific function I'm calling is Sys.Application.addHistoryPoint, and the validation code looks like this:
var e = Function.validateParams(arguments, [
{name: "state", type: Object},
{name: "title", type: String, mayBeNull: true, optional: true}
]);
I've tried passing all kinds of CLR types as the state parameter (C# structs, [ScriptableTypes], Dictionary types, etc.), and every time I get the error:
"Sys.ArgumentTypeException: Object of type 'Function' cannot be converted to type 'Object'
This error is obviously coming from the parameter validation... But WHY does ASP.NET AJAX think my types are Functions? Does anyone understand the type validation in MS AJAX?
I know I can do workarounds like calling HtmlPage.Window.Eval("...") and pass my JS integration as strings, but I don't want to do that. I want to pass a real .NET type as the state parameter.
A:
I found a pretty good overview of this here, but even that overview seemed to cover every scenario except the one you mention. I'm wondering if this can't be done because javascript objects really are functions (more or less).
What if you wrote a wrapper function that could create the state object using a string?
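For what it's worth, here is a rough, untested sketch of that idea (the wrapper name slAddHistoryPoint and the JSON-string format are just placeholders, not anything the MS AJAX docs prescribe): register a small JavaScript wrapper once via Eval, then call it from managed code with the state serialized as a string, so MS AJAX only ever sees a plain JavaScript object instead of a managed type.
using System.Windows.Browser;

public static class HistoryHelper
{
    private static bool _registered;

    public static void AddHistoryPoint(string stateJson, string title)
    {
        if (!_registered)
        {
            // The wrapper turns the JSON text into a plain JS object before
            // handing it to Sys.Application.addHistoryPoint.
            HtmlPage.Window.Eval(
                "function slAddHistoryPoint(stateJson, title) {" +
                "  var state = eval('(' + stateJson + ')');" +
                "  Sys.Application.addHistoryPoint(state, title);" +
                "}");
            _registered = true;
        }

        HtmlPage.Window.Invoke("slAddHistoryPoint", stateJson, title);
    }
}

// Example call from managed code:
// HistoryHelper.AddHistoryPoint("{ page: 'details', id: 42 }", "Details");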
|
Pass Silverlight type to Microsoft AJAX and pass parameter validation
|
I'm working on a Silverlight application where I want to take advantage of the Microsoft ASP.NET AJAX Client library. I'm calling the library using the HTML Bridge that is part of Silverlight 2. Silverlight has great support for passing types between JavaScript and managed code, but now I've bumped up against a problem.
The Microsoft ASP.NET AJAX Client Libraries include a "type system", and one of the things the framework does is validate that the parameters are of the correct type. The specific function I'm calling is Sys.Application.addHistoryPoint, and the validation code looks like this:
var e = Function.validateParams(arguments, [
{name: "state", type: Object},
{name: "title", type: String, mayBeNull: true, optional: true}
]);
I've tried passing all kinds of CLR types as the state parameter (C# structs, [ScriptableTypes], Dictionary types, etc.), and every time I get the error:
"Sys.ArgumentTypeException: Object of type 'Function' cannot be converted to type 'Object'
This error is obviously coming from the parameter validation... But WHY does ASP.NET AJAX think my types are Functions? Does anyone understand the type validation in MS AJAX?
I know I can do workarounds like calling HtmlPage.Window.Eval("...") and pass my JS integration as strings, but I don't want to do that. I want to pass a real .NET type as the state parameter.
|
[
"I found a pretty good overview of this here, but even that overview seemed to cover every scenario except the one you mention. I'm wondering if this can't be done because javascript objects really are functions (more or less). \nWhat if you wrote a wrapper function that could create the state object using a string? \n"
] |
[
1
] |
[] |
[] |
[
"ajax",
"asp.net_ajax",
"htmlbridge",
"javascript",
"silverlight"
] |
stackoverflow_0000051289_ajax_asp.net_ajax_htmlbridge_javascript_silverlight.txt
|
Q:
How do you stop a Visual Studio generated web service proxy class from encoding?
I'm using a Visual Studio generated proxy class to access a web service (added the web service as a web reference to my project). The problem is that the function the web service exposes expects a CDATA element, i.e.:
<Function><![CDATA[<Blah></Blah>]]></Function>
Unfortunately, when I pass in "" into the proxy class, it calls the web service with this:
<Function><![CDATA[<Blah></Blah>]]></Function>
This appears to be causing problems with the web service. Is there any way to fix this while still using the proxy class generated by Visual Studio?
A:
Can you provide a code sample of how you're calling the webservice? If it's a web service with a published WSDL I don't know why you'd even have to address this level of implementation detail, so I have a suspicion that you're calling it wrong somehow.
|
How do you stop a Visual Studio generated web service proxy class from encoding?
|
I'm using a Visual Studio generated proxy class to access a web service (added the web service as a web reference to my project). The problem is that the function the web service exposes expects a CDATA element, i.e.:
<Function><![CDATA[<Blah></Blah>]]></Function>
Unfortunately, when I pass in "" into the proxy class, it calls the web service with this:
<Function><![CDATA[<Blah></Blah>]]></Function>
This appears to be causing problems with the web service. Is there any way to fix this while still using the proxy class generated by Visual Studio?
|
[
"Can you provide a code sample of how you're calling the webservice? If it's a web service with a published WSDL I don't know why you'd even have to address this level of implementation detail, so I have a suspicion that you're calling it wrong somehow.\n"
] |
[
1
] |
[] |
[] |
[
"proxy_classes",
"visual_studio",
"web_services",
"wsdl"
] |
stackoverflow_0000068555_proxy_classes_visual_studio_web_services_wsdl.txt
|
Q:
Execute JavaScript from within a C# assembly
I'd like to execute JavaScript code from within a C# assembly and have the results of the JavaScript code returned to the calling C# code.
It's easier to define things that I'm not trying to do:
I'm not trying to call a JavaScript function on a web page from my code behind.
I'm not trying to load a WebBrowser control.
I don't want to have the JavaScript perform an AJAX call to a server.
What I want to do is write unit tests in JavaScript and have the unit tests output JSON; even plain text would be fine. Then I want to have a generic C# class/executable that can load the file containing the JS, run the JS unit tests, scrape/load the results, and return a pass/fail with details during a post-build task.
I think it's possible using the old ActiveX ScriptControl, but it seems like there ought to be a .NET way to do this without using SilverLight, the DLR, or anything else that hasn't shipped yet. Anyone have any ideas?
update: From Brad Abrams blog
namespace Microsoft.JScript.Vsa
{
[Obsolete("There is no replacement for this feature. " +
"Please see the ICodeCompiler documentation for additional help. " +
"http://go.microsoft.com/fwlink/?linkid=14202")]
Clarification:
We have unit tests for our JavaScript functions that are written in JavaScript using the JSUnit framework. Right now during our build process, we have to manually load a web page and click a button to ensure that all of the JavaScript unit tests pass. I'd like to be able to execute the tests during the post-build process when our automated C# unit tests are run, report the success/failure alongside our C# unit tests, and use them as an indicator of whether or not the build is broken.
A:
The code should be pretty self-explanatory, so I'll just post that. You'll also need a reference to the Microsoft.Vsa assembly; in a web project, the <add> element below goes in the <assemblies> section of the config:
<add assembly="Microsoft.Vsa, Version=8.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/>
using Microsoft.JScript;
public class MyClass {
public static Microsoft.JScript.Vsa.VsaEngine Engine = Microsoft.JScript.Vsa.VsaEngine.CreateEngine();
public static object EvaluateScript(string script)
{
object Result = null;
try
{
Result = Microsoft.JScript.Eval.JScriptEvaluate(script, Engine);
}
catch (Exception ex)
{
return ex.Message;
}
return Result;
}
public void MyMethod() {
string myscript = ...;
object myresult = EvaluateScript(myscript);
}
}
A:
You can use the Microsoft Javascript engine for evaluating JavaScript code from C#
Update: This is obsolete as of VS 2008
A:
You can run your JSUnit tests from inside NAnt using the JSUnit server. It's written in Java and there is no NAnt task, but you can run it from the command prompt; the results are logged as XML and you can then integrate them with your build report process.
This won't be part of your Nunit result but an extra report.
We fail the build if any of those test fails.
We are doing exactly that using CC.Net.
A:
Could it be simpler to use JSUnit to write your tests, and then use a WatiN
test wrapper to run them through C#, passing or failing based on the JSUnit results?
It is indeed an extra step though.
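For illustration, such a wrapper might be nothing more than an ordinary NUnit test. This is a rough sketch, not something tested against your setup: the runner URL, the autoRun parameter, and the "Errors: 0" / "Failures: 0" strings are assumptions about the standard JSUnit test runner page, and the exact WatiN calls should be checked against the version you're using.
using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class JsUnitTests
{
    [Test]
    public void JavaScriptUnitTestsPass()
    {
        // WatiN drives Internet Explorer; your test runner may need to be
        // configured to run this on an STA thread.
        string runner = "http://localhost/jsunit/testRunner.html" +
                        "?testPage=http://localhost/tests/allTests.html&autoRun=true";

        using (var browser = new IE(runner))
        {
            // Wait for JSUnit to finish, then fail the build on any errors or failures.
            browser.WaitUntilContainsText("Done");
            Assert.IsTrue(browser.ContainsText("Errors: 0"), "JSUnit reported errors");
            Assert.IsTrue(browser.ContainsText("Failures: 0"), "JSUnit reported failures");
        }
    }
}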
I believe I read somewhere that an upcoming version of either MBUnit or WatiN will have the functionality built in to process JSUnit test fixtures. If only I could remember where I read that...
A:
I don't know of any .NET specific way of doing this right now... Well, there's still JScript.NET, but that probably won't be compatible with whatever JS you need to execute :)
Obviously the future would be the .NET JScript implementation for the DLR which is coming... someday (hopefully).
So that probably leaves running the old ActiveX JScript engine, which is certainly possible to do so from .NET (I've done it in the past, though it's a bit on the ugly side!).
A:
If you're not executing the code in the context of a browser, why do the tests need to be written in Javascript? It's hard to understand the bigger picture of what you're trying to accomplish here.
|
Execute JavaScript from within a C# assembly
|
I'd like to execute JavaScript code from within a C# assembly and have the results of the JavaScript code returned to the calling C# code.
It's easier to define things that I'm not trying to do:
I'm not trying to call a JavaScript function on a web page from my code behind.
I'm not trying to load a WebBrowser control.
I don't want to have the JavaScript perform an AJAX call to a server.
What I want to do is write unit tests in JavaScript and have the unit tests output JSON; even plain text would be fine. Then I want to have a generic C# class/executable that can load the file containing the JS, run the JS unit tests, scrape/load the results, and return a pass/fail with details during a post-build task.
I think it's possible using the old ActiveX ScriptControl, but it seems like there ought to be a .NET way to do this without using SilverLight, the DLR, or anything else that hasn't shipped yet. Anyone have any ideas?
update: From Brad Abrams blog
namespace Microsoft.JScript.Vsa
{
[Obsolete("There is no replacement for this feature. " +
"Please see the ICodeCompiler documentation for additional help. " +
"http://go.microsoft.com/fwlink/?linkid=14202")]
Clarification:
We have unit tests for our JavaScript functions that are written in JavaScript using the JSUnit framework. Right now during our build process, we have to manually load a web page and click a button to ensure that all of the JavaScript unit tests pass. I'd like to be able to execute the tests during the post-build process when our automated C# unit tests are run, report the success/failure alongside our C# unit tests, and use them as an indicator of whether or not the build is broken.
|
[
"The code should be pretty self explanitory, so I'll just post that.\n<add assembly=\"Microsoft.Vsa, Version=8.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A\"/></assemblies>\n\n\nusing Microsoft.JScript;\n\npublic class MyClass {\n\n public static Microsoft.JScript.Vsa.VsaEngine Engine = Microsoft.JScript.Vsa.VsaEngine.CreateEngine();\n\n public static object EvaluateScript(string script)\n {\n object Result = null;\n try\n {\n Result = Microsoft.JScript.Eval.JScriptEvaluate(JScript, Engine);\n }\n catch (Exception ex)\n {\n return ex.Message;\n }\n\n return Result;\n }\n\n public void MyMethod() {\n string myscript = ...;\n object myresult = EvaluateScript(myscript);\n }\n}\n\n",
"You can use the Microsoft Javascript engine for evaluating JavaScript code from C#\nUpdate: This is obsolete as of VS 2008\n",
"You can run your JSUnit from inside Nant using the JSUnit server, it's written in java and there is not a Nant task but you can run it from the command prompt, the results are logged as XML and you can them integrate them with your build report process.\nThis won't be part of your Nunit result but an extra report.\nWe fail the build if any of those test fails.\nWe are doing exactly that using CC.Net.\n",
"Could it be simpler to use JSUnit to write your tests, and then use a WatiN\n test wrapper to run them through C#, passing or failing based on the JSUnit results?\nIt is indeed an extra step though.\nI believe I read somewhere that an upcoming version of either MBUnit or WatiN will have the functionality built in to process JSUnit test fixtures. If only I could remember where I read that...\n",
"I don't know of any .NET specific way of doing this right now... Well, there's still JScript.NET, but that probably won't be compatible with whatever JS you need to execute :)\nObviously the future would be the .NET JScript implementation for the DLR which is coming... someday (hopefully).\nSo that probably leaves running the old ActiveX JScript engine, which is certainly possible to do so from .NET (I've done it in the past, though it's a bit on the ugly side!).\n",
"If you're not executing the code in the context of a browser, why do the tests need to be written in Javascript? It's hard to understand the bigger picture of what you're trying to accomplish here.\n"
] |
[
7,
2,
1,
0,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"codedom",
"javascript",
"unit_testing"
] |
stackoverflow_0000067734_.net_c#_codedom_javascript_unit_testing.txt
|
Q:
How would you migrate hundreds of MS Access databases to a central service?
We have literally 100's of Access databases floating around the network. Some with light usage and some with quite heavy usage, and some no usage whatsoever. What we would like to do is centralise these databases onto a managed database and retain as much as possible of the reports and forms within them.
The benefits of doing this would be to have some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps.
There are no real constraints on the RDBMS (Oracle, MS SQL Server) or the stack it would run on (LAMP, ASP.NET, Java), and there obviously won't be a silver bullet for this. We would like something that can remove the initial grunt work in an automated fashion.
A:
We upsize (either using the upsize wizard or by hand) users to SQL Server. It's usually pretty straightforward. Replace all the Access tables with linked tables to SQL Server and keep all the forms/reports/macros in Access. The investment in Access isn't lost and the users can keep going business as usual. You get the reliability of SQL Server and centralized backups. Keep in mind - we’ve done this for a few large Access databases, not hundreds. I'd do a pilot of a few dozen and see how it works out.
UPDATE:
I just found this, the SQL Server Migration Assistant; it might be worth a look:
http://www.microsoft.com/sql/solutions/migration/default.mspx
Update: Yes, some refactoring will be necessary for poorly designed databases. As for how to handle Access sprawl? I've run into this at companies with lots of technical users (engineers, especially, are the worst for this... and Excel sprawl). We did an audit - (after backing up) we deleted any databases that hadn't been touched in over a year. "Owners" were assigned based on the location and/or data in the database. If the database was in "S:\quality\test_dept" then the quality manager and head test engineer had to take ownership of it or we deleted it (again after backing it up).
A:
Upsizing an Access application is no magic bullet. It may be that some things will be faster, but some types of operations will be real dogs. That means that an upsized app has to be tested thoroughly and performance bottlenecks addressed, usually by moving the data retrieval logic server-side (views, stored procedures, passthrough queries).
It's not really an answer to the question, though.
I don't think there is any automated answer to the problem. Indeed, I'd say this is a people problem and not a programming problem at all. Somebody has to survey the network and determine ownership of all the Access databases and then interview the users to find out what's in use and what's not. Then each app should be evaluated as to whether or not it should be folded into an Enterprise-wide data store/app, or whether its original implementation as a small app for a few users was the better approach.
That's not the answer you want to hear, but it's the right answer precisely because it's a people/management problem, not a programming task.
A:
Oracle has a migration workbench to port MS Access systems to Oracle Application Express, which would be worth investigating.
http://apex.oracle.com
A:
So? Dedicate a server to your Access databases.
Now you have the benefit of some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps.
This is what you were going to do anyway, only you wanted to use a different database engine instead of NTFS.
And now you have to force the users onto your server.
Well, you can encourage them by telling them that you aren't going to overwrite their data with old backups anymore, because now you will own the data, and you won't do that anymore.
Also, you can tell them that their applications will run faster now, because you are going to exclude the folder from on-access virus scanning (you don't do that to your other databases, which is why they are full of sql-injection malware, but these databases won't be exposed to the internet), and planning to turn packet signing off (you won't need that on a dedicated server: it's only for people who put their file-share on their domain-server).
Easy upgrade path, improved service to users, greater centralization and control for IT. Everyone's a winner.
A:
Further to David Fenton's comments
Your administrative rule will be something like this:
If the data that is in the database is just being used by one user, for their own work (alone), then they can keep it in their own network share.
If the data that is in the database is being used by more than one person (even if it is only two), then that database must go on a central server and come under IT's management (backups, schema changes, interfaces, etc.). This is because someone experienced needs to coordinate the whole show, or we will risk the time/resources of the next guy down the line.
|
How would you migrate hundreds of MS Access databases to a central service?
|
We have literally 100's of Access databases floating around the network. Some with light usage and some with quite heavy usage, and some no usage whatsoever. What we would like to do is centralise these databases onto a managed database and retain as much as possible of the reports and forms within them.
The benefits of doing this would be to have some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps.
There are no real constraints on the RDBMS (Oracle, MS SQL Server) or the stack it would run on (LAMP, ASP.NET, Java), and there obviously won't be a silver bullet for this. We would like something that can remove the initial grunt work in an automated fashion.
|
[
"We upsize (either using the upsize wizard or by hand) users to SQL server. It's usually pretty straight forward. Replace all the access tables with linked tables to the sql server and keep all the forms/reports/macros in access. The investment in access isn't lost and the users can keep going business as usual. You get reliability of sql server and centralized backups. Keep in mind - we’ve done this for a few large access databases, not hundreds. I'd do a pilot of a few dozen and see how it works out.\nUPDATE:\nI just found this, the sql server migration assitant, it might be worth a look:\nhttp://www.microsoft.com/sql/solutions/migration/default.mspx\nUpdate: Yes, some refactoring will be necessary for poorly designed databases. As for how to handle access sprawl? I've run into this at companies with lots of technical users (engineers esp., are the worst for this... and excel sprawl). We did an audit - (after backing up) deleted any databases that hadn't been touched in over a year. \"Owners\" were assigned based the location &/or data in the database. If the database was in \"S:\\quality\\test_dept\" then the quality manager and head test engineer had to take ownership of it or we delete it (again after backing it up).\n",
"Upsizing an Access application is no magic bullet. It may be that some things will be faster, but some types of operations will be real dogs. That means that an upsized app has to be tested thoroughly and performance bottlenecks addressed, usually by moving the data retrieval logic server-side (views, stored procedures, passthrough queries).\nIt's not really an answer to the question, though.\nI don't think there is any automated answer to the problem. Indeed, I'd say this is a people problem and not a programming problem at all. Somebody has to survey the network and determine ownership of all the Access databases and then interview the users to find out what's in use and what's not. Then each app should be evaluated as to whether or not it should be folded into an Enterprise-wide data store/app, or whether its original implementation as a small app for a few users was the better approach.\nThat's not the answer you want to hear, but it's the right answer precisely because it's a people/management problem, not a programming task.\n",
"Oracle has a migration workbench to port MS Access systems to Oracle Application Express, which would be worth investigating.\nhttp://apex.oracle.com\n",
"So? Dedicate a server to your Access databases. \nNow you have the benefit of some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps.\nThis is what you were going to do anyway, only you wanted to use a different database engine instead of NTFS. \nAnd now you have to force the users onto your server.\nWell, you can encourage them by telling them that you aren't going to overwrite their data with old backups anymore, because now you will own the data, and you won't do that anymore. \nAlso, you can tell them that their applications will run faster now, because you are going to exclude the folder from on-access virus scanning (you don't do that to your other databases, which is why they are full of sql-injection malware, but these databases won't be exposed to the internet), and planning to turn packet signing off (you won't need that on a dedicated server: it's only for people who put their file-share on their domain-server).\nEasy upgrade path, improved service to users, greater centralization and control for IT. Everyone's a winner.\n",
"Further to David Fenton's comments\nYour administrative rule will be something like this:\nIf the data that is in the database is just being used by one user, for their own work (alone), then they can keep it in their own network share.\nIf the data that is in the database is for being used by more than one person (even if it is only two), then that database must go on a central server and go under IT's management (backups, schema changes, interfaces, etc.). This is because, someone experienced needs to coordinate the whole show or we will risk the time/resources of the next guy down the line.\n"
] |
[
5,
3,
1,
0,
0
] |
[] |
[] |
[
"ms_access",
"oracle",
"sql_server"
] |
stackoverflow_0000047225_ms_access_oracle_sql_server.txt
|
Q:
Design problem regarding type slicing with many different subclasses
A basic problem I run into quite often, but have never found a clean solution to, is one where you want to code behaviour for interaction between different objects of a common base class or interface. To make it a bit concrete, I'll throw in an example:
Bob has been coding on a strategy game which supports "cool geographical effects". These round up to simple constraints such as if troops are walking in water, they are slowed 25%. If they are walking on grass, they are slowed 5%, and if they are walking on pavement they are slowed by 0%.
Now, management told Bob that they needed new sorts of troops. There would be jeeps, boats and also hovercrafts. Also, they wanted jeeps to take damage if they drove into water, and hovercrafts would ignore all three of the terrain types. Rumor has it also that they might add another terrain type with even more features than slowing units down and taking damage.
A very rough pseudo code example follows:
public interface ITerrain
{
void AffectUnit(IUnit unit);
}
public class Water : ITerrain
{
public void AffectUnit(IUnit unit)
{
if (unit is HoverCraft)
{
// Don't affect it anyhow
}
if (unit is FootSoldier)
{
unit.SpeedMultiplier = 0.75f;
}
if (unit is Jeep)
{
unit.SpeedMultiplier = 0.70f;
unit.Health -= 5.0f;
}
if (unit is Boat)
{
// Don't affect it anyhow
}
/*
* List grows larger each day...
*/
}
}
public class Grass : ITerrain
{
public void AffectUnit(IUnit unit)
{
if (unit is HoverCraft)
{
// Don't affect it anyhow
}
if (unit is FootSoldier)
{
unit.SpeedMultiplier = 0.95f;
}
if (unit is Jeep)
{
unit.SpeedMultiplier = 0.85f;
}
if (unit is Boat)
{
unit.SpeedMultiplier = 0.0f;
unit.Health = 0.0f;
Boat boat = unit as Boat;
boat.DamagePropeller();
// Perhaps throw in an explosion aswell?
}
/*
* List grows larger each day...
*/
}
}
As you can see, things would have been better if Bob had a solid design document from the beginning. As the number of units and terrain types grow, so does code complexity. Not only does Bob have to worry about figuring out which members might need to be added to the unit interface, but he also has to repeat a lot of code. It's very likely that new terrain types require additional information beyond what can be obtained from the basic IUnit interface.
Each time we add another unit into the game, each terrain must be updated to handle the new unit. Clearly, this makes for a lot of repetition, not to mention the ugly runtime check which determines the type of unit being dealt with. I've left out calls to the specific subtypes in this example, but those kinds of calls are necessary to make. An example would be that when a boat hits land, its propeller should be damaged. Not all units have propellers.
I am unsure what this kind of problem is called, but it is a many-to-many dependency which I have a hard time decoupling. I don't fancy having hundreds of overloads for each IUnit subclass on ITerrain, as I want to keep the coupling to a minimum.
Any light on this problem is highly sought after. Perhaps I'm thinking way out of orbit all together?
A:
Terrain has-a Terrain Attribute
Terrain Attributes are multidimensional.
Units has-a Propulsion.
Propulsion is compatible with Terrain Attributes.
Units move by a Terrain visit with Propulsion as an argument.
That gets delegated to the Propulsion.
Units may get affected by terrain as part of the visit.
Unit code knows nothing about propulsion.
Terrain types can change w/o changing anything except Terrain Attributes and Propulsion.
Propulsion's constructors protect existing units from new methods of travel.
A:
The limitation you're running into here is that C#, unlike some other OOP languages, lacks multiple dispatch.
In other words, given these base classes:
public class Base
{
public virtual void Go() { Console.WriteLine("in Base"); }
}
public class Derived : Base
{
public override void Go() { Console.WriteLine("in Derived"); }
}
This function:
public void Test()
{
Base obj = new Derived();
obj.Go();
}
will correctly output "in Derived" even though the reference "obj" is of type Base. This is because at runtime C# will correctly find the most-derived Go() to call.
However, since C# is a single dispatch language, it only does this for the "first parameter" which is implicitly "this" in an OOP language. The following code does not work like the above:
public class TestClass
{
public void Go(Base b)
{
Console.WriteLine("Base arg");
}
public void Go(Derived d)
{
Console.WriteLine("Derived arg");
}
public void Test()
{
Base obj = new Derived();
Go(obj);
}
}
This will output "Base arg" because aside from "this" all other parameters are statically dispatched, which means they are bound to the called method at compile time. At compile time, the only thing the compiler knows is the declared type of the argument being passed ("Base obj") and not its actual type, so the method call is bound to the Go(Base b) one.
A solution to your problem then, is to basically hand-author a little method dispatcher:
public class Dispatcher
{
public void Dispatch(IUnit unit, ITerrain terrain)
{
Type unitType = unit.GetType();
Type terrainType = terrain.GetType();
// go through the list and find the action that corresponds to the
// most-derived IUnit and ITerrain types that are in the ancestor
// chain for unitType and terrainType.
Action<IUnit, ITerrain> action = /* left as exercise for reader ;) */
action(unit, terrain);
}
// add functions to this
public List<Action<IUnit, ITerrain>> Actions = new List<Action<IUnit, ITerrain>>();
}
You can use reflection to inspect the generic parameters of each Action passed in and then choose the most-derived one that matches the unit and terrain given, then call that function. The functions added to Actions can be anywhere, even distributed across multiple assemblies.
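To make that a bit more concrete, here is one possible way to fill in the lookup left as an exercise above. It is a simplified, untested sketch that uses the IUnit/ITerrain interfaces from the question and stores the registered types explicitly rather than reflecting over each Action's generic parameters; it picks the compatible handler whose declared types match the runtime types most closely.
using System;
using System.Collections.Generic;

public class TerrainDispatcher
{
    private class Entry
    {
        public Type UnitType;
        public Type TerrainType;
        public Action<IUnit, ITerrain> Action;
    }

    private readonly List<Entry> _entries = new List<Entry>();

    public void Register<TUnit, TTerrain>(Action<TUnit, TTerrain> action)
        where TUnit : IUnit
        where TTerrain : ITerrain
    {
        _entries.Add(new Entry
        {
            UnitType = typeof(TUnit),
            TerrainType = typeof(TTerrain),
            Action = (u, t) => action((TUnit)u, (TTerrain)t)
        });
    }

    public void Dispatch(IUnit unit, ITerrain terrain)
    {
        Entry best = null;
        int bestScore = -1;

        foreach (Entry e in _entries)
        {
            if (!e.UnitType.IsInstanceOfType(unit) || !e.TerrainType.IsInstanceOfType(terrain))
                continue;

            // Prefer exact type matches over base-type/interface matches.
            int score = (e.UnitType == unit.GetType() ? 2 : 1)
                      + (e.TerrainType == terrain.GetType() ? 2 : 1);

            if (score > bestScore)
            {
                best = e;
                bestScore = score;
            }
        }

        // If nothing was registered for this combination, the terrain has no effect.
        if (best != null)
            best.Action(unit, terrain);
    }
}

// Hypothetical registrations, mirroring the original Water/Jeep rules:
// dispatcher.Register<Jeep, Water>((jeep, water) => { jeep.SpeedMultiplier = 0.70f; jeep.Health -= 5.0f; });
// dispatcher.Register<FootSoldier, Grass>((s, g) => s.SpeedMultiplier = 0.95f);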
Interestingly, I've run into this problem a few times, but never outside of the context of games.
A:
decouple the interaction rules from the Unit and Terrain classes; interaction rules are more general than that. For example a hash table might be used with the key being a pair of interacting types and the value being an 'effector' method operating on objects of those types.
when two objects must interact, find ALL of the interaction rules in the hash table and execute them
this eliminates the inter-class dependencies, not to mention the hideous switch statements in your original example
if performance becomes an issue, and the interaction rules do not change during execution, cache the rule-sets for type pairs as they are encountered and emit a new MSIL method to run them all at once
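A bare-bones sketch of that idea follows (hypothetical types, and a plain list standing in for the hash table so that rules written against base types can also match). Note the difference from a single-dispatch lookup: every applicable rule runs.
using System;
using System.Collections.Generic;

public class InteractionRules
{
    private struct Rule
    {
        public Type UnitType;
        public Type TerrainType;
        public Action<IUnit, ITerrain> Effector;
    }

    private readonly List<Rule> _rules = new List<Rule>();

    public void Add(Type unitType, Type terrainType, Action<IUnit, ITerrain> effector)
    {
        _rules.Add(new Rule { UnitType = unitType, TerrainType = terrainType, Effector = effector });
    }

    public void Apply(IUnit unit, ITerrain terrain)
    {
        // Run every rule whose declared types are compatible with the runtime types.
        foreach (Rule rule in _rules)
        {
            if (rule.UnitType.IsInstanceOfType(unit) && rule.TerrainType.IsInstanceOfType(terrain))
                rule.Effector(unit, terrain);
        }
    }
}

// rules.Add(typeof(Jeep), typeof(Water), (u, t) => { u.SpeedMultiplier = 0.70f; u.Health -= 5.0f; });
// rules.Apply(unit, terrain);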
A:
There's definitely three objects in play here:
1) Terrain
2) Terrain Effects
3) Units
I would not suggest creating a map with the pair of terrain/unit as a key to look up the action. That is going to make it difficult for you to make sure you've got every combination covered as the lists of units and terrains grow.
In fact, it appears that every terrain-unit combination has a unique terrain effect so it's doubtful that you'd see a benefit from having a common list of terrain effects at all.
Instead, I would have each unit maintain its own map of terrain to terrain effect. Then, the terrain can just call Unit->AffectUnit(myTerrainType) and the unit can look up the effect that the terrain will have on itself.
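A small sketch of that (names are made up, it ignores the original IUnit interface for brevity, and it reuses the ITerrain/Water/Grass types from the question): each unit owns a dictionary from terrain type to the effect that terrain has on it.
using System;
using System.Collections.Generic;

public abstract class Unit
{
    public float SpeedMultiplier { get; set; }
    public float Health { get; set; }

    // Filled in by each concrete unit; maps a terrain type to its effect on this unit.
    protected readonly Dictionary<Type, Action<Unit>> TerrainEffects =
        new Dictionary<Type, Action<Unit>>();

    public void AffectUnit(ITerrain terrain)
    {
        // No entry for this terrain type simply means the terrain has no effect.
        Action<Unit> effect;
        if (TerrainEffects.TryGetValue(terrain.GetType(), out effect))
            effect(this);
    }
}

public class Jeep : Unit
{
    public Jeep()
    {
        TerrainEffects[typeof(Water)] = u => { u.SpeedMultiplier = 0.70f; u.Health -= 5.0f; };
        TerrainEffects[typeof(Grass)] = u => { u.SpeedMultiplier = 0.85f; };
    }
}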
A:
Old idea:
Make a class iTerrain and another
class iUnit which accepts an argument
which is the terrain type including a
method for affecting each unit type
example:
boat = new
iUnit("watercraft") field = new
iTerrain("grass")
field.effects(boat)
ok forget all that I have a better idea:
Make the effects of each terrain a property of each unit
Example:
public class hovercraft : unit {
#You make a base class for defaults and redefine as necessary
speed_multiplier.water = 1
}
public class boat : unit {
speed_multiplier.land = 0
}
|
Design problem regarding type slicing with many different subclasses
|
A basic problem I run into quite often, but have never found a clean solution to, is one where you want to code behaviour for interaction between different objects of a common base class or interface. To make it a bit concrete, I'll throw in an example:
Bob has been coding on a strategy game which supports "cool geographical effects". These round up to simple constraints such as if troops are walking in water, they are slowed 25%. If they are walking on grass, they are slowed 5%, and if they are walking on pavement they are slowed by 0%.
Now, management told Bob that they needed new sorts of troops. There would be jeeps, boats and also hovercrafts. Also, they wanted jeeps to take damage if they drove into water, and hovercrafts would ignore all three of the terrain types. Rumor has it also that they might add another terrain type with even more features than slowing units down and taking damage.
A very rough pseudo code example follows:
public interface ITerrain
{
void AffectUnit(IUnit unit);
}
public class Water : ITerrain
{
public void AffectUnit(IUnit unit)
{
if (unit is HoverCraft)
{
// Don't affect it anyhow
}
if (unit is FootSoldier)
{
unit.SpeedMultiplier = 0.75f;
}
if (unit is Jeep)
{
unit.SpeedMultiplier = 0.70f;
unit.Health -= 5.0f;
}
if (unit is Boat)
{
// Don't affect it anyhow
}
/*
* List grows larger each day...
*/
}
}
public class Grass : ITerrain
{
public void AffectUnit(IUnit unit)
{
if (unit is HoverCraft)
{
// Don't affect it anyhow
}
if (unit is FootSoldier)
{
unit.SpeedMultiplier = 0.95f;
}
if (unit is Jeep)
{
unit.SpeedMultiplier = 0.85f;
}
if (unit is Boat)
{
unit.SpeedMultiplier = 0.0f;
unit.Health = 0.0f;
Boat boat = unit as Boat;
boat.DamagePropeller();
// Perhaps throw in an explosion aswell?
}
/*
* List grows larger each day...
*/
}
}
As you can see, things would have been better if Bob had a solid design document from the beginning. As the number of units and terrain types grow, so does code complexity. Not only does Bob have to worry about figuring out which members might need to be added to the unit interface, but he also has to repeat a lot of code. It's very likely that new terrain types require additional information beyond what can be obtained from the basic IUnit interface.
Each time we add another unit into the game, each terrain must be updated to handle the new unit. Clearly, this makes for a lot of repetition, not to mention the ugly runtime check which determines the type of unit being dealt with. I've left out calls to the specific subtypes in this example, but those kinds of calls are necessary to make. An example would be that when a boat hits land, its propeller should be damaged. Not all units have propellers.
I am unsure what this kind of problem is called, but it is a many-to-many dependency which I have a hard time decoupling. I don't fancy having hundreds of overloads for each IUnit subclass on ITerrain, as I want to keep the coupling to a minimum.
Any light on this problem is highly sought after. Perhaps I'm thinking way out of orbit all together?
|
[
"Terrain has-a Terrain Attribute \nTerrain Attributes are multidimensional. \nUnits has-a Propulsion.\nPropulsion is compatible able with Terrain Attributes.\nUnits move by a Terrain visit with Propulsion as an argument.\nThat gets delegated to the Propulsion.\nUnits may get affected by terrain as part of the visit.\nUnit code knows nothing about propulsion.\nTerrain types can change w/o changing anything except Terrain Attributes and Propulsion.\nPropuslion's constructors protect existing units from new methods of travel.\n",
"The limitation you're running into here is that C#, unlike some other OOP languages, lacks multiple dispatch.\nIn other words, given these base classes:\npublic class Base\n{\n public virtual void Go() { Console.WriteLine(\"in Base\"); }\n}\n\npublic class Derived : Base\n{\n public virtual void Go() { Console.WriteLine(\"in Derived\"); }\n}\n\nThis function:\npublic void Test()\n{\n Base obj = new Derived();\n obj.Go();\n}\n\nwill correctly output \"in Derived\" even though the reference \"obj\" is of type Base. This is because at runtime C# will correctly find the most-derived Go() to call. \nHowever, since C# is a single dispatch language, it only does this for the \"first parameter\" which is implicitly \"this\" in an OOP language. The following code does not work like the above:\npublic class TestClass\n{\n public void Go(Base b)\n {\n Console.WriteLine(\"Base arg\");\n }\n\n public void Go(Derived d)\n {\n Console.WriteLine(\"Derived arg\");\n }\n\n public void Test()\n {\n Base obj = new Derived();\n Go(obj);\n }\n}\n\nThis will output \"Base arg\" because aside from \"this\" all other parameters are statically dispatched, which means they are bound to the called method at compile time. At compile time, the only thing the compiler knows is the declared type of the argument being passed (\"Base obj\") and not its actual type, so the method call is bound to the Go(Base b) one.\nA solution to your problem then, is to basically hand-author a little method dispatcher:\npublic class Dispatcher\n{\n public void Dispatch(IUnit unit, ITerrain terrain)\n {\n Type unitType = unit.GetType();\n Type terrainType = terrain.GetType();\n\n // go through the list and find the action that corresponds to the\n // most-derived IUnit and ITerrain types that are in the ancestor\n // chain for unitType and terrainType.\n Action<IUnit, ITerrain> action = /* left as exercise for reader ;) */\n\n action(unit, terrain);\n }\n\n // add functions to this\n public List<Action<IUnit, ITerrain>> Actions = new List<Action<IUnit, ITerrain>>();\n}\n\nYou can use reflection to inspect the generic parameters of each Action passed in and then choose the most-derived one that matches the unit and terrain given, then call that function. The functions added to Actions can be anywhere, even distributed across multiple assemblies.\nInterestingly, I've run into this problem a few times, but never outside of the context of games.\n",
"decouple the interaction rules from the Unit and Terrain classes; interaction rules are more general than that. For example a hash table might be used with the key being a pair of interacting types and the value being an 'effector' method operating on objects of those types.\nwhen two objects must interact, find ALL of the interaction rules in the hash table and execute them\nthis eliminates the inter-class dependencies, not to mention the hideous switch statements in your original example\nif performance becomes an issue, and the interaction rules do not change during execution, cache the rule-sets for type pairs as they are encountered and emit a new MSIL method to run them all at once\n",
"There's definitely three objects in play here:\n\n1) Terrain\n 2) Terrain Effects\n 3) Units\n\nI would not suggest creating a map with the pair of terrain/unit as a key to look up the action. That is going to make it difficult for you to make sure you've got every combination covered as the lists of units and terrains grow. \nIn fact, it appears that every terrain-unit combination has a unique terrain effect so it's doubtful that you'd see a benefit from having a common list of terrain effects at all.\nInstead, I would have each unit maintain its own map of terrain to terrain effect. Then, the terrain can just call Unit->AffectUnit(myTerrainType) and the unit can look up the effect that the terrain will have on itself.\n",
"Old idea:\n\nMake a class iTerrain and another\n class iUnit which accepts an argument\n which is the terrain type including a\n method for affecting each unit type\nexample:\n boat = new\niUnit(\"watercraft\") field = new\niTerrain(\"grass\")\nfield.effects(boat)\n\nok forget all that I have a better idea:\nMake the effects of each terrain a property of each unit\nExample:\n\npublic class hovercraft : unit {\n #You make a base class for defaults and redefine as necessary\n speed_multiplier.water = 1\n}\n\npublic class boat : unit {\n speed_multiplier.land = 0\n}\n\n"
] |
[
1,
1,
1,
1,
0
] |
[] |
[] |
[
"c#",
"design_patterns",
"type_slicing"
] |
stackoverflow_0000068537_c#_design_patterns_type_slicing.txt
|
Q:
SQL Server: Column nullability inconsistency
I have a SQL Server 2005 database that could only be restored using
Restore Database The_DB_Name
From Disk = 'C:\etc\etc'
With Continue_After_Error
I am told the source database was fine. The restore reports
Warning: A column nullability
inconsistency was detected in the
metadata of index
"IDX_Comp_CompanyId" (index_id = 2)
on object ID nnnnn in database
"The_DB_Name". The index may be
corrupt. Run DBCC CHECKTABLE to verify
consistency.
DBCC CHECKTABLE (Company)
gives
Msg 8967, Level 16, State 216, Line 1
An internal error occurred in DBCC
that prevented further processing.
Contact Customer Support Services.
Msg 8921, Level 16, State 1, Line 1
Check terminated. A failure was
detected while collecting facts.
Possibly tempdb out of space or a
system table is inconsistent. Check
previous errors.
Alter Index IDX_Comp_CompanyId On dbo.Company
Rebuild
gives me
Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical
consistency-based I/O error: incorrect
pageid (expected 1:77467; actual
45:2097184). It occurred during a read
of page (1:77467) in database ID 20 at
offset 0x00000025d36000 in file
'C:\etc\etc.mdf'. Additional messages
in the SQL Server error log or system
event log may provide more detail.
This is a severe error condition that
threatens database integrity and must
be corrected immediately. Complete a
full database consistency check (DBCC
CHECKDB). This error can be caused by
many factors; for more information,
see SQL Server Books Online.
How much trouble am I in?
A:
A corruption in an index is not nearly as bad as a corruption in the base table as an index can be rebuilt.
Compare the table and index definitions between the source and destination databases.
Check the version of both servers as well. (was the backup automatically upgraded when restored to your server)
Drop and recreate the index and rerun the CheckTable.
|
SQL Server: Column nullability inconsistency
|
I have a SQL Server 2005 database that could only be restored using
Restore Database The_DB_Name
From Disk = 'C:\etc\etc'
With Continue_After_Error
I am told the source database was fine. The restore reports
Warning: A column nullability
inconsistency was detected in the
metadata of index
"IDX_Comp_CompanyId" (index_id = 2)
on object ID nnnnn in database
"The_DB_Name". The index may be
corrupt. Run DBCC CHECKTABLE to verify
consistency.
DBCC CHECKTABLE (Company)
gives
Msg 8967, Level 16, State 216, Line 1
An internal error occurred in DBCC
that prevented further processing.
Contact Customer Support Services.
Msg 8921, Level 16, State 1, Line 1
Check terminated. A failure was
detected while collecting facts.
Possibly tempdb out of space or a
system table is inconsistent. Check
previous errors.
Alter Index IDX_Comp_CompanyId On dbo.Company
Rebuild
gives me
Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical
consistency-based I/O error: incorrect
pageid (expected 1:77467; actual
45:2097184). It occurred during a read
of page (1:77467) in database ID 20 at
offset 0x00000025d36000 in file
'C:\etc\etc.mdf'. Additional messages
in the SQL Server error log or system
event log may provide more detail.
This is a severe error condition that
threatens database integrity and must
be corrected immediately. Complete a
full database consistency check (DBCC
CHECKDB). This error can be caused by
many factors; for more information,
see SQL Server Books Online.
How much trouble am I in?
|
[
"A corruption in an index is not nearly as bad as a corruption in the base table as an index can be rebuilt. \nCompare the table and index definitions between the source and destination databases.\nCheck the version of both servers as well. (was the backup automatically upgraded when restored to your server)\nDrop and recreate the index and rerun the CheckTable. \n"
] |
[
3
] |
[] |
[] |
[
"restore",
"sql_server"
] |
stackoverflow_0000069215_restore_sql_server.txt
|
Q:
Copying a directory that is version controlled
I am curious whether it is OK to copy a directory that is under version control and start working on both copies.
I know it can be different from one VCS to another, but I intentionally don't specify any VCS since I am curious about different cases.
I was talking to a coworker recently about doing it in SVN. I think it should be OK, but I am still not 100% sure, since I don't know what exactly SVN is storing in the working copy.
However, if we talk about the DVCS world, things might be even more unclear, since every working copy is a repository by itself. Being faced with doing this in bzr now, I decided to ask the question.
Later edit:
Some people asked why I would want to do that. Here is the whole story:
In the case of SVN it was because being out of the office, the connection to the SVN server was really slow, so me and my coworker decided to check out the sources only once and make a local copy. That's what we did and it worked OK, but I am still wondering whether it is guaranteed to work, or it just happened.
In the bzr case, I am planning to move the "main" repo to another server. So I was thinking to just copy it there and start considering that the main repo. I guess the safest is to make a clone though.
A:
In Subversion, every .svn folder has whatever is necessary for the containing folder. And since all local paths are stored as relative, you are safe while copying whole or partial trees outside the original checkout tree. They will continue to function in their new homes.
I frequently copy subtrees from my trunk outside, switch the new copies to other branches/tags and do whatever is necessary on the "cloned" local copies. This way, if, for any reason, I need to go back and do something in the trunk, I have an undisturbed trunk copy in the original location.
Copying source-controlled directories into other source-controlled trees, on the other hand, is unsafe. If you will be overwriting any .svn folders, you'll most probably be corrupting your target copies.
A:
I do this occasionally in SVN and I haven't run into any problems. I believe that in SVN all that is stored is the original state of the directory and a pointer to the repository directory it came from.
So basically it works as you would think it should.
If File1 in Copy1 changes and File2 in Copy2 changes both can commit
If File1 in Copy1 changes and File1 in Copy2 changes whoever commits second will have an error and will have to update/merge first.
For those curious as to why I had to copy, I have had problems with checkouts over our network being very slow when first checking out one of our larger projects. By contrast, simply copying from another computer seemed to provide me with all the same benefits.
A:
In svn, it's no problem. You can just work with the copy as if you had made a second checkout.
I'd recommend just checking out a second time, though. If you want a copy without the .svn files, svn export will create one.
A:
For bzr, if you just copy the .bzr directory to another location, it'll work. It doesn't store any information about the path it's in or the host it's on, so you can copy it wherever and expect it to work out OK.
A:
I would suggest not, as you're circumventing the source control mechanism.
But perhaps you can explain 'why'?
A:
You could also just check out two working copies (at least with SVN) to say work/copy1 and work/copy2 and work on the two versions in parallel.
I wonder though what it is you are trying to achieve, since copying may not be the best solution to your problem.
A:
It will depend on the VCS. I know that CVS stores (hidden) directories inside every version-controlled directory. These directories are, of course, then copied with any copy of the parent directory.
It's so frequently the case that you do NOT want to copy those hidden files that the rsync tool comes with an option (-C) to ignore these files the same way CVS does.
A:
I've had a few headaches with SVN when I've reorganized the folder layout from within Visual Studio. A folder moved within a solution will literally move the folder in the filesystem, including the hidden .svn folder. This causes commit problems because the .svn data is associated to the old path and I haven't found a way to reassociate to its new path. SVN clean up runs OK but fixes nothing. SVN switch doesn't allow you to change it after the folder was moved. I've only been able to fix this by deleting all .svn folders within the moved folder and its subfolders, then re-add the folder.
The problem I have with this fix is that you lose your version trail on those files, because SVN sees them as brand new. Also, the repository can't store the file contents as efficiently, since it can no longer just store a diff from the previous version.
Per the SVN documentation, it is recommended to allow the svn client to do all your folder move/create/delete to keep everything in sync for the next commit. This isn't always acceptable from Visual Studio. Fortunately, most problem cases are caught during commit-time, particularly if you use TortoiseSVN.
A:
For SVN, this will generally work as others have already stated.
If you are copying between machines, you probably will run into trouble though. For example, if you are accessing your SVN repo using a file:// repository URL, things will most likely break. The same applies to http:// or svn:// URLs where server access might be different.
To stay safe, I'd just do a checkout at the new location. If you have a lot of uncommitted changes in one that you want to have in the new working directory (generally a bad idea), you could then use rsync to copy your source across without bringing in the .svn directories.
|
Copying a directory that is version controlled
|
I am curious whether it is OK to copy a directory that is under version control and start working on both copies.
I know it can be different from one VCS to another, but I intentionally don't specify any VCS since I am curious about different cases.
I was talking to a coworker recently about doing it in SVN. I think it should be OK, but I am still not 100% sure, since I don't know what exactly SVN is storing in the working copy.
However, if we talk about the DVCS world, things might be even more unclear, since every working copy is a repository by itself. Being faced with doing this in bzr now, I decided to ask the question.
Later edit:
Some people asked why I would want to do that. Here is the whole story:
In the case of SVN it was because, being out of the office, the connection to the SVN server was really slow, so my coworker and I decided to check out the sources only once and make a local copy. That's what we did and it worked OK, but I am still wondering whether it is guaranteed to work, or whether it just happened to.
In the bzr case, I am planning to move the "main" repo to another server. So I was thinking to just copy it there and start considering that the main repo. I guess the safest is to make a clone though.
|
[
"In Subversion, every .svn folder has whatever is necessary for the containing folder. And since all local paths are stored as relative, you are safe while copying whole or partial trees outside the original checkout tree. They will continue to function in their new homes.\nI frequently copy subtrees from my trunk outside, switch the new copies to other branches/tags and do whatever is necessary on the \"cloned\" local copies. This way, if, for any reason, I need to go back and do something in the trunk, I have an undisturbed trunk copy in the original location. \nCopying source-controlled directories into other source-controlled trees, on the other hand, is unsafe. If you will be overwriting any .svn folders, you'll most probably be corrupting your target copies.\n",
"I do this occasionally in SVN and I haven't run into any problems. I believe that in SVN all that is stored is the original state of the directory and a pointer to the repository directory it came from.\nSo basically it works as you would think it should. \n\nIf File1 in Copy1 changes and File2 in Copy2 changes both can commit\nIf File1 in Copy1 changes and File1 in Copy2 changes whoever commits second will have an error and will have to update/merge first.\n\nFor those curious as to why I had to copy, I have had problems with checkouts over our network being very slow when first checking out one of our larger projects. By contrast, simply copying from another computer seemed to provide me with all the same benefits.\n",
"In svn, it's no problem. You can just working with the copy as if you had made a second checkout. \nI'd recommend just checking out a second time, though. If you want a copy without the .svn files, svn export will create one.\n",
"For bzr, if you just copy the .bzr directory to another location, it'll work. It doesn't store any information about the path it's in or the host it's on, so you can copy it wherever and expect it to work out OK. \n",
"I would suggest not, as you're circumventing the source control mechanism.\nBut perhaps you can explain 'why'?\n",
"You could also just check out two working copies (at least with SVN) to say work/copy1 and work/copy2 and work on the two versions in parallel.\nI wonder though what it is you are trying to achieve, since copying may not be the best solution to your problem.\n",
"It will depend on the VCS. I know in CVS that it stores (hidden) directories inside every version-controlled directory. These files are, of course, then copied with any copy of that directory.\nIt's so frequently the case that you do NOT want to copy those hidden files that the rsync tool comes with an option (-C) to ignore these files the same way CVS does. \n",
"I've had a few headaches with SVN when I've reorganized the folder layout from within Visual Studio. A folder moved within a solution will literally move the folder in the filesystem, including the hidden .svn folder. This causes commit problems because the .svn data is associated to the old path and I haven't found a way to reassociate to its new path. SVN clean up runs OK but fixes nothing. SVN switch doesn't allow you to change it after the folder was moved. I've only been able to fix this by deleting all .svn folders within the moved folder and its subfolders, then re-add the folder.\nThe problem I have with this fix is that you lose your version trail on those files because SVN sees it as brand new. Also, it doesn't store the file contents as efficiently by storing the diff from the previous version.\nPer the SVN documentation, it is recommended to allow the svn client to do all your folder move/create/delete to keep everything in sync for the next commit. This isn't always acceptable from Visual Studio. Fortunately, most problem cases are caught during commit-time, particularly if you use TortoiseSVN.\n",
"For SVN, this will generally work as others have already stated.\nIf you are copying between machines, you probably will run into trouble though. For example, if you are accessing your SVN repo using file:// repository URL, things will most likely break. Same applies to http:// or svn:// URLs where server access might be different.\nTo stay safe, I'd just to a checkout at the new location. If you have a lot of uncomitted changes in one that you want to have in the new working directory (generally a bad idea), you could then use rsync to copy your source across without bringing in the .svn directories.\n"
] |
[
6,
3,
3,
2,
0,
0,
0,
0,
0
] |
[
"Seems to me like GIT might also serve your needs, as you mention being disconnected or over a crappy connection. GIT also has very nice SVN support so the two are complementary and you'll end up with a nice versioned file system.\n"
] |
[
-2
] |
[
"bazaar",
"dvcs",
"svn",
"version_control"
] |
stackoverflow_0000056657_bazaar_dvcs_svn_version_control.txt
|
Q:
Calling function when program exits in java
I would like to save the program's settings every time the user exits the program. So I need a way to call a function when the user quits the program. How do I do that?
I am using Java 1.5.
A:
You can add a shutdown hook to your application by doing the following:
Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
public void run() {
// what you want to do
}
}));
This is basically equivalent to having a try {} finally {} block around your entire program, and basically encompasses what's in the finally block.
Please note the caveats though!
A:
Adding a shutdown hook addShutdownHook(java.lang.Thread) is probably what you look for. There are problems with that approach, though:
you will lose the changes if the program aborts in an uncontrolled way (i.e. if it is killed)
you will lose the changes if there are errors (permission denied, disk full, network errors)
So it might be better to save settings immediately (possibly in an extra thread, to avoid waiting times).
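A minimal sketch of that immediate-save idea, assuming the settings live in a java.util.Properties field named settings (the field and file name are assumptions, not from the question):
// Persist the settings on a background thread as soon as they change,
// rather than waiting for shutdown.
new Thread(new Runnable() {
    public void run() {
        try {
            java.io.FileOutputStream out = new java.io.FileOutputStream("settings.properties");
            settings.store(out, "saved on change"); // 'settings' assumed to be a java.util.Properties field
            out.close();
        } catch (java.io.IOException e) {
            e.printStackTrace();
        }
    }
}).start();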
A:
Are you creating a stand alone GUI app (i.e. Swing)?
If so, you should consider how you are providing options for your users to exit the application.
Namely, if there is going to be a File menu, I would expect that there will be an "Exit" menu item.
Also, if the user closes the last window in the app, I would also expect it to exit the application.
In both cases, it should call code that handles saving the user's preferences.
A:
Using Runtime.getRuntime().addShutdownHook() is certainly a way to do this - but if you are writing Swing applications, I strongly recommend that you take a look at JSR 296 (Swing Application Framework)
Here's a good article on the basics: http://java.sun.com/developer/technicalArticles/javase/swingappfr/.
The JSR reference implementation provides the kind of features that you are looking for at a higher level of abstraction than adding shutdown hooks.
Here is the reference implementation: https://appframework.dev.java.net/
|
Calling function when program exits in java
|
I would like to save the program's settings every time the user exits the program. So I need a way to call a function when the user quits the program. How do I do that?
I am using Java 1.5.
|
[
"You can add a shutdown hook to your application by doing the following:\nRuntime.getRuntime().addShutdownHook(new Thread(new Runnable() {\n public void run() {\n // what you want to do\n }\n}));\n\nThis is basically equivalent to having a try {} finally {} block around your entire program, and basically encompasses what's in the finally block.\nPlease note the caveats though!\n",
"Adding a shutdown hook addShutdownHook(java.lang.Thread) is probably what you look for. There are problems with that approach, though:\n\nyou will lose the changes if the program aborts in an uncontrolled way (i.e. if it is killed)\nyou will lose the changes if there are errors (permission denied, disk full, network errors)\n\nSo it might be better to save settings immediately (possibly in an extra thread, to avoid waiting times).\n",
"Are you creating a stand alone GUI app (i.e. Swing)?\nIf so, you should consider how you are providing options to your users how to exit the application.\nNamely, if there is going to be a File menu, I would expect that there will be an \"Exit\" menu item.\nAlso, if the user closes the last window in the app, I would also expect it to exit the application.\nIn both cases, it should call code that handles saving the user's preferences.\n",
"Using Runtime.getRuntime().addShutdownHook() is certainly a way to do this - but if you are writing Swing applications, I strongly recommend that you take a look at JSR 296 (Swing Application Framework)\nHere's a good article on the basics: http://java.sun.com/developer/technicalArticles/javase/swingappfr/.\nThe JSR reference implementation provides the kind of features that you are looking for at a higher level of abstraction than adding shutdown hooks.\nHere is the reference implementation: https://appframework.dev.java.net/\n"
] |
[
44,
11,
0,
0
] |
[] |
[] |
[
"events",
"java"
] |
stackoverflow_0000063687_events_java.txt
|
Q:
Namespaces in C#
I am using an ASP.NET MVC project and every time I add a class to a folder it makes really long namespaces.
Example:
Project = Tully.Saps.Data
Folder = DataAccess/Interfaces
Namespace = Tully.Saps.Data.DataAccess.Interfaces
Folder = DataAccess/MbNetRepositories
Namespace = Tully.Saps.Data.DataAccess.MbNetRepositories
Question:
Is it best to leave the namespace alone and add the using clause to the classes that access it or change the namespace to Tully.Saps.Data for everything in this project?
A:
Leave them alone and add the usings. You're asking for trouble manually changing things like that (harder to debug, inconsistent with other projects, et cetera).
A:
It is really up to you how you want to deal with it. If you are only going to be accessing a member of a namespace once or twice, then adding the "using" statement really doesn't do much for you.
If you are going to use it multiple times then reducing the namespace chain is probably going to make things easier to read.
You could always change the namespace so it doesn't add the new folder name if you are just looking to logically group files together, without creating a new namespace.
A:
According to FXCop, and I agree:
Avoid namespaces with few types
A namespace should generally have more than five types.
also (and this applies to the "single namespace" suggestion -- which is almost the same to say as no namespace)
Declare types in namespaces
A type should be defined inside a namespace to avoid duplication.
A:
Namespaces
Namespaces help us to define the "scope" of a set of entities in our object model or our application. This makes them a software design decision, not a folder structure decision. For example, in an MVC application it would make good sense to have Model/View/Controller folders and related namespaces. So, while it is possible, in some cases, that the folder structure will match the namespace pattern we decide to use in our development, it is not required and may not be what we desire. Each namespace should be a case-by-case decision.
using statements
To define using statements for a namespace is a separate decision based on how often the objects in that namespace will be referred to in code, and should not in any way affect our namespace creation practice.
A:
Leave it. It's one great example of how your IDE is dictating your coding style.
A:
Just because the tool (Visual Studio) you are using has decided that each folder needs a new Namespace doesn't mean you do.
I personally tend to leave my "Data" projects as a single Namespace. If I have a subfolder called "Model" I don't want those files in the Something.Data.Model Namespace, I want them in Something.Data.
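Keeping the parent namespace just means declaring it explicitly when you add a file to the subfolder; a small sketch (the file and class names are made up):
// DataAccess/Model/CustomerRepository.cs -- the file lives in the Model subfolder,
// but the namespace is declared as the parent project namespace on purpose.
namespace Something.Data
{
    public class CustomerRepository
    {
    }
}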
|
Namespaces in C#
|
I am using an ASP.NET MVC project and every time I add a class to a folder it makes really long namespaces.
Example:
Project = Tully.Saps.Data
Folder = DataAccess/Interfaces
Namespace = Tully.Saps.Data.DataAccess.Interfaces
Folder = DataAccess/MbNetRepositories
Namespace = Tully.Saps.Data.DataAccess.MbNetRepositories
Question:
Is it best to leave the namespace alone and add the using clause to the classes that access it or change the namespace to Tully.Saps.Data for everything in this project?
|
[
"Leave them alone and add the usings. You're asking for trouble manually changing things like that (harder to debug, inconsistent with other projects, et cetera).\n",
"It is really up to you how you want to deal with it. If you are only going to be accessing a member of a namespace once or twice, then adding the \"using\" statement really doesn't do much for you. \nIf you are going to use it multiple times then reducing the namespace chain is probably going to make things easier to read.\nYou could always change the namespace so it doesn't add the new folder name if you are just looking to logically group files together, without creating a new namespace.\n",
"According to FXCop, and I agree:\n\nAvoid namespaces with few types\nA namespace should generally have more than five types.\n\nalso (and this applies to the \"single namespace\" suggestion -- which is almost the same to say as no namespace)\n\nDeclare types in namespaces\nA type should be defined inside a namespace to avoid duplication.\n\n",
"\nNamespaces \n\n.Namespaces help us to define the \"scope\" of a set of entities in our object model or our application. This makes them a software design decision not a folder structure decision. For example, in an MVC application it would make good sense to have Model/View/Controller folders and related namespaces. So, while it is possible, in some cases, that the folder structure will match the namespace pattern we decide to use in our development, it is not required and may not be what we desire. Each namespace should be a case-by-case decision\n\nusing statements\n\nTo define using statements for a namespace is a seperate decision based on how often the object in that namespace will be referred to in code and should not in any way affect our namespace creation practice.\n",
"Leave it. It's one great example of how your IDE is dictating your coding style. \n",
"Just because the tool (Visual Studio) you are using has decided that each folder needs a new Namespace doesn't mean you do.\nI personally tend to leave my \"Data\" projects as a single Namespace. If I have a subfolder called \"Model\" I don't want those files in the Something.Data.Model Namespace, I want them in Something.Data.\n"
] |
[
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c#"
] |
stackoverflow_0000066635_c#.txt
|
Q:
Best Refactor to Handle Multiple jQuery Email Field Form Validation
What is the best way to refactor the attached code to accommodate multiple email addresses?
The attached HTML/jQuery is complete and works for the first email address. I can setup the other two by copy/pasting and changing the code. But I would like to just refactor the existing code to handle multiple email address fields.
<html>
<head>
<script src="includes/jquery/jquery-1.2.6.min.js" type="text/javascript"></script>
<script language="javascript">
$(document).ready(function() {
var validateUsername = $('#Email_Address_Status_Icon_1');
$('#Email_Address_1').keyup(function() {
var t = this;
if (this.value != this.lastValue) {
if (this.timer) clearTimeout(this.timer);
validateUsername.removeClass('error').html('Validating Email');
this.timer = setTimeout(function() {
if (IsEmail(t.value)) {
validateUsername.html('Valid Email');
} else {
validateUsername.html('Not a valid Email');
};
}, 200);
this.lastValue = this.value;
}
});
});
function IsEmail(email) {
var regex = /^([a-zA-Z0-9_\.\-\+])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/;
if (regex.test(email)) return true;
else return false;
}
</script>
</head>
<body>
<div>
<label for="Email_Address_1">Friend #1</label></div>
<input type="text" ID="Email_Address_1">
<span id="Email_Address_Status_Icon_1"></span>
</div>
<div>
<label for="Email_Address_2">Friend #2</label></div>
<input type="text" id="Email_Address_2">
<span id="Email_Address_Status_Icon_2"></span>
</div>
<div>
<label for="Email_Address_3">Friend #3</label></div>
<input type="text" id="Email_Address_3">
<span id="Email_Address_Status_Icon_3"></span>
</div>
</form>
</body>
</html>
A:
Instead of using IDs for your email fields, you can give them each a class:
<div>
<label for="Email_Address_1">Friend #1</label></div>
<input type="text" class="email">
<span></span>
</div>
<div>
<label for="Email_Address_2">Friend #2</label></div>
<input type="text" class="email">
<span></span>
</div>
<div>
<label for="Email_Address_3">Friend #3</label></div>
<input type="text" class="email">
<span></span>
</div>
Then, instead of selecting $("#Email_Address_Status_Icon_1"), you can select $("input.email"), which would give you a jQuery wrapped set of all input elements of class email.
Finally, instead of referring to the status icon explicitly with an id, you could simply say:
$(this).next("span").removeClass('error').html('Validating Email');
'this' would be the email field, so 'this.next()' would give you its next sibling. We apply the "span" selector on top of that just to be sure we're getting what we intend to. $(this).next() would work the same way.
This way, you are referring to the status icon in a relative manner.
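Putting the two ideas together, a small sketch of what the class-based keyup handler could look like:
$('input.email').keyup(function() {
    // 'this' is the input; its next sibling span holds the status text
    $(this).next('span').removeClass('error').html('Validating Email');
});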
Hope this helps!
A:
Thanks! Here is the completed refactor with your suggested changes.
<script language="javascript">
$(document).ready(function() {
$('#Email_Address_1').keyup(function(){Update_Email_Validate_Status(this)});
$('#Email_Address_2').keyup(function() { Update_Email_Validate_Status(this)});
$('#Email_Address_3').keyup(function() { Update_Email_Validate_Status(this)});
});
function Update_Email_Validate_Status(field) {
var t = field;
if (t.value != t.lastValue) {
if (t.timer) clearTimeout(t.timer);
$(t).next("span").removeClass('error').html('Validating Email');
t.timer = setTimeout(function() {
if (IsEmail(t.value)) {
$(t).next("span").removeClass('error').html('Valid Email');
} else {
$(t).next("span").removeClass('error').html('Not a valid Email');
};
}, 200);
t.lastValue = t.value;
}
}
function IsEmail(email) {
var regex = /^([a-zA-Z0-9_\.\-\+])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/;
if (regex.test(email)) return true;
else return false;
}
</script>
A:
I would do :
$(document).ready(function() {
$('.validateEmail').keyup(function(){Update_Email_Validate_Status(this)});
});
Then add class='validateEmail' to all your email inputs.
Alternatively look into Form Validation Plugin i have used this a lot and it is very flexible and nice to use. Saves you re-inventing...
|
Best Refactor to Handle Multiple jQuery Email Field Form Validation
|
What is the best way to refactor the attached code to accommodate multiple email addresses?
The attached HTML/jQuery is complete and works for the first email address. I can setup the other two by copy/pasting and changing the code. But I would like to just refactor the existing code to handle multiple email address fields.
<html>
<head>
<script src="includes/jquery/jquery-1.2.6.min.js" type="text/javascript"></script>
<script language="javascript">
$(document).ready(function() {
var validateUsername = $('#Email_Address_Status_Icon_1');
$('#Email_Address_1').keyup(function() {
var t = this;
if (this.value != this.lastValue) {
if (this.timer) clearTimeout(this.timer);
validateUsername.removeClass('error').html('Validating Email');
this.timer = setTimeout(function() {
if (IsEmail(t.value)) {
validateUsername.html('Valid Email');
} else {
validateUsername.html('Not a valid Email');
};
}, 200);
this.lastValue = this.value;
}
});
});
function IsEmail(email) {
var regex = /^([a-zA-Z0-9_\.\-\+])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/;
if (regex.test(email)) return true;
else return false;
}
</script>
</head>
<body>
<div>
<label for="Email_Address_1">Friend #1</label></div>
<input type="text" ID="Email_Address_1">
<span id="Email_Address_Status_Icon_1"></span>
</div>
<div>
<label for="Email_Address_2">Friend #2</label></div>
<input type="text" id="Email_Address_2">
<span id="Email_Address_Status_Icon_2"></span>
</div>
<div>
<label for="Email_Address_3">Friend #3</label></div>
<input type="text" id="Email_Address_3">
<span id="Email_Address_Status_Icon_3"></span>
</div>
</form>
</body>
</html>
|
[
"Instead of using IDs for your email fields, you can give them each a class:\n<div>\n <label for=\"Email_Address_1\">Friend #1</label></div>\n <input type=\"text\" class=\"email\">\n <span></span>\n</div>\n<div>\n <label for=\"Email_Address_2\">Friend #2</label></div>\n <input type=\"text\" class=\"email\">\n <span></span>\n</div>\n<div>\n <label for=\"Email_Address_3\">Friend #3</label></div>\n <input type=\"text\" class=\"email\">\n <span></span>\n</div>\n\nThen, instead of selecting $(\"#Email_Address_Status_Icon_1\"), you can select $(\"input.email\"), which would give you a jQuery wrapped set of all input elements of class email.\nFinally, instead of referring to the status icon explicitly with an id, you could simply say:\n$(this).next(\"span\").removeClass('error').html('Validating Email');\n\n'this' would be the email field, so 'this.next()' would give you its next sibling. We apply the \"span\" selector on top of that just to be sure we're getting what we intend to. $(this).next() would work the same way.\nThis way, you are referring to the status icon in a relative manner.\nHope this helps!\n",
"Thanks! Here is the completed refactor with your suggested changes.\n<script language=\"javascript\">\n $(document).ready(function() {\n $('#Email_Address_1').keyup(function(){Update_Email_Validate_Status(this)});\n $('#Email_Address_2').keyup(function() { Update_Email_Validate_Status(this)});\n $('#Email_Address_3').keyup(function() { Update_Email_Validate_Status(this)}); \n });\n\n function Update_Email_Validate_Status(field) {\n var t = field;\n if (t.value != t.lastValue) {\n if (t.timer) clearTimeout(t.timer);\n $(t).next(\"span\").removeClass('error').html('Validating Email');\n\n t.timer = setTimeout(function() {\n if (IsEmail(t.value)) {\n $(t).next(\"span\").removeClass('error').html('Valid Email');\n } else {\n $(t).next(\"span\").removeClass('error').html('Not a valid Email');\n };\n }, 200);\n\n t.lastValue = t.value;\n }\n }\n\n function IsEmail(email) {\n var regex = /^([a-zA-Z0-9_\\.\\-\\+])+\\@(([a-zA-Z0-9\\-])+\\.)+([a-zA-Z0-9]{2,4})+$/;\n if (regex.test(email)) return true;\n else return false;\n } \n </script>\n\n",
"I would do :\n$(document).ready(function() {\n $('.validateEmail').keyup(function(){Update_Email_Validate_Status(this)}); \n });\n\nThen add class='validateEmail' to all your email inputs.\nAlternatively look into Form Validation Plugin i have used this a lot and it is very flexible and nice to use. Saves you re-inventing...\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"email",
"jquery",
"refactoring",
"validation"
] |
stackoverflow_0000069107_email_jquery_refactoring_validation.txt
|
Q:
How can you insure your code runs with no variability in execution time due to cache?
In an embedded application (written in C, on a 32-bit processor) with hard real-time constraints, the execution time of critical code (especially interrupts) needs to be constant.
How do you insure that time variability is not introduced in the execution of the code, specifically due to the processor's caches (be it L1, L2 or L3)?
Note that we are concerned with cache behavior due to the huge effect it has on execution speed (sometimes more than 100:1 vs. accessing RAM). Variability introduced due to specific processor architecture are nowhere near the magnitude of cache.
A:
Two possibilities:
Disable the cache entirely. The application will run slower, but without any variability.
Pre-load the code in the cache and "lock it in". Most processors provide a mechanism to do this.
A:
It seems that you are referring to the x86 processor family, which is not built with real-time systems in mind, so there is no real guarantee of constant-time execution (the CPU may reorder micro-instructions, then there is branch prediction and the instruction prefetch queue, which is flushed each time the CPU wrongly predicts a conditional jump...)
A:
If you can get your hands on the hardware, or work with someone who can, you can turn off the cache. Some CPUs have a pin that, if wired to ground instead of power (or maybe the other way), will disable all internal caches. That will give predictability but not speed!
Failing that, maybe in certain places in the software code could be written to deliberately fill the cache with junk, so whatever happens next can be guaranteed to be a cache miss. Done right, that can give predictability, and perhaps could be done only in certain places so speed may be better than totally disabling caches.
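A minimal sketch of the "fill the cache with junk" idea, assuming a 256 KiB cache and that streaming through a buffer of that size is enough to evict it (both assumptions depend entirely on the actual processor):
#define CACHE_FLUSH_SIZE (256 * 1024)
static volatile unsigned char junk[CACHE_FLUSH_SIZE];

static void flush_cache_with_junk(void)
{
    unsigned int i;
    for (i = 0; i < CACHE_FLUSH_SIZE; i++) {
        junk[i]++;   /* touch every byte so its cache line gets loaded */
    }
}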
Finally, if speed does matter - carefully design the software and data as if in the old day of programming for an ancient 8-bit CPU - keep it small enough for it all to fit in L1 cache. I'm always amazed at how on-board caches these days are bigger than all of RAM on a minicomputer back in (mumble-decade). But this will be hard work and takes cleverness. Good luck!
A:
This answer will sound snide, but it is intended to make you think:
Only run the code once.
The reason I say that is because so much will make it variable and you might not even have control over it. And what is your definition of time? Suppose the operating system decides to put your process in the wait queue.
Next you have unpredictability due to cache performance, memory latency, disk I/O, and so on. These all boil down to one thing; sometimes it takes time to get the information into the processor where your code can use it. Including the time it takes to fetch/decode your code itself.
Also, how much variance is acceptable to you? It could be that you're okay with 40 milliseconds, or you're okay with 10 nanoseconds.
Depending on the application domain you can even further just mask over or hide the variance. Computer graphics people have been rendering to off screen buffers for years to hide variance in the time to rendering each frame.
The traditional solutions just remove as many known variable rate things as possible. Load files into RAM, warm up the cache and avoid IO.
A:
Make all the function calls in the critical code 'inline', and minimize the number of variables you have so that you can give them the 'register' type.
This should improve the running time of your program. (You probably have to compile it in a special way, since compilers these days tend to disregard your 'register' hints.)
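A tiny sketch of that suggestion; whether 'inline' and 'register' change the generated code at all depends entirely on the compiler and its optimization settings:
/* Small, branch-free helper that is a good candidate for inlining. */
static inline unsigned int scale(register unsigned int x)
{
    return x << 2;
}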
I'm assuming that you have enough memory not to cause page faults when you try to load something from memory. The page faults can take a lot of time.
You could also take a look at the generated assembly code, to see if there are lots of branches and memory instructions that could affect your running time.
If an interrupt happens in your code execution it WILL take longer time. Do you have interrupts/exceptions enabled?
|
How can you insure your code runs with no variability in execution time due to cache?
|
In an embedded application (written in C, on a 32-bit processor) with hard real-time constraints, the execution time of critical code (especially interrupts) needs to be constant.
How do you insure that time variability is not introduced in the execution of the code, specifically due to the processor's caches (be it L1, L2 or L3)?
Note that we are concerned with cache behavior due to the huge effect it has on execution speed (sometimes more than 100:1 vs. accessing RAM). Variability introduced due to specific processor architecture are nowhere near the magnitude of cache.
|
[
"Two possibilities:\nDisable the cache entirely. The application will run slower, but without any variability.\nPre-load the code in the cache and \"lock it in\". Most processors provide a mechanism to do this.\n",
"It seems that you are referring to x86 processor family that is not built with real-time systems in mind, so there is no real guarantee for constant time execution (CPU may reorder micro-instructions, than there is branch prediction and instruction prefetch queue which is flushed each time when CPU wrongly predicts conditional jumps...) \n",
"If you can get your hands on the hardware, or work with someone who can, you can turn off the cache. Some CPUs have a pin that, if wired to ground instead of power (or maybe the other way), will disable all internal caches. That will give predictability but not speed!\nFailing that, maybe in certain places in the software code could be written to deliberately fill the cache with junk, so whatever happens next can be guaranteed to be a cache miss. Done right, that can give predictability, and perhaps could be done only in certain places so speed may be better than totally disabling caches.\nFinally, if speed does matter - carefully design the software and data as if in the old day of programming for an ancient 8-bit CPU - keep it small enough for it all to fit in L1 cache. I'm always amazed at how on-board caches these days are bigger than all of RAM on a minicomputer back in (mumble-decade). But this will be hard work and takes cleverness. Good luck!\n",
"This answer will sound snide, but it is intended to make you think:\n\nOnly run the code once.\n\nThe reason I say that is because so much will make it variable and you might not even have control over it. And what is your definition of time? Suppose the operating system decides to put your process in the wait queue.\nNext you have unpredictability due to cache performance, memory latency, disk I/O, and so on. These all boil down to one thing; sometimes it takes time to get the information into the processor where your code can use it. Including the time it takes to fetch/decode your code itself. \nAlso, how much variance is acceptable to you? It could be that you're okay with 40 milliseconds, or you're okay with 10 nanoseconds.\nDepending on the application domain you can even further just mask over or hide the variance. Computer graphics people have been rendering to off screen buffers for years to hide variance in the time to rendering each frame.\nThe traditional solutions just remove as many known variable rate things as possible. Load files into RAM, warm up the cache and avoid IO.\n",
"If you make all the function calls in the critical code 'inline', and minimize the number of variables you have, so that you can let them have the 'register' type. \nThis should improve the running time of your program. (You probably have to compile it in a special way since compilers these days tend to disregard your 'register' tags)\nI'm assuming that you have enough memory not to cause page faults when you try to load something from memory. The page faults can take a lot of time.\nYou could also take a look at the generated assembly code, to see if there are lots of branches and memory instuctions that could change your running code. \nIf an interrupt happens in your code execution it WILL take longer time. Do you have interrupts/exceptions enabled?\n"
] |
[
2,
2,
2,
0,
0
] |
[
"Understand your worst case runtime for complex operations and use timers.\n"
] |
[
-1
] |
[
"caching",
"processor",
"profiling",
"time"
] |
stackoverflow_0000069049_caching_processor_profiling_time.txt
|
Q:
How do I set ItemTemplate dynamically in WPF?
Using WPF, I have a TreeView control that I want to set its ItemTemplate dynamically through procedural code. How do I do this? I assume I need to find the resource somewhere.
myTreeViewControl.ItemTemplate = ??
A:
If the template is defined in your <Window.Resources> section directly:
myTreeViewControl.ItemTemplate = this.Resources["SomeTemplate"] as DataTemplate;
If it's somewhere deep within your window, like in a <Grid.Resources> section or something, I think this'll work:
myTreeViewControl.ItemTemplate = this.FindResource("SomeTemplate") as DataTemplate;
And if it's elsewhere in your application, I think App.FindResource("SomeTemplate") will work.
A:
If your treeview control requires different templates for your items, you should implement a DataTemplateSelector class and set an instance of it on your tree view. As far as I remember, there is an ItemTemplateSelector property for exactly this.
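A minimal sketch of such a selector; the NodeTemplateSelector name, the two template properties, and the string check are made up for illustration:
using System.Windows;
using System.Windows.Controls;

public class NodeTemplateSelector : DataTemplateSelector
{
    // Hypothetical template properties; set them from code or XAML.
    public DataTemplate LeafTemplate { get; set; }
    public DataTemplate BranchTemplate { get; set; }

    public override DataTemplate SelectTemplate(object item, DependencyObject container)
    {
        // Purely illustrative rule: strings get the leaf template, everything else the branch one.
        return item is string ? LeafTemplate : BranchTemplate;
    }
}
You would then assign an instance to myTreeViewControl.ItemTemplateSelector instead of setting ItemTemplate directly.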
|
How do I set ItemTemplate dynamically in WPF?
|
Using WPF, I have a TreeView control that I want to set its ItemTemplate dynamically through procedural code. How do I do this? I assume I need to find the resource somewhere.
myTreeViewControl.ItemTemplate = ??
|
[
"If the template is defined in your <Window.Resources> section directly:\nmyTreeViewControl.ItemTemplate = this.Resources[\"SomeTemplate\"] as DataTemplate;\n\nIf it's somewhere deep within your window, like in a <Grid.Resources> section or something, I think this'll work:\nmyTreeViewControl.ItemTemplate = this.FindResource(\"SomeTemplate\") as DataTemplate;\n\nAnd if it's elsewhere in your application, I think App.FindResource(\"SomeTemplate\") will work.\n",
"if your treeview control requires different templates for your items, you should implement DataTemplateSelector class and set it's instance to your tree view. as far as i remember there is a property of DataTemplateSelector.\n"
] |
[
12,
2
] |
[] |
[] |
[
"itemtemplate",
"wpf"
] |
stackoverflow_0000031249_itemtemplate_wpf.txt
|
Q:
How to write a Rails mixin that spans across model, controller, and view
In an effort to reduce code duplication in my little Rails app, I've been working on getting common code between my models into its own separate module, so far so good.
The model stuff is fairly easy, I just have to include the module at the beginning, e.g.:
class Iso < Sale
include Shared::TracksSerialNumberExtension
include Shared::OrderLines
extend Shared::Filtered
include Sendable::Model
validates_presence_of :customer
validates_associated :lines
owned_by :customer
def initialize( params = nil )
super
self.created_at ||= Time.now.to_date
end
def after_initialize
end
order_lines :despatched
# tracks_serial_numbers :items
sendable :customer
def created_at=( date )
write_attribute( :created_at, Chronic.parse( date ) )
end
end
This is working fine, now however, I'm going to have some controller and view code that's going to be common between these models as well, so far I have this for my sendable stuff:
# This is a module that is used for pages/forms that are can be "sent"
# either via fax, email, or printed.
module Sendable
module Model
def self.included( klass )
klass.extend ClassMethods
end
module ClassMethods
def sendable( class_to_send_to )
attr_accessor :fax_number,
:email_address,
:to_be_faxed,
:to_be_emailed,
:to_be_printed
@_class_sending_to ||= class_to_send_to
include InstanceMethods
end
def class_sending_to
@_class_sending_to
end
end # ClassMethods
module InstanceMethods
def after_initialize( )
super
self.to_be_faxed = false
self.to_be_emailed = false
self.to_be_printed = false
target_class = self.send( self.class.class_sending_to )
if !target_class.nil?
self.fax_number = target_class.send( :fax_number )
self.email_address = target_class.send( :email_address )
end
end
end
end # Module Model
end # Module Sendable
Basically I'm planning on just doing an include Sendable::Controller, and Sendable::View (or the equivalent) for the controller and the view, but is there a cleaner way to do this? I'm after a neat way to have a bunch of common code between my model, controller, and view.
Edit: Just to clarify, this just has to be shared across 2 or 3 models.
A:
If that code needs to get added to all models and all controllers, you could always do the following:
# maybe put this in environment.rb or in your module declaration
class ActiveRecord::Base
include Iso
end
# application.rb
class ApplicationController
include Iso
end
If you needed functions from this module available to the views, you could expose them individually with helper_method declarations in application.rb.
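A small sketch of the helper_method part, building on the snippet above (the exposed method name is made up):
# application.rb
class ApplicationController < ActionController::Base
  include Iso
  helper_method :some_shared_method  # hypothetical method defined in Iso, now callable from views
end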
A:
You could pluginize it (use script/generate plugin).
Then in your init.rb just do something like:
ActiveRecord::Base.send(:include, PluginName::Sendable)
ActionController::Base.send(:include, PluginName::SendableController)
And along with your self.included that should work just fine.
Check out some of the acts_* plugins, it's a pretty common pattern (http://github.com/technoweenie/acts_as_paranoid/tree/master/init.rb, check line 30)
A:
If you do go the plugin route, do check out Rails-Engines, which are intended to extend plugin semantics to Controllers and Views in a clear way.
|
How to write a Rails mixin that spans across model, controller, and view
|
In an effort to reduce code duplication in my little Rails app, I've been working on getting common code between my models into its own separate module, so far so good.
The model stuff is fairly easy, I just have to include the module at the beginning, e.g.:
class Iso < Sale
include Shared::TracksSerialNumberExtension
include Shared::OrderLines
extend Shared::Filtered
include Sendable::Model
validates_presence_of :customer
validates_associated :lines
owned_by :customer
def initialize( params = nil )
super
self.created_at ||= Time.now.to_date
end
def after_initialize
end
order_lines :despatched
# tracks_serial_numbers :items
sendable :customer
def created_at=( date )
write_attribute( :created_at, Chronic.parse( date ) )
end
end
This is working fine, now however, I'm going to have some controller and view code that's going to be common between these models as well, so far I have this for my sendable stuff:
# This is a module that is used for pages/forms that are can be "sent"
# either via fax, email, or printed.
module Sendable
module Model
def self.included( klass )
klass.extend ClassMethods
end
module ClassMethods
def sendable( class_to_send_to )
attr_accessor :fax_number,
:email_address,
:to_be_faxed,
:to_be_emailed,
:to_be_printed
@_class_sending_to ||= class_to_send_to
include InstanceMethods
end
def class_sending_to
@_class_sending_to
end
end # ClassMethods
module InstanceMethods
def after_initialize( )
super
self.to_be_faxed = false
self.to_be_emailed = false
self.to_be_printed = false
target_class = self.send( self.class.class_sending_to )
if !target_class.nil?
self.fax_number = target_class.send( :fax_number )
self.email_address = target_class.send( :email_address )
end
end
end
end # Module Model
end # Module Sendable
Basically I'm planning on just doing an include Sendable::Controller, and Sendable::View (or the equivalent) for the controller and the view, but is there a cleaner way to do this? I'm after a neat way to have a bunch of common code between my model, controller, and view.
Edit: Just to clarify, this just has to be shared across 2 or 3 models.
|
[
"If that code needs to get added to all models and all controllers, you could always do the following:\n# maybe put this in environment.rb or in your module declaration\nclass ActiveRecord::Base\n include Iso\nend\n\n# application.rb\nclass ApplicationController\n include Iso\nend\n\nIf you needed functions from this module available to the views, you could expose them individually with helper_method declarations in application.rb.\n",
"You could pluginize it (use script/generate plugin).\nThen in your init.rb just do something like:\nActiveRecord::Base.send(:include, PluginName::Sendable)\nActionController::Base.send(:include, PluginName::SendableController)\n\nAnd along with your self.included that should work just fine.\nCheck out some of the acts_* plugins, it's a pretty common pattern (http://github.com/technoweenie/acts_as_paranoid/tree/master/init.rb, check line 30)\n",
"If you do go the plugin route, do check out Rails-Engines, which are intended to extend plugin semantics to Controllers and Views in a clear way. \n"
] |
[
7,
7,
1
] |
[] |
[] |
[
"dry",
"refactoring",
"ruby",
"ruby_on_rails"
] |
stackoverflow_0000068391_dry_refactoring_ruby_ruby_on_rails.txt
|
Q:
Learn Silverlight or WPF first?
It seems that Silverlight/WPF are the long term future for user interface development with .NET. This is great because I can see the advantage of reusing XAML skills on both the client and web development sides. But looking at WPF/XAML/Silverlight they seem very large technologies and so where is the best place to get started?
I would like to hear from anyone who has good knowledge of both and can recommend which is a better starting point and why.
A:
Should you learn ASP.NET or Winforms first? ASP or MFC? HTML or VB? C# or VB?
Set aside the idea that there is a logical progression through what has become a highly complex interwoven set of technologies, and take a step back and ask yourself a series of questions:
What are your goals; how do you want to balance profit against enjoyment
Are you short term oriented or in for the long haul
Are you the type of person who likes to get good at something and do it a lot or do you get bored once you fully understand it?
The next and hardest step is to come to accept that any advice you are given is bound to be wrong; and the longer the time horizon the more likely it is to be incorrect. If the advice is for more than six to 12 months, the probability the advice is wildly incorrect approaches 1.
I can only tell you my story, quickly. In 2000 I was happy as a consultant working profitably in C++ on Windows applications, writing about ASP.NET and WinForms. then I saw C# and the world turned upside down. I never went back.
Two years ago I had the same kind of revelation, only an order of magnitude bigger, stronger and with more conviction about Silverlight. Yes, WPF is magnificent, and it may be that I'm all wet about this, but I believe in my gut that Silverlight changes everything. There was no doubt then and there is no doubt today that Silverlight is the most important development platform for Microsoft since .NET (certainly) and possibly since the switch to C++.
In a nutshell, here is why. I don't understand where its limitations are. With most platforms I do: you can do this, but you can't do that. WPF is a pretty good case in point, as was ASP.Net and WinForms and, well really everything until now.
With Silverlight, I don't see the boundaries yet. Silverlight has already leaped off the desktop onto phones, and I don't see any reason for it to stop there. Yes, it is true, it is bound by the browser, but I see that less as a jail cell than as a tank in which Silverlight will be riding over lots of terrain (it must be very late, I should go to bed).
In any case, for now, learning Silverlight is a gas, there is a lot of material on the Silverlight.net site, and what is the very best thing about learning Silverlight is that if you don't see what you need you can holler at me and I'll make sure you get it pretty quickly.
Enjoy, good luck and the dirty little secret is you'll be fine whichever you choose. It's all just software.
-jesse
Jesse Liberty
"Silverlight Geek"
A:
I'd say go with Silverlight first!
I have programmed with WPF and Silverlight before.
But as Silverlight is a subset of WPF, if you go in too deep and then try to switch to writing Silverlight applications, you'll be scratching your head looking for that "tag" you learned to love in WPF but that is not available in Silverlight.
When you master the basic things in Silverlight first, the extra mechanism/trigger/whatever features in WPF will simply add to most of what you've already known.
Silverlight and WPF differ at the feature level, not just in some missing controls or animations. Take the WPF triggers mechanism, for example: it is not fully available in Silverlight.
So by learning the smaller subset first, you can extend that knowledge to the full set later; but if you start at the full set and get addicted to some of the niceties available, you'll have trouble down the line when someone asks you to port your designed-utilizing-WPF apps to Silverlight.
A:
I'll go against the grain and say learn WPF first.
Here's my reasoning:
Much more resources are available for WPF than Silverlight, such as books, blogs, and msdn documentation
WPF Books
You're not dealing with a Beta, moving target
You don't have to deal with working with only asynchronous calls
Not limited by lack of features such as Merged Dictionaries, Triggers, TileBrushes, etc.
You don't have to worry about re-learning to do things correctly because of lacks of features in SL
A:
Silverlight is a stripped down version of WPF so it should have fewer things to learn inside. On the other hand, the two platforms have different targets (web & rich client) so I guess it depends on what app you're going to build.
If you just want to learn for yourself (no app in the close future) I'd pick Silverlight because it would be less to assimilate. Still, Silverlight is pretty much a moving target, much more than WPF, so you'll have to keep up with some changes from time to time (the joys of being an early adopter :)).
WPF has lots more stuff that you will probably want to use at some point but I would wait for the needs to arise first.
A:
Every industry expert I've heard on podcasts, blogs and interviews recommends learning Silverlight first and then gradually moving to WPF, which is a huge UI framework.
Silverlight is light and allows you to work on a smaller subset of controls and features, so that you can get your head around this new UI building paradigm based on:
Templating
DataBinding
Styles
Update: 07/2011
I hate to mention this, but in recent times Microsoft has put more focus on HTML5, Javascript and CSS by bringing forward the powers of IE 9 and IE 10, as well as the upcoming Windows 8.
More and more developers and CTOs are becoming skeptical about Silverlight as a LOB application platform as time passes; we suspect Silverlight will be limited to Windows Phone and niche domain areas like healthcare or graphics-related applications rather than regular LOB apps.
As it seems right now, as of summer 2011, the future might look fragmented with more opportunities for pure web technologies (HTML5, JS and CSS) as opposed to a plugin and OS-specific UI technology.
A:
I would start by learning XAML, by reading a few tutorials and playing around with XAMLPad. This will give you a feel for the basics before actually building an app.
A:
I would start with WPF and do very simple control familiarization samples. Your goal should be to learn XAML and Binding. Just creating some basic WPF window apps will bootstrap your learning speed. Then eventually you can move to Silverlight. Yeah, as others mentioned here, Silverlight is a subset of WPF.
A:
Well, it depends on what you are going to be working on. If you are working on client/server, then I would go with WPF. If you are working in an environment where you can guarantee that .Net is installed on all of the machines, then I would go with WPF as well, because you can use what is called an XBAP, which is a WPF application that is run through the browser.
It's really up to you. However, I would state that Silverlight is not RTM yet, and WPF is. WPF has a lot of books out on the subject, whereas Silverlight does not. It may be easier to get the whole Zen of WPF by reading a few of those books, and then dive into whichever one you would like to play with.
Just keep in mind that Silverlight has a subset of the controls of WPF, a pared-down .Net framework, and does not do synchronous calls. As long as you know that up front, you can start learning the core of the whole foundation and tailor your practical experience later on to whichever technology is best for you.
A:
Some tips at Getting started with Silverlight Development
|
Learn Silverlight or WPF first?
|
It seems that Silverlight/WPF are the long term future for user interface development with .NET. This is great because I can see the advantage of reusing XAML skills on both the client and web development sides. But looking at WPF/XAML/Silverlight they seem very large technologies and so where is the best place to get started?
I would like to hear from anyone who has good knowledge of both and can recommend which is a better starting point and why.
|
[
"Should you learn ASP.NET or Winforms first? ASP or MFC? HTML or VB? C# or VB? \nSet aside the idea that there is a logical progression through what has become a highly complex interwoven set of technologies, and take a step back and ask yourself a series of questions:\n\nWhat are your goals; how do you want to balance profit against enjoyment\nAre you short term oriented or in for the long haul\nAre you the type of person who likes to get good at something and do it a lot or do you get bored once you fully understand it?\n\nThe next and hardest step is to come to accept that any advice you are given is bound to be wrong; and the longer the time horizon the more likely it is to be incorrect. If the advice is for more than six to 12 months, the probability the advice is wildly incorrect approaches 1. \nI can only tell you my story, quickly. In 2000 I was happy as a consultant working profitably in C++ on Windows applications, writing about ASP.NET and WinForms. then I saw C# and the world turned upside down. I never went back. \nTwo years ago I had the same kind of revelation, only an order of magnitude bigger, stronger and with more conviction about Silverlight. Yes, WPF is magnificent, and it may be that I'm all wet about this, but I believe in my gut that Silverlight changes everything. There was no doubt then and there is no doubt today that Silverlight is the most important development platform for Microsoft since .NET (certainly) and possibly since the switch to C++. \nIn a nutshell, here is why. I don't understand where its limitations are. With most platforms I do: you can do this, but you can't do that. WPF is a pretty good case in point, as was ASP.Net and WinForms and, well really everything until now.\nWith Silverlight, I don't see the boundaries yet. Silverlight has already leaped off the desktop onto phones, and I don't see any reason for it to stop there. Yes, it is true, it is bound by the browser, but I see that less as a jail cell than as a tank in which Silverlight will be riding over lots of terrain (it must be very late, I should go to bed). \nIn any case, for now, learning Silverlight is a gas, there is a lot of material on the Silverlight.net site, and what is the very best thing about learning Silverlight is that if you don't see what you need you can holler at me and I'll make sure you get it pretty quickly.\nEnjoy, good luck and the dirty little secret is you'll be fine whichever you choose. It's all just software.\n-jesse\n\nJesse Liberty\n\"Silverlight Geek\"\n",
"I'd say go with Silverlight first!\nI have programmed with WPF and Silverlight before.\nBut as Silverlight is a subset of WPF if you go in too deep and try to switch to writing Silverlight applications, you'll be scratching your heads looking for that \"tag\" you learned to love in WPF but is not available in Silverlight.\nWhen you master the basic things in Silverlight first, the extra mechanism/trigger/whatever features in WPF will simply add to most of what you've already known.\nSilverlight in WPF differs at the features level, not just some missing controls or animations. Take the WPF triggers mechanism for example, is not available fully in Silverlight.\nSo learning the smaller subset first, you can extend that knowledge to the full set later, but if you started at the full set and gets addicted to some of the niceties available, you'll have trouble down the line when someone asks you to port your designed-utilizing-WPF apps to Silverlight.\n",
"I'll go against the grain and say learn WPF first.\nHere's my reasoning:\n\nMuch more resources are available for WPF than Silverlight, such as books, blogs, and msdn documentation\n\nWPF Books\n\nYou're not dealing with a Beta, moving target\nYou don't have to deal with working with only asynchronous calls\nNot limited by lack of features such as Merged Dictionaries, Triggers, TileBrushes, etc.\nYou don't have to worry about re-learning to do things correctly because of lacks of features in SL\n\n",
"Silverlight is a stripped down version of WPF so it should have fewer things to learn inside. On the other hand, the two platforms have different targets (web & rich client) so I guess it depends on what app you're going to build.\nIf you just want to learn for yourself (no app in the close future) I'd pick Silverlight because it would be less to assimilate. Still, Silverlight is pretty much a moving target, much more than WPF, so you'll have to keep up with some changes from time to time (the joys of being an early adopter :)).\nWPF has lots more stuff that you will probably want to use at some point but I would wait for the needs to arise first.\n",
"Every industry expert I've heard on podcasts, blogs and interviews recommend learning Silverlight first and then gradually moving to WPF which is a huge UI framework. \nSilverlight is light and allows you to work on smaller subset of controls and features such that you get your head around this new UI building paradigm based on,\n\nTemplating\nDataBinding\nStyles\n\nUpdate: 07/2011\nI hate to mention this, but in recent times Microsoft has put more focus on HTML5, Javascript and CSS by bringing forward powers of IE 9 and IE 10, as well as the upcoming Windows 8.\nMore and more developers and CTOs are skeptical about Silverlight as a LOB application platform as the time passes by, we are suspecting Silverlight will be limited to Windows Phone and niche, domain areas like healthcare of graphics related applications rather than a regular LOB app.\nAs it seems right now, as of summer 2011, the future might look fragmented with more opportunities for pure web technologies (HTML5, JS and CSS) as opposed to a plugin and OS-specific UI technology.\n",
"I would start by learning XAML, by reading a few tutorials and playing around with XAMLPad. This will give you a feel for the basics before actually building an app.\n",
"I would start with WPF and doing very simple control familiarizaton samples. You goal should be to learn XAML and Binding. So if you just create some basic WPF window apps will bootstrap your learning speed. Then eventually you can move to silverlight. Yeah as other mentioned here Silverlight is a subset of WPF.\n",
"Well, it depends on what you are going to be working on. If you are working on client/server, then I would go with WPF. If you are working in an environment where you can guarantee that .Net is installed on all of the machines, then I would go with WPF as well, because you can use what is called an XBAP, which is a WPF application that is run through the browser.\nIt's really up to you. However, I would state that silverlight is not RTM yet, and WPF is. WPF has a lot of books out on the subject, where silverlight does not. It may be easier to get the whole Zen of WPF by reading a few of those books, and then dive into which ever one you would like to play with. \nJust keep in mind that silverlight has a subset of the controls of WPF, a paired down .Net framework, and does not do synchronous calls. As long as you know that up front, you can start learned the core of the whole foundation and tailor your practical experience later on to whichever technology is best for you.\n",
"Some tips at Getting started with Silverlight Development\n"
] |
[
28,
14,
7,
5,
4,
2,
2,
1,
0
] |
[] |
[] |
[
"silverlight",
"wpf"
] |
stackoverflow_0000061317_silverlight_wpf.txt
|
Q:
I have an issue with inline vs included Javascript
I am relatively new to JavaScript and am trying to understand how to use it correctly.
If I wrap JavaScript code in an anonymous function to avoid making variables public, the functions within the JavaScript are not available from within the html that includes the JavaScript.
On initially loading the page the JavaScript loads and is executed, but on subsequent reloads of the page the JavaScript code does not go through the execution process again. Specifically, there is an ajax call using httprequest that gets data from a PHP file and passes the returned data to a callback function that processes the data on success. If I could call the function that does the httprequest from within the html in a
<script type="text/javascript" ></script>
block on each page load I'd be all set - as it is I have to inject the entire JavaScript code into that block to get it to work on page load, hoping someone can educate me.
A:
If you aren't using a javascript framework, I strongly suggest it. I use MooTools, but there are many others that are very solid (Prototype, YUI, jQuery, etc). These include methods for attaching functionality to the DomReady event. The problem with:
window.onload = function(){...};
is that you can only ever have one function attached to that event (subsequent assignments will overwrite this one).
Frameworks provide more appropriate methods for doing this. For example, in MooTools:
window.addEvent('domready', function(){...});
Finally, there are other ways to avoid polluting the global namespace. Just namespacing your own code (mySite.foo = function...) will help you avoid any potential conflicts.
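For example, a minimal namespacing sketch (the names are just placeholders):
var mySite = mySite || {};
mySite.foo = function() {
    // site-specific code lives under the mySite "namespace"
};
mySite.someValue = 42;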
One more thing. I'm not 100% sure from your comment that the problem you have is specific to the page load event. Are you saying that the code needs to be executed when the ajax returns as well? Please edit your question if this is the case.
A:
I'd suggest just doing window.onload:
<script type="text/javascript">
(function() {
var private = "private var";
window.onload = function() {
console.log(private);
}
})();
</script>
A:
On initially loading the page the js loads and is executed but on subsequent reloads of the page the js code does not go through the execution process again
I'm not sure I understand your problem exactly, since the JS should execute every time, no matter if it's an include, or inline script. But I'm wondering if your problem somehow relates to browser caching. There may be two separate points of caching issues:
Your javascript include is being cached, and you are attempting to serve dynamically generated or recently edited javascript from this include.
Your ajax request is being cached.
You should be able to avoid caching by setting response headers on the server.
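For example, since the data is coming from a PHP file, headers along these lines (a rough sketch, not tuned for your app) tell the browser not to cache the response:
<?php
// send these before any output
header('Cache-Control: no-store, no-cache, must-revalidate');
header('Expires: Thu, 01 Jan 1970 00:00:00 GMT');
// ... echo the data for the ajax call ...
?>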
Also, this page describes another way to get around caching issues from ajax requests.
A:
It might be best not to wrap everything in an anonymous function and just hope that it is executed. You could name the function, and put its name in the body tag's onload handler. This should ensure that it's run each time the page is loaded.
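For example, a minimal sketch (initPage and doHttpRequest are placeholder names for your own functions):
<script type="text/javascript">
function initPage() {
    doHttpRequest(); // kick off your existing ajax call on every page load
}
</script>
...
<body onload="initPage()">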
A:
Depends what you want to do, but to avoid polluting the global namespace, you could attach your code to the element you care about.
e.g.
<div id="special">Hello World!</div>
<script>
(function(){
var foo = document.getElementById('special');
foo.mySpecialMethod = function(otherID, newData){
var bar = document.getElementById(otherID);
bar.innerHTML = newData;
};
//do some ajax... set callback to call "special" method above...
doAJAX(url, 'get', foo.mySpecialMethod);
})();
</script>
I'm not sure if this would solve your issue or not, but its one way to handle it.
|
I have an issue with inline vs included Javascript
|
I am relatively new to JavaScript and am trying to understand how to use it correctly.
If I wrap JavaScript code in an anonymous function to avoid making variables public the functions within the JavaScript are not available from within the html that includes the JavaScript.
On initially loading the page the JavaScript loads and is executed, but on subsequent reloads of the page the JavaScript code does not go through the execution process again. Specifically, there is an ajax call using httprequest that gets data from a PHP file and passes the returned data to a callback function that processes the data on success. If I could call the function that does the httprequest from within the html in a
<script type="text/javascript" ></script>
block on each page load I'd be all set - as it is I have to inject the entire JavaScript code into that block to get it to work on page load, hoping someone can educate me.
|
[
"If you aren't using a javascript framework, I strongly suggest it. I use MooTools, but there are many others that are very solid (Prototype, YUI, jQuery, etc). These include methods for attaching functionality to the DomReady event. The problem with:\nwindow.onload = function(){...};\n\nis that you can only ever have one function attached to that event (subsequent assignments will overwrite this one).\nFrameworks provide more appropriate methods for doing this. For example, in MooTools:\nwindow.addEvent('domready', function(){...});\n\nFinally, there are other ways to avoid polluting the global namespace. Just namespacing your own code (mySite.foo = function...) will help you avoid any potential conflicts.\nOne more thing. I'm not 100% sure from your comment that the problem you have is specific to the page load event. Are you saying that the code needs to be executed when the ajax returns as well? Please edit your question if this is the case.\n",
"I'd suggest just doing window.onload:\n<script type=\"text/javascript\">\n(function() {\n var private = \"private var\";\n window.onload = function() {\n console.log(private);\n }\n})();\n</script>\n\n",
"\nOn initially loading the page the js loads and is executed but on subsequent reloads of the page the js code does not go through the execution process again\n\nI'm not sure I understand your problem exactly, since the JS should execute every time, no matter if it's an include, or inline script. But I'm wondering if your problem somehow relates to browser caching. There may be two separate points of caching issues: \n\nYour javascript include is being cached, and you are attempting to serve dynamically generated or recently edited javascript from this include. \nYour ajax request is being cached.\n\nYou should be able to avoid caching by setting response headers on the server.\nAlso, this page describes another way to get around caching issues from ajax requests.\n",
"It might be best not to wrap everything in an anonymous function and just hope that it is executed. You could name the function, and put its name in the body tag's onload handler. This should ensure that it's run each time the page is loaded.\n",
"Depends what you want to do, but to avoid polluting the global namespace, you could attach your code to the element you care about.\ne.g.\n<div id=\"special\">Hello World!</div>\n<script>\n (function(){\n var foo = document.getElementById('special');\n foo.mySpecialMethod = function(otherID, newData){\n var bar = document.getElementById(otherID);\n bar.innerHTML = newData;\n };\n //do some ajax... set callback to call \"special\" method above...\n doAJAX(url, 'get', foo.mySpecialMethod);\n })();\n</script>\n\nI'm not sure if this would solve your issue or not, but its one way to handle it.\n"
] |
[
2,
1,
1,
0,
0
] |
[] |
[] |
[
"javascript"
] |
stackoverflow_0000068012_javascript.txt
|
Q:
SQL Server 2k5 memory consumption?
I have a development vm which is running sql server as well as some other apps for my stack, and I found that the other apps are performing awfully. After doing some digging, SQL Server was hogging the memory. After a quick web search I discovered that by default, it will consume as much memory as it can in order to cache data and give it back to the system as other apps request it, but this process often doesn't happen fast enough, apparently my situation is a common problem.
There however is a way to limit the memory SQL Server is allowed to have. My question is, how should I set this limit? Obviously I'm going to need to do some guess and check, but is there an absolute minimum threshold? Any recommendations are appreciated.
Edit:
I'll note that our developer machines have 2 gigs of memory, so I'd like to be able to run the vm on 768 mb or less if possible. This vm will be only used for local dev and testing, so the load will be very minimal. After code has been tested locally it goes to another environment where the SQL server box is dedicated. What I'm really looking for here is recommendations on minimums.
A:
Extracted from the SQL Server documentation:
Maximum server memory (in MB)
Specifies the maximum amount of memory SQL Server can allocate when it starts and while it runs. This configuration option can be set to a specific value if you know there are multiple applications running at the same time as SQL Server and you want to guarantee that these applications have sufficient memory to run. If these other applications, such as Web or e-mail servers, request memory only as needed, then do not set the option, because SQL Server will release memory to them as needed. However, applications often use whatever memory is available when they start and do not request more if needed. If an application that behaves in this manner runs on the same computer at the same time as SQL Server, set the option to a value that guarantees that the memory required by the application is not allocated by SQL Server.
The recommendation on minimum is: No such thing. The more memory the better. The SQL Server needs as much memory as it can get or it will trash your IO.
Stop the SQL Server. Run your other applications and take note of the amount of memory they need. Subtract that from your total available RAM, and use that number for the MAX memory setting in SQL Server.
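If you would rather script the setting than use Management Studio, something like this should do it (512 is only an example figure; substitute the number you worked out above):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 512;
RECONFIGURE;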
A:
so I'd like to be able to run the vm on 768 mb or less if possible.
That will depend on your data and the size of your database. But I usually like to give SQL server at least a GB
A:
It really depends on what else is going on on the machine. Get things running under a typical load and have a look at Task Manager to see what you need for everything else. Try that number to start with.
For production machines, of course, it is best to give control of the machine to Sql Server (Processors -> Boost Sql Server Priority) and let it have all the RAM it wants.
Since you are using VMs, maybe you could create a dedicated one just for Sql Server and run everything else on a different VM.
A:
Since this is a development environment, I agree with Greg, just use trial and error. It's not that crucial to get it perfectly right.
But if you do a lot of work in the VM, why not give it at least half of the 2GB?
|
SQL Server 2k5 memory consumption?
|
I have a development vm which is running sql server as well as some other apps for my stack, and I found that the other apps are performing awfully. After doing some digging, SQL Server was hogging the memory. After a quick web search I discovered that by default, it will consume as much memory as it can in order to cache data and give it back to the system as other apps request it, but this process often doesn't happen fast enough, apparently my situation is a common problem.
There however is a way to limit the memory SQL Server is allowed to have. My question is, how should I set this limit? Obviously I'm going to need to do some guess and check, but is there an absolute minimum threshold? Any recommendations are appreciated.
Edit:
I'll note that our developer machines have 2 gigs of memory, so I'd like to be able to run the vm on 768 mb or less if possible. This vm will be only used for local dev and testing, so the load will be very minimal. After code has been tested locally it goes to another environment where the SQL server box is dedicated. What I'm really looking for here is recommendations on minimums.
|
[
"Extracted fromt he SQL Server documentation: \n\nMaximum server memory (in MB)\nSpecifies the maximum amount of memory\n SQL Server can allocate when it starts\n and while it runs. This configuration\n option can be set to a specific value\n if you know there are multiple\n applications running at the same time\n as SQL Server and you want to\n guarantee that these applications have\n sufficient memory to run. If these\n other applications, such as Web or\n e-mail servers, request memory only as\n needed, then do not set the option,\n because SQL Server will release memory\n to them as needed. However,\n applications often use whatever memory\n is available when they start and do\n not request more if needed. If an\n application that behaves in this\n manner runs on the same computer at\n the same time as SQL Server, set the\n option to a value that guarantees that\n the memory required by the application\n is not allocated by SQL Server.\n\nThe recommendation on minimum is: No such thing. The more memory the better. The SQL Sever needs as much memory as it can get or it will trash your IO.\nStop the SQL Server. Run your other applications and take note to the amount of memory they need. Subtract that from your total available RAM, and use that number for the MAX memory setting in the SQL Server.\n",
"\nso id like to be able to run the vm on\n 768 mb or less if possible.\n\nThat will depend on your data and the size of your database. But I usually like to give SQL server at least a GB\n",
"It really depends on what else is going on on the machine. Get things running under a typical load and have a look at Task Manager to see what you need for everything else. Try that number to start with.\nFor production machines, of course, it is best to give control of the machine to Sql Server (Processors -> Boost Sql Server Priority) and let it have all the RAM it wants.\nSince you are using VMs, maybe you could create a dedicated one just for Sql Server and run everything else on a different VM.\n",
"Since this is a development environment, I agree with Greg, just use trial and error. It's not that crucial to get it perfectly right.\nBut if you do a lot of work in the VM, why not give it at least half of the 2GB?\n"
] |
[
3,
1,
1,
1
] |
[] |
[] |
[
"performance",
"sql_server"
] |
stackoverflow_0000028387_performance_sql_server.txt
|
Q:
Fast, Pixel Precision 2D Drawing API for Graphics App?
I would like to create a cross-platform drawing program. The one requirement for writing my app is that I have pixel level precision over the canvas. For instance, I want to write my own line drawing algorithm rather than rely on someone else's. I do not want any form of anti-aliasing (again, pixel level control is required). I would like the user's interactions on the screen to be quick and responsive (pending my ability to write fast algorithms).
Ideally, I would like to write this in Python, or perhaps Java as a second choice. The ability to easily make the final app cross-platform is a must. I will submit to different API's on different OS'es if necessary as long as I can write an abstraction layer around them. Any ideas?
addendum: I need the ability to draw on-screen. Drawing out to a file I've got figured out.
A:
I just this week put together some slides and demo code for doing 2d graphics using OpenGL from python using the library pyglet. Here's a representative post: Pyglet week 2, better vertex throughput (or 3D stuff using the same basic ideas)
It is very fast (relatively speaking, for python) I have managed to get around 1,000 independently positioned and oriented objects moving around the screen, each with about 50 vertices.
It is very portable, all the code I have written in this environment works on windows and Linux and mac (and even obscure environments like Pypy) without me ever having to think about it.
Some of these posts are very old, with broken links between them. You should be able to find all the relevant posts using the 'graphics' tag.
A:
The Pyglet library for Python might suit your needs. It lets you use OpenGL, a cross-platform graphics API. You can disable anti-aliasing and capture regions of the screen to a buffer or a file. In addition, you can use its event handling, resource loading, and image manipulation systems. You can probably also tie it into PIL (Python Image Library), and definitely Cairo, a popular cross-platform vector graphics library.
I mention Pyglet instead of pure PyOpenGL because Pyglet handles a lot of ugly OpenGL stuff transparently with no effort on your part.
A friend and I are currently working on a drawing program using Pyglet. There are a few quirks - for example, OpenGL is always double buffered on OS X, so we have to draw everything twice, once for the current frame and again for the other frame, since they are flipped whenever the display refreshes. You can look at our current progress in this subversion repository. (Splatterboard.py in trunk is the file you'll want to run.) If you're not up on using svn, I would be happy to email you a .zip of the latest source. Feel free to steal code if you look into it.
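If it helps to see the shape of it, here is a minimal pyglet sketch (window size and coordinates are arbitrary) that opens a window and plots a single un-antialiased point:
import pyglet
from pyglet.gl import GL_POINTS

window = pyglet.window.Window(640, 480)

@window.event
def on_draw():
    window.clear()
    # draw one point at pixel (100, 100) - no anti-aliasing applied
    pyglet.graphics.draw(1, GL_POINTS, ('v2i', (100, 100)))

pyglet.app.run()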
A:
If language choice is open, a Flash file created with Haxe might have a place. Haxe is free, and a full, dynamic programming language. Then there's the related Neko, a virtual machine (like Java's, Ruby's, Parrot...) to run on Mac, Windows and Linux. Being in some ways a new improved form of Flash, naturally it can draw stuff. http://haxe.org/
A:
Qt's Canvas and QPainter are very good for this job if you'd like to use C++, and it is cross platform.
There is a python binding for QT but I've never used it.
As for Java, using SWT, pixel level manipulation of a canvas is somewhat difficult and slow so I would not recommend it. On the other hand Swing's Canvas is pretty good and responsive. I've never used the AWT option but you probably don't want to go there.
A:
I would recommend wxPython
It's beautifully cross platform and you can get per pixel control and if you change your mind about that you can use it with libraries such as pyglet or agg.
You can find some useful examples for just what you are trying to do in the docs and demos download.
|
Fast, Pixel Precision 2D Drawing API for Graphics App?
|
I would like to create a cross-platform drawing program. The one requirement for writing my app is that I have pixel level precision over the canvas. For instance, I want to write my own line drawing algorithm rather than rely on someone else's. I do not want any form of anti-aliasing (again, pixel level control is required). I would like the user's interactions on the screen to be quick and responsive (pending my ability to write fast algorithms).
Ideally, I would like to write this in Python, or perhaps Java as a second choice. The ability to easily make the final app cross-platform is a must. I will submit to different API's on different OS'es if necessary as long as I can write an abstraction layer around them. Any ideas?
addendum: I need the ability to draw on-screen. Drawing out to a file I've got figured out.
|
[
"I just this week put together some slides and demo code for doing 2d graphics using OpenGL from python using the library pyglet. Here's a representative post: Pyglet week 2, better vertex throughput (or 3D stuff using the same basic ideas)\nIt is very fast (relatively speaking, for python) I have managed to get around 1,000 independently positioned and oriented objects moving around the screen, each with about 50 vertices.\nIt is very portable, all the code I have written in this environment works on windows and Linux and mac (and even obscure environments like Pypy) without me ever having to think about it.\nSome of these posts are very old, with broken links between them. You should be able to find all the relevant posts using the 'graphics' tag.\n",
"The Pyglet library for Python might suit your needs. It lets you use OpenGL, a cross-platform graphics API. You can disable anti-aliasing and capture regions of the screen to a buffer or a file. In addition, you can use its event handling, resource loading, and image manipulation systems. You can probably also tie it into PIL (Python Image Library), and definitely Cairo, a popular cross-platform vector graphics library.\nI mention Pyglet instead of pure PyOpenGL because Pyglet handles a lot of ugly OpenGL stuff transparently with no effort on your part.\nA friend and I are currently working on a drawing program using Pyglet. There are a few quirks - for example, OpenGL is always double buffered on OS X, so we have to draw everything twice, once for the current frame and again for the other frame, since they are flipped whenever the display refreshes. You can look at our current progress in this subversion repository. (Splatterboard.py in trunk is the file you'll want to run.) If you're not up on using svn, I would be happy to email you a .zip of the latest source. Feel free to steal code if you look into it.\n",
"If language choice is open, a Flash file created with Haxe might have a place. Haxe is free, and a full, dynamic programming language. Then there's the related Neko, a virtual machine (like Java's, Ruby's, Parrot...) to run on Mac, Windows and Linux. Being in some ways a new improved form of Flash, naturally it can draw stuff. http://haxe.org/ \n",
"QT's Canvas an QPainter are very good for this job if you'd like to use C++. and it is cross platform.\nThere is a python binding for QT but I've never used it.\nAs for Java, using SWT, pixel level manipulation of a canvas is somewhat difficult and slow so I would not recommend it. On the other hand Swing's Canvas is pretty good and responsive. I've never used the AWT option but you probably don't want to go there.\n",
"I would recommend wxPython\nIt's beautifully cross platform and you can get per pixel control and if you change your mind about that you can use it with libraries such as pyglet or agg.\nYou can find some useful examples for just what you are trying to do in the docs and demos download.\n"
] |
[
5,
2,
1,
0,
0
] |
[] |
[] |
[
"2d",
"drawing",
"graphics"
] |
stackoverflow_0000067000_2d_drawing_graphics.txt
|
Q:
.NET visual components
I really like DevX components, but they are pretty expensive. Does anyone know of free equivalents, or a web site where I can look for some kind of free visual component for .NET?
A:
Check out the free Krypton Toolkit from Component Factory.
A:
I also found that DevExpress offers some free components.
A:
I second that. Krypton all the way. Some of their controls actually outperform the same Telerik control, too.
|
.NET visual components
|
I really like DevX components, but they are pretty expensive. Does anyone know of free equivalents, or a web site where I can look for some kind of free visual component for .NET?
|
[
"Check out free Krypton Toolkit of Component Factory.\n",
"I also found that DevExpress offers some free components.\n",
"I second that. Krypton all the way. Some of their controls actually outperform the same Telerik control, too.\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
".net",
"components",
"visual_studio"
] |
stackoverflow_0000059684_.net_components_visual_studio.txt
|
Q:
How do you test cookies in MVC .net?
http://stephenwalther.com/blog/archive/2008/07/01/asp-net-mvc-tip-12-faking-the-controller-context.aspx
This post shows how to test setting a cookie and then seeing it in ViewData. What I want to do is see if the correct cookies were written (values and name). Any reply, blog post or article will be greatly appreciated.
A:
Are you looking for something more like this? (untested, just typed it up in the reply box)
var cookies = new HttpCookieCollection();
controller.ControllerContext = new FakeControllerContext(controller, cookies);
var result = controller.TestCookie() as ViewResult;
Assert.AreEqual("somevaluethatshouldbethere", cookies["somecookieitem"].Value);
As in, did you mean you want to test the writing of a cookie instead of reading one?
Please make your request clearer if possible :)
A:
Perhaps you need to pass in a Fake Response object that the cookies are written to, and you test what is returned in that from the Controller.
|
How do you test cookies in MVC .net?
|
http://stephenwalther.com/blog/archive/2008/07/01/asp-net-mvc-tip-12-faking-the-controller-context.aspx
This post shows how to test setting a cookie and then seeing it in ViewData. What I want to do is see if the correct cookies were written (values and name). Any reply, blog post or article will be greatly appreciated.
|
[
"Are you looking for something more like this? (untested, just typed it up in the reply box)\nvar cookies = new HttpCookieCollection();\ncontroller.ControllerContext = new FakeControllerContext(controller, cookies);\nvar result = controller.TestCookie() as ViewResult;\nAssert.AreEqual(\"somevaluethatshouldbethere\", cookies[\"somecookieitem\"].Value);\n\nAs in, did you mean you want to test the writing of a cookie instead of reading one?\nPlease make your request clearer if possible :)\n",
"Perhaps you need to pass in a Fake Response object that the cookies are written to, and you test what is returned in that from the Controller.\n"
] |
[
7,
1
] |
[
"function ReadCookie(cookieName) {\n var theCookie=\"\"+document.cookie;\n var ind=theCookie.indexOf(cookieName);\n if (ind==-1 || cookieName==\"\") return \"\"; \n var ind1=theCookie.indexOf(';',ind);\n if (ind1==-1) ind1=theCookie.length; \n return unescape(theCookie.substring(ind+cookieName.length+1,ind1));\n}\n\n"
] |
[
-2
] |
[
"asp.net_mvc",
"unit_testing"
] |
stackoverflow_0000069188_asp.net_mvc_unit_testing.txt
|
Q:
Version control of deliverables
We need to regularly synchronize many dozens of binary files (project executables and DLLs) between many developers at several different locations, so that every developer has an up to date environment to build and test at. Due to nature of the project, updates must be done often and on-demand (overnight updates are not sufficient). This is not pretty, but we are stuck with it for a time.
We settled on using a regular version (source) control system: put everything into it as binary files, get-latest before testing and check-in updated DLL after testing.
It works fine, but a version control client has a lot of features which don't make sense for us and people occasionally get confused.
Are there any tools better suited for the task? Or maybe a completely different approach?
Update:
I need to clarify that it's not a tightly integrated project - more like an extensible system with a heap of "plugins", including third-party ones. We need to make sure those modules/plugins work nicely with recent versions of each other and the core. A centralised build, as was suggested, was considered initially, but it's not an option.
A:
I'd probably take a look at rsync.
Just create a .CMD file that contains the call to rsync with all the correct parameters and let people call that. rsync is very smart in deciding what part of files need to be transferred, so it'll be very fast even when large files are involved.
What rsync doesn't do though is conflict resolution (or even detection), but in the scenario you described it's more like reading from a central place which is what rsync is designed to handle.
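The .CMD file can be a one-liner; a rough sketch, assuming rsync is available on the developer machines (e.g. via Cygwin) and with made-up host, module and path names:
rsync -avz --delete buildserver::builds/myproject/bin/ /cygdrive/c/projects/myproject/bin/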
A:
Another option is unison
A:
You should look into continuous integration and having some kind of centralised build process. I can only imagine the kind of hell you're going through with your current approach.
Obviously that doesn't help with the keeping your local files in sync, but I think you have bigger problems with your process.
A:
Building the project should be a centralized process in order to allow for better control; otherwise your solution will be chaos in the long run. Anyway, here is what I'd do.
Create the usual repositories for source files, resources, documentation, etc. for each project.
Create a repository for resources. There will be the latest binary versions for each project as well as any required resources, files, etc. Keep a good folder structure for each project so developers can "reference" the files directly.
Create a repository for final builds which will hold the actual stable release. This will get the stable files, done in an automatic way (if possible) from the checked-in sources. This will hold the real product, the real version for integration testing and so on.
While far from being perfect, you'll be able to define well-established protocols. Check in your latest dll here, generate the "real" version from the latest source here.
A:
What about embedding a 'what' string in the executables and libraries? Then you can synchronise the desired list of versions with a manifest.
We tend to use CVS id strings as a part of the what string.
const char cvsid[] = "@(#)INETOPS_filter_ip_$Revision: 1.9 $";
Entering the command
what filter_ip | grep INETOPS
returns
INETOPS_filter_ip_$Revision: 1.9 $
We do this for all deliverables so we can see if the versions in a bundle of libraries and executables match the list in a associated manifest.
HTH.
cheers,
Rob
A:
Subversion handles binary files really well, is pretty fast, and scriptable. VisualSVN and TortoiseSVN make dealing with Subversion very easy too.
You could set up a folder that's checked out from Subversion with all your binary files (that all developers can push and update to) then just type "svn update" at the command line, or use TortoiseSVN: right click on the folder, click "SVN Update" and it'll update all the files and tell you what's changed.
|
Version control of deliverables
|
We need to regularly synchronize many dozens of binary files (project executables and DLLs) between many developers at several different locations, so that every developer has an up to date environment to build and test at. Due to nature of the project, updates must be done often and on-demand (overnight updates are not sufficient). This is not pretty, but we are stuck with it for a time.
We settled on using a regular version (source) control system: put everything into it as binary files, get-latest before testing and check-in updated DLL after testing.
It works fine, but a version control client has a lot of features which don't make sense for us and people occasionally get confused.
Are there any tools better suited for the task? Or maybe a completely different approach?
Update:
I need to clarify that it's not a tightly integrated project - more like an extensible system with a heap of "plugins", including third-party ones. We need to make sure those modules/plugins work nicely with recent versions of each other and the core. A centralised build, as was suggested, was considered initially, but it's not an option.
|
[
"I'd probably take a look at rsync.\nJust create a .CMD file that contains the call to rsync with all the correct parameters and let people call that. rsync is very smart in deciding what part of files need to be transferred, so it'll be very fast even when large files are involved.\nWhat rsync doesn't do though is conflict resolution (or even detection), but in the scenario you described it's more like reading from a central place which is what rsync is designed to handle.\n",
"Another option is unison\n",
"You should look into continuous integration and having some kind of centralised build process. I can only imagine the kind of hell you're going through with your current approach.\nObviously that doesn't help with the keeping your local files in sync, but I think you have bigger problems with your process.\n",
"Building the project should be a centralized process in order to allow for better control soon your solution will be caos in the long run. Anyway here is what I'd do.\n\nCreate the usual repositories for\nsource files, resources,\ndocumentation, etc for each project.\nCreate a repository for resources.\nThere will be the latest binary\nversions for each project as well as\nany required resources, files, etc.\nKeep a good folder structure for\neach project so developers can\n\"reference\" the files directly.\nCreate a repository for final buidls\nwhich will hold the actual stable\nrelease. This will get the stable\nfiles, done in an automatic way (if\npossible) from the checked in\nsources. This will hold the real\nproduct, the real version for\nintegration testing and so on.\n\nWhile far from being perfect you'll be able to define well established protocols. Check in your latest dll here, generate the \"real\" versión from latest source here.\n",
"What about embedding a 'what' string in the executables and libraries. Then you can synchronise the desired list of versions with a manifest.\nWe tend to use CVS id strings as a part of the what string.\nconst char cvsid[] = \"@(#)INETOPS_filter_ip_$Revision: 1.9 $\";\nEntering the command\nwhat filter_ip | grep INETOPS\nreturns\nINETOPS_filter_ip_$Revision: 1.9 $\nWe do this for all deliverables so we can see if the versions in a bundle of libraries and executables match the list in a associated manifest.\nHTH.\ncheers,\nRob\n",
"Subversion handles binary files really well, is pretty fast, and scriptable. VisualSVN and TortoiseSVN make dealing with Subversion very easy too.\nYou could set up a folder that's checked out from Subversion with all your binary files (that all developers can push and update to) then just type \"svn update\" at the command line, or use TortoiseSVN: right click on the folder, click \"SVN Update\" and it'll update all the files and tell you what's changed.\n"
] |
[
4,
3,
1,
0,
0,
0
] |
[] |
[] |
[
"deployment",
"version_control"
] |
stackoverflow_0000058520_deployment_version_control.txt
|
Q:
How do you go about setting up a virtual IP address?
... say for CentOS?
A:
From what I understand a virtual IP can let you abstract the address from the physical interface(s) the traffic actually goes through. If your server has two network cards it can have a single virtual IP and have the traffic go through either physical network interface. If hardware failure occurs on one of the two network cards, the traffic can keep going with the second one as a backup. I assume that this is more relevant on servers where such parts can be hotswapped.
A:
A Virtual IP address is a secondary IP set on a host; it's just another IP bound to an adapter (or adapters, if bonded). This IP is useful for many things but is most commonly used on webservers to run multiple SSL certificates for multiple sites.
In CentOS you pretty much copy /etc/sysconfig/network-scripts/ifcfg-eth0 (or whichever file matches the adapter you want) to /etc/sysconfig/network-scripts/ifcfg-eth0:1. In that file, change DEVICE=eth0 to DEVICE=eth0:1 and change the IP to the new "virtual IP" you want.
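A minimal ifcfg-eth0:1 might look like this (the address and netmask are just examples for your network):
DEVICE=eth0:1
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
ONBOOT=yes
Then bring it up with "ifup eth0:1" (or "service network restart").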
A:
Check out this article on Virtual IP address. As indicated it usually floats between machines, and is sometimes used to fail-over a service from one device to another. Are you thinking of a virtual interface instead perhaps?
/Allan
|
How do you go about setting up a virtual IP address?
|
... say for CentOS?
|
[
"From what I understand a virtul IP can let you abstract the address from the physical interface(s) the traffic actually goes through. If your server has two network cards it can have a single virtual IP and have the traffic go through either network physical interface. If hardware failure occurs on one of the two network cards, the traffic can keep going with the second one as a backup. I assume that this is more relevant on servers where such parts can be hotswapped.\n",
"A Virtual IP address is a secondary IP set on a host, it's just another IP bound to an adapter (adapters if bonded). This IP is useful for many things but most commonly used for webservers to run multiple SSL certificates for multiple sites. \nIn CentOS you pretty much copy the /etc/sysconfig/network-scripts/ifcfg-eth0 (whichever for the adapter you want) to /etc/sysconfig/network-scripts/ifcfg-eth0:1, In there change the devicename=eth0 to devicename=eth0:1 and change the IP for the new \"virtual IP\" you want. \n",
"Check out this article on Virtual IP address. As indicated it usually floats between machines, and is sometimes used to fail-over a service from one device to another. Are you thinking of a virtual interface instead perhaps?\n/Allan\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
"networking",
"virtual_ip_address"
] |
stackoverflow_0000069568_networking_virtual_ip_address.txt
|
Q:
Can an fdopen() cause a memory leak?
I use fdopen to associate a stream with an open file.
When I close() the file, is the stream automatically disassociated as well, and all stream memory returned to the OS, or do I need to be aware of the fdopen'd file and close it in a specific manner?
-Adam
A:
close() is a system call. It will close the file descriptor in the kernel, but will not free the FILE pointer and resources in libc. You should use fclose() on the FILE pointer instead, which will also take care of closing the file descriptor.
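A short sketch of the usual pattern (the wrapper function is just for illustration):
#include <stdio.h>
#include <unistd.h>

/* fd is an already-open file descriptor */
void read_and_close(int fd)
{
    FILE *fp = fdopen(fd, "r");
    if (fp == NULL) {
        close(fd);      /* fdopen failed; close the raw descriptor ourselves */
        return;
    }
    /* ... read from fp ... */
    fclose(fp);         /* frees the FILE resources and closes fd too */
}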
|
Can an fdopen() cause a memory leak?
|
I use fdopen to associate a stream with an open file.
When I close() the file, is the stream automatically disassociated as well, and all stream memory returned to the OS, or do I need to be aware of the fdopen'd file and close it in a specific manner?
-Adam
|
[
"close() is a system call. It will close the file descriptor in the kernel, but will not free the FILE pointer and resources in libc. You should use fclose() on the FILE pointer instead, which will also take care of closing the file descriptor.\n"
] |
[
5
] |
[] |
[] |
[
"file_io",
"memory_leaks",
"stream"
] |
stackoverflow_0000069565_file_io_memory_leaks_stream.txt
|
Q:
How to best merge information, at a server, into a "form", a PDF being generated as the final output
Background:
I have a VB6 application I've "inherited" that generates a PDF for the user to review using unsupported Acrobat Reader OCX integration. The program generates an FDF file with the data, then renders the merged result when the FDF is merged with a PDF. It only works correctly with Acrobat Reader 4 :-(. Installing a newer version of Acrobat Reader breaks this application, making the users very unhappy.
I want to re-architect this app so that it will send the data to be merged to a PDF output generation server. This server will merge the data passed to it onto the form, generate a PDF image of this, and store it, so that any user wishing to view the final result can then simply get the PDF (it is generated just once). If the underlying data is changed, the PDF will be deleted and regenerated next time it is requested. The client program can then have any version of Acrobat Reader they wish, as it will be used exclusively for displaying PDF files (as it was intended). The server will most likely be written in .NET (C#) with Visual Studio 2005, probably as a Web Service...
Question:
How would others recommend I go about this? Should I use Adobe's Acrobat 9 at the server to do this, putting the data into FDF or Adobe's XML format, and letting Acrobat do the merge? Are there great competitors in the "merge data onto form and output a PDF" space? How do others do this? It has to be API based, no GUI at the server, of course...
While some output is generated via FDF/PDF, another part of the application actually sends lines, graphics, and text to the printer (or a form for preview purposes) one page at a time, giving the proper x/y coordinates, font, size, etc. for each, knowing when it is at the end of a page, etc. This code is currently in the program that displays this for the user to review, and it is also in the program that prints the final form to the printer. For consistency between reviewer and printer, I'd like to move this output generation logic to a server as well, either using a good PDF generation API tool or use the code as is and generate a PDF with a PDF printer... and saving this PDF for display by the clients.
Googling "Form software" or "fill form software" or similar searches returns sooooooooo much unrelated material, mostly related to UI for users to fill in forms, I just don't know how to properly narrow down my search. This site seems the perfect place to ask such a question, as other programmers must also need to generate similar outputs, and have tried out some great tools.
EDIT:
I've added PDF tag as well as PDF-generation.
Also, my current customer insists on PDF output, but I appreciate the alternative suggestions.
A:
I can't help with a VB6 solution, but I can help with a .NET or Java solution on the server.
Get iText or iTextSharp from http://www.lowagie.com/iText/.
It has a PdfStamper class that can merge a PDF and an FDF, plus FDFReader/FDFWriter classes to generate FDF files, get field names out of PDF files, etc...
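A rough C# sketch of filling form fields with PdfStamper (the file names, field name and method wrapper are made up; the real field names come from your PDF form):
using System.IO;
using iTextSharp.text.pdf;

public static void FillForm(string templatePath, string outputPath)
{
    PdfReader reader = new PdfReader(templatePath);
    using (FileStream output = new FileStream(outputPath, FileMode.Create))
    {
        PdfStamper stamper = new PdfStamper(reader, output);
        stamper.AcroFields.SetField("CustomerName", "John Doe");   // set a form field by name
        stamper.FormFlattening = true;   // optional: bake the values into the page
        stamper.Close();
    }
}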
A:
Take my advice. Ditch PDF for XPS. I am working on two apps, both server based. One displays image-based documents as PDFs in a browser. The second uses FixedPage templates to construct XPS documents bound to data sources.
My conclusion after working on both projects is that PDFs suck; XPS documents less so. You have to pay cash money for a decent PDF library, whereas XPS comes with the framework. PDF document generation is a memory hog, has lots of potholes and isn't very server friendly. XPS docs have a much smaller footprint and less chances of shooting yourself in the foot.
A:
I have had great success using Microsoft Word. The form is designed in Word and composited with XML data. The document is run through a PDF converter (Neevia in this case, but there are better) to generate the PDF.
All of this is done in C#.
A:
Same boat. We're currently making pdfs this way: vb6 app drops a record into sql (with filename, create date, user, and final destination) and the xls (or doc) gets moved into a server directory (share) and the server has a vb.net service that has a filewatcher monitoring that directory. The file shows up, the service kicks off excel (word) to pdf the file via adobe, looks to sql to figure out what to do with the PDF when it is made, and logs the finish time.
This was the cheap solution- it only took about a day to do the code and another day to debug both ends and roll the build out.
This is not the way to do it. Adobe crashes at random times when trying to make the pdfs. It will run for two weeks with no issues at all, and then (like today) it will crash every 5 minutes. Or every other hour. Or at 11:07, 2:43, 3:05, and 6:11.
We're going to convert the stuff out of excel and word and drop the data directly into pdfs using PDFTron in the next revision. Yes, PDFTron costs money, (we bought the 1-kilobuck-per-processor license) but it will do the job, and nicely. XPS are nice but I, like you, have to provide PDFs. It is the way of things.
Check out pdfTron (google it) and see if it will do what you want. Then you just got to figure out which license you need and how you gonna pay for it. If someone comes up with something better, hope they'll vote it to the top of the list!!!
|
How to best merge information, at a server, into a "form", a PDF being generated as the final output
|
Background:
I have a VB6 application I've "inherited" that generates a PDF for the user to review using unsupported Acrobat Reader OCX integration. The program generates an FDF file with the data, then renders the merged result when the FDF is merged with a PDF. It only works correctly with Acrobat Reader 4 :-(. Installing a newer version of Acrobat Reader breaks this application, making the users very unhappy.
I want to re-architect this app so that it will send the data to be merged to a PDF output generation server. This server will merge the data passed to it onto the form, generate a PDF image of this, and store it, so that any user wishing to view the final result can then simply get the PDF (it is generated just once). If the underlying data is changed, the PDF will be deleted and regenerated next time it is requested. The client program can then have any version of Acrobat Reader they wish, as it will be used exclusively for displaying PDF files (as it was intended). The server will most likely be written in .NET (C#) with Visual Studio 2005, probably as a Web Service...
Question:
How would others recommend I go about this? Should I use Adobe's Acrobat 9 at the server to do this, putting the data into FDF or Adobe's XML format, and letting Acrobat do the merge? Are there great competitors in the "merge data onto form and output a PDF" space? How do others do this? It has to be API based, no GUI at the server, of course...
While some output is generated via FDF/PDF, another part of the application actually sends lines, graphics, and text to the printer (or a form for preview purposes) one page at a time, giving the proper x/y coordinates, font, size, etc. for each, knowing when it is at the end of a page, etc. This code is currently in the program that displays this for the user to review, and it is also in the program that prints the final form to the printer. For consistency between reviewer and printer, I'd like to move this output generation logic to a server as well, either using a good PDF generation API tool or use the code as is and generate a PDF with a PDF printer... and saving this PDF for display by the clients.
Googling "Form software" or "fill form software" or similar searches returns sooooooooo much unrelated material, mostly related to UI for users to fill in forms, I just don't know how to properly narrow down my search. This site seems the perfect place to ask such a question, as other programmers must also need to generate similar outputs, and have tried out some great tools.
EDIT:
I've added PDF tag as well as PDF-generation.
Also, my current customer insists on PDF output, but I appreciate the alternative suggestions.
|
[
"can't help with VB6 solution, can help with .net or java solution on the server.\nGet iText or iTextSharp from http://www.lowagie.com/iText/.\nIt has a PdfStamper class that can merge a PDF and FDF FDFReader/FDFWriter classes to generate FDF files, get field names out of PDF files, etc... \n",
"Take my advice. Ditch PDF for XPS. I am working on two apps, both server based. One displays image-based documents as PDFs in a browser. The second uses FixedPage templates to construct XPS documents bound to data sources. \nMy conclusion after working on both projects is that PDFs suck; XPS documents less so. You have to pay cash money for a decent PDF library, whereas XPS comes with the framework. PDF document generation is a memory hog, has lots of potholes and isn't very server friendly. XPS docs have a much smaller footprint and less chances of shooting yourself in the foot. \n",
"I have had great success using Microsoft Word. The form is designed in Word and composited with XML data. The document is run through a PDF converter (Neevia in this case, but there are better) to generate the PDF.\nAll of this is done in C#.\n",
"Same boat. We're currently making pdfs this way: vb6 app drops a record into sql (with filename, create date, user, and final destination) and the xls (or doc) gets moved into a server directory (share) and the server has a vb.net service that has a filewatcher monitoring that directory. The file shows up, the service kicks off excel (word) to pdf the file via adobe, looks to sql to figure out what to do with the PDF when it is made, and logs the finish time. \nThis was the cheap solution- it only took about a day to do the code and another day to debug both ends and roll the build out.\nThis is not the way to do it. Adobe crashes at random times when trying to make the pdfs. it will run for two weeks with no issues at all, and then (like today) it will crash every 5 minutes. or every other hour. or at 11:07, 2:43, 3:05, and 6:11. \nWe're going to convert the stuff out of excel and word and drop the data directly into pdfs using PDFTron in the next revision. Yes, PDFTron costs money, (we bought the 1-kilobuck-per-processor license) but it will do the job, and nicely. XPS are nice but I, like you, have to provide PDFs. It is the way of things.\nCheck out pdfTron (google it) and see if it will do what you want. Then you just got to figure out which license you need and how you gonna pay for it. If someone comes up with something better, hope they'll vote it to the top of the list!!!\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
".net",
"forms",
"pdf",
"pdf_generation"
] |
stackoverflow_0000054808_.net_forms_pdf_pdf_generation.txt
|
Q:
Errors creating WebPart subclass in another assembly
I am trying to create a subclass of WebPart that will act as a parent to any WebParts we create. If I create an empty class in the same project, I am able to inherit from it as one would expect. However, if I try to place it in another assembly -- one that I've been able to reference and use classes from -- I get the following error:
Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
Other information that may be pertinent (I am not normally a SharePoint developer): I compile the dlls, reference them from the dev project, and copy them into the /bin directory of the SharePoint instance. The assemblies are all signed. I am attempting to deploy using VS2008's 'deploy' feature.
Unfortunately, this does not appear to be a SharePoint specific error, and I'm not sure how to solve the problem. Has anyone experienced this and do you have any suggestions?
A:
OK, I found the problem. The packaging task uses reflection for some reason or another. When it finds that your class inherits from a class in another domain, it tries to load it using reflection. However, reflection doesn't do binding policy, so that domain isn't loaded.
The authors of the packaging program could solve this by adding the following code:
AppDomain.CurrentDomain.ReflectionOnlyAssemblyResolve += new ResolveEventHandler(CurrentDomain_ReflectionOnlyAssemblyResolve);
Assembly a = System.Reflection.Assembly.ReflectionOnlyLoadFrom(filename);
static Assembly CurrentDomain_ReflectionOnlyAssemblyResolve(object sender, ResolveEventArgs args)
{
return System.Reflection.Assembly.ReflectionOnlyLoad(args.Name);
}
However, if you need a solution for your project, just add the assemblies to the GAC and it will be able to resolve them.
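For example, from a Visual Studio command prompt (the assembly name is made up; the assembly must be strong-named, which yours already are):
gacutil /i MyCompany.BaseWebParts.dll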
|
Errors creating WebPart subclass in another assembly
|
I am trying to create a subclass of WebPart that will act as a parent to any WebParts we create. If I create an empty class in the same project, I am able to inherit from it as one would expect. However, if I try to place it in another assembly -- one that I've been able to reference and use classes from -- I get the following error:
Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
Other information that may be pertinent (I am not normally a SharePoint developer): I compile the dlls, reference them from the dev project, and copy them into the /bin directory of the SharePoint instance. The assemblies are all signed. I am attempting to deploy using VS2008's 'deploy' feature.
Unfortunately, this does not appear to be a SharePoint specific error, and I'm not sure how to solve the problem. Has anyone experienced this and do you have any suggestions?
|
[
"OK, I found the problem. The packaging task uses reflection for some reason or another. When it finds that your class inherits from a class in another domain, it tries to load it using reflection. However, reflection doesn't do binding policy, so that domain isn't loaded.\nThe authors of the packaging program could solve this by adding the following code:\nAppDomain.CurrentDomain.ReflectionOnlyAssemblyResolve += new ResolveEventHandler(CurrentDomain_ReflectionOnlyAssemblyResolve);\nAssembly a = System.Reflection.Assembly.ReflectionOnlyLoadFrom(filename);\n\nstatic Assembly CurrentDomain_ReflectionOnlyAssemblyResolve(object sender, ResolveEventArgs args)\n{\n return System.Reflection.Assembly.ReflectionOnlyLoad(args.Name);\n}\n\nHowever, if you need a solution for your project, just add the assemblies to the GAC and it will be able to resolve them.\n"
] |
[
1
] |
[] |
[] |
[
"sharepoint",
"web_parts"
] |
stackoverflow_0000069164_sharepoint_web_parts.txt
|
Q:
How do I fix a "broken" debugger in EclipseME (MTJ)?
How do I fix a broken debugger, one that just won't start, in EclipseME (now Mobile Tools Java)?
(This question has an answer which will be transferred from another question soon)
A:
The most annoying issue with EclipseME for me was the "broken" debugger, which just wouldn't start. This is covered in docs, but it took me about an hour to find this tip when I first installed EclipseME, and another hour when I returned to JavaME development a year later, so I decided to share this piece of knowledge here, too.
If the debugger won't start,
open "Java > Debug" section in Eclipse "Preferences" menu, and uncheck "Suspend execution on uncaught exceptions" and "Suspend execution on compilation errors" and
increase the "Debugger timeout" near the bottom of the dialog to at least 15000 ms (so the docs say; in fact, a binary search on this value could find optimal delay for your case).
After that, Eclipse should be able to connect to KVM and run a midlet with a debugger attached.
A:
Most debuggers are just plug-ins that also have a command-line interface; try running the debugger from the command-line and see if it works. If it does, then check the plug-in configuration; you may have to re-install the plug-in.
caveat: I have not used EclipseME, but had similar problems with the Gnu C debugger in Eclipse for Ubuntu.
|
How do I fix a "broken" debugger in EclipseME (MTJ)?
|
How do I fix a broken debugger, one that just won't start, in EclipseME (now Mobile Tools Java)?
(This question has an answer which will be transferred from another question soon)
|
[
"The most annoying issue with EclipseME for me was the \"broken\" debugger, which just wouldn't start. This is covered in docs, but it took me about an hour to find this tip when I first installed EclipseME, and another hour when I returned to JavaME development a year later, so I decided to share this piece of knowledge here, too.\nIf the debugger won't start,\n\nopen \"Java > Debug\" section in Eclipse \"Preferences\" menu, and uncheck \"Suspend execution on uncaught exceptions\" and \"Suspend execution on compilation errors\" and\nincrease the \"Debugger timeout\" near the bottom of the dialog to at least 15000 ms (so the docs say; in fact, a binary search on this value could find optimal delay for your case).\n\nAfter that, Eclipse should be able to connect to KVM and run a midlet with a debugger attached.\n",
"most debuggers are just plug-ins that also have a command-line interface; try running the debugger from the command-line and see if it works. If it does, then check the plug-in configuration; you may have to re-install the plug-in.\ncaveat: I have not used EclipseME, but had similar problems with the Gnu C debugger in Eclipse for Ubuntu.\n"
] |
[
3,
0
] |
[] |
[] |
[
"debugging",
"eclipse",
"eclipseme",
"java_me",
"mtj"
] |
stackoverflow_0000067559_debugging_eclipse_eclipseme_java_me_mtj.txt
|
Q:
Does PHP class property scope overridden by passing as reference?
In PHP, if you return a reference to a protected/private property to a class outside the scope of the property, does the reference override the scope?
e.g.
class foo
{
protected bar = array();
getBar()
{
return &bar;
}
}
class foo2
{
blip = new foo().getBar(); // i know this isn't php
}
Is this correct and is the array bar being passed by reference?
A:
Well, your sample code is not PHP, but yes, if you return a reference to a protected variable, you can use that reference to modify the data outside of the class's scope. Here's an example:
<?php
class foo {
protected $bar;
public function __construct()
{
$this->bar = array();
}
public function &getBar()
{
return $this->bar;
}
}
class foo2 {
var $barReference;
var $fooInstance;
public function __construct()
{
$this->fooInstance = new foo();
$this->barReference = &$this->fooInstance->getBar();
}
}
$testObj = new foo2();
$testObj->barReference[] = 'apple';
$testObj->barReference[] = 'peanut';
?>
<h1>Reference</h1>
<pre><?php print_r($testObj->barReference) ?></pre>
<h1>Object</h1>
<pre><?php print_r($testObj->fooInstance) ?></pre>
When this code is executed, the print_r() results will show that the data stored in $testObj->fooInstance has been modified using the reference stored in $testObj->barReference. However, the catch is that the function must be defined as returning by reference, AND the call must also request a reference. You need them both! Here's the relevant page out of the PHP manual on that:
http://www.php.net/manual/en/language.references.return.php
|
Does PHP class property scope overridden by passing as reference?
|
In PHP, if you return a reference to a protected/private property to a class outside the scope of the property, does the reference override the scope?
e.g.
class foo
{
protected bar = array();
getBar()
{
return &bar;
}
}
class foo2
{
blip = new foo().getBar(); // i know this isn't php
}
Is this correct and is the array bar being passed by reference?
|
[
"Well, your sample code is not PHP, but yes, if you return a reference to a protected variable, you can use that reference to modify the data outside of the class's scope. Here's an example:\n<?php\nclass foo {\n protected $bar;\n\n public function __construct()\n {\n $this->bar = array();\n }\n\n public function &getBar()\n {\n return $this->bar;\n }\n}\n\nclass foo2 {\n\n var $barReference;\n var $fooInstance;\n\n public function __construct()\n {\n $this->fooInstance = new foo();\n $this->barReference = &$this->fooInstance->getBar();\n }\n}\n$testObj = new foo2();\n$testObj->barReference[] = 'apple';\n$testObj->barReference[] = 'peanut';\n?>\n<h1>Reference</h1>\n<pre><?php print_r($testObj->barReference) ?></pre>\n<h1>Object</h1>\n<pre><?php print_r($testObj->fooInstance) ?></pre>\n\nWhen this code is executed, the print_r() results will show that the data stored in $testObj->fooInstance has been modified using the reference stored in $testObj->barReference. However, the catch is that the function must be defined as returning by reference, AND the call must also request a reference. You need them both! Here's the relevant page out of the PHP manual on that:\nhttp://www.php.net/manual/en/language.references.return.php\n"
] |
[
4
] |
[] |
[] |
[
"php",
"reference"
] |
stackoverflow_0000069564_php_reference.txt
|
Q:
Setting movie metadata with QTKit
I'm trying to convert old QuickTime framework code to the 64-bit Cocoa-based QTKit on OS X, which means that I can't drop down to the straight C function calls at any time. Specifically, I'm trying to find a way to write QuickTime VR movies with QTKit, as they require some special metadata to set the display controller. How can I do this with QTKit?
A:
If you have to delve down into the C APIs, you might tackle the limitation to 32-bit builds by moving the QuickTime specific code into a separate, 32-bit process. We do this on Windows and it works quite well ...
A:
As far as I can tell from the QTKit documentation, there is no way to do this with straight QTKit Cocoa calls. You'll need to do this using the QuickTime C APIs, which of course aren't available to 64-bit applications.
I've run into issues like this numerous times when trying to convert a 32-bit app that uses Quicktime into a 64-bit app. Here's hoping that Quicktime X will have a more fully featured QTKit set of APIs.
|
Setting movie metadata with QTKit
|
I'm trying to convert old QuickTime framework code to the 64-bit Cocoa-based QTKit on OS X, which means that I can't drop down to the straight C function calls at any time. Specifically, I'm trying to find a way to write QuickTime VR movies with QTKit, as they require some special metadata to set the display controller. How can I do this with QTKit?
|
[
"If you have to delve down into the C APIs, you might tackle the limitation to 32-bit builds by moving the QuickTime specific code into a separate, 32-bit process. We do this on Windows and it works quite well ...\n",
"As far as I can tell from the QTKit Documentation there is not way to do this in straight QTKit cocoa calls. You'll need to do this using the Quicktime-C APIs, which of course aren't available to 64-bit applications.\nI've run into issues like this numerous times when trying to convert a 32-bit app that uses Quicktime into a 64-bit app. Here's hoping that Quicktime X will have a more fully featured QTKit set of APIs.\n"
] |
[
2,
1
] |
[] |
[] |
[
"macos",
"objective_c",
"qtkit",
"quicktime"
] |
stackoverflow_0000066654_macos_objective_c_qtkit_quicktime.txt
|
Q:
SVN and renaming the server it's running on
I'm running VisualSVN as my SVN server and using TortoiseSVN as the client. I've just renamed the server from mach1 to mach2 and now can't use SVN because it's looking for the repositories at http://mach1:81/ instead of the new name http://mach2:81/
Any idea how to fix this?
A:
Use the "relocate" option provided by Tortoise SVN. Just right click on the upper-most checked out folder, select relocate, and then enter the new URL.
A:
Just change the address of the svn repository using the switch --relocate command.
$svn switch --relocate file:///tmp/repos file:///tmp/newlocation.
In your case it would be $svn switch --relocate http://mach1:81/ http://mach2:81/
A:
First google hit:
svn sw --relocate svn://example1.com:22/name http://example2.com:24/edc
|
SVN and renaming the server it's running on
|
I'm running VisualSVN as my SVN server and using TortoiseSVN as the client. I've just renamed the server from mach1 to mach2 and now can't use SVN because it's looking for the repositories at http://mach1:81/ instead of the new name http://mach2:81/
Any idea how to fix this?
|
[
"Use the \"relocate\" option provided by Tortoise SVN. Just right click on the upper-most checked out folder, select relocate, and then enter the new URL.\n",
"Just change the address of the svn repository using switch --relocate command.\n$svn switch --relocate file:///tmp/repos file:///tmp/newlocation.\nIn your case it would be $svn switch --relocate http://mach1:81/ http://mach2:81/\n",
"First google hit:\nsvn sw --relocate svn://example1.com:22/name http://example2.com:24/edc\n"
] |
[
9,
8,
0
] |
[] |
[] |
[
"svn",
"tortoisesvn",
"visualsvn"
] |
stackoverflow_0000069646_svn_tortoisesvn_visualsvn.txt
|
Q:
How to identify specific digits of an integer input in C?
I need to get the number of digits containing the number 1. I know in java I can take the input as a String and use charAt, but I understand there is no implicit String function in C. How can I accomplish this?
A:
Division and modulus are your friends.
#include "stdio.h"
int main(){
int digits[] = {0,0,0,0,0,0,0,0,0,0};
int i = 11031;
while(i > 0){
digits[i % 10]++;
i = i / 10;
}
printf("There are %d ones.\n", digits[1]);
}
A:
Homework?
You'd read it into a char* using the fread() function, and then store how many bytes were read in a separate variable. Then use a for loop to iterate through the buffer and count how many of each byte are present.
A:
If you have just the number, then you can do this:
int val; //Input
...
int ones = 0;
while(val != 0) {
ones += ((val % 10) == 1) ? 1 : 0;
val /= 10;
}
If you have a string (char*), then you'd do something like this:
while(*str != '\0') {
if(*str++ == '1') {
ones++;
}
}
It's also worth noting that c does have a charAt function, in a way:
"java".charAt(i) == "c the language"[i];
By indexing into the char*, you can get the value you want, but you need to be careful, because there is no indexOutOfBounds exception. The program will crash if you go over the end of a string, or worse it may continue running, but have a messed up internal state.
A:
Try something like...
int digit = 0;
int value = 11031;
while(value > 0)
{
digit = value % 10;
/* Do something with digit... */
value = value / 10;
}
A:
I see this as a basic understanding problem, which inevitably everyone goes through when switching from one language to the next.
A good reference for understanding how strings work in C, when you're coming from Java, is to look at how string.h works. Whereas in Java strings are built-in objects, strings in C are just null-terminated character arrays.
There are a lot of tutorials out there; one that helped me when I was starting earlier in the year was http://www.physics.drexel.edu/students/courses/Comp_Phys/General/C_basics/ - look at the string section.
Sometimes asking a question speeds up learning a lot faster than poring over the textbook for hours on end.
A:
Something along the lines of:
int val=11031;
int count=0;
int i=0;
char buf[100];
sprintf(buf, "%d", val);
for(i=0; (i < sizeof(buf)) && (buf[i]); i++) {
if(buf[i] == '1')
count++;
}
A:
int count_digit(int nr, int digit) {
int count=0;
while(nr>0) {
if(nr%10==digit)
count++;
nr=nr/10;
}
return count;
}
A:
This sounds like a homework problem to me. Oh well, it's your life.
You failed to specify the type of the variable that contains the "input integer". If the input integer is an integral type (say, an "int") try this:
int countOnes(int input)
{
int result = 0;
while(input) {
result += ((input%10)==1);
        input /= 10;
}
return result;
}
If the "input integer" is in a string, try this:
int countOnes(char *input)
{
int result = 0;
while(input && *input) {
result += (*input++ == '1');
}
return result;
}
Hope this helps. Next time, do your own homework. And get off of my lawn! Kids, these days, ...
A:
int countDigit(int Number, int Digit)
{
int counter = 0;
    do
    {
        if( (Number%10) == Digit)
        {
            counter++;
        }
        Number = Number / 10;
    } while(Number > 0);
return counter;
}
A:
Something along the lines of this:
#include <stdio.h>
int main() {
char buf[100];
char *p = buf;
int n = 0;
    scanf("%99s", buf);
while (*p) {
if (*p == '1') {
n++;
}
p++;
}
printf ("'%s' contains %i ones\n", buf, n);
}
A:
This will do it. :-)
int count_digits(int n, int d) {
    int count = 0;
    while (n > 0) {
        if (n % 10 == d) count++;
        n /= 10;
    }
    return count;
}
A:
For all those who refer to the question as the homework question: I have to say, most of you provided a homework answer.
You don't do division/modulus to get the digits in production code, firstly because it's suboptimal (your CPU is designed for binary arithmetic, not decimal) and secondly because it's unintuitive. Even if it's not originally a string, it's often simpler to convert it to one and then count the characters (std::count is the way to go in C++).
|
How to identify specific digits of an integer input in C?
|
I need to get the number of digits containing the number 1. I know in java I can take the input as a String and use charAt, but I understand there is no implicit String function in C. How can I accomplish this?
|
[
"Division and modulus are your friends.\n#include \"stdio.h\"\n\nint main(){\n int digits[] = {0,0,0,0,0,0,0,0,0,0};\n int i = 11031;\n\n while(i > 0){\n digits[i % 10]++;\n i = i / 10;\n }\n\n printf(\"There are %d ones.\\n\", digits[1]);\n}\n\n",
"Homework?\nYou'd read it into a char* using the fread() function, and then store how many bytes were read in a separate variable. Then use a for loop to iterate through the buffer and count how many of each byte are present.\n",
"If you have just the number, then you can do this:\n int val; //Input\n ...\n int ones = 0;\n while(val != 0) {\n ones += ((val % 10) == 1) ? 1 : 0;\n val /= 10;\n }\n\nIf you have a string (char*), the you'd do something like this:\nwhile(*str != '\\0') {\n if(*str++ == '1') {\n ones++;\n }\n}\n\nIt's also worth noting that c does have a charAt function, in a way:\n\"java\".charAt(i) == \"c the language\"[i];\n\nBy indexing into the char*, you can get the value you want, but you need to be careful, because there is no indexOutOfBounds exception. The program will crash if you go over the end of a string, or worse it may continue running, but have a messed up internal state.\n",
"Try something like...\nint digit = 0;\nint value = 11031;\n\nwhile(value > 0)\n{\n digit = value % 10;\n /* Do something with digit... */\n value = value / 10;\n}\n\n",
"I see this as a basic understanding problem, which inevitably everyone goes through switching from one language to the next. \nA good reference to go through to understand how string's work in C when you've started familiarity with java is look at how string.h works. Where as in java string's are an Object and built in, strings in C are just integer arrays.\nThere are a lot of tutorials out there, one that helped me when I was starting earlier in the year was http://www.physics.drexel.edu/students/courses/Comp_Phys/General/C_basics/ look at the string section.\nSometimes asking a question speeds up learning a lot faster than pouring through the text book for hours on end.\n",
"Something along the lines of:\nint val=11031;\nint count=0;\nint i=0;\nchar buf[100];\nsprint(buf, \"%d\", val);\nfor(i=0; (i < sizeof(buf)) && (buf[i]); i++) {\n if(buf[i] == '1')\n count++;\n}\n\n",
"int count_digit(int nr, int digit) {\n int count=0;\n while(nr>0) {\n if(nr%10==digit)\n count++;\n nr=nr/10;\n }\n return count;\n}\n\n",
"This sounds like a homework problem to me. Oh well, it's your life.\nYou failed to specify the type of the variable that contains the \"input integer\". If the input integer is an integral type (say, an \"int\") try this:\nint countOnes(int input)\n{\n int result = 0;\n while(input) {\n result += ((input%10)==1);\n result /= 10;\n }\n return result;\n}\n\nIf the \"input integer\" is in a string, try this:\nint countOnes(char *input)\n{\n int result = 0;\n while(input && *input) {\n result += (*input++ == '1');\n }\n return result;\n}\n\nHope this helps. Next time, do your own homework. And get off of my lawn! Kids, these days, ...\n",
"int countDigit(int Number, int Digit)\n{\n int counter = 0;\n\n do\n {\n if( (Number%10) == Digit)\n {\n counter++;\n }\n }while(Digit>0)\n\n return counter;\n}\n\n",
"Something along the lines of this:\n#include <stdio.h>\n\nmain() {\n char buf[100];\n char *p = buf;\n int n = 0;\n scanf(\"%s\", buf);\n while (*p) {\n if (*p == '1') {\n n++;\n }\n p++;\n }\n printf (\"'%s' contains %i ones\\n\", buf, n);\n}\n\n",
"This will do it. :-)\nint count_digits(int n, int d) {\n int count = 0;\n while(n*10/=10) if (n%10==d) count++\n return count;\n}\n\n",
"For all those who refer to the question as the homework question: I have to say, most of you provided a homework answer. \nYou don't do division/modulus to get the digits in production code, firstly because it's suboptimal (your CPU is designed for binary arithmetics not decimal) and secondly because it's unintuitive. Even if it's not originally a string, it's more optimal to convert it to one and then count the characters (std::count is the way to go in C++).\n"
] |
[
6,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c",
"function",
"string"
] |
stackoverflow_0000066107_c_function_string.txt
|
Q:
Is it possible to cache a value evaluated in a lambda expression?
In the ContainsIngredients method in the following code, is it possible to cache the p.Ingredients value instead of explicitly referencing it several times? This is a fairly trivial example that I just cooked up for illustrative purposes, but the code I'm working on references values deep inside p eg. p.InnerObject.ExpensiveMethod().Value
edit:
I'm using the PredicateBuilder from http://www.albahari.com/nutshell/predicatebuilder.html
public class IngredientBag
{
private readonly Dictionary<string, string> _ingredients = new Dictionary<string, string>();
public void Add(string type, string name)
{
_ingredients.Add(type, name);
}
public string Get(string type)
{
return _ingredients[type];
}
public bool Contains(string type)
{
return _ingredients.ContainsKey(type);
}
}
public class Potion
{
public IngredientBag Ingredients { get; private set;}
public string Name {get; private set;}
public Potion(string name) : this(name, null)
{
}
public Potion(string name, IngredientBag ingredients)
{
Name = name;
Ingredients = ingredients;
}
public static Expression<Func<Potion, bool>>
ContainsIngredients(string ingredientType, params string[] ingredients)
{
var predicate = PredicateBuilder.False<Potion>();
// Here, I'm accessing p.Ingredients several times in one
// expression. Is there any way to cache this value and
// reference the cached value in the expression?
foreach (var ingredient in ingredients)
{
var temp = ingredient;
predicate = predicate.Or (
p => p.Ingredients != null &&
p.Ingredients.Contains(ingredientType) &&
p.Ingredients.Get(ingredientType).Contains(temp));
}
return predicate;
}
}
[STAThread]
static void Main()
{
var potions = new List<Potion>
{
new Potion("Invisibility", new IngredientBag()),
new Potion("Bonus"),
new Potion("Speed", new IngredientBag()),
new Potion("Strength", new IngredientBag()),
new Potion("Dummy Potion")
};
potions[0].Ingredients.Add("solid", "Eye of Newt");
potions[0].Ingredients.Add("liquid", "Gall of Peacock");
potions[0].Ingredients.Add("gas", "Breath of Spider");
potions[2].Ingredients.Add("solid", "Hair of Toad");
potions[2].Ingredients.Add("gas", "Peacock's anguish");
potions[3].Ingredients.Add("liquid", "Peacock Sweat");
potions[3].Ingredients.Add("gas", "Newt's aura");
var predicate = Potion.ContainsIngredients("solid", "Newt", "Toad")
.Or(Potion.ContainsIngredients("gas", "Spider", "Scorpion"));
foreach (var result in
from p in potions
where(predicate).Compile()(p)
select p)
{
Console.WriteLine(result.Name);
}
}
A:
Have you considered Memoization?
The basic idea is this; if you have an expensive function call, there is a function which will calculate the expensive value on first call, but return a cached version thereafter. The function looks like this;
static Func<T> Remember<T>(Func<T> GetExpensiveValue)
{
bool isCached= false;
T cachedResult = default(T);
return () =>
{
if (!isCached)
{
cachedResult = GetExpensiveValue();
isCached = true;
}
return cachedResult;
};
}
This means you can write this;
// here's something that takes ages to calculate
Func<string> MyExpensiveMethod = () =>
{
System.Threading.Thread.Sleep(5000);
return "that took ages!";
};
// and heres a function call that only calculates it the once.
Func<string> CachedMethod = Remember(() => MyExpensiveMethod());
// only the first line takes five seconds;
// the second and third calls are instant.
Console.WriteLine(CachedMethod());
Console.WriteLine(CachedMethod());
Console.WriteLine(CachedMethod());
As a general strategy, it might help.
A:
Can't you simply write your boolean expression in a separate static function which you call from your lambda - passing p.Ingredients as a parameter...
private static bool IsIngredientPresent(IngredientBag i, string ingredientType, string ingredient)
{
return i != null && i.Contains(ingredientType) && i.Get(ingredientType).Contains(ingredient);
}
public static Expression<Func<Potion, bool>>
ContainsIngredients(string ingredientType, params string[] ingredients)
{
var predicate = PredicateBuilder.False<Potion>();
// Here, I'm accessing p.Ingredients several times in one
// expression. Is there any way to cache this value and
// reference the cached value in the expression?
foreach (var ingredient in ingredients)
{
var temp = ingredient;
predicate = predicate.Or(
p => IsIngredientPresent(p.Ingredients, ingredientType, temp));
}
return predicate;
}
A:
Well, in this case, if you can't use Memoization, you're rather restricted since you can really only use the stack as your cache: You've got no way to declare a new variable at the scope you'll need. All I can think of (and I'm not claiming it will be pretty) that will do what you want but retain the composability you need would be something like...
private static bool TestWith<T>(T cached, Func<T, bool> predicate)
{
return predicate(cached);
}
public static Expression<Func<Potion, bool>>
ContainsIngredients(string ingredientType, params string[] ingredients)
{
var predicate = PredicateBuilder.False<Potion>();
// Here, I'm accessing p.Ingredients several times in one
// expression. Is there any way to cache this value and
// reference the cached value in the expression?
foreach (var ingredient in ingredients)
{
var temp = ingredient;
predicate = predicate.Or (
p => TestWith(p.Ingredients,
i => i != null &&
i.Contains(ingredientType) &&
                 i.Get(ingredientType).Contains(temp)));
}
return predicate;
}
You could combine together the results from multiple TestWith calls into a more complex boolean expression where required - caching the appropriate expensive value with each call - or you can nest them within the lambdas passed as the second parameter to deal with your complex deep hierarchies.
The code would be quite hard to read, though, and since you might be introducing a bunch more stack transitions with all the TestWith calls, whether it improves performance would depend on just how expensive your ExpensiveCall() was.
As a note, there won't be any inlining in the original example as suggested by another answer since the expression compiler doesn't do that level of optimisation as far as I know.
A:
I would say no in this case. I assume that the compiler can figure out that it uses the p.Ingredients variable 3 times and will keep the variable close by on the stack or in registers or whatever it uses.
A:
Turbulent Intellect has exactly the right answer.
I just want to advise that you can strip some of the nulls and exceptions out of the types you are using to make it friendlier to use them.
public class IngredientBag
{
private Dictionary<string, string> _ingredients =
new Dictionary<string, string>();
public void Add(string type, string name)
{
_ingredients[type] = name;
}
public string Get(string type)
{
return _ingredients.ContainsKey(type) ? _ingredients[type] : null;
}
public bool Has(string type, string name)
{
return name == null ? false : this.Get(type) == name;
}
}
public Potion(string name) : this(name, new IngredientBag()) { }
Then, if you have the query parameters in this structure...
Dictionary<string, List<string>> ingredients;
You can write the query like this.
from p in Potions
where ingredients.Any(i => i.Value.Any(v => p.Ingredients.Has(i.Key, v)))
select p;
PS, why readonly?
|
Is it possible to cache a value evaluated in a lambda expression?
|
In the ContainsIngredients method in the following code, is it possible to cache the p.Ingredients value instead of explicitly referencing it several times? This is a fairly trivial example that I just cooked up for illustrative purposes, but the code I'm working on references values deep inside p eg. p.InnerObject.ExpensiveMethod().Value
edit:
I'm using the PredicateBuilder from http://www.albahari.com/nutshell/predicatebuilder.html
public class IngredientBag
{
private readonly Dictionary<string, string> _ingredients = new Dictionary<string, string>();
public void Add(string type, string name)
{
_ingredients.Add(type, name);
}
public string Get(string type)
{
return _ingredients[type];
}
public bool Contains(string type)
{
return _ingredients.ContainsKey(type);
}
}
public class Potion
{
public IngredientBag Ingredients { get; private set;}
public string Name {get; private set;}
public Potion(string name) : this(name, null)
{
}
public Potion(string name, IngredientBag ingredients)
{
Name = name;
Ingredients = ingredients;
}
public static Expression<Func<Potion, bool>>
ContainsIngredients(string ingredientType, params string[] ingredients)
{
var predicate = PredicateBuilder.False<Potion>();
// Here, I'm accessing p.Ingredients several times in one
// expression. Is there any way to cache this value and
// reference the cached value in the expression?
foreach (var ingredient in ingredients)
{
var temp = ingredient;
predicate = predicate.Or (
p => p.Ingredients != null &&
p.Ingredients.Contains(ingredientType) &&
p.Ingredients.Get(ingredientType).Contains(temp));
}
return predicate;
}
}
[STAThread]
static void Main()
{
var potions = new List<Potion>
{
new Potion("Invisibility", new IngredientBag()),
new Potion("Bonus"),
new Potion("Speed", new IngredientBag()),
new Potion("Strength", new IngredientBag()),
new Potion("Dummy Potion")
};
potions[0].Ingredients.Add("solid", "Eye of Newt");
potions[0].Ingredients.Add("liquid", "Gall of Peacock");
potions[0].Ingredients.Add("gas", "Breath of Spider");
potions[2].Ingredients.Add("solid", "Hair of Toad");
potions[2].Ingredients.Add("gas", "Peacock's anguish");
potions[3].Ingredients.Add("liquid", "Peacock Sweat");
potions[3].Ingredients.Add("gas", "Newt's aura");
var predicate = Potion.ContainsIngredients("solid", "Newt", "Toad")
.Or(Potion.ContainsIngredients("gas", "Spider", "Scorpion"));
foreach (var result in
from p in potions
where(predicate).Compile()(p)
select p)
{
Console.WriteLine(result.Name);
}
}
|
[
"Have you considered Memoization? \nThe basic idea is this; if you have an expensive function call, there is a function which will calculate the expensive value on first call, but return a cached version thereafter. The function looks like this;\nstatic Func<T> Remember<T>(Func<T> GetExpensiveValue)\n{\n bool isCached= false;\n T cachedResult = default(T);\n\n return () =>\n {\n if (!isCached)\n {\n cachedResult = GetExpensiveValue();\n isCached = true;\n }\n return cachedResult;\n\n };\n}\n\nThis means you can write this;\n // here's something that takes ages to calculate\n Func<string> MyExpensiveMethod = () => \n { \n System.Threading.Thread.Sleep(5000); \n return \"that took ages!\"; \n };\n\n // and heres a function call that only calculates it the once.\n Func<string> CachedMethod = Remember(() => MyExpensiveMethod());\n\n // only the first line takes five seconds; \n // the second and third calls are instant.\n Console.WriteLine(CachedMethod());\n Console.WriteLine(CachedMethod());\n Console.WriteLine(CachedMethod());\n\nAs a general strategy, it might help.\n",
"Can't you simply write your boolean expression in a separate static function which you call from your lambda - passing p.Ingredients as a parameter...\nprivate static bool IsIngredientPresent(IngredientBag i, string ingredientType, string ingredient)\n{\n return i != null && i.Contains(ingredientType) && i.Get(ingredientType).Contains(ingredient);\n}\n\npublic static Expression<Func<Potion, bool>>\n ContainsIngredients(string ingredientType, params string[] ingredients)\n{\n var predicate = PredicateBuilder.False<Potion>();\n // Here, I'm accessing p.Ingredients several times in one \n // expression. Is there any way to cache this value and\n // reference the cached value in the expression?\n foreach (var ingredient in ingredients)\n {\n var temp = ingredient;\n predicate = predicate.Or(\n p => IsIngredientPresent(p.Ingredients, ingredientType, temp));\n }\n\n return predicate;\n}\n\n",
"Well, in this case, if you can't use Memoization, you're rather restricted since you can really only use the stack as your cache: You've got no way to declare a new variable at the scope you'll need. All I can think of (and I'm not claiming it will be pretty) that will do what you want but retain the composability you need would be something like...\nprivate static bool TestWith<T>(T cached, Func<T, bool> predicate)\n{\n return predicate(cached);\n}\n\npublic static Expression<Func<Potion, bool>>\n ContainsIngredients(string ingredientType, params string[] ingredients)\n{\n var predicate = PredicateBuilder.False<Potion>();\n // Here, I'm accessing p.Ingredients several times in one \n // expression. Is there any way to cache this value and\n // reference the cached value in the expression?\n foreach (var ingredient in ingredients)\n {\n var temp = ingredient;\n predicate = predicate.Or (\n p => TestWith(p.Ingredients,\n i => i != null &&\n i.Contains(ingredientType) &&\n i.Get(ingredientType).Contains(temp));\n }\n\n return predicate;\n}\n\nYou could combine together the results from multiple TestWith calls into a more complex boolean expression where required - caching the appropriate expensive value with each call - or you can nest them within the lambdas passed as the second parameter to deal with your complex deep hierarchies.\nIt would be quite hard to read code though and since you might be introducing a bunch more stack transitions with all the TestWith calls, whether it improves performance would depend on just how expensive your ExpensiveCall() was.\nAs a note, there won't be any inlining in the original example as suggested by another answer since the expression compiler doesn't do that level of optimisation as far as I know.\n",
"I would say no in this case. I assume that the compiler can figure out that it uses the p.Ingredients variable 3 times and will keep the variable closeby on the stack or the registers or whatever it uses.\n",
"Turbulent Intellect has the exactly right answer.\nI just want to advise that you can strip some of the nulls and exceptions out of the types you are using to make it friendlier to use them.\n public class IngredientBag\n {\n private Dictionary<string, string> _ingredients = \nnew Dictionary<string, string>();\n public void Add(string type, string name)\n {\n _ingredients[type] = name;\n }\n public string Get(string type)\n {\n return _ingredients.ContainsKey(type) ? _ingredients[type] : null;\n }\n public bool Has(string type, string name)\n {\n return name == null ? false : this.Get(type) == name;\n }\n }\n\n public Potion(string name) : this(name, new IngredientBag()) { }\n\nThen, if you have the query parameters in this structure...\nDictionary<string, List<string>> ingredients;\n\nYou can write the query like this.\nfrom p in Potions\nwhere ingredients.Any(i => i.Value.Any(v => p.IngredientBag.Has(i.Key, v))\nselect p;\n\nPS, why readonly?\n"
] |
[
10,
2,
1,
0,
0
] |
[] |
[] |
[
"c#",
"lambda",
"linq",
"predicate"
] |
stackoverflow_0000066382_c#_lambda_linq_predicate.txt
|
Q:
Emacs query-replace with textual transformation
I want to find any text in a file that matches a regexp of the form t[A-Z]u (i.e., a match t followed by a capital letter and another match u), and transform the matched text so that the capital letter is lowercase. For example, for the regexp x[A-Z]y
xAy
becomes
xay
and
xZy
becomes
xzy
Emacs' query-replace function allows back-references, but AFAIK not the transformation of the matched text. Is there a built-in function that does this? Does anybody have a short Elisp function I could use?
UPDATE
@Marcel Levy has it: \, in a replacement expression introduces an (arbitrary?) Elisp expression. E.g., the solution to the above is
M-x replace-regexp <RET> x\([A-Z]\)y <RET> x\,(downcase \1)y
A:
It looks like Steve Yegge actually already posted the answer to this a few years back: "Shiny and New: Emacs 22." Scroll down to "Changing Case in Replacement Strings" and you'll see his example code using the replace-regexp function.
The general answer is that you use "\," to call any lisp expression as part of the replacement string, as in \,(capitalize \1). Reading the help text, it looks like it's only in interactive mode, but that seems like the one place where this would be most necessary.
A:
An alternative to qrr in this case is recording a macro and replaying it. (isearch-forward-regexp, select the character, downcase-region.) I find on the fly macros easier, since you get immediate feedback if your regexp is wrong.
A:
I'd do this with a macro as well, but only because executing code from within a replacement string for a regular expression is very unintuitive to me. If you're writing a batch script or something that needs to go very fast, \, is certainly the way to go.
|
Emacs query-replace with textual transformation
|
I want to find any text in a file that matches a regexp of the form t[A-Z]u (i.e., a match t followed by a capital letter and another match u), and transform the matched text so that the capital letter is lowercase. For example, for the regexp x[A-Z]y
xAy
becomes
xay
and
xZy
becomes
xzy
Emacs' query-replace function allows back-references, but AFAIK not the transformation of the matched text. Is there a built-in function that does this? Does anybody have a short Elisp function I could use?
UPDATE
@Marcel Levy has it: \, in a replacement expression introduces an (arbitrary?) Elisp expression. E.g., the solution to the above is
M-x replace-regexp <RET> x\([A-Z]\)y <RET> x\,(downcase \1)y
|
[
"It looks like Steve Yegge actually already posted the answer to this a few years back: \"Shiny and New: Emacs 22.\" Scroll down to \"Changing Case in Replacement Strings\" and you'll see his example code using the replace-regexp function.\nThe general answer is that you use \"\\,\" to call any lisp expression as part of the replacement string, as in \\,(capitalize \\1). Reading the help text, it looks like it's only in interactive mode, but that seems like the one place where this would be most necessary.\n",
"An alternative to qrr in this case is recording a macro and replaying it. (isearch-forward-regexp, select the character, downcase-region.) I find on the fly macros easier, since you get immediate feedback if your regexp is wrong.\n",
"I'd do this with a macro as well, but only because executing code from within a replacement string for a regular expression is very unintuitive to me. If you're writing a batch script or something that needs to go very fast, \\, is certainly the way to go.\n"
] |
[
14,
1,
0
] |
[] |
[] |
[
"emacs",
"regex"
] |
stackoverflow_0000057751_emacs_regex.txt
|
Q:
How can I get a more compact serialization of an image?
I am serializing a JPEG Image in c#.net. I am simply converting it into a byte stream and sending it through a web service.
I observed that the serialized byte stream is 30 times the size of the actual image. Can anyone suggest a better approach to serialize it while staying close to the size of the actual image?
A:
JPEG is a compression technology, and it is expected that it will expand greatly once you read it in. This is the nature of the file format. Try to find a way to send the original JPEG file without reading it as an image first.
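For illustration, here is a minimal sketch of that idea for an ASMX-style service (the class name, method name and path parameter are hypothetical, not from the question): read the already-compressed JPEG bytes straight from disk and return them, instead of loading the file into a System.Drawing.Image first.
using System.IO;
using System.Web.Services;

public class ImageService : WebService
{
    // Returns the JPEG exactly as it sits on disk; no decode/re-encode step,
    // so the payload stays close to the original file size.
    [WebMethod]
    public byte[] GetImageBytes(string path)
    {
        return File.ReadAllBytes(path);
    }
}
SOAP will still base64-encode the byte array, which adds roughly a third to the transfer size, but nothing like the 30x blow-up you see when a decoded bitmap gets serialized.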
A:
You need to read the original image stream using a FileStream and then pass it to the serializer using a MemoryStream.
If you can only use the Image class, then try to specify the output format of the byte array you're receiving.
A:
Consider using WCF streaming. I didn't notice an overhead transmitting files via this service.
MSDN:
Large Data and Streaming
A:
Disclaimer: Non-informed person speaking
It's a tradeoff between openness/standards and performance. Maybe you're using something like SOAP that adds a lot of protocol overhead bytes to the data packet. If size is a vital constraint, try sending it across as a pure binary stream... maybe someone else can pitch in with the actual syntax.
A:
And if the size of the images you send via web services can be large, maybe you can take a look at MTOM. It's a WS-* standard for optimizing the size of messages with binary attachments. It's now well integrated in stacks like Axis2 or Metro for Java, and in .NET:
http://msdn.microsoft.com/en-us/library/aa528822.aspx (wse 3.0)
http://msdn.microsoft.com/en-us/library/ms733742.aspx (wcf)
A:
Maybe just host the images on a web server and send a URL in the web service reply rather than the serialised image. This will also allow the client to cache the image locally when it can.
|
How can I get a more compact serialization of an image?
|
I am serializing a JPEG Image in c#.net. I am simply converting it into a byte stream and sending it through a web service.
I observed that the serialized byte stream is 30 times the size of the actual image. Can anyone suggest a better approach to serialize it while staying close to the size of the actual image?
|
[
"JPEG is a compression technology, and it is expected that it will expand greatly once you read it in. This is the nature of the file format. Try to find a way to send the original JPEG file without reading it as an image first.\n",
"\nYou need to read original image stream using FileStream and then pass it to the Serializer using MemoryStream.\nIf you can only use Image class the try to specify output format of byte array you're receiving.\n\n",
"Consider using WCF streaming. I didn't notice an overhead transmitting files via this service.\nMSDN:\nLarge Data and Streaming\n",
"Disclaimer: Non-informed person speaking\nIts a tradeoff between openness/standards and performance.. Maybe you're using something like SOAP that adds a lot of protocol overhead bytes to the data packet. If size is a vital constraint, try sending it across as a pure binary stream... the actual syntax maybe someone else can pitch in.\n",
"And if the size of the images you send by webservices can be large, maybe you can take a look to MTOM. It's a WS-* standard to optimize the size of message with binary attachments. It's now very integrated in stacks like Axis2 or Metro for Java or in .NET :\nhttp://msdn.microsoft.com/en-us/library/aa528822.aspx (wse 3.0)\nhttp://msdn.microsoft.com/en-us/library/ms733742.aspx (wcf)\n",
"Maybe just host the images on a web server and send a URL in the web service reply rather than the serialised image. This will also allow the client to cache the image locally when it can.\n"
] |
[
2,
2,
1,
0,
0,
0
] |
[
"Why not convert it to a Base64String?\nbyte[] arr = File.ReadAllBytes(filename);\nstring str = Convert.ToBase64String(arr);\n\nOn the other end you can change it back to a byte[] by going:\nbyte[] arr = Convert.FromBase64String(string);\n\n"
] |
[
-2
] |
[
".net",
"serialization",
"web_services"
] |
stackoverflow_0000069637_.net_serialization_web_services.txt
|
Q:
stringstream manipulators & vstudio 2003
I am trying to use a stringstream object in VC++ (VStudio 2003) but I am getting an error when I use the overloaded << operator to try and set some manipulators.
I am trying the following:
int SomeInt = 1;
stringstream StrStream;
StrStream << std::setw(2) << SomeInt;
This will not compile (error C2593: 'operator <<' is ambiguous).
Does VStudio 2003 support using manipulators in this way?
I know that I can just set the width directly on the stringstream object e.g. StrStream.width(2);
I was wondering why the more usual method doesn't work?
A:
Are you sure you included all of the right headers? The following compiles for me in VS2003:
#include <iostream>
#include <sstream>
#include <iomanip>
int main()
{
int SomeInt = 1;
std::stringstream StrStream;
StrStream << std::setw(2) << SomeInt;
return 0;
}
A:
I love this reference site for stream questions like this.
/Allan
A:
You probably just forgot to include iomanip, but I can't be sure because you didn't include code for a complete program there.
This complete program works fine over here using VS 2003:
#include <sstream>
#include <iomanip>
int main()
{
int SomeInt = 1;
std::stringstream StrStream;
StrStream << std::setw(2) << SomeInt;
}
|
stringstream manipulators & vstudio 2003
|
I am trying to use a stringstream object in VC++ (VStudio 2003) but I am getting an error when I use the overloaded << operator to try and set some manipulators.
I am trying the following:
int SomeInt = 1;
stringstream StrStream;
StrStream << std::setw(2) << SomeInt;
This will not compile (error C2593: 'operator <<' is ambiguous).
Does VStudio 2003 support using manipulators in this way?
I know that I can just set the width directly on the stringstream object e.g. StrStream.width(2);
I was wondering why the more usual method doesn't work?
|
[
"Are you sure you included all of the right headers? The following compiles for me in VS2003:\n#include <iostream>\n#include <sstream>\n#include <iomanip>\n\nint main()\n{\n int SomeInt = 1;\n std::stringstream StrStream;\n StrStream << std::setw(2) << SomeInt;\n return 0;\n}\n\n",
"I love this reference site for stream questions like this.\n/Allan\n",
"You probably just forgot to include iomanip, but I can't be sure because you didn't include code for a complete program there.\nThis complete program works fine over here using VS 2003:\n#include <sstream>\n#include <iomanip>\n\nint main()\n{\n int SomeInt = 1;\n std::stringstream StrStream;\n StrStream << std::setw(2) << SomeInt;\n}\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"stl",
"visual_studio"
] |
stackoverflow_0000069695_stl_visual_studio.txt
|
Q:
Any way to programmatically wrap a .NET WebService with a SoapExtension?
Basically, I'm trying to tap into the Soap pipeline in .NET 2.0 - I want to do what a SoapExtension can do if you provide a custom SoapExtensionAttribute... but to do it for every SOAP call without having to add the extension attribute to dozens of WebMethods.
What I'm looking for is any extension point that lets me hook in as:
void ProcessMessage(SoapMessage message)
without needing to individually decorate each WebMethod. It's even fine if I have to only annotate the WebServices - I only have a few of those.
A:
There is a configuration element, soapExtensionTypes, that does this, but it affects all web services covered by the .config (all of them in the same directory or a sub-directory of the .config).
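For reference, the element lives under system.web/webServices in web.config; here is a sketch (the extension type and assembly names are hypothetical):
<configuration>
  <system.web>
    <webServices>
      <soapExtensionTypes>
        <add type="MyCompany.Soap.TraceExtension, MyCompany.Soap"
             priority="1" group="0" />
      </soapExtensionTypes>
    </webServices>
  </system.web>
</configuration>
With this in place, the extension's ProcessMessage runs for every WebMethod covered by that web.config, with no per-method attributes needed.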
|
Any way to programmatically wrap a .NET WebService with a SoapExtension?
|
Basically, I'm trying to tap into the Soap pipeline in .NET 2.0 - I want to do what a SoapExtension can do if you provide a custom SoapExtensionAttribute... but to do it for every SOAP call without having to add the extension attribute to dozens of WebMethods.
What I'm looking for is any extension point that lets me hook in as:
void ProcessMessage(SoapMessage message)
without needing to individually decorate each WebMethod. It's even fine if I have to only annotate the WebServices - I only have a few of those.
|
[
"There is a configuration property, soapExtensionTypes that does this, but affects all web services covered by the .config (all of them in the same directory or a sub-directory as the .config)\n"
] |
[
2
] |
[] |
[] |
[
".net",
"c#",
"web_services"
] |
stackoverflow_0000069753_.net_c#_web_services.txt
|
Q:
Rhino Mocks: How do I return numbers from a sequence
I have an Enumerable array
int[] meas = new int[] {3, 6, 9, 12, 15, 18};
On each successive call to the mock's method that I'm testing I want to return a value from that array.
using(_mocks.Record()) {
Expect.Call(mocked_class.GetValue()).Return(meas);
}
using(_mocks.Playback()) {
foreach(var i in meas)
    Assert.AreEqual(i, mocked_class.GetValue());
}
Does anyone have an idea how I can do this?
A:
There is always the static fake object approach, but this question is about Rhino Mocks, so I'll present the way I'd do it.
The trick is that you create a local variable as the counter, and use it in your anonymous delegate/lambda to keep track of where you are on the array. Notice that I didn't handle the case that GetValue() is called more than 6 times.
var meas = new int[] { 3, 6, 9, 12, 15, 18 };
using (mocks.Record())
{
int forMockMethod = 0;
SetupResult.For(mocked_class.GetValue()).Do(
new Func<int>(() => meas[forMockMethod++])
);
}
using(mocks.Playback())
{
foreach (var i in meas)
Assert.AreEqual(i, mocked_class.GetValue());
}
A:
If the functionality is that GetValue() returns each array element in succession, then you should be able to set up multiple expectations, e.g.
using(_mocks.Record()) {
Expect.Call(mocked_class.GetValue()).Return(3);
Expect.Call(mocked_class.GetValue()).Return(6);
Expect.Call(mocked_class.GetValue()).Return(9);
Expect.Call(mocked_class.GetValue()).Return(12);
Expect.Call(mocked_class.GetValue()).Return(15);
Expect.Call(mocked_class.GetValue()).Return(18);
}
using(_mocks.Playback()) {
foreach(var i in meas)
    Assert.AreEqual(i, mocked_class.GetValue());
}
The mock repository will apply the expectations in order.
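If the expected values already live in an array, the same set of expectations can be recorded in a loop instead of being written out by hand (a sketch reusing the question's own variables):
using(_mocks.Record()) {
    // One expectation per element; they are consumed in the order recorded.
    foreach(var value in meas)
    {
        Expect.Call(mocked_class.GetValue()).Return(value);
    }
}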
A:
IMHO, yield will handle this.
Link.
Something like:
IEnumerable<int> GetNext() {
    foreach (int x in meas) {
        yield return x;
    }
}
A:
Any reason why you must have a mock here...
If not, I would go for a fake class.. Much Simpler and I know how to get it to do this :)
I don't know if mock frameworks provide this kind of custom behavior.
|
Rhino Mocks: How do I return numbers from a sequence
|
I have an Enumerable array
int[] meas = new int[] {3, 6, 9, 12, 15, 18};
On each successive call to the mock's method that I'm testing I want to return a value from that array.
using(_mocks.Record()) {
Expect.Call(mocked_class.GetValue()).Return(meas);
}
using(_mocks.Playback()) {
foreach(var i in meas)
    Assert.AreEqual(i, mocked_class.GetValue());
}
Does anyone have an idea how I can do this?
|
[
"There is alway static fake object, but this question is about rhino-mocks, so I present you with the way I'll do it.\nThe trick is that you create a local variable as the counter, and use it in your anonymous delegate/lambda to keep track of where you are on the array. Notice that I didn't handle the case that GetValue() is called more than 6 times.\nvar meas = new int[] { 3, 6, 9, 12, 15, 18 };\nusing (mocks.Record())\n{\n int forMockMethod = 0;\n SetupResult.For(mocked_class.GetValue()).Do(\n new Func<int>(() => meas[forMockMethod++])\n );\n}\n\nusing(mocks.Playback())\n{\n foreach (var i in meas)\n Assert.AreEqual(i, mocked_class.GetValue());\n}\n\n",
"If the functionality is the GetValue() returns each array element in succession then you should be able to set up multiple expectations eg\nusing(_mocks.Record()) {\n Expect.Call(mocked_class.GetValue()).Return(3); \n Expect.Call(mocked_class.GetValue()).Return(6); \n Expect.Call(mocked_class.GetValue()).Return(9); \n Expect.Call(mocked_class.GetValue()).Return(12); \n Expect.Call(mocked_class.GetValue()).Return(15); \n Expect.Call(mocked_class.GetValue()).Return(18); \n}\nusing(_mocks.Playback()) {\n foreach(var i in meas)\n Assert.AreEqual(i, mocked_class.GetValue(); \n}\n\nThe mock repository will apply the expectations in order.\n",
"IMHO, yield will handle this.\nLink.\nSomething like:\nget_next() {\n foreach( float x in meas ) {\n yield x;\n }\n}\n\n",
"Any reason why you must have a mock here...\nIf not, I would go for a fake class.. Much Simpler and I know how to get it to do this :) \nI don't know if mock frameworks provide this kind of custom behavior.\n"
] |
[
4,
1,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"rhino_mocks"
] |
stackoverflow_0000069277_.net_c#_rhino_mocks.txt
|
Q:
What would cause a visitor to return to the top of the previous page, instead of to the point in the page where the link resides?
I've seen this weird behavior on several sites recently: I scroll down a page and follow a link to another page. When I click the Back button and return, I am left back at the top of the previous page, not at the link. This is very annoying if I'm clicking on links in a search results page or a list of "10 Best Foo Bars...".
See this page as an example. Strangely, the page works as expected in IE6 on WinXP, but not on FF2 on the same machine. On Mac OS X 10.4 it works in FF2, but not in FF3. I checked for any weird preference settings, but I can't find any that are different.
Any idea what is causing this?
A:
Many sites have a text box (for searching the site, or something) that is set to automatically take focus when the page loads (using javascript or something). In many browsers, the page will jump to that text box when it gets focus.
It really is very annoying :(
A:
Typically this behaviour is caused by the site setting a cache lifetime that is very short or zero.
On many sites, when you hit "back" you get brought back to the link you clicked, because your browser is pulling the page from its cache. If the page is no longer in the cache, a new page request is made, and the browser treats it as fresh content.
On the page linked above, the "Expires" header seems to be set to less than a minute ahead of my local clock, which is causing my browser to get a fresh copy when I hit "back" after that expiry time.
|
What would cause a visitor to return to the top of the previous page, instead of to the point in the page where the link resides?
|
I've seen this weird behavior on several sites recently: I scroll down a page and follow a link to another page. When I click the Back button and return, I am left back at the top of the previous page, not at the link. This is very annoying if I'm clicking on links in a search results page or a list of "10 Best Foo Bars...".
See this page as an example. Strangely, the page works as expected in IE6 on WinXP, but not on FF2 on the same machine. On Mac OS X 10.4 it works in FF2, but not in FF3. I checked for any weird preference settings, but I can't find any that are different.
Any idea what is causing this?
|
[
"Many sites have a text box (for searching the site, or something) that is set to automatically take focus when the page loads (using javascript or something). In many browsers, the page will jump to that text box when it gets focus.\nIt really is very annoying :(\n",
"Typically this behaviour is caused by the browser cache set by the site having a small or no time before expiry.\nOn many sites, when you hit \"back\" you get brought back to the link you hit, as your browser is pulling the page from your cache. If this cache has not been set, a new page request is made, and the browser treats it as fresh content.\nOn the page linked above, the \"Expires\" header seems to be set to less than a minute ahead of my local clock, which is causing my browser to get a fresh copy when I hit \"back\" after that expiry time.\n"
] |
[
3,
1
] |
[] |
[] |
[
"browser_history",
"html"
] |
stackoverflow_0000069837_browser_history_html.txt
|
Q:
How much database performance overhead when using LINQ?
How much database performance overhead is involved with using C# and LINQ compared to custom optimized queries loaded with mostly low-level C, both with a SQL Server 2008 backend?
I'm specifically thinking here of a case where you have a fairly data-intensive program and will be doing a data refresh or update at least once per screen and will have 50-100 simultaneous users.
A:
Thanks Stu. The bottom line seems to be that LINQ to SQL probably doesn't have a significant database performance overhead in the newer versions if you are able to use a compiled select, and although updates are slower, they are still likely to be faster than hand-rolled code unless you have a REALLY sharp expert doing most of the coding.
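For reference, a "compiled select" in LINQ to SQL looks roughly like this (the DataContext, entity and property names below are made up for illustration); the point is that the expression-tree-to-SQL translation cost is paid once rather than on every call:
using System;
using System.Data.Linq;
using System.Linq;

static class Queries
{
    // Hypothetical MyDataContext/Customer types; compile once, reuse on every call.
    public static readonly Func<MyDataContext, int, IQueryable<Customer>> CustomersByRegion =
        CompiledQuery.Compile((MyDataContext db, int regionId) =>
            db.Customers.Where(c => c.RegionId == regionId));
}

// usage: var list = Queries.CustomersByRegion(db, 5).ToList();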
A:
In my experience the overhead is minimal, provided that the person writing the queries knows what he/she is doing and takes the usual precautions to ensure the generated queries are optimal, that the necessary indexes are in place, etc. In other words, the database impact should be the same; there is a minimal but usually negligible overhead on the app side.
That said... there is one exception to this: if a single query generates multiple aggregates, the L2S provider translates it into a large query with one sub-query per aggregate. For a large table this can have a significant I/O impact, as the db I/O cost for the query grows by orders of magnitude for each new aggregate in the query.
The workaround for that is of course to move the aggregates to stored proc or view. Matt Warren has some sample code for an alternative query provider that translate that kind of queries in a more efficient way.
Resources:
https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=334211
http://blogs.msdn.com/mattwar/archive/2008/07/08/linq-building-an-iqueryable-provider-part-x.aspx
|
How much database performance overhead when using LINQ?
|
How much database performance overhead is involved with using C# and LINQ compared to custom optimized queries loaded with mostly low-level C, both with a SQL Server 2008 backend?
I'm specifically thinking here of a case where you have a fairly data-intensive program and will be doing a data refresh or update at least once per screen and will have 50-100 simultaneous users.
|
[
"Thanks Stu. Bottom line seems to be that LINQ to SQL probably doesn't have a significant database performance overhead with the newer versions if you are able to use a compiled select, and the slower functions of updating are likely to be faster unless you have a REALLY sharp expert doing most of the coding.\n",
"In my experience the overhead is minimal, provided that the person writing the queries knows what he/she is doing, and take the usual precautions to ensure the generated queries are optimal, that the necessary indexes are in place etc etc. In other words, the database impact should be the same; there is a minimal but usually negligible overhead on the app side.\nThat said... there is one exception to this; if a single query generates multiple aggregates the L2S provider translates it to a large query with one sub-query per aggregate. For a large table this can have a significant I/O impact as the db I/O cost for the query grows by magnitudes for each new aggregate in the query.\nThe workaround for that is of course to move the aggregates to stored proc or view. Matt Warren has some sample code for an alternative query provider that translate that kind of queries in a more efficient way.\nResources:\nhttps://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=334211\nhttp://blogs.msdn.com/mattwar/archive/2008/07/08/linq-building-an-iqueryable-provider-part-x.aspx\n"
] |
[
2,
2
] |
[] |
[] |
[
"linq",
"linq_to_sql",
"performance",
"sql_server"
] |
stackoverflow_0000004782_linq_linq_to_sql_performance_sql_server.txt
|
Q:
Is WindowsFormsHost fit for purpose (.net WPF hosting WinForms)?
A GUI driven application needs to host some prebuilt WinForms based components.
These components provide high performance interactive views using a mixture of GDI+ and DirectX.
The views handle control input and display custom graphical renderings.
The components are tested in a WinForms harness by the supplier.
Can a commercial application use WPF for its GUI and rely on WindowsFormsHost to host the WinForms components, or
have you experience of technical glitches e.g. input lags, update issues that would make you cautious?
A:
We're currently using WindowsFormsHost in our software to host the WinForms DataGridView control, and we've not had any real problems with it. A few things to watch out for though:
The first is the air-space restrictions. Practically speaking, this means that WinForms content always appears on top of WPF content. So if you are using WPF adorners they will appear to be "trimmed" if they hit up against a WinForms region in your app.
The second is that, because they use Windows resources, you have to manage the lifetimes of WinForms components more carefully. Unlike WPF components, WinForms controls expect to be Disposed when they're finished with. This makes it tricky to include them in a pure XAML view.
The last thing is that WinForms controls don't seem to resize as smoothly as the rest of the WPF display: they tend to snap to their new size once you've finished making an adjustment.
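On the lifetime point, here is a minimal sketch of one way to handle it (the type and member names are invented for illustration): create the host in code and dispose it when the hosting element is unloaded.
using System.Windows;
using System.Windows.Controls;
using System.Windows.Forms.Integration;

public class GridHostView : UserControl
{
    private readonly WindowsFormsHost _host;

    public GridHostView()
    {
        _host = new WindowsFormsHost { Child = new System.Windows.Forms.DataGridView() };
        Content = _host;
        Unloaded += OnUnloaded;
    }

    private void OnUnloaded(object sender, RoutedEventArgs e)
    {
        // Both the WinForms child and the host own Win32 handles, so release
        // them deterministically instead of waiting for finalization.
        if (_host.Child != null)
        {
            _host.Child.Dispose();
        }
        _host.Dispose();
    }
}
In a real app you would also guard against Unloaded firing more than once (it can, e.g. when the view is moved between tabs).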
A:
One problem I've run into is that embedded WinForms controls do not participate in any transform operations applied to their WPF container. This results in visual flashing effects and the embedded control appearing in an inappropriate location. I worked around this by binding the visibility of the Windows Forms Host to the animation state of its WPF container, so that the embedded control was hidden until the animation completed, like below.
<WindowsFormsHost Grid.Row="1" Grid.Column="1" Margin="8,0,0,0"
Visibility="{Binding ActualHeight, RelativeSource={RelativeSource
Mode=FindAncestor, AncestorType=UserControl},
Converter={StaticResource WinFormsControlVisibilityConverter}}" >
<winforms:DateTimePicker x:Name="datepickerOrderExpected" Width="140"
Format="Custom" CustomFormat="M/dd/yy h:mm tt"
ValueChanged="OnEditDateTimeOrderExpected" />
</WindowsFormsHost>
A:
I've hosted WPF controls in WinForms and vice versa without problems. Though I would test such scenarios extensively, because it's hard to predict how a complex control will behave.
A:
Do note the absence of a WPF Application object when hosting in Winforms. This can result in problems if you're taking an existing WPF component and hosting it in Winforms, since resource lookups and the likes will never look in application scope. You can create your own Application object if it is a problem.
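A minimal sketch of that workaround, to be called once from the WinForms side before any WPF content is created (the class and method names are my own):
using System.Windows;

static class WpfBootstrap
{
    // Ensures Application.Current exists so application-scope resource lookups
    // (and anything else that assumes an Application) behave as expected.
    public static void EnsureApplication()
    {
        if (Application.Current == null)
        {
            var app = new Application();
            app.ShutdownMode = ShutdownMode.OnExplicitShutdown;
        }
    }
}
You can then merge any application-level ResourceDictionaries into Application.Current.Resources yourself.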
A:
As @Kent Boogaart mentioned, I've run into the situation where a WPF application hosted in WinForms doesn't have the WPF Application object (i.e. Application.Current). This can cause many issues such as Dispatchers not invoking threads back to the UI thread. This would only apply if you're hosting in WinForms, not the other way around.
I've also had strange issues with modal dialogs behaving strangely (i.e. ShowModal calls). I'm assuming this is because, in WinForms, each control has its own Win32 handle while in WPF, there is only one handle for the entire Window.
Whatever you do, test :)
A:
You can solve the Airspace problem by using .net 3.5 SP1:
These types of airspace restrictions
represent a huge limitation in a
framework, like WPF, where element
composition is used to create very
rich user experiences. With a D3DImage
solution, these restrictions are no
longer present!
See Introduction to D3DImage.
|
Is WindowsFormsHost fit for purpose (.net WPF hosting WinForms)?
|
A GUI driven application needs to host some prebuilt WinForms based components.
These components provide high performance interactive views using a mixture of GDI+ and DirectX.
The views handle control input and display custom graphical renderings.
The components are tested in a WinForms harness by the supplier.
Can a commercial application use WPF for its GUI and rely on WindowsFormsHost to host the WinForms components, or
have you experience of technical glitches e.g. input lags, update issues that would make you cautious?
|
[
"We're currently using WindowsFormsHost in our software to host the WinForms DataGridView control, and we've not had any real problems with it. A few things to watch out for though:\nThe first is the air-space restrictions. Practically speaking, this means that WinForms content always appears on top of WPF content. So if you are using WPF adorners they will appear to be \"trimmed\" if they hit up against a WinForms region in your app.\nThe second is that, because they use Windows resources, you have to manage the lifetimes of WinForms components more carefully. Unlike WPF components, WinForms controls expect to be Disposed when they're finished with. This makes it tricky to include them in a pure XAML view.\nThe last thing is that WinForms controls don't seem to resize as smoothly as the rest of the WPF display: they tend to snap to their new size once you've finished making an adjustment.\n",
"One problem I've run into is that embedded Win Forms controls do not participate in any transform operations applied to their WPF container. This results in visual flashing effects and the embedded control appearing in an innappropriate location. I worked around this by binding the visibility of the Windows Forms Host to the animation state of its WPF container, so that the embedded control was hidden until the animation completed, like below.\n<WindowsFormsHost Grid.Row=\"1\" Grid.Column=\"1\" Margin=\"8,0,0,0\"\n Visibility=\"{Binding ActualHeight, RelativeSource={RelativeSource\n Mode=FindAncestor, AncestorType=UserControl},\n Converter={StaticResource WinFormsControlVisibilityConverter}}\" >\n\n <winforms:DateTimePicker x:Name=\"datepickerOrderExpected\" Width=\"140\"\n Format=\"Custom\" CustomFormat=\"M/dd/yy h:mm tt\"\n ValueChanged=\"OnEditDateTimeOrderExpected\" />\n\n</WindowsFormsHost>\n\n",
"I hosted WPF controls in WinForms and vice versa without problems. Though, I would test such scenarios extensively 'cause it's hard to predict how complex control will behave.\n",
"Do note the absence of a WPF Application object when hosting in Winforms. This can result in problems if you're taking an existing WPF component and hosting it in Winforms, since resource lookups and the likes will never look in application scope. You can create your own Application object if it is a problem.\n",
"As @Kent Boogaart mentioned, I've run into the situation where a WPF application hosted in WinForms doesn't have the WPF Application object (i.e. Application.Current). This can cause many issues such as Dispatchers not invoking threads back to the UI thread. This would only apply if you're hosting in WinForms, not the other way around.\nI've also had strange issues with modal dialogs behaving strangely (i.e. ShowModal calls). I'm assuming this is because, in WinForms, each control has its own Win32 handle while in WPF, there is only one handle for the entire Window.\nWhatever you do, test :)\n",
"You can solve the Airspace problem by using .net 3.5 SP1:\n\nThese types of airspace restrictions\n represent a huge limitation in a\n framework, like WPF, where element\n composition is used to create very\n rich user experiences. With a D3DImage\n solution, these restrictions are no\n longer present!\n\nSee Introduction to D3DImage.\n"
] |
[
21,
4,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"winforms",
"wpf"
] |
stackoverflow_0000053796_.net_winforms_wpf.txt
|
Q:
How do I determine using TSQL what roles are granted execute permissions on a specific stored procedure?
How do I determine using TSQL what roles are granted execute permissions on a specific stored procedure? Is there a system stored procedure or a system view I can use?
A:
In 7.0 or 2000, you can modify and use the following code:
SELECT convert(varchar(100),
'GRANT ' +
CASE WHEN actadd & 32 = 32 THEN 'EXECUTE'
ELSE
CASE WHEN actadd & 1 = 1 THEN 'SELECT' + CASE WHEN actadd & (8|2|16) > 0 THEN ', ' ELSE '' END ELSE '' END +
CASE WHEN actadd & 8 = 8 THEN 'INSERT' + CASE WHEN actadd & (2|16) > 0 THEN ', ' ELSE '' END ELSE '' END +
CASE WHEN actadd & 2 = 2 THEN 'UPDATE' + CASE WHEN actadd & (16) > 0 THEN ', ' ELSE '' END ELSE '' END +
CASE WHEN actadd & 16 = 16 THEN 'DELETE' ELSE '' END
END + ' ON [' + o.name + '] TO [' + u.name + ']') AS '--Permissions--'
FROM syspermissions p
INNER JOIN sysusers u ON u.uid = p.grantee
INNER JOIN sysobjects o ON p.id = o.id
WHERE o.type <> 'S'
AND o.name NOT LIKE 'dt%'
--AND o.name = '<specific procedure/table>'
--AND u.name = '<specific user>'
ORDER BY u.name, o.name
A:
You can try something like this. Note, I believe 3 is EXECUTE.
SELECT
grantee_principal.name AS [Grantee],
CASE grantee_principal.type WHEN 'R' THEN 3 WHEN 'A' THEN 4 ELSE 2 END - CASE 'database' WHEN 'database' THEN 0 ELSE 2 END AS [GranteeType]
FROM
sys.all_objects AS sp
INNER JOIN sys.database_permissions AS prmssn ON prmssn.major_id=sp.object_id AND prmssn.minor_id=0 AND prmssn.class=1
INNER JOIN sys.database_principals AS grantee_principal ON grantee_principal.principal_id = prmssn.grantee_principal_id
WHERE
(sp.type = N'P' OR sp.type = N'RF' OR sp.type='PC')and(sp.name=N'myProcedure' and SCHEMA_N
I got that example by simply using SQL Profiler while looking at the permissions on a procedure. I hope that helps.
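A cleaner sketch of the same idea on SQL Server 2005 or later, querying the permission catalog views directly (the procedure name is just a placeholder):
-- List principals (users and roles) granted EXECUTE on a given procedure.
SELECT dp.name          AS grantee,
       dp.type_desc     AS grantee_type,   -- DATABASE_ROLE, SQL_USER, etc.
       perm.permission_name,
       perm.state_desc                     -- GRANT, DENY, ...
FROM sys.database_permissions AS perm
JOIN sys.database_principals  AS dp
     ON dp.principal_id = perm.grantee_principal_id
WHERE perm.class = 1                        -- object/column permissions
  AND perm.major_id = OBJECT_ID(N'dbo.MyProcedure')
  AND perm.permission_name = N'EXECUTE';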
|
How do I determine using TSQL what roles are granted execute permissions on a specific stored procedure?
|
How do I determine using TSQL what roles are granted execute permissions on a specific stored procedure? Is there a system stored procedure or a system view I can use?
|
[
"In 7.0 or 2000, you can modify and use the following code:\nSELECT convert(varchar(100),\n 'GRANT ' +\n CASE WHEN actadd & 32 = 32 THEN 'EXECUTE'\n ELSE\n CASE WHEN actadd & 1 = 1 THEN 'SELECT' + CASE WHEN actadd & (8|2|16) > 0 THEN ', ' ELSE '' END ELSE '' END +\n CASE WHEN actadd & 8 = 8 THEN 'INSERT' + CASE WHEN actadd & (2|16) > 0 THEN ', ' ELSE '' END ELSE '' END +\n CASE WHEN actadd & 2 = 2 THEN 'UPDATE' + CASE WHEN actadd & (16) > 0 THEN ', ' ELSE '' END ELSE '' END +\n CASE WHEN actadd & 16 = 16 THEN 'DELETE' ELSE '' END\n END + ' ON [' + o.name + '] TO [' + u.name + ']') AS '--Permissions--'\nFROM syspermissions p\nINNER JOIN sysusers u ON u.uid = p.grantee\nINNER JOIN sysobjects o ON p.id = o.id\nWHERE o.type <> 'S'\nAND o.name NOT LIKE 'dt%'\n--AND o.name = '<specific procedure/table>'\n--AND u.name = '<specific user>'\nORDER BY u.name, o.name \n\n",
"You can try something like this. Note, I believe 3 is EXECUTE.\nSELECT\ngrantee_principal.name AS [Grantee],\nCASE grantee_principal.type WHEN 'R' THEN 3 WHEN 'A' THEN 4 ELSE 2 END - CASE 'database' WHEN 'database' THEN 0 ELSE 2 END AS [GranteeType]\nFROM\nsys.all_objects AS sp\nINNER JOIN sys.database_permissions AS prmssn ON prmssn.major_id=sp.object_id AND prmssn.minor_id=0 AND prmssn.class=1\nINNER JOIN sys.database_principals AS grantee_principal ON grantee_principal.principal_id = prmssn.grantee_principal_id\nWHERE\n(sp.type = N'P' OR sp.type = N'RF' OR sp.type='PC')and(sp.name=N'myProcedure' and SCHEMA_N\n\nI got that example by simply using SQL Profiler while looking at the permissions on a procedure. I hope that helps.\n"
] |
[
1,
0
] |
[] |
[] |
[
"sql_server",
"stored_procedures",
"tsql"
] |
stackoverflow_0000067244_sql_server_stored_procedures_tsql.txt
|
Q:
Using Java JAR file in .NET
What options / methods / software are available to convert a JAR file to a managed .NET assembly?
Please provide all commercial and non-commercial methods in the answer.
These don't include solutions which require Java to be installed on the host machine.
A:
I could be wrong, but I'm pretty sure that's impossible. The java byte code is different to the code produced to run on the CLR.
Snarky answer: Get the source code, and port it.
EDIT: A little poking comes up with http://sourceforge.net/projects/ikvm/, a Java Virtual Machine implementation for .NET. Not quite what you asked for, but it's probably going to be the best you can do.
|
Using Java JAR file in .NET
|
What options / methods / software are available to convert a JAR file to a managed .NET assembly?
Please provide all commercial and non-commercial methods in the answer.
These don't include solutions which require Java to be installed on the host machine.
|
[
"I could be wrong, but I'm pretty sure that's impossible. The java byte code is different to the code produced to run on the CLR.\nSnarky answer: Get the source code, and port it.\nEDIT: A little poking comes up with http://sourceforge.net/projects/ikvm/, a Java Virtual Machine implementation for .NET. Not quite what you asked for, but it's probably going to be the best you can do.\n"
] |
[
9
] |
[
"Confronted with this situation last year, I wrote a small wrapper (in java) that read the inputs from a temp file, invoked the jar and placed the output in anther temp file. The .NET project would create the input file, call the JVM and start the wrapper, wait for it to finish and read the output file. Quick and Dirty. at least in my case\n"
] |
[
-1
] |
[
".net",
"jar",
"java",
"managed"
] |
stackoverflow_0000069108_.net_jar_java_managed.txt
|
Q:
Is inline code in your aspx pages a good practice?
If I use the following code I lose the ability to right click on variables in the code behind and refactor (rename in this case) them
<a href='<%# "/Admin/Content/EditResource.aspx?ResourceId=" + Eval("Id").ToString() %>'>Edit</a>
I see this practice everywhere but it seems weird to me as I no longer am able to get compile time errors if I change the property name.
My preferred approach is to do something like this
<a runat="server" id="MyLink">Edit</a>
and then in the code behind
MyLink.Href= "/Admin/Content/EditResource.aspx?ResourceId=" + myObject.Id;
I'm really interested to hear if people think the above approach is better since that's what I always see on popular coding sites and blogs (e.g. Scott Guthrie) and it's smaller code, but I tend to use ASP.NET because it is compiled and prefer to know if something is broken at compile time, not run time.
A:
I wouldn't call it bad practice (some would disagree, but why did they give us that option in the first place?), but I would say that you'll improve overall readability and maintainability if you do not submit to this practice. You already conveyed a good point, and that is the IDE feature limitation (i.e., design-time inspection, compile-time warnings, etc.).
I could go on and on about how many principles it violates (code reuse, separation of concerns, etc.), but I can think of many applications out there that break nearly every principle, but still work after several years. I for one, prefer to make my code as modular and maintainable as possible.
A:
It's known as spaghetti code and many programmers find it objectionable... then again, if you and the other developers at your company find it readable and maintainable, who am I to tell you what to do.
For sure though, use includes to reduce redundancy (DRY - don't repeat yourself)
A:
I use it only occasionally, and generally for some particular reason. I will always be a happier developer with my code separated entirely from my HTML markup. It's somewhat a personal preference, but I would say this is a better practice.
A:
It's up to you. Sometimes "spaghetti" code is easier to maintain than building/using a full on templating system for something simple, but once you get fairly complicated pages, or more specifically, once you start including a lot of logic into the page itself, it can get dirty really quickly.
A:
I think it is interesting that more asp.net is requiring code in the aspx pages. The listview in 3.5, and even the ASP.NET MVC. The MVC has basically no code behind, but code in the pages to render information.
A:
If you think of it in terms of template development, then it is wise to keep it in the view, and not in the code behind. What if it needs to change from an anchor to a list item with unobtrusive JS to handle a click? Yes, this is not the best example, rather just that, an example.
I always try to think in terms of if I had a designer (HTML, CSS, anything), what would I have him doing and what would I be doing in the code behind, and how do we not step on each other's toes.
A:
It's only a bad practice if you cannot encapsulate it well.
Like everything else, you can create nasty, unreadable spaghetti code, except now you have tags to contend with, which by design aren't the most readable things in the world.
I try to keep tons of if's out of the template, but excessive encapsulation leads to having to look in 13 different places to see why div x isn't firing to the client, so it's a trade-off.
A:
It's not, but sometimes it's a necessary evil.
Take your case as an example: although code-behind seems to give a better separation of concerns, the problem is that it may not separate the concerns as clearly as you wish. Usually when we do the code-behind stuff we are not building the app in an MVC framework. The code-behind code is also not easy to maintain and test anyway, at least when compared to MVC.
If you are building ASP.NET MVC apps then I think you are surely stuck with inline code. But building in MVC pattern is the best way to go about in terms of maintainability and testability.
To sum: inline code is not a good practice, but it's a necessary evil.
My 2cents.
A:
Normally I use like this way.
<a href='<%# DataBinder.Eval(Container.DataItem, "Id", "/Admin/Content/EditResource.aspx?ResourceId={0}") %>'>
|
Is inline code in your aspx pages a good practice?
|
If I use the following code I lose the ability to right click on variables in the code behind and refactor (rename in this case) them
<a href='<%# "/Admin/Content/EditResource.aspx?ResourceId=" + Eval("Id").ToString() %>'>Edit</a>
I see this practice everywhere but it seems weird to me as I no longer am able to get compile time errors if I change the property name.
My preferred approach is to do something like this
<a runat="server" id="MyLink">Edit</a>
and then in the code behind
MyLink.Href= "/Admin/Content/EditResource.aspx?ResourceId=" + myObject.Id;
I'm really interested to hear if people think the above approach is better since that's what I always see on popular coding sites and blogs (e.g. Scott Guthrie) and it's smaller code, but I tend to use ASP.NET because it is compiled and prefer to know if something is broken at compile time, not run time.
|
[
"I wouldnt call it bad practice (some would disagree, but why did they give us that option in the first place?), but I would say that you'll improve overall readability and maintainability if you do not submit to this practice. You already conveyed out a good point, and that is IDE feature limitation (i.e., design time inspection, compile time warning, etc.).\nI could go on and on about how many principles it violates (code reuse, separation of concerns, etc.), but I can think of many applications out there that break nearly every principle, but still work after several years. I for one, prefer to make my code as modular and maintainable as possible.\n",
"It's known as spaghetti code and many programmers find it objectionable... then again, if you and the other developers at your company find it readable and maintainable, who am I to tell you what to do.\nFor sure though, use includes to reduce redundancy (DRY - don't repeat yourself)\n",
"I use it only occasionally, and generally for some particular reason. I will always be a happier developer with my code separated entirely from my HTML markup. It's somewhat a personal preference, but I would say this is a better practice.\n",
"It's up to you. Sometimes \"spagehetti\" code is easier to maintain than building/using a full on templating system for something simple, but once you get fairly complicated pages, or more specifically, once you start including a lot of logic into the page itself, it can get dirty really quickly. \n",
"I think it is interesting that more asp.net is requiring code in the aspx pages. The listview in 3.5, and even the ASP.NET MVC. The MVC has basically no code behind, but code in the pages to render information.\n",
"If you think of it in terms of template development, then it is wise to keep it in the view, and not in the code behind. What if if needs to change from a anchor to a list item with unobtrusive JS to handle a click? Yes, this is not the best example, rather just that, and example.\nI always try to think in terms of if I had a designer (HTML, CSS, anything), what would I have him doing and what would I be doing in the code behind, and how do we not step on each other's toes.\n",
"Its only a bad practice, if you cannot encapsulate it well.\nLike everything else, you can create nasty, unreadable spaghetti code, except now you have tags to content with, which by design aren't the most readable things in the world.\nI try and keep tons of if's out of hte template, but excessive encapsulation, leads to having to look in 13 diferent places to see why div x isn't firing to the client, so its a trade off.\n",
"It's not, but sometimes it's a necessary evil.\nTake your case for an example, although code behind seems to have a better separation of concern, but the problem with it is that it may not separate out the concerns as clearly as you wish. Usually when we do the code behind stuff we are not building the apps in MVC framework. The code behind code is also not easy to maintain and test anyway, at least when compare to MVC.\nIf you are building ASP.NET MVC apps then I think you are surely stuck with inline code. But building in MVC pattern is the best way to go about in terms of maintainability and testability.\nTo sum: inline code is not a good practice, but it's a necessary evil. \nMy 2cents.\n",
"Normally I use like this way.\n<a href='<%# DataBinder.Eval(Container.DataItem,\"Id\",\"\"/Admin/Content/EditResource.aspx?ResourceId={0}\") %'>\n\n"
] |
[
4,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"asp.net",
"coding_style"
] |
stackoverflow_0000068509_asp.net_coding_style.txt
|
Q:
What do the flags in a Maildir message filename mean?
I'm cleaning up some old Maildir folders, and finding messages with names like:
1095812260.M625118P61205V0300FF04I002DC537_0.redoak.cise.ufl.edu,S=2576:2,ST
They don't show up in my IMAP client, so I presume there's some semaphore indicating the message already got moved somewhere else. Is that the case, and can the files be deleted without remorse?
A:
The 'M' is just part of the unique filename and has nothing to do with the fact that the mail doesn't show up in mail clients.
The 'T' at the end of the filename, after the ':' sign, however tells the IMAP server that this message is Trashed.
See http://cr.yp.to/proto/maildir.html
A:
IMAP is a protocol for communicating with a message store; the actual storage is standardised in other ways. The filename looks like a Maildir filename, and I don't think any particular meaning is put into the first part of the filename, but you have to check with your software's manual.
|
What do the flags in a Maildir message filename mean?
|
I'm cleaning up some old Maildir folders, and finding messages with names like:
1095812260.M625118P61205V0300FF04I002DC537_0.redoak.cise.ufl.edu,S=2576:2,ST
They don't show up in my IMAP client, so I presume there's some semaphore indicating the message already got moved somewhere else. Is that the case, and can the files be deleted without remorse?
|
[
"The 'M' is just part of the unique filename and has nothing to do with the fact that the mail doesn't show up in mail clients. \nThe 'T' at the end of the filename, after the ':' sign, however tells the IMAP server that this message is Trashed.\nSee http://cr.yp.to/proto/maildir.html\n",
"IMAP, is a protocol for communicating to a message storage, the actual storage is standardised in other ways. The filename looks like a Maildir filename where I think does not put any meaning into the first part of the filename, but you have to check with your software manual.\n"
] |
[
17,
0
] |
[] |
[] |
[
"imap",
"maildir"
] |
stackoverflow_0000070140_imap_maildir.txt
|
Q:
What's the best way to store and retrieve postal addresses using a SQL Server database and the .NET framework?
I'm looking for a common pattern that will store and access global addresses in a database. Components or other technologies can be used. The following criteria must be adhered to...
Every line of the address is saved for every country
Postal codes are tested with a regular expression before being saved
Country of origin is saved in its own field. When the data is displayed, the [address is formatted](http://en.wikipedia.org/wiki/Postal_address) in the style of that country
When the data is input using a form the label fields are as descriptive as possible, so the labels need to be dynamic to the country of origin.
The addresses take up the minimum space possible
A:
How about storing the addresses as text (allowing newlines). The postal code will have to be extracted from the address with a regex (selected based on a country dropdown), and should be stored in a separate column.
This doesn't deal with the "as descriptive as possible" requirement, but in general, enforcing more constraints about the format of the data will result in a percentage of valid addresses being rejected. It will also take more space than a single varchar column. Therefore, there will always be a compromise between the requirements you listed.
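A minimal sketch of that storage approach (table and column names are illustrative, not prescriptive):
-- One free-form address block per row, with country and postal code split out.
CREATE TABLE Address (
    AddressId    INT IDENTITY(1,1) PRIMARY KEY,
    CountryCode  CHAR(2)       NOT NULL,  -- ISO 3166-1 alpha-2; drives labels and the postcode regex
    AddressLines NVARCHAR(500) NOT NULL,  -- the address as entered, newlines preserved
    PostalCode   NVARCHAR(20)  NULL       -- extracted and regex-validated per country before saving
);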
|
What's the best way to store and retrieve postal addresses using a SQL Server database and the .NET framework?
|
I'm looking for a common pattern that will store and access global addresses in a database. Components or other technologies can be used. The following criteria must be adhered to...
Every line of the address is saved for every country
Postal codes are tested with a regular expression before being saved
Country of origin is saved in its own field. When the data is displayed, the [address is formatted](http://en.wikipedia.org/wiki/Postal_address) in the style of that country
When the data is input using a form the label fields are as descriptive as possible, so the labels need to be dynamic to the country of origin.
The addresses take up the minimum space possible
|
[
"How about storing the addresses as text (allowing newlines). The postal code will have to be extracted from the address with a regex (selected based on a country dropdown), and should be stored in a separate column.\nThis doesn't deal with the \"as descriptive as possible\" requirement, but in general, enforcing more constraints about the format of the data will result in a percentage of valid addresses being rejected. It will also take more space than a single varchar column. Therefore, there will always be a compromise between the requirements you listed.\n"
] |
[
0
] |
[] |
[] |
[
".net",
"postal_code",
"regex",
"sql_server",
"street_address"
] |
stackoverflow_0000070153_.net_postal_code_regex_sql_server_street_address.txt
|
Q:
Re-using soft deleted records
If I have a table structure that is:
code, description, isdeleted
where code is the primary key.
The user creates a record, then later on deletes it. Because I am using soft deletes the isdeleted will be set to true. Then in my queries I would be doing a select with the where clause and not isdeleted
Now if a user goes to create a new record they may see that code 'ABC' doesn't exist so they tried to recreate it. The select statement won't find it because of the where clause. But there will be a primary key index error.
Should the user be allowed to re-use the record? I would think not since the idea of the soft delete is to keep the record for queries on older data so that joins to the 'deleted' record still work. If the user was allowed to re-use the code then they could change the description which might change the view of the historical data. But is it too harsh to stop them from using that code at all?
Or should I be using a completely hidden primary key and then the 'code' field can be re-used?
A:
I know many people have argued that the data should be natural, but you should be using a primary key that is completely separate from your data if you're going to be supporting soft deletes without the intention of always re-using the previous record when this situation arises.
Having a divorced primary key will allow you to have multiple records with the same 'code' value, and it will allow you to "undelete" (otherwise, why bother with a soft delete?) a value without worrying about overwriting something else.
Personally, I prefer the numeric auto-incremented style of ID, but there are many proponents of GUIDs.
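A rough sketch of that idea on SQL Server (names are illustrative; the filtered unique index needs SQL Server 2008 or later):
CREATE TABLE Items (
    ItemId      INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key, never shown to the user
    Code        VARCHAR(20)  NOT NULL,
    Description VARCHAR(200) NOT NULL,
    IsDeleted   BIT          NOT NULL DEFAULT 0
);

-- 'Code' only has to be unique among live rows, so a deleted code can be re-created.
CREATE UNIQUE INDEX UX_Items_Code_Live
    ON Items (Code)
    WHERE IsDeleted = 0;
With that in place, "undeleting" is just a matter of flipping IsDeleted back to 0 on the existing row.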
A:
Or should I be using a completely
hidden primary key and then the 'code'
field can be re-used?
I think you have answered this pretty well yourself. If you want the user to be able to re-use the deleted codes, then you should have a separate primary key not visible to the user. If it is important that the codes be unique, then the users should generally not be entering them anyway.
A:
I think it depends on the specific data you're talking about.
If the user is trying to recreate code 'ABC', is it the SAME 'ABC' that was in use last time that has now come out of retirement, or is it a completely different 'ABC'?
If it actually refers to the same real-world 'thing', then there may be no harm in simply 'undeleting' it. After all - it's the same thing, so logically speaking it should show up as the same thing in historical and new queries. If your user decides they don't need it any more, then they can delete it and it'll go away. If at some point in the future they need it again, they can effectively un-delete it by adding it in again.
If, however, the new 'ABC' refers to something (in the real world) which is different to the old 'ABC', then you could argue that the 'code' isn't actually a primary key, in which case, if your data doesn't provide any other natural choice, you may just as well create an arbitrary key.
A big downside of this is that you'll have to be pretty careful not to let the user create two active records with the same 'code', of course.
A:
When you select records (excluding soft-deletes) to display them in user interface/ output file, use where not isdeleted.
But when the user requests an insert operation, perform two queries.
Lookup all records (ignoring isdeleted value).
Based on first query result, perform an UPDATE if it exists (and reverse isdeleted flag) or perform a true INSERT if it does not exist.
The nuances of the business logic are up to you.
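A sketch of that two-step insert in T-SQL (the table name Items is made up here; whether the UPDATE should also overwrite the old description is exactly the kind of business decision mentioned above):
IF EXISTS (SELECT 1 FROM Items WHERE code = @code)   -- look up regardless of isdeleted
BEGIN
    UPDATE Items
    SET description = @description,
        isdeleted   = 0                               -- revive the soft-deleted row
    WHERE code = @code
END
ELSE
BEGIN
    INSERT INTO Items (code, description, isdeleted)
    VALUES (@code, @description, 0)
END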
A:
I've done this with user tables, where the email is a unique constraint. If someone cancels their account, their information is still needed for referential integrity, so what I do is set is_deleted to true, and add '_deleted' to the email field. In this way, if the user decides to sign up again in the future, there is no problem for the user and the unique constraint is not broken.
I think soft delete is good in some situations. For example, if someone deleted their account from this site and you delete their user then all their posts and answers would be lost. I think it is much better to soft delete and display their user as "deleted user" or something similar... oh, I also believe in divorced primary keys
|
Re-using soft deleted records
|
If I have a table structure that is:
code, description, isdeleted
where code is the primary key.
The user creates a record, then later on deletes it. Because I am using soft deletes the isdeleted will be set to true. Then in my queries I would be doing a select with the where clause and not isdeleted
Now if a user goes to create a new record they may see that code 'ABC' doesn't exist so they tried to recreate it. The select statement won't find it because of the where clause. But there will be a primary key index error.
Should the user be allowed to re-use the record? I would think not since the idea of the soft delete is to keep the record for queries on older data so that joins to the 'deleted' record still work. If the user was allowed to re-use the code then they could change the description which might change the view of the historical data. But is it too harsh to stop them from using that code at all?
Or should I be using a completely hidden primary key and then the 'code' field can be re-used?
|
[
"I know many people have argued that the data should be natural, but you should be using a primary key that is completely separate from your data if you're going to be supporting soft deletes without the intention of always re-using the previous record when this situation arises.\nHaving a divorced primary key will allow you to have multiple records with the same 'code' value, and it will allow you to \"undelete\" (otherwise, why bother with a soft delete?) a value without worrying about overwriting something else.\nPersonally, I prefer the numeric auto-incremented style of ID, but there are many proponents of GUIDs.\n",
"\nOr should I be using a completely\n hidden primary key and then the 'code'\n field can be re-used?\n\nI think you have answered this pretty well yourself. If you want the user to be able to re-use the deleted codes, then you should have a separate primary key not visisble to the user. If it is important that the codes be unique, then the users should generally not be entering them anyway.\n",
"I think it depends on the specific data you're talking about.\nIf the user is trying to recreate code 'ABC', is it the SAME 'ABC' that was in use last time that has now come out of retirement, or is it a completely different 'ABC'?\nIf it actually refers to the same real-world 'thing', then there may be no harm in simply 'undeleting' it. After all - it's the same thing, so logically speaking it should show up as the same thing in historical and new queries. If your user decides they don't need it any more, then they can delete it and it'll go away. If at some point in the future they need it again, they can effectively un-delete it by adding it in again.\nIf, however, the new 'ABC' refers to something (in the real world) which is different to the old 'ABC', then you could argue that the 'code' isn't actually a primary key, in which case, if your data doesn't provide any other natural choice, you may just as well create an arbitrary key. \nA big downside of this is that you'll have to be pretty careful not to let the user create two active records with the same 'code', of course.\n",
"When you select records (excluding soft-deletes) to display them in user interface/ output file, use where not isdeleted.\nBut when the user requests an insert operation, perform two queries.\n\nLookup all records (ignoring isdeleted value).\nBased on first query result, perform an UPDATE if it exists (and reverse isdeleted flag) or perform a true INSERT if it does not exist.\n\nThe nuances of the business logic are up to you.\n",
"I've done this with user tables, where the email is a unique constraint. If someone cancels there account, their information is still needed for referential integrity, so what I to is set is_deteled to true, and add '_deleted' to the email field. In this way, if the user decides to sign up again in the future, there is no problem for the user and the unique constraint is not broken.\nI think soft delete is good in some situations. For example, if someone deleted their account from this site and you delete their user then all their posts and answers would be lost. I think it is much better to soft delete and display their user as \"deleted user\" or something similar... oh, I also believe in divorced primary keys\n"
] |
[
3,
2,
1,
0,
0
] |
[] |
[] |
[
"database_design",
"soft_delete"
] |
stackoverflow_0000070123_database_design_soft_delete.txt
|
Q:
SQLite UDF - VBA Callback
Has anybody attempted to pass a VBA (or VB6) function (via AddressOf ?) to the SQLite create a UDF function (http://www.sqlite.org/c3ref/create_function.html).
How would the resulting callback arguments be handled by VBA?
The function to be called would have the following signature...
void (xFunc)(sqlite3_context,int,sqlite3_value**)
A:
Unfortunately, you can't use a VB6/VBA function as a callback directly as VB6 only generates stdcall functions rather than the cdecl functions SQLite expects.
You will need to write a C dll to proxy the calls back and forth or recompile SQLite to support your own custom extension.
After recompiling your dll to export the functions as stdcall, you can register a function with the following code:
'Create Function
Public Declare Function sqlite3_create_function Lib "SQLiteVB.dll" (ByVal db As Long, ByVal zFunctionName As String, ByVal nArg As Long, ByVal eTextRep As Long, ByVal pApp As Long, ByVal xFunc As Long, ByVal xStep As Long, ByVal xFinal As Long) As Long
'Gets a value
Public Declare Function sqlite3_value_type Lib "SQLiteVB.dll" (ByVal arg As Long) As SQLiteDataTypes 'Gets the type
Public Declare Function sqlite3_value_text_bstr Lib "SQLiteVB.dll" (ByVal arg As Long) As String 'Gets as String
Public Declare Function sqlite3_value_int Lib "SQLiteVB.dll" (ByVal arg As Long) As Long 'Gets as Long
'Sets the Function Result
Public Declare Sub sqlite3_result_int Lib "SQLiteVB.dll" (ByVal context As Long, ByVal value As Long)
Public Declare Sub sqlite3_result_error_code Lib "SQLiteVB.dll" (ByVal context As Long, ByVal value As Long)
Public Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" (dest As Any, source As Any, ByVal bytes As Long)
Public Property Get ArgValue(ByVal argv As Long, ByVal index As Long) As Long
CopyMemory ArgValue, ByVal (argv + index * 4), 4
End Property
Public Sub FirstCharCallback(ByVal context As Long, ByVal argc As Long, ByVal argv As Long)
Dim arg1 As String
If argc >= 1 Then
arg1 = sqlite3_value_text_bstr(ArgValue(argv, 0))
sqlite3_result_int context, AscW(arg1)
Else
sqlite3_result_error_code context, 666
End If
End Sub
Public Sub RegisterFirstChar(ByVal db As Long)
sqlite3_create_function db, "FirstChar", 1, 0, 0, AddressOf FirstCharCallback, 0, 0
'Example query: SELECT FirstChar(field) FROM Table
End Sub
|
SQLite UDF - VBA Callback
|
Has anybody attempted to pass a VBA (or VB6) function (via AddressOf ?) to the SQLite create a UDF function (http://www.sqlite.org/c3ref/create_function.html).
How would the resulting callback arguments be handled by VBA?
The function to be called would have the following signature...
void (xFunc)(sqlite3_context,int,sqlite3_value**)
|
[
"Unfortunately, you can't use a VB6/VBA function as a callback directly as VB6 only generates stdcall functions rather than the cdecl functions SQLite expects.\nYou will need to write a C dll to proxy the calls back and forth or recompile SQLite to to support your own custom extension.\nAfter recompiling your dll to export the functions as stdcall, you can register a function with the following code:\n'Create Function\nPublic Declare Function sqlite3_create_function Lib \"SQLiteVB.dll\" (ByVal db As Long, ByVal zFunctionName As String, ByVal nArg As Long, ByVal eTextRep As Long, ByVal pApp As Long, ByVal xFunc As Long, ByVal xStep As Long, ByVal xFinal As Long) As Long\n\n'Gets a value\nPublic Declare Function sqlite3_value_type Lib \"SQLiteVB.dll\" (ByVal arg As Long) As SQLiteDataTypes 'Gets the type\nPublic Declare Function sqlite3_value_text_bstr Lib \"SQLiteVB.dll\" (ByVal arg As Long) As String 'Gets as String\nPublic Declare Function sqlite3_value_int Lib \"SQLiteVB.dll\" (ByVal arg As Long) As Long 'Gets as Long\n\n'Sets the Function Result\nPublic Declare Sub sqlite3_result_int Lib \"SQLiteVB.dll\" (ByVal context As Long, ByVal value As Long)\nPublic Declare Sub sqlite3_result_error_code Lib \"SQLiteVB.dll\" (ByVal context As Long, ByVal value As Long)\n\nPublic Declare Sub CopyMemory Lib \"kernel32\" Alias \"RtlMoveMemory\" (dest As Any, source As Any, ByVal bytes As Long)\n\nPublic Property Get ArgValue(ByVal argv As Long, ByVal index As Long) As Long\n CopyMemory ArgValue, ByVal (argv + index * 4), 4\nEnd Property\n\nPublic Sub FirstCharCallback(ByVal context As Long, ByVal argc As Long, ByVal argv As Long)\n Dim arg1 As String\n If argc >= 1 Then\n arg1 = sqlite3_value_text_bstr(ArgValue(argv, 0))\n sqlite3_result_int context, AscW(arg1)\n Else\n sqlite3_result_error_code context, 666\n End If\nEnd Sub\n\nPublic Sub RegisterFirstChar(ByVal db As Long)\n sqlite3_create_function db, \"FirstChar\", 1, 0, 0, AddressOf FirstCharCallback, 0, 0\n 'Example query: SELECT FirstChar(field) FROM Table\nEnd Sub\n\n"
] |
[
6
] |
[] |
[] |
[
"sqlite",
"vb6",
"vba"
] |
stackoverflow_0000065243_sqlite_vb6_vba.txt
|
Q:
subselect vs outer join
Consider the following 2 queries:
select tblA.a,tblA.b,tblA.c,tblA.d
from tblA
where tblA.a not in (select tblB.a from tblB)
select tblA.a,tblA.b,tblA.c,tblA.d
from tblA left outer join tblB
on tblA.a = tblB.a where tblB.a is null
Which will perform better? My assumption is that in general the join will be better except in cases where the subselect returns a very small result set.
A:
RDBMSs "rewrite" queries to optimize them, so it depends on system you're using, and I would guess they end up giving the same performance on most "good" databases.
I suggest picking the one that is clearer and easier to maintain, for my money, that's the first one. It's much easier to debug the subquery as it can be run independently to check for sanity.
A:
non-correlated sub queries are fine. you should go with what describes the data you're wanting. as has been noted, this likely gets rewritten into the same plan, but isn't guaranteed to! what's more, if table A and B are not 1:1 you will get duplicate tuples from the join query (as the IN clause performs an implicit DISTINCT sort), so it's always best to code what you want and actually think about the outcome.
A:
Well, it depends on the datasets. From my experience, if you have a small dataset then go for a NOT IN; if it's large, go for a LEFT JOIN. The NOT IN clause seems to be very slow on large datasets.
One other thing I might add is that the explain plans might be misleading. I've seen several queries where explain was sky high and the query run under 1s. On the other hand I've seen queries with excellent explain plan and they could run for hours.
So all in all do test on your data and see for yourself.
A:
I second Tom's answer that you should pick the one that is easier to understand and maintain.
The query plan of any query in any database cannot be predicted because you haven't given us indexes or data distributions. The only way to predict which is faster is to run them against your database.
As a rule of thumb I tend to use sub-selects when I do not need to include any columns from tblB in my select clause. I would definitely go for a sub-select when I want to use the 'in' predicate (and usually for the 'not in' that you included in the question), for the simple reason that these are easier to understand when you or someone else has come back and change them.
A:
The first query will be faster in SQL Server, which I think is slightly counter-intuitive - sub queries seem like they should be slower. In some cases (as data volumes increase) an exists may be faster than an in.
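For reference, the NOT EXISTS form of the same anti-join looks like this; note that NOT IN and NOT EXISTS can also return different results when tblB.a contains NULLs:
SELECT tblA.a, tblA.b, tblA.c, tblA.d
FROM tblA
WHERE NOT EXISTS (SELECT 1 FROM tblB WHERE tblB.a = tblA.a)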
A:
It should be noted that these queries will produce different results if TblB.a is not unique.
A:
From my observations, MSSQL server produces same query plan for these queries.
A:
I created a simple query similar to the ones in the question on MSSQL2005 and the explain plans were different. The first query appears to be faster. I am not a SQL expert but the estimated explain plan had 37% for query 1 and 63% for the query 2. It appears that the biggest cost for query 2 is the join. Both queries had two table scans.
|
subselect vs outer join
|
Consider the following 2 queries:
select tblA.a,tblA.b,tblA.c,tblA.d
from tblA
where tblA.a not in (select tblB.a from tblB)
select tblA.a,tblA.b,tblA.c,tblA.d
from tblA left outer join tblB
on tblA.a = tblB.a where tblB.a is null
Which will perform better? My assumption is that in general the join will be better except in cases where the subselect returns a very small result set.
|
[
"RDBMSs \"rewrite\" queries to optimize them, so it depends on system you're using, and I would guess they end up giving the same performance on most \"good\" databases.\nI suggest picking the one that is clearer and easier to maintain, for my money, that's the first one. It's much easier to debug the subquery as it can be run independently to check for sanity.\n",
"non-correlated sub queries are fine. you should go with what describes the data you're wanting. as has been noted, this likely gets rewritten into the same plan, but isn't guaranteed to! what's more, if table A and B are not 1:1 you will get duplicate tuples from the join query (as the IN clause performs an implicit DISTINCT sort), so it's always best to code what you want and actually think about the outcome.\n",
"Well, it depends on the datasets. From my experience, if You have small dataset then go for a NOT IN if it's large go for a LEFT JOIN. The NOT IN clause seems to be very slow on large datasets.\nOne other thing I might add is that the explain plans might be misleading. I've seen several queries where explain was sky high and the query run under 1s. On the other hand I've seen queries with excellent explain plan and they could run for hours.\nSo all in all do test on your data and see for yourself.\n",
"I second Tom's answer that you should pick the one that is easier to understand and maintain.\nThe query plan of any query in any database cannot be predicted because you haven't given us indexes or data distributions. The only way to predict which is faster is to run them against your database.\nAs a rule of thumb I tend to use sub-selects when I do not need to include any columns from tblB in my select clause. I would definitely go for a sub-select when I want to use the 'in' predicate (and usually for the 'not in' that you included in the question), for the simple reason that these are easier to understand when you or someone else has come back and change them. \n",
"The first query will be faster in SQL Server which I think is slighty counter intuitive - Sub queries seem like they should be slower. In some cases (as data volumes increase) an exists may be faster than an in.\n",
"It should be noted that these queries will produce different results if TblB.a is not unique.\n",
"From my observations, MSSQL server produces same query plan for these queries.\n",
"I created a simple query similar to the ones in the question on MSSQL2005 and the explain plans were different. The first query appears to be faster. I am not a SQL expert but the estimated explain plan had 37% for query 1 and 63% for the query 2. It appears that the biggest cost for query 2 is the join. Both queries had two table scans.\n"
] |
[
16,
4,
3,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"database",
"performance",
"sql",
"sql_server"
] |
stackoverflow_0000047433_database_performance_sql_sql_server.txt
|
Q:
Checklist for Web Site Programming Vulnerabilities
Watching SO come online has been quite an education for me. I'd like to make a checklist of various vulnerabilities and exploits used against web sites, and what programming techniques can be used to defend against them.
What categories of vulnerabilities?
crashing site
breaking into server
breaking into other people's logins
spam
sockpuppeting, meatpuppeting
etc...
What kind of defensive programming techniques?
etc...
A:
From the Open Web Application Security Project:
The OWASP Top Ten vulnerabilities (pdf)
For a more painfully exhaustive list: Category:Vulnerability
The top ten are:
Cross-site scripting (XSS)
Injection flaws (SQL injection, script injection)
Malicious file execution
Insecure direct object reference
Cross-site request forgery (XSRF)
Information leakage and improper error handling
Broken authentication and session management
Insecure cryptographic storage
Insecure communications
Failure to restrict URL access
A:
I second the OWASP info as being a valuable resource. The following may be of interest as well, notably the attack patterns:
CERT Top 10 Secure Coding Practices
Common Attack Pattern Enumeration and Classification
Attack Patterns
Secure Programming for Linux and Unix
A Taxonomy of Coding Errors that Affect Security
Secure Programming with Static Analysis Presentation
A:
Obviously test every field for vulnerabilities:
SQL - escape strings (e.g. mysql_real_escape_string)
XSS
HTML being printed from input fields (a good sign of XSS usually)
Anything else that is not the specific purpose that field was created for
Search for infinite loops (the only indirect thing (if a lot of people found it accidentally) that could kill a server really).
A:
Some prevention techniques:
XSS
If you take any parameters/input from the user and ever plan on outputting it, whether in a log or a web page, sanitize it (strip/escape anything resembling HTML, quotes, javascript...) If you print the current URI of a page within itself, sanitize! Even printing PHP_SELF, for example, is unsafe. Sanitize! Reflective XSS comes mostly from unsanitized page parameters.
If you take any input from the user and save it or print it, warn them if anything dangerous/invalid is detected and have them re-input. an IDS is good for detection (such as PHPIDS.) Then sanitize before storage/printing. Then when you print something from storage/database, sanitize again!
Input -> IDS/sanitize -> store -> sanitize -> output
use a code scanner during development to help spot potentially vulnerable code.
XSRF
Never use GET request for
destructive functionality, i.e.
deleting a post. Instead, only
accept POST requests. GET makes it extra easy for hackery.
Checking the
referrer to make sure the request
came from your site does not
work. It's not hard to spoof the
referrer.
Use a random hash as a token that must be present and valid in every request, and that will expire after a while. Print the token in a hidden form field and check it on the server side when the form is posted. Bad guys would have to supply the correct token in order to forge a request, and if they managed to get the real token, it would need to be before it expired.
SQL injection
your ORM or db abstraction class should have sanitizing methods - use them, always. If you're not using an ORM or db abstraction class... you should be.
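The same point in T-SQL terms: build dynamic SQL with parameters rather than by concatenating user input (the Users table here is just an example):
DECLARE @name NVARCHAR(50);
SET @name = N'O''Brien';                -- hostile input stays inside the parameter
EXEC sp_executesql
     N'SELECT * FROM Users WHERE LastName = @LastName',
     N'@LastName NVARCHAR(50)',
     @LastName = @name;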
A:
SQL injection
A:
XSS (Cross Site Scripting) Attacks
A:
Easy to overlook and easy to fix: the sanitizing of data received from the client side. Checking for things such as ';' can help in preventing malicious code being injected into your application.
A:
G'day,
A good static analysis tool for security is FlawFinder written by David Wheeler. It does a good job looking for various security exploits,
However, it doesn't replace having a knowledgable someone read through your code. As David says on his web page, "A fool with a tool is still a fool!"
HTH.
cheers,
Rob
A:
You can get good Firefox addons to test multiple flaws and vulnerabilities like XSS and SQL injections from Security Compass. Too bad they don't work on Firefox 3.0. I hope that those will be updated soon.
|
Checklist for Web Site Programming Vulnerabilities
|
Watching SO come online has been quite an education for me. I'd like to make a checklist of various vulnerabilities and exploits used against web sites, and what programming techniques can be used to defend against them.
What categories of vulnerabilities?
crashing site
breaking into server
breaking into other people's logins
spam
sockpuppeting, meatpuppeting
etc...
What kind of defensive programming techniques?
etc...
|
[
"From the Open Web Application Security Project:\n\nThe OWASP Top Ten vulnerabilities (pdf)\nFor a more painfully exhaustive list: Category:Vulnerability\n\nThe top ten are:\n\nCross-site scripting (XSS)\nInjection flaws (SQL injection, script injection)\nMalicious file execution\nInsecure direct object reference\nCross-site request forgery (XSRF)\nInformation leakage and improper error handling\nBroken authentication and session management\nInsecure cryptographic storage\nInsecure communications\nFailure to restrict URL access\n\n",
"I second the OWASP info as being a valuable resource. The following may be of interest as well, notably the attack patterns:\n\nCERT Top 10 Secure Coding Practices\nCommon Attack Pattern Enumeration and Classification\nAttack Patterns\nSecure Programming for Linux and Unix\nA Taxonomy of Coding Errors that Affect Security\nSecure Programming with Static Analysis Presentation\n\n",
"Obviously test every field for vulnerabilities:\n\nSQL - escape strings (e.g. mysql_real_escape_string)\nXSS\nHTML being printed from input fields (a good sign of XSS usually)\nAnything else thatis not the specific purpose that field was created for\n\nSearch for infinite loops (the only indirect thing (if a lot of people found it accidentally) that could kill a server really).\n",
"Some prevention techniques:\nXSS\n\nIf you take any parameters/input from the user and ever plan on outputting it, whether in a log or a web page, sanitize it (strip/escape anything resembling HTML, quotes, javascript...) If you print the current URI of a page within itself, sanitize! Even printing PHP_SELF, for example, is unsafe. Sanitize! Reflective XSS comes mostly from unsanitized page parameters.\nIf you take any input from the user and save it or print it, warn them if anything dangerous/invalid is detected and have them re-input. an IDS is good for detection (such as PHPIDS.) Then sanitize before storage/printing. Then when you print something from storage/database, sanitize again!\nInput -> IDS/sanitize -> store -> sanitize -> output\nuse a code scanner during development to help spot potentially vulnerable code.\n\nXSRF\n\nNever use GET request for\ndestructive functionality, i.e.\ndeleting a post. Instead, only\naccept POST requests. GET makes it extra easy for hackery.\nChecking the\nreferrer to make sure the request\ncame from your site does not\nwork. It's not hard to spoof the\nreferrer.\nUse a random hash as a token that must be present and valid in every request, and that will expire after a while. Print the token in a hidden form field and check it on the server side when the form is posted. Bad guys would have to supply the correct token in order to forge a request, and if they managed to get the real token, it would need to be before it expired.\n\nSQL injection\n\nyour ORM or db abstraction class should have sanitizing methods - use them, always. If you're not using an ORM or db abstraction class... you should be.\n\n",
"SQL injection\n",
"XSS (Cross Site Scripting) Attacks\n",
"Easy to oversee and easy to fix: the sanitizing of data received from the client side. Checking for things such as ';' can help in preventing malicious code being injected into your application.\n",
"G'day,\nA good static analysis tool for security is FlawFinder written by David Wheeler. It does a good job looking for various security exploits,\nHowever, it doesn't replace having a knowledgable someone read through your code. As David says on his web page, \"A fool with a tool is still a fool!\"\nHTH.\ncheers,\nRob\n",
"You can get good firefox addons to test multiple flaws and vulnerabilities like xss and sql injections from Security Compass. Too bad they doesn't work on firefox 3.0. I hope that those will be updated soon.\n"
] |
[
12,
6,
2,
2,
1,
1,
1,
1,
1
] |
[] |
[] |
[
"defensive_programming",
"security"
] |
stackoverflow_0000028965_defensive_programming_security.txt
|
Q:
ASP.NET Custom Control Styling
I am in the process of beginning work on several ASP.NET custom controls. I was wondering if I could get some input on your guys/girls thoughts on how you apply styling to your controls.
I would rather push it to CSS, so for the few controls I have done in the past, I have simply stuck in a string property which allows you to type in the string, which is then slung into a "style" attribute when rendering. I know I could also use the "CSSClass" property and apply the "class" attribute.
I have not done much in the way of creating a "proper" Style property (in which you actually save the style object, and use the designer to specify its values). This to me seems like a lot of work, and TBH, I hate the Style editor UI and would much rather type in the CSS/class name to apply..
What are your thoughts on this?
Note: This is kind of subjective - so to be clear:
The accepted answer will be the one that:
Offers the pro's and con's of the various approaches.
Opinions are welcome, but a good answer should be constructive.
Backs it up with some real-world knowledge/experience.
There is nothing wrong with subjectivity. There is a problem with people being subjective and not thinking, being constructive or actually providing some insight and experience.
>>DO NOT<< tag this as "subjective" - that tag is a waste of time. "subjective" is not a technology or a category that people will look for. Fix the question rather than brush it off.
A:
It would depend on how the custom controls are being used - A commercial, re-distributable control should be compliant with the VS IDE, and behave the way users expect it to when they implement the control.
On the other hand there is no point in wasting a lot of time to get styling to work if you or your team are the only ones to use the control, so long as its styling works in a sane way.
Most of the custom controls I have implemented use a property to define the controls look and feel or just expose the controls' members own CSSClass properties.
The argument comes down to consistency vs. time - any element should use consistent styling mechanisms, if strapped for time, use a string method if not, implement a more complex / IDE friendly mechanism.
A:
I think you should consider your "target market" for the custom control, e.g., the people who will use it.
If it's an internal custom control, you can pretty much mandate the use of one or the other: if it's internal to the company you will have the ability to enforce its consistency.
If it's meant for commercial consumption, however, it is required that you give an option to provide a way to use either style or class. Case in point: the ASP.NET site navigation controls, e.g., SiteMapPath, Menu, Treeview. They have a bunch of properties exposed to allow either styles, classes, or a combination of both to each aspect of the controls' appearance.
|
ASP.NET Custom Control Styling
|
I am in the process of beginning work on several ASP.NET custom controls. I was wondering if I could get some input on your guys/girls thoughts on how you apply styling to your controls.
I would rather push it to CSS, so for the few controls I have done in the past, I have simply stuck in a string property which allows you to type in the string, which is then slung into a "style" attribute when rendering. I know I could also use the "CSSClass" property and apply the "class" attribute.
I have not done much in the way of creating a "proper" Style property (in which you actually save the style object, and use the designer to specify its values). This to me seems like a lot of work, and TBH, I hate the Style editor UI and would much rather type in the CSS/class name to apply..
What are your thoughts on this?
Note: This is kind of subjective - so to be clear:
The accepted answer will be the one that:
Offers the pro's and con's of the various approaches.
Opinions are welcome, but a good answer should be constructive.
Backs it up with some real-world knowledge/experience.
There is nothing wrong with subjectivity. There is a problem with people being subjective and not thinking, being constructive or actually providing some insight and experience.
>>DO NOT<< tag this as "subjective" - that tag is a waste of time. "subjective" is not a technology or a category that people will look for. Fix the question rather than brush it off.
|
[
"It would depend on how the custom controls are being used - A commercial, re-distributable control should be compliant with the VS IDE, and behave the way users expect it to when they implement the control.\nOn the other hand there is no point in wasting a lot of time to get styling to work if you or your team are the only ones to use the control, so long as it's styling works in a sane way.\nMost of the custom controls I have implemented use a property to define the controls look and feel or just expose the controls' members own CSSClass properties.\nThe argument comes down to consistency vs. time - any element should use consistent styling mechanisms, if strapped for time, use a string method if not, implement a more complex / IDE friendly mechanism.\n",
"I think you should consider your \"target market\" for the custom control, e.g., the people who will use it.\nIf it's an internal custom control, you can pretty much mandate the use of one or the other: if it's internal to the company you will have the ability to enforce its consistency.\nIf it's meant for commercial consumption, however, it is required that you give an option to provide a way to use either style or class. Case in point: the ASP.NET site navigation controls, e.g., SiteMapPath, Menu, Treeview. They have a bunch of properties exposed to allow either styles, classes, or a combination of both to each aspect of the controls' appearance.\n"
] |
[
3,
2
] |
[] |
[] |
[
"asp.net",
"custom_server_controls",
"styles"
] |
stackoverflow_0000070361_asp.net_custom_server_controls_styles.txt
|
Q:
Windows C++: How can I redirect stderr for calls to fprintf?
I am wrapping existing C++ code from a BSD project in our own custom wrapper and I want to integrate it to our code with as few changes as possible. This code uses fprintf to print to stderr in order to log / report errors.
I want to redirect this to an alternative place within the same process. On Unix I have done this with a socketpair and a thread: one end of the socket is where I send stderr (via a call to dup2) and the other end is monitored in a thread, where I can then process the output.
This does not work on Windows though because a socket is not the same as a file handle.
All documents I have found on the web show how to redirect output from a child process, which is not what I want. How can I redirect stderr within the same process getting a callback of some sort when output is written? (and before you say so, I've tried SetStdHandle but cannot find any way to make this work)...
A:
You can use a similar technique on Windows, you just need to use different words for the same concepts. :) This article: http://msdn.microsoft.com/en-us/library/ms682499.aspx uses a win32 pipe to handle I/O from another process, you just have to do the same thing with threads within the same process. Of course, in your case all output to stderr from anywhere in the process will be redirected to your consumer.
Actually, other pieces of the puzzle you may need are _fdopen and _open_osfhandle. In fact, here's a related example from some code I released years ago:
DWORD CALLBACK DoDebugThread(void *)
{
AllocConsole();
SetConsoleTitle("Copilot Debugger");
// The following is a really disgusting hack to make stdin and stdout attach
// to the newly created console using the MSVC++ libraries. I hope other
// operating systems don't need this kind of kludge.. :)
stdout->_file = _open_osfhandle((long)GetStdHandle(STD_OUTPUT_HANDLE), _O_TEXT);
stdin->_file = _open_osfhandle((long)GetStdHandle(STD_INPUT_HANDLE), _O_TEXT);
debug();
stdout->_file = -1;
stdin->_file = -1;
FreeConsole();
CPU_run();
return 0;
}
In this case, the main process was a GUI process which doesn't start with stdio handles at all. It opens a console, then shoves the right handles into stdout and stdin so the debug() function (which was designed as a stdio interactive function) can interact with the newly created console. You should be able to open some pipes and do the same sort of thing to redirect stderr.
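To make that concrete, here is a rough same-process sketch of the pipe idea — an illustration rather than drop-in code. CreatePipe, _open_osfhandle, _dup2 and _beginthreadex are the real Win32/CRT calls; the HandleStderrText callback and the error-handling policy are placeholders you would supply yourself:
#include <windows.h>
#include <io.h>
#include <fcntl.h>
#include <stdio.h>
#include <process.h>

static HANDLE g_readEnd = NULL;

// Worker thread: drain the read end of the pipe and hand the text to a
// callback of your choosing (HandleStderrText is a placeholder).
static unsigned __stdcall StderrPump(void*)
{
    char buf[1024];
    DWORD got = 0;
    while (ReadFile(g_readEnd, buf, sizeof(buf) - 1, &got, NULL) && got > 0) {
        buf[got] = '\0';
        // HandleStderrText(buf);
    }
    return 0;
}

// Point the CRT's stderr (fd 2) at the write end of an anonymous pipe so
// that existing fprintf(stderr, ...) calls end up in StderrPump above.
bool RedirectStderrInProcess()
{
    HANDLE writeEnd = NULL;
    if (!CreatePipe(&g_readEnd, &writeEnd, NULL, 0))
        return false;

    int fd = _open_osfhandle(reinterpret_cast<intptr_t>(writeEnd), _O_TEXT);
    if (fd == -1 || _dup2(fd, _fileno(stderr)) != 0)
        return false;
    setvbuf(stderr, NULL, _IONBF, 0); // unbuffered, so output arrives promptly

    _beginthreadex(NULL, 0, StderrPump, NULL, 0, NULL);
    return true;
}
Note that in a GUI process stderr may not start out attached to a valid descriptor, so you may need to reopen it (for example with freopen) before the _dup2 call.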
A:
You have to remember that what MSVCRT calls "OS handles" are not Win32 handles, but another layer of handles added just to confuse you. MSVCRT tries to emulate the Unix handle numbers where stdin = 0, stdout = 1, stderr = 2 and so on. Win32 handles are numbered differently and their values always happen to be a multiple of 4. Opening the pipe and getting all the handles configured properly will require getting your hands messy. Using the MSVCRT source code and a debugger is probably a requirement.
A:
You mention that you don't want to use a named pipe for internal use; it's probably worth pointing out that the documentation for CreatePipe() states, "Anonymous pipes are implemented using a named pipe with a unique name. Therefore, you can often pass a handle to an anonymous pipe to a function that requires a handle to a named pipe." So, I suggest that you just write a function that creates a similar pipe with the correct settings for async reading. I tend to use a GUID as a string (generated using CoCreateGUID() and StringFromIID()) to give me a unique name and then create the server and client ends of the named pipe with the correct settings for overlapped I/O (more details on this, and code, here: http://www.lenholgate.com/blog/2008/02/process-management-using-jobs-on-windows.html).
Once I have that, I wire up some code that I have for reading a file using overlapped I/O with an I/O Completion Port, and then I just get async notifications of the data as it arrives... However, I've got a fair amount of well-tested library code in there that makes it all happen...
It's probably possible to set up the named pipe and then just do an overlapped read with an event in your OVERLAPPED structure and check the event to see if data was available... I don't have any code available that does that though.
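For reference, the pipe-creation part of that approach might look something like the sketch below — error handling is thin and the naming scheme is just one possibility:
#include <windows.h>
#include <objbase.h>   // CoCreateGuid / StringFromGUID2 (link with ole32.lib)
#include <wchar.h>

// Create a uniquely named pipe whose server end supports overlapped reads,
// plus a plain client end suitable for handing to code that writes to it.
bool CreateOverlappedPipe(HANDLE* serverRead, HANDLE* clientWrite)
{
    GUID guid;
    CoCreateGuid(&guid);

    wchar_t guidStr[64];
    StringFromGUID2(guid, guidStr, 64);        // "{XXXXXXXX-XXXX-...}"

    wchar_t name[MAX_PATH] = L"\\\\.\\pipe\\";
    wcscat(name, guidStr);

    *serverRead = CreateNamedPipeW(name,
        PIPE_ACCESS_INBOUND | FILE_FLAG_OVERLAPPED,   // async reads on this end
        PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
        1, 4096, 4096, 0, NULL);
    if (*serverRead == INVALID_HANDLE_VALUE)
        return false;

    *clientWrite = CreateFileW(name, GENERIC_WRITE, 0, NULL,
                               OPEN_EXISTING, 0, NULL);
    return *clientWrite != INVALID_HANDLE_VALUE;
}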
|
Windows C++: How can I redirect stderr for calls to fprintf?
|
I am wrapping existing C++ code from a BSD project in our own custom wrapper and I want to integrate it to our code with as few changes as possible. This code uses fprintf to print to stderr in order to log / report errors.
I want to redirect this to an alternative place within the same process. On Unix I have done this with a socketpair and a thread: one end of the socket is where I send stderr (via a call to dup2) and the other end is monitored in a thread, where I can then process the output.
This does not work on Windows though because a socket is not the same as a file handle.
All documents I have found on the web show how to redirect output from a child process, which is not what I want. How can I redirect stderr within the same process getting a callback of some sort when output is written? (and before you say so, I've tried SetStdHandle but cannot find any way to make this work)...
|
[
"You can use a similar technique on Windows, you just need to use different words for the same concepts. :) This article: http://msdn.microsoft.com/en-us/library/ms682499.aspx uses a win32 pipe to handle I/O from another process, you just have to do the same thing with threads within the same process. Of course, in your case all output to stderr from anywhere in the process will be redirected to your consumer.\nActually, other pieces of the puzzle you may need are _fdopen and _open_osfhandle. In fact, here's a related example from some code I released years ago:\nDWORD CALLBACK DoDebugThread(void *)\n{\n AllocConsole();\n SetConsoleTitle(\"Copilot Debugger\");\n // The following is a really disgusting hack to make stdin and stdout attach\n // to the newly created console using the MSVC++ libraries. I hope other\n // operating systems don't need this kind of kludge.. :)\n stdout->_file = _open_osfhandle((long)GetStdHandle(STD_OUTPUT_HANDLE), _O_TEXT);\n stdin->_file = _open_osfhandle((long)GetStdHandle(STD_INPUT_HANDLE), _O_TEXT);\n debug();\n stdout->_file = -1;\n stdin->_file = -1;\n FreeConsole();\n CPU_run();\n return 0;\n} \n\nIn this case, the main process was a GUI process which doesn't start with stdio handles at all. It opens a console, then shoves the right handles into stdout and stdin so the debug() function (which was designed as a stdio interactive function) can interact with the newly created console. You should be able to open some pipes and do the same sort of thing to redirect stderr.\n",
"You have to remember that what MSVCRT calls \"OS handles\" are not Win32 handles, but another layer of handles added just to confuse you. MSVCRT tries to emulate the Unix handle numbers where stdin = 0, stdout = 1, stderr = 2 and so on. Win32 handles are numbered differently and their values always happen to be a multiple of 4. Opening the pipe and getting all the handles configured properly will require getting your hands messy. Using the MSVCRT source code and a debugger is probably a requirement.\n",
"You mention that you don't want to use a named pipe for internal use; it's probably worth poining out that the documentation for CreatePipe() states, \"Anonymous pipes are implemented using a named pipe with a unique name. Therefore, you can often pass a handle to an anonymous pipe to a function that requires a handle to a named pipe.\" So, I suggest that you just write a function that creates a similar pipe with the correct settings for async reading. I tend to use a GUID as a string (generated using CoCreateGUID() and StringFromIID()) to give me a unique name and then create the server and client ends of the named pipe with the correct settings for overlapped I/O (more details on this, and code, here: http://www.lenholgate.com/blog/2008/02/process-management-using-jobs-on-windows.html).\nOnce I have that I wire up some code that I have to read a file using overlapped I/O with an I/O Completion Port and, well, then I just get async notifications of the data as it arrives... However, I've got a fair amount of well tested library code in there that makes it all happen... \nIt's probably possible to set up the named pipe and then just do an overlapped read with an event in your OVERLAPPED structure and check the event to see if data was available... I don't have any code available that does that though.\n"
] |
[
6,
3,
1
] |
[] |
[] |
[
"c++",
"redirect",
"windows"
] |
stackoverflow_0000007664_c++_redirect_windows.txt
|
Q:
What languages support covariance on inherited methods' return types?
I originally asked this question, but in finding an answer, discovered that my original problem was a lack of support in C# for covariance on inherited methods' return types. After discovering that, I became curious as to what languages do support this feature.
I will accept the answer of whoever can name the most.
EDIT: John Millikin correctly pointed out that lots of dynamic languages support this. To clarify:
I am only looking for static/strongly typed languages.
A:
C++
Java
REALbasic
Eiffel
Sather
Modula-3
A:
Any dynamic languages, of course -- Python, Ruby, Smalltalk, Javascript, etc.
A:
Basically what I'm asking is what languages support what I'm trying to do here.
Does C# let you specify different data types for the get() and set() methods? If not, I would split them into actual Leg get_leg() and set_leg(DogLeg) functions. Otherwise one of two things will happen: 1) overspecification of get_leg() 2) underspecification of set_leg().
A:
C++ supports covariant return types.
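A tiny sketch of what that looks like (Animal and Dog are placeholder names):
// The override narrows the return type from Animal* to Dog*, so callers
// going through a Dog pointer get the more specific type without a cast.
struct Animal {
    virtual Animal* clone() const { return new Animal(*this); }
    virtual ~Animal() {}
};

struct Dog : Animal {
    Dog* clone() const { return new Dog(*this); } // covariant return type
};

int main() {
    Dog d;
    Dog* copy = d.clone(); // no downcast needed
    delete copy;
    return 0;
}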
A:
Java added support for this in 1.5. It will not compile in earlier versions.
A:
As pointed out by Ivan Hamilton and Mat Noguchi, C++ supports the feature. But note that covariant return types are broken for template classes which inherit from some base in MSVC 7.X through 9.X (and probably 6 also). You get error C2555.
|
What languages support covariance on inherited methods' return types?
|
I originally asked this question, but in finding an answer, discovered that my original problem was a lack of support in C# for covariance on inherited methods' return types. After discovering that, I became curious as to what languages do support this feature.
I will accept the answer of whoever can name the most.
EDIT: John Millikin correctly pointed out that lots of dynamic languages support this. To clarify:
I am only looking for static/strongly typed languages.
|
[
"\nC++\nJava\nREALbasic\nEiffel\nSather\nModula-3\n\n",
"Any dynamic languages, of course -- Python, Ruby, Smalltalk, Javascript, etc.\n",
"\nBasically what I'm asking is what languages support what I'm trying to do here.\n\nDoes C# let you specify different data types for the get() and set() methods? If not, I would split them into actual Leg get_leg() and set_leg(DogLeg) functions. Otherwise one of two things will happen: 1) overspecification of get_leg() 2) underspecification of set_leg().\n",
"C++ supports covariant return types.\n",
"Java added support for this in 1.5. It will not compile in earlier versions. \n",
"As pointed out by Ivan Hamilton and Mat Noguchi, C++ supports the feature. But note that covariant return types are broken for template classes which inherit from some base in MSVC 7.X through 9.X (and probably 6 also). You get error C2555.\n"
] |
[
5,
2,
0,
0,
0,
0
] |
[
"\nbut I think thats what I'm asking for..or is it?\n\nI frankly don't know what you're asking. Java apparently has the same support for return-type covariance as C#, so if whatever you're looking for is lacking in C#, it's lacking in Java also.\n"
] |
[
-1
] |
[
"c#",
"covariance",
"java",
"oop",
"programming_languages"
] |
stackoverflow_0000047009_c#_covariance_java_oop_programming_languages.txt
|
Q:
When should you use standard html tags/inputs and when should you use the asp.net controls?
As I put together each ASP.NET page, it's clear that most of the time I could use the standard HTML tags just as easily as the web forms controls. When this is the case, what is the lure of the webforms controls?
A:
HTML controls will be output a lot faster than server controls, since there is nothing required on the part of the server; it literally just copies the markup in the ASPX page.
Server controls, however, require instantiation, parsing of the postback data, and the like; this is obviously where the work comes in for the server.
The general rule of thumb is:
If it's static (i.e. you don't need programmatic support), make it an HTML control. HTML controls can easily be "upgraded" to server controls, so there is no real issue of maintenance at a later time.
A:
Webform controls have more server-side pre-built functionality (server-side hooks, methods, and attributes). I tend to use HTML controls only when I require a high degree of formatting (styling), as that bypasses the way .NET renders its controls (which, at times, can be very strange).
|
When should you use standard html tags/inputs and when should you use the asp.net controls?
|
As I put together each asp.net page It's clear that most of the time I could use the standard HTML tags just as easily as the web forms controls. When this is the case what is the lure of the webforms controls?
|
[
"HTML controls will be output a lot faster than server controls since there is nothing required on part of the server.. It just literally copies the markup in the ASPX page.\nServer controls however require instantiation.. Parsing of the postback data and the like, this is obviously where the work comes in for the server.\nThe general rule of thumb is:\nIf its static (i.e. you dont need programmatic support), make it a HTML control. HTML controls can easily be \"upgraded\" to server controls so there is no real issue of maintanence at a later time.\n",
"Webform controls have more server-side pre-built functionality (server side hooks, methods and attributes), I tend to use HTML controls only when I require a high degree of formatting (styling) as that bypasses the way .Net renders it's controls (which, at times, can be very strange).\n"
] |
[
5,
2
] |
[] |
[] |
[
"asp.net",
"c#",
"html",
"visual_studio",
"webforms"
] |
stackoverflow_0000070292_asp.net_c#_html_visual_studio_webforms.txt
|
Q:
What is the Simplest Tomcat/Apache Connector (Windows)?
I have apache 2.2 and tomcat 5.5 running on a Windows XP machine.
Which tomcat/apache connector is the easiest to set up and is well documented?
A:
mod_proxy_ajp would be the easiest to use if you are using Apache 2.2. It is part of the Apache distribution so you don't need to install any additional software.
In your httpd.conf you need to make sure that mod_proxy and mod_proxy_ajp are loaded:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
Then you can use the ProxyPass and ProxyPassReverse directives as follows:
ProxyPass /portal ajp://localhost:8009/portal
ProxyPassReverse /portal ajp://localhost:8009/portal
You should consult the Apache 2.2 documentation for a full catalog of the directives available.
A:
mod_jk, or simply use mod_proxy even though it's not really a Tomcat connector.
|
What is the Simplest Tomcat/Apache Connector (Windows)?
|
I have apache 2.2 and tomcat 5.5 running on a Windows XP machine.
Which tomcat/apache connector is the easiest to set up and is well documented?
|
[
"mod_proxy_ajp would be the easiest to use if you are using Apache 2.2. It is part of the Apache distribution so you don't need to install any additional software.\nIn your httpd.conf you need to make sure that mod_proxy and mod_proxy_ajp are loaded:\n\nLoadModule proxy_module modules/mod_proxy.so\nLoadModule proxy_ajp_module modules/mod_proxy_ajp.so\n\nThen you can use the ProxyPass and ProxyPassReverse directives as follows:\n\nProxyPass /portal ajp://localhost:8009/portal\nProxyPassReverse /portal ajp://localhost:8009/portal\n\nYou should consult the Apache 2.2 documentation for a full catalog of the directives available.\n",
"mod_jk, or simply just use mod_proxy even though it's not really a Tomcat connector.\n"
] |
[
5,
0
] |
[] |
[] |
[
"apache",
"connector",
"tomcat",
"windows"
] |
stackoverflow_0000070389_apache_connector_tomcat_windows.txt
|
Q:
What does Class::MethodMaker exactly do?
I want to know exactly what sequence of calls occurs when a getter/setter created through Class::MethodMaker is called.
How much costlier are getter/setters defined by MethodMaker than the native ones (overwritten in the module)?
A:
I don't have a simple answer for your question regarding Class::MethodMaker performance. As a previous answer mentioned, you can use the debugger to find out what's going on under the hood. However, I know that Class::MethodMaker generates huge amounts of code at install time. This would indicate three separate things to me:
Regarding run-time, it's probably on the faster side of the whole slew of method generators. Why generate loads of code at install time otherwise?
It installs O(Megabytes) of code on your disk!
It may potentially be slow at compile time, depending on what parts of the generated code are loaded for simple use cases.
You really need to spend a few minutes to think about what you really need. If you want simple accessor methods auto-generated but write anything more complicated by hand, maybe look at Class::Accessor::Fast. Or, if you want the fastest possible accessor-methods, investigate Class::XSAccessor, whose extra-simple methods run as C/XS code and are approximately twice as fast as the fastest Perl accessor. (Note: I wrote the latter module, so take this with a grain of salt.)
One further comment: if you're ever going to use the PAR/PAR::Packer toolkit for packaging your application, note that the large amount of code of Class::MethodMaker results in a significantly larger executable and a slower initial start-up time. Additionally, there's a known incompatibility between C::MethodMaker and PAR. But that may be considered a PAR bug.
A:
This is exactly what debugging tools are for :)
Have a look at the perldebug docs, particularly the section on profiling.
In particular, running your script with perl -d:DProf filename.pl will generate a tmon.out file from which the dprofpp tool (distributed with Perl) can produce a report.
I used the following simple test script:
#!/usr/bin/perl
package Foo;
use strict;
use Class::MethodMaker [ scalar => ['bar'], new => ['new'] ];
package main;
use strict;
my $foo = new Foo;
$foo->bar('baz');
print $foo->bar . "\n";
Running it with perl -d:DProf methodmakertest.pl and then using dprofpp on the output gave:
[davidp@supernova:~/tmp]$ dprofpp tmon.out
Class::MethodMaker::scalar::scal0000 has 1 unstacked calls in outer
Class::MethodMaker::Engine::new has 1 unstacked calls in outer
AutoLoader::AUTOLOAD has -2 unstacked calls in outer
Total Elapsed Time = 0.08894 Seconds
User+System Time = 0.07894 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c Name
25.3 0.020 0.020 4 0.0050 0.0050 Class::MethodMaker::Constants::BEG
IN
25.3 0.020 0.029 12 0.0017 0.0025 Class::MethodMaker::Engine::BEGIN
12.6 0.010 0.010 1 0.0100 0.0100 DynaLoader::dl_load_file
12.6 0.010 0.010 2 0.0050 0.0050 AutoLoader::AUTOLOAD
12.6 0.010 0.010 14 0.0007 0.0007 Class::MethodMaker::V1Compat::reph
rase_prefix_option
0.00 0.000 0.000 1 0.0000 0.0000 Class::MethodMaker::scalar::scal00
00
0.00 0.000 0.000 1 0.0000 0.0000 Class::MethodMaker::Engine::new
0.00 - -0.000 1 - - DynaLoader::dl_undef_symbols
0.00 - -0.000 1 - - Class::MethodMaker::bootstrap
0.00 - -0.000 1 - - warnings::BEGIN
0.00 - -0.000 1 - - warnings::unimport
0.00 - -0.000 1 - - DynaLoader::dl_find_symbol
0.00 - -0.000 1 - - DynaLoader::dl_install_xsub
0.00 - -0.000 1 - - UNIVERSAL::VERSION
0.00 - -0.000 1 - - Foo::new
The two most expensive calls are the Class::MethodMaker::Constants::BEGIN and Class::MethodMaker::Engine::BEGIN blocks, which are obviously called at compile time only, so they may slow the compilation of your script slightly, but subsequent object creation/accessor usage is not affected by it.
A:
The real question is: does it matter?
It's yet another accessors generating module. These modules all have a speed/functionality trade-off. Just pick one that offers everything you need. It's not like accessors are likely to become a bottleneck in your application.
A:
@Leon Timmermans
I am aware of the fact that there is some speed/functionality trade-off, but I want to get an idea of how good or bad it is. Better still if I can get specifics of the implementations, so that it's easier to decide.
A:
Further to my previous answer, if you want to see exactly what's going on under the hood in detail, run your script in the debugger with trace mode on (perl -d filename.pl, then say "t" to trace, then "r" to run the script; expect a lot of output though!).
|
What does Class::MethodMaker exactly do?
|
I want to know what exactly is the sequence of calls that occurs when a getter/setter created through Class::MethodMaker is called?
How much costlier are getter/setters defined by MethodMaker than the native ones (overwritten in the module)?
|
[
"I don't have a simple answer for your question regarding Class::MethodMaker performance. As a previous answer mentioned, you can use the debugger to find out what's going on under the hood. However, I know that Class::MethodMaker generates huge amounts of code at install time. This would indicate three separate things to me:\n\nRegarding run-time, it's probably on the faster side of the whole slew of method generators. Why generate loads of code at install time otherwise?\nIt installs O(Megabytes) of code on your disk!\nIt may potentially be slow at compile time, depending on what parts of the generated code are loaded for simple use cases.\n\nYou really need to spend a few minutes to think about what you really need. If you want simple accessor methods auto-generated but write anything more complicated by hand, maybe look at Class::Accessor::Fast. Or, if you want the fastest possible accessor-methods, investigate Class::XSAccessor, whose extra-simple methods run as C/XS code and are approximately twice as fast as the fastest Perl accessor. (Note: I wrote the latter module, so take this with a grain of salt.)\nOne further comment: if you're ever going to use the PAR/PAR::Packer toolkit for packaging your application, note that the large amount of code of Class::MethodMaker results in a significantly larger executable and a slower initial start-up time. Additionally, there's a known incompatibility between C::MethodMaker and PAR. But that may be considered a PAR bug.\n",
"This is exactly what debugging tools are for :)\nHave a look at the perldebug docs, particularly the section on profiling.\nIn particular, running your script with perl -dDProf filename.pl will generate a tt.out file which the dprofpp tool (distributed with Perl) can produce a report from.\nI used the following simple test script:\n\n#!/usr/bin/perl\n\npackage Foo;\nuse strict;\nuse Class::MethodMaker [ scalar => ['bar'], new => ['new'] ];\n\npackage main;\nuse strict;\n\nmy $foo = new Foo;\n$foo->bar('baz');\nprint $foo->bar . \"\\n\";\n\nRunning it with perl -d:DProf methodmakertest.pl and then using dprofpp on the output gave:\n\n[davidp@supernova:~/tmp]$ dprofpp tmon.out\nClass::MethodMaker::scalar::scal0000 has 1 unstacked calls in outer\nClass::MethodMaker::Engine::new has 1 unstacked calls in outer\nAutoLoader::AUTOLOAD has -2 unstacked calls in outer\nTotal Elapsed Time = 0.08894 Seconds\n User+System Time = 0.07894 Seconds\nExclusive Times\n%Time ExclSec CumulS #Calls sec/call Csec/c Name\n 25.3 0.020 0.020 4 0.0050 0.0050 Class::MethodMaker::Constants::BEG\n IN\n 25.3 0.020 0.029 12 0.0017 0.0025 Class::MethodMaker::Engine::BEGIN\n 12.6 0.010 0.010 1 0.0100 0.0100 DynaLoader::dl_load_file\n 12.6 0.010 0.010 2 0.0050 0.0050 AutoLoader::AUTOLOAD\n 12.6 0.010 0.010 14 0.0007 0.0007 Class::MethodMaker::V1Compat::reph\n rase_prefix_option\n 0.00 0.000 0.000 1 0.0000 0.0000 Class::MethodMaker::scalar::scal00\n 00\n 0.00 0.000 0.000 1 0.0000 0.0000 Class::MethodMaker::Engine::new\n 0.00 - -0.000 1 - - DynaLoader::dl_undef_symbols\n 0.00 - -0.000 1 - - Class::MethodMaker::bootstrap\n 0.00 - -0.000 1 - - warnings::BEGIN\n 0.00 - -0.000 1 - - warnings::unimport\n 0.00 - -0.000 1 - - DynaLoader::dl_find_symbol\n 0.00 - -0.000 1 - - DynaLoader::dl_install_xsub\n 0.00 - -0.000 1 - - UNIVERSAL::VERSION\n 0.00 - -0.000 1 - - Foo::new\n\nThe two most expensive calls are the Class::MethodMaker::Constants::BEGIN and Class::MethodMaker::Engine::BEGIN blocks, which are obviously called at compile time only, so they may slow the compilation of your script slightly, but subsequent object creation/accessor usage is not affected by it.\n",
"The real question is: does it matter?\nIt's yet another accessors generating module. These modules all have a speed/functionality trade-off. Just pick one that offers everything you need. It's not like accessors are likely to become a bottleneck in your application.\n",
"@Leon Timmermans\nI am aware of the fact that there is some speed/functionality trade off but want to get idea of how good/bad is it? And much better, if I can get specific of the implementations so that its easier to decide.\n",
"Further to my previous answer, if you want to see exactly what's going on under the hood in detail, run your script in the debugger with trace mode on (perl -d filename.pl, then say \"t\" to trace, then \"r\" to run the script; expect a lot of output though!).\n"
] |
[
2,
1,
0,
0,
0
] |
[] |
[] |
[
"perl"
] |
stackoverflow_0000062102_perl.txt
|
Q:
How do you unit-test code that interacts with and instantiates third-party COM objects?
One of the biggest issues currently holding me back from diving full steam into unit testing is that a really large percentage of the code I write is heavily dependent on third-party COM objects from different sources that also tend to interact with each other (I'm writing add-ins for Microsoft Office using several helper libraries if you need to know).
I know I should probably use mock objects but how exactly would I go about that in this case? I can see that it's relatively easy when I just have to pass a reference to an already existing object but some of my routines instantiate external COM objects themselves and then sometimes pass them on to some other external COM-object from yet a different library.
What is the best-practice approach here? Should I have my testing code temporarily change the COM registration information in the registry so the tested code will instantiate one of my mock objects instead? Should I inject modified type library units? What other approaches are there?
I would be especially grateful for examples or tools for Delphi but would be just as happy with more general advice and higher-level explanations just as well.
Thanks,
Oliver
A:
The traditional approach says that your client code should use a wrapper, which is responsible for instantiating the COM object. This wrapper can then be easily mocked.
Because you've got parts of your code instantiating the COM objects directly, this doesn't really fit. If you can change that code, you could use the factory pattern: they use the factory to create the COM object. You can mock the factory to return alternative objects.
Whether the object is accessed via a wrapper or via the original COM interface is up to you. If you choose to mock the COM interface, remember to instrument IUnknown::QueryInterface in your mock, so you know that you've mocked all of the interfaces, particularly if the object is then passed to some other COM object.
Alternatively, check out the CoTreatAsClass function. I've never used it, but it might do what you need.
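To sketch the factory idea in C++ (IComFactory and the mock class are invented names for illustration; CoCreateInstance and QueryInterface are the genuine COM calls):
#include <windows.h>
#include <objbase.h>

// Production code asks this factory for COM objects instead of calling
// CoCreateInstance directly, so tests can swap in a fake.
struct IComFactory {
    virtual HRESULT Create(REFCLSID clsid, REFIID iid, void** out) = 0;
    virtual ~IComFactory() {}
};

// Shipping implementation: defers to the real COM runtime.
struct RealComFactory : IComFactory {
    HRESULT Create(REFCLSID clsid, REFIID iid, void** out) {
        return CoCreateInstance(clsid, NULL, CLSCTX_INPROC_SERVER, iid, out);
    }
};

// Test implementation: hands back a hand-written fake and exercises
// QueryInterface the same way real clients would.
struct MockComFactory : IComFactory {
    IUnknown* fake;
    explicit MockComFactory(IUnknown* f) : fake(f) {}
    HRESULT Create(REFCLSID, REFIID iid, void** out) {
        return fake->QueryInterface(iid, out);
    }
};
The same shape works in Delphi with an abstract factory class; the point is simply that the code under test never creates the COM object directly.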
A:
It comes down to 'designing for testability'. Ideally, you should not instantiate those COM objects directly but should access them through a layer of indirection that can be replaced by a mock object.
Now, COM itself does provide a level of indirection and you could provide a mock object that provided a substitute for the real one but I suspect it would be a pain to create and I doubt if you'd get much help from an existing mocking framework.
A:
I would write a thin wrapper class around your third party COM object, which has the ability to load a mock object rather than the actual COM object in the unit testing situation. I normally do this by having a second constructor that I call passing in the mock object. The normal constructor would have just loaded the COM object as normal.
The wikipedia article has a good introduction to the subject
Wikipedia article
|
How do you unit-test code that interacts with and instantiates third-party COM objects?
|
One of the biggest issues currently holding me back from diving full steam into unit testing is that a really large percentage of the code I write is heavily dependent on third-party COM objects from different sources that also tend to interact with each other (I'm writing add-ins for Microsoft Office using several helper libraries if you need to know).
I know I should probably use mock objects but how exactly would I go about that in this case? I can see that it's relatively easy when I just have to pass a reference to an already existing object but some of my routines instantiate external COM objects themselves and then sometimes pass them on to some other external COM-object from yet a different library.
What is the best-practice approach here? Should I have my testing code temporarily change the COM registration information in the registry so the tested code will instantiate one of my mock objects instead? Should I inject modified type library units? What other approaches are there?
I would be especially grateful for examples or tools for Delphi but would be just as happy with more general advice and higher-level explanations just as well.
Thanks,
Oliver
|
[
"The traditional approach says that your client code should use a wrapper, which is responsible for instantiating the COM object. This wrapper can then be easily mocked.\nBecause you've got parts of your code instantiating the COM objects directly, this doesn't really fit. If you can change that code, you could use the factory pattern: they use the factory to create the COM object. You can mock the factory to return alternative objects.\nWhether the object is accessed via a wrapper or via the original COM interface is up to you. If you choose to mock the COM interface, remember to instrument IUnknown::QueryInterface in your mock, so you know that you've mocked all of the interfaces, particularly if the object is then passed to some other COM object.\nAlternatively, check out the CoTreateAsClass method. I've never used it, but it might do what you need.\n",
"It comes down to 'designing for testability'. Ideally, you should not instantiate those COM objects directly but should access them through a layer of indirection that can be replaced by a mock object.\nNow, COM itself does provide a level of indirection and you could provide a mock object that provided a substitute for the real one but I suspect it would be a pain to create and I doubt if you'd get much help from an existing mocking framework.\n",
"I would write a thin wrapper class around your third party COM object, which has the ability to load a mock object rather than the actual COM object in the unit testing situation. I normally do this by having a second constructor that I call passing in the mock object. The normal constructor would have just loaded the COM object as normal. \nThe wikipedia article has a good introduction to the subject\nWikipedia artible\n"
] |
[
6,
3,
2
] |
[] |
[] |
[
"com",
"delphi",
"mocking",
"unit_testing"
] |
stackoverflow_0000070482_com_delphi_mocking_unit_testing.txt
|
Q:
VS 2005 Toolbox kind of control .NET
I'm looking for a control that the Visual Studio "Toolbox" menu uses. It can be docked and can retract (pin).
Would you know where I can find a control or COM I could use which would look like this?
A:
I would recommend the DockPanel Suite by Weifen Luo.
A:
I think you just need to use a normal form (set the form type to Tool) and use the docking property to dock it to the left or right. You can set the width if you like and use the resize event to stop the user from making it too big or small.
A:
You don't mention what language you want to use. For C++, use the Feature Pack.
|
VS 2005 Toolbox kind of control .NET
|
I'm looking for a control that the Visual Studio "Toolbox" menu uses. It can be docked and can retract (pin).
Would you know where I can find a control or COM I could use which would look like this?
|
[
"I would recommend the DockPanel Suite by Weifen Luo.\n",
"I think you just need to use a normal form (set the form type to Tool) and use the docking property to dock to to the left or right. You can set the width if you like and use the resize event to stop the user from making it too big or small.\n",
"You don't mention what language you want to use. For C++, use the Feature Pack.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
".net",
"c#",
"c++",
"vb.net",
"visual_studio_2005"
] |
stackoverflow_0000061914_.net_c#_c++_vb.net_visual_studio_2005.txt
|
Q:
How to get hashes out of arrays in Perl?
I want to write a little "DBQuery" function in Perl so I can have one-liners which send an SQL statement and receive back an array of hashes, i.e. a recordset. However, I'm running into an issue with Perl syntax (and probably some odd pointer/reference issue) which is preventing me from unpacking the information from the hash that I'm getting from the database. The sample code below demonstrates the issue.
I can get the data "Jim" out of a hash inside an array with this syntax:
print $records[$index]{'firstName'}
returns "Jim"
but if I copy the hash record in the array to its own hash variable first, then I strangely can't access the data anymore in that hash:
%row = $records[$index];
$row{'firstName'};
returns "" (blank)
Here is the full sample code showing the problem. Any help is appreciated:
my @records = (
{'id' => 1, 'firstName' => 'Jim'},
{'id' => 2, 'firstName' => 'Joe'}
);
my @records2 = ();
$numberOfRecords = scalar(@records);
print "number of records: " . $numberOfRecords . "\n";
for(my $index=0; $index < $numberOfRecords; $index++) {
#works
print 'you can print the records like this: ' . $records[$index]{'firstName'} . "\n";
#does NOT work
%row = $records[$index];
print 'but not like this: ' . $row{'firstName'} . "\n";
}
A:
The nested data structure contains a hash reference, not a hash.
# Will work (the -> dereferences the reference)
$row = $records[$index];
print "This will work: ", $row->{firstName}, "\n";
# This will also work, by promoting the hash reference into a hash
%row = %{ $records[$index] };
print "This will work: ", $row{firstName}, "\n";
If you're ever presented with a deep Perl data structure, you may profit from printing it using Data::Dumper to print it into human-readable (and Perl-parsable) form.
A:
The array of hashes doesn't actually contain hashes, but rather references to hashes.
This line:
%row = $records[$index];
assigns %row with one entry. The key is the scalar:
{'id' => 1, 'firstName' => 'Jim'},
which is a reference to the hash, while the value is blank.
What you really want to do is this:
$row = $records[$index];
$row->{'firstName'};
or else:
%row = %{ $records[$index] };
$row{'firstName'};
A:
Others have commented on hashes vs hashrefs. One other thing that I feel should be mentioned is your DBQuery function - it seems you're trying to do something that's already built into the DBI? If I understand your question correctly, you're trying to replicate something like selectall_arrayref:
This utility method combines "prepare", "execute" and "fetchall_arrayref" into a single call. It returns a reference to an array containing a reference to an array (or hash, see below) for each row of data fetched.
A:
To add to the lovely answers above, let me add that you should always, always, always (yes, three "always"es) use "use warnings" at the top of your code. If you had done so, you would have gotten the warning "Reference found where even-sized list expected at -e line 1."
A:
What you actually have in your array is a hashref, not a hash. If you don't understand this concept, it's probably worth reading the perlref documentation.
to get the hash you need to do
my %hash = %{ $records[$index] };
Eg.
my @records = (
{'id' => 1, 'firstName' => 'Jim'},
{'id' => 2, 'firstName' => 'Joe'}
);
my %hash = %{$records[1]};
print $hash{id}."\n";
Although I'm not sure why you would want to do this, unless it's for academic purposes. Otherwise, I'd recommend either fetchall_hashref/fetchall_arrayref in the DBI module or using something like Class::DBI.
A:
Also note a good perl idiom to use is
for my $rowHR ( @records ) {
my %row = %$rowHR;
#or whatever...
}
to iterate through the list.
|
How to get hashes out of arrays in Perl?
|
I want to write a little "DBQuery" function in perl so I can have one-liners which send an SQL statement and receive back and an array of hashes, i.e. a recordset. However, I'm running into an issue with Perl syntax (and probably some odd pointer/reference issue) which is preventing me from packing out the information from the hash that I'm getting from the database. The sample code below demonstrates the issue.
I can get the data "Jim" out of a hash inside an array with this syntax:
print $records[$index]{'firstName'}
returns "Jim"
but if I copy the hash record in the array to its own hash variable first, then I strangely can't access the data anymore in that hash:
%row = $records[$index];
$row{'firstName'};
returns "" (blank)
Here is the full sample code showing the problem. Any help is appreciated:
my @records = (
{'id' => 1, 'firstName' => 'Jim'},
{'id' => 2, 'firstName' => 'Joe'}
);
my @records2 = ();
$numberOfRecords = scalar(@records);
print "number of records: " . $numberOfRecords . "\n";
for(my $index=0; $index < $numberOfRecords; $index++) {
#works
print 'you can print the records like this: ' . $records[$index]{'firstName'} . "\n";
#does NOT work
%row = $records[$index];
print 'but not like this: ' . $row{'firstName'} . "\n";
}
|
[
"The nested data structure contains a hash reference, not a hash.\n# Will work (the -> dereferences the reference)\n$row = $records[$index];\nprint \"This will work: \", $row->{firstName}, \"\\n\";\n\n# This will also work, by promoting the hash reference into a hash\n%row = %{ $records[$index] };\nprint \"This will work: \", $row{firstName}, \"\\n\";\n\nIf you're ever presented with a deep Perl data structure, you may profit from printing it using Data::Dumper to print it into human-readable (and Perl-parsable) form.\n",
"The array of hashes doesn't actually contain hashes, but rather an references to a hash.\nThis line: \n%row = $records[$index];\n\nassigns %row with one entry. The key is the scalar:\n {'id' => 1, 'firstName' => 'Jim'},\n\nWhich is a reference to the hash, while the value is blank.\nWhat you really want to do is this:\n$row = $records[$index];\n$row->{'firstName'};\n\nor else:\n$row = %{$records[$index];}\n$row{'firstName'};\n\n",
"Others have commented on hashes vs hashrefs. One other thing that I feel should be mentioned is your DBQuery function - it seems you're trying to do something that's already built into the DBI? If I understand your question correctly, you're trying to replicate something like selectall_arrayref:\n\nThis utility method combines \"prepare\", \"execute\" and \"fetchall_arrayref\" into a single call. It returns a reference to an array containing a reference to an array (or hash, see below) for each row of data fetched.\n\n",
"To add to the lovely answers above, let me add that you should always, always, always (yes, three \"always\"es) use \"use warnings\" at the top of your code. If you had done so, you would have gotten the warning \"Reference found where even-sized list expected at -e line 1.\"\n",
"what you actually have in your array is a hashref, not a hash. If you don't understand this concept, its probably worth reading the perlref documentation.\nto get the hash you need to do\nmy %hash = %{@records[$index]};\n\nEg.\nmy @records = (\n {'id' => 1, 'firstName' => 'Jim'}, \n {'id' => 2, 'firstName' => 'Joe'}\n );\n\n my %hash = %{$records[1]};\n\n print $hash{id}.\"\\n\";\n\nAlthough. I'm not sure why you would want to do this, unless its for academic purposes. Otherwise, I'd recommend either fetchall_hashref/fetchall_arrayref in the DBI module or using something like Class::DBI.\n",
"Also note a good perl idiom to use is\nfor my $rowHR ( @records ) {\n my %row = %$rowHR; \n #or whatever...\n}\nto iterate through the list.\n"
] |
[
23,
5,
4,
3,
1,
0
] |
[] |
[] |
[
"arrays",
"hash",
"perl",
"perl_data_structures"
] |
stackoverflow_0000051195_arrays_hash_perl_perl_data_structures.txt
|
Q:
Good text on order analysis
As a self-taught computer programmer, I'm often at a loss to estimate the O() value for a particular operation. Yeah, I know off the top of my head most of the important ones, like for the major sorts and searches, but I don't know how to calculate one when something new comes along, unless it's blindingly obvious. Is there a good web site or text that explains how to do that? Heck, I don't even know what computer scientists call it, so I can't google it.
A:
It's called Big O Notation, and it's used in Computational Complexity Theory.
The wikipedia articles are a pretty good starting point, as are the bibliography at the bottom of the page.
A:
Introduction to Algorithms is the standard text used at most universities. I've used it and can recommend those chapters on order analysis. I'd start with the articles in Tim Howland's answer, though.
A:
If you really want to learn this topic, then you probably need a standard theory/algorithms textbook. I don't know of any website that can actually teach you complexity analysis ("complexity" or "time complexity" is what those O() values are called; you might also want to google for "analysis of algorithms" or "introduction to algorithms" or such).
But before that -- a free option. There are slides from a course given by Erik Demaine and Charles Leiserson in MIT, that are free and look great. I would definitely try to read them and see if that works for you. They are here.
Now, textbooks:
The classical choice for a textbook is Cormen et al's book Introduction to Algorithms (there might be a cheap version available to buy here and I remember seeing a free (possibly illegal) version online, but I don't remember where).
A more recent and modern-style book, which is IMO more fun to read and a better choice, is Kleinberg and Tardos' Algorithm Design.
Here are some websites with information (I got these by googling "algorithm analysis lecture notes" without the quotes):
Algorithms Lecture Notes
Lecture notes by Steve Skiena
The above is written by a computer science theorist. So programmers or other practical people might have some different opinions.
A:
It is called algorithm analysis and is a science in itself. Take a look at some of the books here
A:
Your links takes me to a site in
Russian that seems to want a userid
and password. Legitimate mistake, or
troll? Paul Tomblin
The site is in Bulgarian and you shouldn't need a password to access the list of files I linked to and download some of them. Unless, of course, there is an access restriction for IPs from outside Bulgaria, which I really don't know about.
Sorry, I don't know how to make a comment.
|
Good text on order analysis
|
As a self-taught computer programmer, I'm often at a loss to estimate the O() value for a particular operation. Yeah, I know off the top of my head most of the important ones, like for the major sorts and searches, but I don't know how to calculate one when something new comes along, unless it's blindingly obvious. Is there a good web site or text that explains how to do that? Heck, I don't even know what computer scientists call it, so I can't google it.
|
[
"It's called Big O Notation, and it's used in Computational Complexity Theory.\nThe wikipedia articles are a pretty good starting point, as are the bibliography at the bottom of the page.\n",
"Introduction to Algorithms is the standard text used at most universities. I've used it and can recommend those chapters on order analysis. I'd start with the articles in Tim Howland's answer, though.\n",
"If you really want to learn this topic, then you probably need a standard theory/algorithms textbook. I don't know of any website that can actually teach you complexity analysis (\"complexity\" or \"time complexity\" is how you call those O() values; you might also want to google for \"analysis of algorithms\" or \"introduction to algorithms\" or such).\nBut before that -- a free option. There are slides from a course given by Erik Demaine and Charles Leiserson in MIT, that are free and look great. I would definitely try to read them and see if that works for you. They are here.\nNow, textbooks:\nThe classical choice for a textbook is Cormen et al's book Introduction to Algorithms (there might be a cheap version available to buy here and I remember seeing a free (possibly illegal) version online, but I don't remember where).\nA more recent and modern-style book, which is IMO more fun to read and a better choice, is Kleinberg and Tardos' Algorithm Design.\nHere are some websites with information (I got these by googling \"algorithm analysis lecture notes\" without the quotes):\n\nAlgorithms Lecture Notes\nLecture notes by Steve Skiena\n\nThe above is written by a computer science theorist. So programmers or other practical people might have some different opinions.\n",
"It is called algorithm analysis and is a science in itself. Take a look at some of the books here\n",
"\nYour links takes me to a site in\nRussian that seems to want a userid\nand password. Legitimate mistake, or\ntroll? Paul Tomblin\n\nThe site is in Bulgarian and you shouldn't need a password to access the list of files I linked to and download some of them. Unless of course there is an access restiction for IPs from outside Bulgaria, which I really don't know.\nSorry, I don't know how to make a comment.\n"
] |
[
6,
4,
2,
1,
0
] |
[] |
[] |
[
"computer_science"
] |
stackoverflow_0000062702_computer_science.txt
|
Q:
Writing a Firefox plugin for parsing a custom client-side language
I had an idea for a client-side language other than JavaScript, and I'd like to look into developing a Firefox plugin that would treat includes of this new language in a page, like <script type="newscript" src="path/script.ns" />, just as if it were a natively supported language. The plugin would do all of the language parsing and ideally be able to perform every operation on the browser and the html and css within the web page just as JavaScript can.
I've done a bunch of Googling and have found some articles on writing basic Firefox plugins, but nothing as complicated as this.
Is this even possible?
A:
If I've understood what you'd like to do, you'll need to write a Gecko plugin. Via a plugin, you will be able to register your own MIME type and then manipulate Javascript & the DOM.
This means you would need to include an <object /> or <embed /> tag on the page to load your plugin, but you could then look for <script type="application/x-yourtype" />, grab the innerText of that script tag and parse it using your plugin.
As Nickolay has suggested, the alternative is to use whatever the browser currently supports or create a custom build of the browser. Daniel Spiewak's suggestion to use a Java applet (or a Flash applet would also work) is also valid.
The information you're after is available on Mozilla's developer website:
Gecko Plugin API Reference
Plug-in Basics
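As a very rough C++ sketch of one way the plugin could receive the script source once its MIME type is registered (the exact typedefs vary between NPAPI SDK versions, and the buffering plus the RunNewScript hook are made up for illustration):
#include <npapi.h>
#include <npfunctions.h>
#include <string>
#include <map>

static std::map<NPP, std::string> g_source; // per-instance script text

// The browser opens a stream for our registered MIME type (the resource
// referenced by the tag that loaded the plugin) and feeds it to NPP_Write.
NPError NPP_NewStream(NPP instance, NPMIMEType type, NPStream* stream,
                      NPBool seekable, uint16_t* stype)
{
    *stype = NP_NORMAL;
    return NPERR_NO_ERROR;
}

int32_t NPP_WriteReady(NPP instance, NPStream* stream)
{
    return 0x7fffffff; // accept as much as the browser wants to send
}

int32_t NPP_Write(NPP instance, NPStream* stream, int32_t offset,
                  int32_t len, void* buffer)
{
    g_source[instance].append(static_cast<char*>(buffer), len);
    return len;
}

NPError NPP_DestroyStream(NPP instance, NPStream* stream, NPReason reason)
{
    if (reason == NPRES_DONE) {
        // Hand the complete source to your interpreter, which can reach the
        // DOM via NPN_GetValue(instance, NPNVWindowNPObject, ...).
        // RunNewScript(instance, g_source[instance]);
    }
    g_source.erase(instance);
    return NPERR_NO_ERROR;
}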
A:
An interesting idea. Note that you don't actually need to write a browser-specific plugin to do this. Some people have experimented with using JRuby in an Applet to execute code embedded within <script type="text/ruby">. Such a solution may be slower on startup (due to the overhead of loading an entire JVM instance), but it will be much more flexible in the long run (cross-browser). Besides, it's a bit easier to build a custom language interpreter in a JVM language than it is to try to shoe-horn it into Gecko.
A:
@Nathan de Vries: no, actually, NPAPI plugins you suggested don't let one implement support for <script type=...>.
OP: this is not easy, but look for PyDOM and PyXPCOM - language bindings for Python. The former does exactly what you asked for - for Python, but I'm unsure about its current status. In any case, it's very likely that you need to create your own build of Firefox to support additional script types.
A:
Do you really want to tie your pages to your own custom scripting language? Or are you just looking to write your client-side code in something that's not JavaScript? If the latter, try MileScript, Haxe, or Google Web Toolkit.
|
Writing a Firefox plugin for parsing a custom client-side language
|
I had an idea for a client-side language other than JavaScript, and I'd like to look into developing a Firefox plugin that would treat includes of this new language in a page, like <script type="newscript" src="path/script.ns" />, just as if it were a natively supported language. The plugin would do all of the language parsing and ideally be able to perform every operation on the browser and the html and css within the web page just as JavaScript can.
I've done a bunch of Googling and have found some articles on writing basic Firefox plugins, but nothing as complicated as this.
Is this even possible?
|
[
"If I've understood what you'd like to do, you'll need to write a Gecko plugin. Via a plugin, you will be able to register your own MIME type and then manipulate Javascript & the DOM.\nThis means you would need to include an <object /> or <embed /> tag on the page to load your plugin, but you could then look for <script type=\"application/x-yourtype\" />, grab the innerText of that script tag and parse it using your plugin.\nAs Nickolay has suggested, the alternative is to use whatever the browser currently supports or create a custom build of the browser. Daniel Spiewak's suggestion to use a Java applet (or a Flash applet would also work) is also valid.\nThe information you're after is available on Mozilla's developer website:\n\nGecko Plugin API Reference\nPlug-in Basics\n\n",
"An interesting idea. Note that you don't actually need to write a browser-specific plugin to do this. Some people have experimented with using JRuby in an Applet to execute code embedded within <script type=\"text/ruby\">. Such a solution may be slower on startup (due to the overhead of loading an entire JVM instance), but it will be much more flexible in the long run (cross-browser). Besides, it's a bit easier to build a custom language interpreter in a JVM language than it is to try to shoe-horn it into Gecko.\n",
"@Nathan de Vries: no, actually, NPAPI plugins you suggested don't let one implement support for <script type=...>.\nOP: this is not easy, but look for PyDOM and PyXPCOM - language bindings for Python. The former does exactly what you asked for - for Python, but I'm unsure about its current status. In any case, it's very likely that you need to create your own build of Firefox to support additional script types.\n",
"Do you really want to tie your pages to your own custom scripting language? Or are you just looking to write your client-side code in something that's not javascript? If the latter try MileScript, Haxe, or Google Web Toolkit\n"
] |
[
3,
3,
2,
0
] |
[] |
[] |
[
"firefox"
] |
stackoverflow_0000069982_firefox.txt
|
Q:
List comparison
I use this question in interviews and I wonder what the best solution is.
Write a Perl sub that takes n lists, and then returns 2^n-1 lists telling you which items are in which lists; that is, which items are only in the first list, only in the second list, in both the first and second lists, and all other combinations of lists. Assume that n is reasonably small (less than 20).
For example:
list_compare([1, 3], [2, 3]);
=> ([1], [2], [3]);
Here, the first result list gives all items that are only in list 1, the second result list gives all items that are only in list 2, and the third result list gives all items that are in both lists.
list_compare([1, 3, 5, 7], [2, 3, 6, 7], [4, 5, 6, 7])
=> ([1], [2], [3], [4], [5], [6], [7])
Here, the first list gives all items that are only in list 1, the second list gives all items that are only in list 2, and the third list gives all items that are in both lists 1 and 2, as in the first example. The fourth list gives all items that are only in list 3, the fifth list gives all items that are only in lists 1 and 3, the sixth list gives all items that are only in lists 2 and 3, and the seventh list gives all items that are in all 3 lists.
I usually give this problem as a follow up to the subset of this problem for n=2.
What is the solution?
Follow-up: The items in the lists are strings. There might be duplicates, but since they are just strings, duplicates should be squashed in the output. Order of the items in the output lists doesn't matter, the order of the lists themselves does.
A:
Your given solution can be simplified quite a bit still.
In the first loop, you can use plain addition since you are only ever ORing with single bits, and you can narrow the scope of $bit by iterating over indices. In the second loop, you can subtract 1 from the index instead of producing an unnecessary 0th output list element that needs to be shifted off, and where you unnecessarily iterate m*n times (where m is the number of output lists and n is the number of unique elements), iterating over the unique elements would reduce the iterations to just n (which is a significant win in typical use cases where m is much larger than n), and would simplify the code.
sub list_compare {
my ( @list ) = @_;
my %dest;
for my $i ( 0 .. $#list ) {
my $bit = 2**$i;
$dest{$_} += $bit for @{ $list[ $i ] };
}
my @output_list;
for my $val ( keys %dest ) {
push @{ $output_list[ $dest{ $val } - 1 ] }, $val;
}
return \@output_list;
}
Note also that once thought of in this way, the result gathering process can be written very concisely with the aid of the List::Part module:
use List::Part;
sub list_compare {
my ( @list ) = @_;
my %dest;
for my $i ( 0 .. $#list ) {
my $bit = 2**$i;
$dest{$_} += $bit for @{ $list[ $i ] };
}
return [ part { $dest{ $_ } - 1 } keys %dest ];
}
But note that list_compare is a terrible name. Something like part_elems_by_membership would be much better. Also, the imprecisions in your question that Ben Tilly pointed out need to be rectified.
A:
First of all I would like to note that nohat's answer simply does not work. Try running it, and look at the output in Data::Dumper to verify that.
That said, your question is not well-posed. It looks like you are using sets as arrays. How do you wish to handle duplicates? How do you want to handle complex data structures? What order do you want elements in? For ease I'll assume that the answers are squash duplicates, it is OK to stringify complex data structures, and order does not matter. In that case the following is a perfectly adequate answer:
sub list_compare {
my @lists = @_;
my @answers;
for my $list (@lists) {
my %in_list = map {$_=>1} @$list;
# We have this list.
my @more_answers = [keys %in_list];
for my $answer (@answers) {
push @more_answers, [grep $in_list{$_}, @$answer];
}
push @answers, @more_answers;
}
return @answers;
}
If you want to adjust those assumptions, you'll need to adjust the code. For example not squashing complex data structures and not squashing duplicates can be done with:
sub list_compare {
my @lists = @_;
my @answers;
for my $list (@lists) {
my %in_list = map {$_=>1} @$list;
# We have this list.
my @more_answers = [@$list];
for my $answer (@answers) {
push @more_answers, [grep $in_list{$_}, @$answer];
}
push @answers, @more_answers;
}
return @answers;
}
This is, however, using the stringification of the data structure to check whether things that exist in one exist in another. Relaxing that condition would require somewhat more work.
A:
Here is my solution:
Construct a hash whose keys are the union of all the elements in the input lists, and the values are bit strings, where bit i is set if the element is present in list i. The bit strings are constructed using bitwise or. Then, construct the output lists by iterating over the keys of the hash, adding keys to the associated output list.
sub list_compare {
my (@lists) = @_;
my %compare;
my $bit = 1;
foreach my $list (@lists) {
$compare{$_} |= $bit foreach @$list;
$bit *= 2; # shift over one bit
}
my @output_lists;
foreach my $item (keys %compare) {
push @{ $output_lists[ $compare{$item} - 1 ] }, $item;
}
return \@output_lists;
}
Updated to include the inverted output list generation suggested by Aristotle
|
List comparison
|
I use this question in interviews and I wonder what the best solution is.
Write a Perl sub that takes n lists, and then returns 2^n-1 lists telling you which items are in which lists; that is, which items are only in the first list, the second, list, both the first and second list, and all other combinations of lists. Assume that n is reasonably small (less than 20).
For example:
list_compare([1, 3], [2, 3]);
=> ([1], [2], [3]);
Here, the first result list gives all items that are only in list 1, the second result list gives all items that are only in list 2, and the third result list gives all items that are in both lists.
list_compare([1, 3, 5, 7], [2, 3, 6, 7], [4, 5, 6, 7])
=> ([1], [2], [3], [4], [5], [6], [7])
Here, the first list gives all items that are only in list 1, the second list gives all items that are only in list 2, and the third list gives all items that are in both lists 1 and 2, as in the first example. The fourth list gives all items that are only in list 3, the fifth list gives all items that are only in lists 1 and 3, the sixth list gives all items that are only in lists 2 and 3, and the seventh list gives all items that are in all 3 lists.
I usually give this problem as a follow up to the subset of this problem for n=2.
What is the solution?
Follow-up: The items in the lists are strings. There might be duplicates, but since they are just strings, duplicates should be squashed in the output. Order of the items in the output lists doesn't matter, the order of the lists themselves does.
|
[
"Your given solution can be simplified quite a bit still.\nIn the first loop, you can use plain addition since you are only ever ORing with single bits, and you can narrow the scope of $bit by iterating over indices. In the second loop, you can subtract 1 from the index instead of producing an unnecessary 0th output list element that needs to be shifted off, and where you unnecessarily iterate m*n times (where m is the number of output lists and n is the number of unique elements), iterating over the unique elements would reduce the iterations to just n (which is a significant win in typical use cases where m is much larger than n), and would simplify the code.\nsub list_compare {\n my ( @list ) = @_;\n my %dest;\n\n for my $i ( 0 .. $#list ) {\n my $bit = 2**$i;\n $dest{$_} += $bit for @{ $list[ $i ] };\n }\n\n my @output_list;\n\n for my $val ( keys %dest ) {\n push @{ $output_list[ $dest{ $val } - 1 ] }, $val;\n }\n\n return \\@output_list;\n}\n\nNote also that once thought of in this way, the result gathering process can be written very concisely with the aid of the List::Part module:\nuse List::Part;\n\nsub list_compare {\n my ( @list ) = @_;\n my %dest;\n\n for my $i ( 0 .. $#list ) {\n my $bit = 2**$i;\n $dest{$_} += $bit for @{ $list[ $i ] };\n }\n\n return [ part { $dest{ $_ } - 1 } keys %dest ];\n}\n\nBut note that list_compare is a terrible name. Something like part_elems_by_membership would be much better. Also, the imprecisions in your question Ben Tilly pointed out need to be rectified.\n",
"First of all I would like to note that nohat's answer simply does not work. Try running it, and look at the output in Data::Dumper to verify that.\nThat said, your question is not well-posed. It looks like you are using sets as arrays. How do you wish to handle duplicates? How do you want to handle complex data structures? What order do you want elements in? For ease I'll assume that the answers are squash duplicates, it is OK to stringify complex data structures, and order does not matter. In that case the following is a perfectly adequate answer:\nsub list_compare {\n my @lists = @_;\n\n my @answers;\n for my $list (@lists) {\n my %in_list = map {$_=>1} @$list;\n # We have this list.\n my @more_answers = [keys %in_list];\n for my $answer (@answers) {\n push @more_answers, [grep $in_list{$_}, @$answer];\n }\n push @answers, @more_answers;\n }\n\n return @answers;\n}\n\nIf you want to adjust those assumptions, you'll need to adjust the code. For example not squashing complex data structures and not squashing duplicates can be done with:\nsub list_compare {\n my @lists = @_;\n\n my @answers;\n for my $list (@lists) {\n my %in_list = map {$_=>1} @$list;\n # We have this list.\n my @more_answers = [@$list];\n for my $answer (@answers) {\n push @more_answers, [grep $in_list{$_}, @$answer];\n }\n push @answers, @more_answers;\n }\n\n return @answers;\n}\n\nThis is, however, using the stringification of the data structure to check whether things that exist in one exist in another. Relaxing that condition would require somewhat more work.\n",
"Here is my solution:\nConstruct a hash whose keys are the union of all the elements in the input lists, and the values are bit strings, where bit i is set if the element is present in list i. The bit strings are constructed using bitwise or. Then, construct the output lists by iterating over the keys of the hash, adding keys to the associated output list.\nsub list_compare {\n my (@lists) = @_;\n my %compare;\n my $bit = 1;\n foreach my $list (@lists) {\n $compare{$_} |= $bit foreach @$list;\n $bit *= 2; # shift over one bit\n }\n\n\n my @output_lists;\n foreach my $item (keys %compare) {\n push @{ $output_lists[ $compare{$item} - 1 ] }, $item;\n }\n\n return \\@output_lists;\n\n}\n\nUpdated to include the inverted output list generation suggested by Aristotle\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"list",
"perl"
] |
stackoverflow_0000068352_list_perl.txt
|
Q:
How do you keep search engines from indexing text ads?
Is there any way to keep search engines from indexing text ads?
These are basically stylized links.
I have thought about generating images with text or using javascript to write them into a DIV.
What is the best and most accepted way?
A:
One way is to use iFrames to show the ads, and use meta tags in them to tell Google not to index them.
Another way would be to use JavaScript to print the ads, so they would not be there when the browser does not support JavaScript (Google Bot doesn't execute JavaScript).
A lot of ad systems use the JavaScript one, but I don't really know if that's the best way to do it - but it's a way.
|
How do you keep search engines from indexing text ads?
|
Is there any way to keep search engines from indexing text ads?
These are basically stylized links.
I have thought about generating images with text or using javascript to write them into a DIV.
What is the best and most accepted way?
|
[
"One way is to use iFrames to show the ads, and use meta tags in them to tell Google not to index them.\nAnother way would be to use JavaScript to print the ads, so they would not be there when the browser does not support JavaScript (Google Bot doesn't execute JavaScript).\nA lot of ad systems use the JavaScript one, but I don't really know if that's the best way to do it - but it's a way.\n"
] |
[
3
] |
[] |
[] |
[
"ads",
"search_engine"
] |
stackoverflow_0000070728_ads_search_engine.txt
|
Q:
Refactoring two basic classes
How would you refactor these two classes to abstract out the similarities? An abstract class? Simple inheritance? What would the refactored class(es) look like?
public class LanguageCode
{
/// <summary>
/// Get the lowercase two-character ISO 639-1 language code.
/// </summary>
public readonly string Value;
public LanguageCode(string language)
{
this.Value = new CultureInfo(language).TwoLetterISOLanguageName;
}
public static LanguageCode TryParse(string language)
{
if (language == null)
{
return null;
}
if (language.Length > 2)
{
language = language.Substring(0, 2);
}
try
{
return new LanguageCode(language);
}
catch (ArgumentException)
{
return null;
}
}
}
public class RegionCode
{
/// <summary>
/// Get the uppercase two-character ISO 3166 region/country code.
/// </summary>
public readonly string Value;
public RegionCode(string region)
{
this.Value = new RegionInfo(region).TwoLetterISORegionName;
}
public static RegionCode TryParse(string region)
{
if (region == null)
{
return null;
}
if (region.Length > 2)
{
region = region.Substring(0, 2);
}
try
{
return new RegionCode(region);
}
catch (ArgumentException)
{
return null;
}
}
}
A:
It depends, if they are not going to do much more, then I would probably leave them as is - IMHO factoring out stuff is likely to be more complex, in this case.
A:
This is a rather simple question and to me smells awfully like a homework assignment.
You can obviously see the common bits in the code and I'm pretty sure you can make an attempt at it yourself by putting such things into a super-class.
A:
You could maybe combine them into a Locale class, which stores both Language code and Region code, has accessors for Region and Language plus one parse function which also allows for strings like "en_gb"...
That's how I've seen locales be handled in various frameworks.
A:
These two, as they stand, aren't going to refactor well because of the static methods.
You'd either end up with some kind of factory method on a base class that returns a type of that base class (which would subsequently need casting) or you'd need some kind of additional helper class.
Given the amount of extra code and subsequent casting to the appropriate type, it's not worth it.
A:
I'm sure there is a better generics based solution. But still gave it a shot.
EDIT: As the comment says, static methods can't be overridden, so one option would be to retain it and pass TwoLetterCode objects around and cast them, but, as some other person has already pointed out, that is rather useless.
How about this?
public class TwoLetterCode {
public readonly string Value;
public static TwoLetterCode TryParseSt(string tlc) {
if (tlc == null)
{
return null;
}
if (tlc.Length > 2)
{
tlc = tlc.Substring(0, 2);
}
try
{
return new TwoLetterCode(tlc);
}
catch (ArgumentException)
{
return null;
}
}
}
//Likewise for Region
public class LanguageCode : TwoLetterCode {
public LanguageCode(string language)
{
this.Value = new CultureInfo(language).TwoLetterISOLanguageName;
}
public static LanguageCode TryParse(string language) {
return (LanguageCode)TwoLetterCode.TryParseSt(language);
}
}
A:
Create a generic base class (eg AbstractCode<T>)
add abstract methods like
protected T GetConstructor(string code);
override in base classes like
protected override RegionCode GetConstructor(string code)
{
return new RegionCode(code);
}
Finally, do the same with string GetIsoName(string code), eg
protected override string GetIsoName(string code)
{
    return new RegionInfo(code).TwoLetterISORegionName;
}
That will refactor them both. Chris Kimpton does raise the important question as to whether the effort is worth it.
A:
Unless you have a strong reason for refactoring (because you are going to add more classes like those in the near future), the penalty of changing the design for such a small and contrived example would outweigh the gain in maintainability in this scenario. Anyhow, here is a possible design based on generics and lambda expressions.
public class TwoLetterCode<T>
{
private readonly string value;
public TwoLetterCode(string value, Func<string, string> predicate)
{
this.value = predicate(value);
}
public static T TryParse(string value, Func<string, T> predicate)
{
if (value == null)
{
return default(T);
}
if (value.Length > 2)
{
value = value.Substring(0, 2);
}
try
{
return predicate(value);
}
catch (ArgumentException)
{
return default(T);
}
}
public string Value { get { return this.value; } }
}
public class LanguageCode : TwoLetterCode<LanguageCode> {
public LanguageCode(string language)
: base(language, v => new CultureInfo(v).TwoLetterISOLanguageName)
{
}
public static LanguageCode TryParse(string language)
{
return TwoLetterCode<LanguageCode>.TryParse(language, v => new LanguageCode(v));
}
}
public class RegionCode : TwoLetterCode<RegionCode>
{
public RegionCode(string language)
: base(language, v => new RegionInfo(v).TwoLetterISORegionName)
{
}
public static RegionCode TryParse(string language)
{
return TwoLetterCode<RegionCode>.TryParse(language, v => new RegionCode(v));
}
}
|
Refactoring two basic classes
|
How would you refactor these two classes to abstract out the similarities? An abstract class? Simple inheritance? What would the refactored class(es) look like?
public class LanguageCode
{
/// <summary>
/// Get the lowercase two-character ISO 639-1 language code.
/// </summary>
public readonly string Value;
public LanguageCode(string language)
{
this.Value = new CultureInfo(language).TwoLetterISOLanguageName;
}
public static LanguageCode TryParse(string language)
{
if (language == null)
{
return null;
}
if (language.Length > 2)
{
language = language.Substring(0, 2);
}
try
{
return new LanguageCode(language);
}
catch (ArgumentException)
{
return null;
}
}
}
public class RegionCode
{
/// <summary>
/// Get the uppercase two-character ISO 3166 region/country code.
/// </summary>
public readonly string Value;
public RegionCode(string region)
{
this.Value = new RegionInfo(region).TwoLetterISORegionName;
}
public static RegionCode TryParse(string region)
{
if (region == null)
{
return null;
}
if (region.Length > 2)
{
region = region.Substring(0, 2);
}
try
{
return new RegionCode(region);
}
catch (ArgumentException)
{
return null;
}
}
}
|
[
"It depends, if they are not going to do much more, then I would probably leave them as is - IMHO factoring out stuff is likely to be more complex, in this case.\n",
"This is a rather simple question and to me smells awefully like a homework assignment.\nYou can obviously see the common bits in the code and I'm pretty sure you can make an attempt at it yourself by putting such things into a super-class.\n",
"You could maybe combine them into a Locale class, which stores both Language code and Region code, has accessors for Region and Language plus one parse function which also allows for strings like \"en_gb\"...\nThat's how I've seen locales be handled in various frameworks.\n",
"These two, as they stand, aren't going to refactor well because of the static methods.\nYou'd either end up with some kind of factory method on a base class that returns an a type of that base class (which would subsequently need casting) or you'd need some kind of additional helper class.\nGiven the amount of extra code and subsequent casting to the appropriate type, it's not worth it.\n",
"I'm sure there is a better generics based solution. But still gave it a shot. \nEDIT: As the comment says, static methods can't be overridden so one option would be to retain it and use TwoLetterCode objects around and cast them, but, as some other person has already pointed out, that is rather useless. \nHow about this?\npublic class TwoLetterCode {\n public readonly string Value;\n public static TwoLetterCode TryParseSt(string tlc) {\n if (tlc == null)\n {\n return null;\n }\n\n if (tlc.Length > 2)\n {\n tlc = tlc.Substring(0, 2);\n }\n\n try\n {\n return new TwoLetterCode(tlc);\n }\n catch (ArgumentException)\n {\n return null;\n }\n }\n}\n//Likewise for Region\npublic class LanguageCode : TwoLetterCode {\n public LanguageCode(string language)\n {\n this.Value = new CultureInfo(language).TwoLetterISOLanguageName;\n }\n public static LanguageCode TryParse(string language) {\n return (LanguageCode)TwoLetterCode.TryParseSt(language);\n }\n}\n\n",
"\nCreate a generic base class (eg AbstractCode<T>)\nadd abstract methods like\nprotected T GetConstructor(string code);\n\noverride in base classes like\nprotected override RegionCode GetConstructor(string code)\n{\n return new RegionCode(code);\n}\n\nFinally, do the same with string GetIsoName(string code), eg\nprotected override GetIsoName(string code)\n{\n return new RegionCode(code).TowLetterISORegionName;\n}\n\n\nThat will refactor the both. Chris Kimpton does raise the important question as to whether the effort is worth it. \n",
"Unless you have a strong reason for refactoring (because you are going to add more classes like those in near future) the penalty of changing the design for such a small and contrived example would overcome the gain in maintenance or overhead in this scenario. Anyhow here is a possible design based on generic and lambda expressions.\npublic class TwoLetterCode<T>\n{\n private readonly string value;\n\n public TwoLetterCode(string value, Func<string, string> predicate)\n {\n this.value = predicate(value);\n }\n\n public static T TryParse(string value, Func<string, T> predicate)\n {\n if (value == null)\n {\n return default(T);\n }\n\n if (value.Length > 2)\n {\n value = value.Substring(0, 2);\n }\n\n try\n {\n return predicate(value);\n }\n catch (ArgumentException)\n {\n return default(T);\n }\n }\n\n public string Value { get { return this.value; } }\n}\n\npublic class LanguageCode : TwoLetterCode<LanguageCode> {\n public LanguageCode(string language)\n : base(language, v => new CultureInfo(v).TwoLetterISOLanguageName)\n {\n }\n\n public static LanguageCode TryParse(string language)\n {\n return TwoLetterCode<LanguageCode>.TryParse(language, v => new LanguageCode(v));\n }\n}\n\npublic class RegionCode : TwoLetterCode<RegionCode>\n{\n public RegionCode(string language)\n : base(language, v => new CultureInfo(v).TwoLetterISORegionName)\n {\n }\n\n public static RegionCode TryParse(string language)\n {\n return TwoLetterCode<RegionCode>.TryParse(language, v => new RegionCode(v));\n }\n}\n\n"
] |
[
2,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"cultureinfo",
"regioninfo"
] |
stackoverflow_0000070625_cultureinfo_regioninfo.txt
|
Q:
How do you deploy your SharePoint solutions?
I am now in the process of planning the deployment of a SharePoint solution into a production environment.
I have read about some tools that promise an easy way to automate this process, but nothing that seems to fit my scenario.
In the testing phase I have used SharePoint Designer to copy site content between the different development and testing servers, but this process is manual and it seems a bit unnecessary.
The site is made up of SharePoint web part pages with custom web parts, and a lot of Reporting Services report definitions.
So, is there any good advice out there in this vast land of geeks on how to most efficiently create and deploy a SharePoint site for a multiple deployment scenario?
Edit
Just to clarify. I need to deploy several "SharePoint Sites" into an existing site collection. Since SharePoint likes to have its sites in the SharePoint content database, just putting the files into IIS is not an option at this time.
A:
I would also suggest checking out the SharePoint Content Deployment Wizard by Chris O'Brien.
http://www.codeplex.com/SPDeploymentWizard
Should help smooth the process you describe, and it's a nice tool for your kitbag regardless
A:
We have a BizTalk 2006 with Web Application and Several WebServices that need to go from Dev to UAT to Live.
We use MSBuild right from within VS to build, run tests and, depending on the test results, compile, zip and ship to the servers.
Small MSBuild script on the server to unzip, move the files, install a clean web app, unenlist the BizTalk bits, install the new BizTalk bits, re-enlist and then start everything.
MSBuild is huge and more people need to use it, as it's now right there in the platform =>
Use MSBuild
A:
Note that "solution" has a specific meaning in Sharepoint: a collection of features (like web parts, list definitions and so on) packaged for deployment as a .wsp file.
You typically build sharepoint solutions in Visual Studio and package and deploy them using some tool like Sharepoint SmartTemplates http://www.codeplex.com/smarttemplates
However in your case you already have content in a live sharepoint site which you want to move to another site. It will probably be too cumbersome to use a solution for this, especially if you want to do it more than once (though it is possible to generate a solution from a live site using SharePoint Solution Generator).
The easiest way to deploy all content from one live site to another is to create a backup of the site using stsadm and then restore it to the new site again using stsadm restore. This completely overwrites the new site.
You can move select files/lists using import/export (rather than backup/restore). A tool like SharePoint Content Deployment Wizard makes it easier to select the content to move.
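For reference, a minimal command-line sketch of the stsadm approaches mentioned above (the URLs and file names are placeholders; verify the exact switches against your SharePoint version):
stsadm -o backup -url http://sourceserver/sites/site -filename site.bak
stsadm -o restore -url http://targetserver/sites/site -filename site.bak -overwrite

stsadm -o export -url http://sourceserver/sites/site/subweb -filename subweb.cmp
stsadm -o import -url http://targetserver/sites/site/subweb -filename subweb.cmp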
|
How do you deploy your SharePoint solutions?
|
I am now in the process of planning the deployment of a SharePoint solution into a production environment.
I have read about some tools that promise an easy way to automate this process, but nothing that seems to fit my scenario.
In the testing phase I have used SharePoint Designer to copy site content between the different development and testing servers, but this process is manual and it seems a bit unnecessary.
The site is made up of SharePoint web part pages with custom web parts, and a lot of Reporting Services report definitions.
So, is there any good advice out there in this vast land of geeks on how to most efficiently create and deploy a SharePoint site for a multiple deployment scenario?
Edit
Just to clarify. I need to deploy several "SharePoint Sites" into an existing site collection. Since SharePoint likes to have its sites in the SharePoint content database, just putting the files into IIS is not an option at this time.
|
[
"I would also suggest checking out the SharePoint Content Deployment Wizard by Chris O'Brien.\nhttp://www.codeplex.com/SPDeploymentWizard \nShould help smooth the process you describe, and it's a nice tool for your kitbag regardless\n",
"We have a BizTalk 2006 with Web Application and Several WebServices that need to go from Dev to UAT to Live.\nWe use MSBuild right from within VS to build, run tests, dependent on test result, complie, zip and ship to servers.\nSmall MSBuild script on server to unzip, move the files, install clean web app, unlist biztalk bits, install new biztalk bits, re enlist and then start the stuff.\nMSBuild is hugh and more people need to use it as it there now right in the platform =>\nUse MSBuild\n",
"Note that \"solution\" has a specific meaning in Sharepoint: a collection of features (like web parts, list definitions and so on) packaged for deployment as a .wsp file.\nYou typically build sharepoint solutions in Visual Studio and package and deploy them using some tool like Sharepoint SmartTemplates http://www.codeplex.com/smarttemplates\nHowever in your case you already have content in a live sharepoint site which you want to move to another site. It will probably be too cumbersome to use a solution for this, especially if you want to do it more than once (though it is possible to generate a solution from a live site using SharePoint Solution Generator). \nThe easiest way to deploy all content from one live site to another is to create a backup of the site using stsadm and then restore it to the new site again using stsadm restore. This completely overwrites the new site.\nYou can move select files/lists using import/export (rather than backup/restore). A tool like SharePoint Content Deployment Wizard makes it easier to select the content to move.\n"
] |
[
4,
3,
2
] |
[] |
[] |
[
"deployment",
"production",
"sharepoint"
] |
stackoverflow_0000009543_deployment_production_sharepoint.txt
|
Q:
What is the correct way of getting the start and end date of an ISO week number in TSQL?
I have the ISO week and year but how do I correctly convert that into two dates representing the start and end of that week?
A:
There are a couple of strategies to do that:
Start of week function
End of week function
A:
If you've got some SQL chops, you could prune relevant bits from F_TABLE_DATE. Or, if you like having a monster function around, you could just use the whole shebang. You'd have to manufacture a sensible start and end date to pass into F_TABLE_DATE though.
|
What is the correct way of getting the start and end date of an ISO week number in TSQL?
|
I have the ISO week and year but how do I correctly convert that into two dates representing the start and end of that week?
|
[
"There are a couple of strategies to do that:\n\nStart of week function\nEnd of week function\n\n",
"If you've got some SQL chops, you could prune relevant bits from F_TABLE_DATE. Or, if you like having a monster function around, you could just use the whole shebang. You'd have to manufacture a sensible start and end date to pass into F_TABLE_DATE though.\n"
] |
[
1,
1
] |
[] |
[] |
[
"datetime",
"iso",
"tsql"
] |
stackoverflow_0000070516_datetime_iso_tsql.txt
|
Q:
What is GNU Screen?
What is GNU Screen?
A:
What is GNU Screen? Great!
Erm, a slightly more useful answer: it allows you to run multiple console applications, or commands, in one terminal. Kind of like a tabbed terminal emulator. In fact, that's exactly what it is (just not done with the regular GUI toolkits)
Why is it so great? Simple, you can run a program in a screen session (Run screen and it runs your default shell, run screen myapp and it runs myapp in the session), hit ctrl+a (the screen control sequence) and then press d (ctrl+a,d) to detach.
The program keeps running in the background, but, unlike doing mycmd &, you can run screen -r to reattach the session, and everything is as you left it. You can send input to the command, if it's a curses UI, everything still works just like if it were a "real" terminal.
It's very popular with console IRC clients - you can run (say) screen irssi and reattach the session from anywhere you can SSH from.
A few useful commands:
ctrl+a, c to make a new virtual terminal (or "window") in the session
ctrl+a, n and ctrl+a, p to cycle through multiple windows
ctrl+a, 1 to select window 1, ctrl+a, 4 to select window 4 and so on
ctrl+a, ctrl+a to flick between the last two active windows
ctrl+a, shift+a (upper-case a) allows you to rename the current window
ctrl+a, ` (for me, that's shift+2 - the quote mark) lists windows, you can use the arrows and select one. Also useful with the "tab bar" setting I'll list in a second
A few other useful things I've stumbled across:
Use the -U flag when you launch screen so it supports Unicode (for example, screen -xU)
The -x flag allows you to reattach the same session multiple times. (-r disconnects existing connections)
You can do interesting stuff with the status bar. I have my setup to display [ hostname ][ 0-$ bash (1*$ irssi) ][16/09 9:32] (Running on hostname, it has two windows. This is set by the hardstatus lines in my .screenrc (at the end of the answer)
startup_message off
vbell off
hardstatus alwayslastline
hardstatus string '%{gk}[ %{G}%H %{g}][%= %{wk}%?%-Lw%?%{=b kR}(%{W}%n*%f %t%?(%u)%?%{=b kR})%{= kw}%?%+Lw%?%?%= %{g}]%{=y C}[%d/%m %c]%{W}'
|
What is GNU Screen?
|
What is GNU Screen?
|
[
"What is GNU Screen? Great!\nErm, a slightly more useful answer: it allows you to run multiple console applications, or commands, in one terminal. Kind of like a tabbed terminal emulator. In fact, that's exactly what it is (just not done with the regular GUI toolkits)\nWhy is it so great? Simple, you can run a program in a screen session (Run screen and it runs your default shell, run screen myapp and it runs myapp in the session), hit ctrl+a (the screen control sequence) and then press d (ctrl+a,d) to detach.\nThe program keeps running in the background, but, unlike doing mycmd &, you can run screen -r to reattach the session, and everything is as you left it. You can send input to the command, if it's a curses UI, everything still works just like if it were a \"real\" terminal.\nIt's very popular with console IRC clients - you can run (say) screen irssi and reattach the session from anywhere you can SSH from.\nA few useful commands:\n\nctrl+a, c to make a new virtual terminal (or \"window\") in the session\nctrl+a, n and ctrl+a, p to cycle through multiple windows\nctrl+a, 1 to select window 1, ctrl+a, 4 to select window 4 and so on\nctrl+a, ctrl+a to flick between the last two active windows\nctrl+a, shift+a (upper-case a) allows you to rename the current window\nctrl+a, ` (for me, that's shift+2 - the quote mark) lists windows, you can use the arrows and select one. Also useful with the \"tab bar\" setting I'll list in a second\n\nA few other useful things I've stumbled across:\n\nUse the -U flag when you launch screen so it supports Unicode (for example, screen -xU)\nThe -x flag allows you to reattach the same session multiple times. (-r disconnects existing connections)\nYou can do interesting stuff with the status bar. I have my setup to display [ hostname ][ 0-$ bash (1*$ irssi) ][16/09 9:32] (Running on hostname, it has two windows. This is set by the hardstatus lines in my .screenrc (at the end of the answer)\n\nstartup_message off\nvbell off\nhardstatus alwayslastline\nhardstatus string '%{gk}[ %{G}%H %{g}][%= %{wk}%?%-Lw%?%{=b kR}(%{W}%n*%f %t%?(%u)%?%{=b kR})%{= kw}%?%+Lw%?%?%= %{g}]%{=y C}[%d/%m %c]%{W}'\n\n"
] |
[
29
] |
[] |
[] |
[
"gnu_screen"
] |
stackoverflow_0000070661_gnu_screen.txt
|
Q:
How to get a http file metadata?
How to get a file's creation date or file size, for example this Hello.jpg at http://www.mywebsite.com/now/Hello.jpg (note: this URL does not exist)? The purpose of this question is to make my application re-download files from any website when it detects that the website has an updated version of the files and the files in my local folder are out of date. Any ideas?
A:
If you use a HEAD request, the server will send only the headers for the resource; there you can check the cache control headers, which will tell you if the resource has been modified, the last modification time, the size (Content-Length) and the date.
$ telnet www.google.com 80
Trying 216.239.59.103...
Connected to www.l.google.com.
Escape character is '^]'.
HEAD /intl/en_ALL/images/logo.gif HTTP/1.0
HTTP/1.0 200 OK
Content-Type: image/gif
Last-Modified: Wed, 07 Jun 2006 19:38:24 GMT
Expires: Sun, 17 Jan 2038 19:14:07 GMT
Cache-Control: public
Date: Tue, 16 Sep 2008 09:45:42 GMT
Server: gws
Content-Length: 8558
Connection: Close
Connection closed by foreign host.
Note that you'll probably have to decorate this basic and easy approach with many heuristics depending on the craziness of each webserver's admin, as each can send whatever headers they like. If they do not provide caching headers (Last-Modified, Expires, Cache-Control) nor Content-Length nor etag, you'd be stuck with redownloading it to test.
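If the application is written in .NET, a minimal C# sketch of this HEAD-request approach might look like the following (the URL is just the placeholder from the question, and it assumes the server actually sends Last-Modified and Content-Length):
// requires: using System; using System.Net;
var request = (HttpWebRequest)WebRequest.Create("http://www.mywebsite.com/now/Hello.jpg");
request.Method = "HEAD";                           // headers only, no body is downloaded
using (var response = (HttpWebResponse)request.GetResponse())
{
    DateTime lastModified = response.LastModified; // parsed from the Last-Modified header
    long size = response.ContentLength;            // from Content-Length, -1 if the server omits it
    Console.WriteLine("{0} bytes, last modified {1}", size, lastModified);
}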
A:
The webserver might send a last-modified and/or etag header for that purpose.
And you might send an if-modified-since header in your request.
see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
sections 14.19, 14.25 and 14.29
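A hedged C# sketch of that conditional-request idea (the lastDownloadTime value is a placeholder for whenever you last fetched the file; .NET surfaces a 304 Not Modified reply as a WebException):
// requires: using System; using System.Net;
DateTime lastDownloadTime = DateTime.UtcNow.AddDays(-1);   // placeholder: time of the last download
var request = (HttpWebRequest)WebRequest.Create("http://www.mywebsite.com/now/Hello.jpg");
request.IfModifiedSince = lastDownloadTime;                // sends an If-Modified-Since header
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        // 200 OK: the file changed on the server, re-download it here
    }
}
catch (WebException ex)
{
    var http = ex.Response as HttpWebResponse;
    if (http != null && http.StatusCode == HttpStatusCode.NotModified)
    {
        // 304 Not Modified: the local copy is still up to date
    }
    else
    {
        throw;                                             // a real network/protocol error
    }
}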
|
How to get a http file metadata?
|
How to get a file's creation date or file size, for example this Hello.jpg at http://www.mywebsite.com/now/Hello.jpg (note: this URL does not exist)? The purpose of this question is to make my application re-download files from any website when it detects that the website has an updated version of the files and the files in my local folder are out of date. Any ideas?
|
[
"If you use the HEAD request it will send the headers for the resource, there you can check the cache control headers which will tell you if the resource has been modified, last modification time, size (content-length) and date. \n$ telnet www.google.com 80\nTrying 216.239.59.103...\nConnected to www.l.google.com.\nEscape character is '^]'.\nHEAD /intl/en_ALL/images/logo.gif HTTP/1.0\n\nHTTP/1.0 200 OK\nContent-Type: image/gif\nLast-Modified: Wed, 07 Jun 2006 19:38:24 GMT\nExpires: Sun, 17 Jan 2038 19:14:07 GMT\nCache-Control: public\nDate: Tue, 16 Sep 2008 09:45:42 GMT\nServer: gws\nContent-Length: 8558\nConnection: Close\n\nConnection closed by foreign host.\n\nNote that you'll probably have to decorate this basic and easy approach with many heuristics depending on the craziness of each webserver's admin, as each can send whatever headers they like. If they do not provide caching headers (Last-Modified, Expires, Cache-Control) nor Content-Length nor etag, you'd be stuck with redownloading it to test.\n",
"The webserver might send a last-modified and/or etag header for that purpose.\nAnd you might send an if-modified-since header in your request.\nsee http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html\nsections 14.19, 14.25 and 14.29\n"
] |
[
6,
1
] |
[] |
[] |
[
"http"
] |
stackoverflow_0000070782_http.txt
|
Q:
What is the license for unlicensed material?
Suppose I've found a “text” somewhere in open access (say, on a public network share). I have no means to contact the author; I don't even know who the author is.
What can I legally do with such “text”?
Update: I am not going to publish that “text”, but rather learn from it myself.
Update: So, if I ever see an anonymous code, article, whatever, shouldn't I even open it, because otherwise I'd copy its contents to my brain?
A:
IANAL: There is no license. The original author (whoever it may be) retains copyright and all the rights associated with it, and has not granted any explicit license to anyone to do anything with their work. Please do check with an actual lawyer versed in copyright, though, since it seems like there should be a way to use the text in your particular circumstances and (s)he would likely know what that way is.
UPDATE: Copyright is chiefly concerned with (re)distribution; if you can read it, you're free to learn from it, although the DMCA places legal restrictions on what steps you can take to be able to read it, e.g., you aren't supposed to use DeCSS to read subtitles since that is a "circumvention of access control".
A:
As far as I know (without any legal training) - if you list the text or code or whathaveyou as "anonymous", you're OK.
I believe that by listing it as anonymous you're indicating you do not know where it came from, but you're admitting you didn't create it as original work.
Extending from that, you should be open to the actual author being able to prove they are the author, and changing your usage to reflect their name/license/copyright/whatever.
You should check with an Intellectual Property lawyer for details and corrections to my understanding.
|
What is the license for unlicensed material?
|
Suppose I've found a “text” somewhere in open access (say, on a public network share). I have no means to contact the author; I don't even know who the author is.
What can I legally do with such “text”?
Update: I am not going to publish that “text”, but rather learn from it myself.
Update: So, if I ever see an anonymous code, article, whatever, shouldn't I even open it, because otherwise I'd copy its contents to my brain?
|
[
"IANAL: There is no license. The original author (whoever it may be) retains copyright and all the rights associated with it, and has not granted any explicit license to anyone to do anything with their work. Please do check with an actual lawyer versed in copyright, though, since it seems like there should be a way to use the text in your particular circumstances and (s)he would likely know what that way is.\nUPDATE: Copyright is chiefly concerned with (re)distribution; if you can read it, you're free to learn from it, although the DMCA places legal restrictions on what steps you can take to be able to read it, e.g., you aren't supposed to use DeCSS to read subtitles since that is a \"circumvention of access control\".\n",
"As far as I know (without any legal training) - if you list the text or code or whathaveyou as \"anonymous\", you're OK. \nI believe that by listing it as anonymous you're indicating you do not know where it came from, but you're admitting you didn't create it as original work.\nExtending from that, you should be open to the actual author being able to prove they are the author, and changing your usage to reflect their name/license/copyright/whatever.\nYou should check with an Intellectual Property lawyer for details and corrections to my understanding.\n"
] |
[
7,
1
] |
[] |
[] |
[
"licensing"
] |
stackoverflow_0000070762_licensing.txt
|
Q:
What is the difference between precedence, associativity, and order?
This confusion arises as most people are trained to evaluate arithmetic expressions as per the PEDMAS or BODMAS rule, whereas arithmetic expressions in programming languages like C# do not work in the same way.
What are your takes on it?
A:
Precedence rules specify priority of operators (which operators will be evaluated first, e.g. multiplication has higher precedence than addition, PEMDAS).
The associativity rules tell how the operators of same precedence are grouped. Arithmetic operators are left-associative, but the assignment is right associative (e.g. a = b = c will be evaluated as b = c, a = b).
The order is a result of applying the precedence and associativity rules and tells how the expression will be evaluated - which operators will be evaluated first, which later, and which at the end. The actual order can be changed by using parentheses (parentheses are also an operator, with the highest precedence).
The precedence and associativity of operators in a programming language can be found in its language manual or specification.
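A short C# illustration of all three ideas (just a sketch; the variable names are made up):
int a = 2 + 3 * 4;    // precedence: * binds tighter than +, so a == 14
int b = (2 + 3) * 4;  // parentheses change the evaluation order, so b == 20
int c = 10 - 4 - 3;   // - is left-associative, read as (10 - 4) - 3, so c == 3
int x, y;
int z = 5;
x = y = z;            // = is right-associative, read as x = (y = z); both end up 5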
A:
I am not sure there really is a difference. The traditional BODMAS (brackets, orders, division, multiplication, addition, subtraction) or PEDMAS (parentheses, exponents, division, multiplication, addition, subtraction) are just subsets of all the possible operations and denote the order that such operations should be applied in. I don't know of any language in which the BODMAS/PEDMAS rules are violated, but each language typically adds various other operators - such as ++, --, = etc.
I always keep a list of operator precedence close to hand in case of confusion. However when in doubt it is usually worth using some parentheses to make the meaning clear. Just be aware that parentheses do not have the highest precedence - see http://msdn.microsoft.com/en-us/library/126fe14k.aspx for an example in C++.
A:
Precedence and associativity both specify how and in which order a term should be split into subterms. In other words, they specify the rules for where brackets are to be set implicitly if they are not specified explicitly.
If you've got a term without brackets, you start with operators with lowest precedence and enclose it in brackets.
For example:
Precedences:
.
!
*,/
+,-
==
&&
The term:
!person.isMarried && person.age == 25 + 2 * 5
would be grouped like that:
!(person.isMarried) && (person.age) == 25 + 2 * 5
(!(person.isMarried)) && (person.age) == 25 + 2 * 5
(!(person.isMarried)) && (person.age) == 25 + (2 * 5)
(!(person.isMarried)) && (person.age) == (25 + (2 * 5))
(!(person.isMarried)) && ((person.age) == (25 + (2 * 5)))
((!(person.isMarried)) && ((person.age) == (25 + (2 * 5))))
One very common rule is the precedence of * and / before + and - .
Associativity specifies in which direction operators of the same precedence are grouped. Most operators are left-to-right. Unary prefix operators are right-to-left.
Example:
1 + 2 + 3 + 4
is grouped like that:
(1 + 2) + 3 + 4
((1 + 2) + 3) + 4
(((1 + 2) + 3) + 4)
while
!!+1
is grouped as
!!(+1)
!(!(+1))
(!(!(+1)))
So far everything complies with the BODMAS/PEDMAS rules; which differences have you experienced?
|
What is the difference between precedence, associativity, and order?
|
This confusion arises as most people are trained to evaluate arithmetic expressions as per the PEDMAS or BODMAS rule, whereas arithmetic expressions in programming languages like C# do not work in the same way.
What are your takes on it?
|
[
"Precedence rules specify priority of operators (which operators will be evaluated first, e.g. multiplication has higher precedence than addition, PEMDAS). \nThe associativity rules tell how the operators of same precedence are grouped. Arithmetic operators are left-associative, but the assignment is right associative (e.g. a = b = c will be evaluated as b = c, a = b). \nThe order is a result of applying the precedence and associativity rules and tells how the expression will be evaluated - which operators will be evaluated firs, which later, which at the end. The actual order can be changed by using braces (braces are also operator with the highest precedence). \nThe precedence and associativity of operators in a programming language can be found in its language manual or specification. \n",
"I am not sure there really is a difference. The traditional BODMAS (brackets, orders, division, multiplication, addition, subtraction) or PEDMAS (parentheses, exponents, division, multiplication, addition, subtraction) are just subsets of all the possible operations and denote the order that such operations should be applied in. I don't know of any language in which the BODMAS/PEDMAS rules are violated, but each language typically adds various other operators - such as ++, --, = etc.\nI always keep a list of operator precedence close to hand in case of confusion. However when in doubt it is usually worth using some parentheses to make the meaning clear. Just be aware that parentheses do not have the highest precedence - see http://msdn.microsoft.com/en-us/library/126fe14k.aspx for an example in C++.\n",
"Precedence and associativity both specify how and in which order a term should be split into subterms. In other words does it specifies the rules where brackets are to be set implicitly if not specified explicitly.\nIf you've got a term without brackets, you start with operators with lowest precedence and enclose it in brackets.\nFor example:\nPrecendences:\n\n.\n!\n*,/\n+,-\n==\n&& \n\nThe term:\n!person.isMarried && person.age == 25 + 2 * 5\n\nwould be grouped like that:\n\n!(person.isMarried) && (person.age) == 25 + 2 * 5\n(!(person.isMarried)) && (person.age) == 25 + 2 * 5\n(!(person.isMarried)) && (person.age) == 25 + (2 * 5)\n(!(person.isMarried)) && (person.age) == (25 + (2 * 5))\n(!(person.isMarried)) && ((person.age) == (25 + (2 * 5)))\n((!(person.isMarried)) && ((person.age) == (25 + (2 * 5))))\n\nOne very common rule is the precedence of * and / before + and - .\nAssociativity specifies in which direction operators of the same precedence are grouped. Most operators are left-to-right. Unary prefix operators are right-to-left.\nExample:\n1 + 2 + 3 + 4\n\nis grouped like that:\n\n(1 + 2) + 3 + 4\n((1 + 2) + 3) + 4\n(((1 + 2) + 3) + 4)\n\nwhile\n!!+1\nis grouped as\n\n!!(+1)\n!(!(+1))\n(!(!(+1)))\n\nSo far everything complies to the BODMAS/PEDMAS rules which differences have you experienced?\n"
] |
[
9,
2,
2
] |
[] |
[] |
[
"arithmetic_expressions",
"c#"
] |
stackoverflow_0000070756_arithmetic_expressions_c#.txt
|
Q:
cvs error on checkin
When trying to commit to a cvs branch after adding a new file I get this error
Assertion failed: key != NULL, file hash.c, line 317
Any idea how to fix it so I can check my code in?
Both server and client are Linux and there are pre-commits involved.
A:
sleep-er writes:
Not sure what the issue was but I solved it by going onto the server and deleting the file Attic/newfile.v in the repository and adding it again.
The "Attic" is the place where deleted files go in CVS. At some point in the past, someone checked in newfile.v, and at some later point it was deleted, hence moved to the Attic.
By deleting the ,v file from the repository you corrupted older commits that included the file "newfile". Do not do this.
The correct way is to restore the deleted file, then replace its content by the new file.
According to http://www.cs.indiana.edu/~machrist/notes/cvs.html
To recover a file that has been removed from the repository, you essentially need to update that file to its last revision number (before it was actually deleted). For example:
cvs update -r 1.7 deleted_file
This will recover deleted_file in your working repository. To find deleted files and their last revision number, issue cvs log at the command prompt.
Edited in reply to comment to explain what the ,v file in the Attic means.
A:
Are you on Windows and did you rename a file to the same name with different case (e.g. MAKEFILE vs Makefile vs makefile)? CVS used to have a problem with this (and maybe still does?):
OSDir/mailarchive - Subject: Re: hash.c.312: findnode:
Manu writes:
I try to rename "makefile" to "Makefile" in my cvs tree, then:
cvs: hash.c:312: findnode: Assertion `key != ((void *)0)' failed.
cvs [server aborted]: received abort signal
CVS was never designed to cope with case insensitive file systems. It
has been patched to the point where it mostly works, but there are still
some places where it doesn't. This is one of them.
You might want to read the rest of the messages in the thread as well.
A:
Perhaps there is some kind of pre-commit check on your repository, see here
A:
Not sure what the issue was but I solved it by going onto the server and deleting the file Attic/newfile.v in the repository and adding it again.
|
cvs error on checkin
|
When trying to commit to a cvs branch after adding a new file I get this error
Assertion failed: key != NULL, file hash.c, line 317
Any idea how to fix it so I can check my code in?
Both server and client are Linux and there are pre-commits involved.
|
[
"\nsleep-er writes:\n\nNot sure what the issue was but I solved it by going onto the server and deleting the file Attic/newfile.v in the repository and adding it again.\n\n\nThe \"Attic\" is the place where deleted files go in CVS. At some point in the past, someone checked in newfile.v, and at some later point it was deleted, hence moved to the Attic.\nBy deleting the ,v file from the repository you corrupted older commits that included the file \"newfile\". Do not do this.\nThe correct way is to restore the deleted file, then replace its content by the new file.\nAccording to http://www.cs.indiana.edu/~machrist/notes/cvs.html\n\nTo recover a file that has been removed from the repository, you essentially need to update that file to its last revision number (before it was actually deleted). For example:\ncvs update -r 1.7 deleted_file\nThis will recover deleted_file in your working repository. To find deleted files and their last revision number, issue cvs log at the command prompt.\n\nEdited in reply to comment to explain what the ,v file in the Attic means.\n",
"Are you on Windows and did you rename a file to the same name with different case (e.g. MAKEFILE vs Makefile vs makefile)? CVS used to have a problem with this (and maybe still does?):\nOSDir/mailarchive - Subject: Re: hash.c.312: findnode:\n\nManu writes:\n\nI try to rename \"makefile\" to \"Makefile\" in my cvs tree, then:\n\n\ncvs: hash.c:312: findnode: Assertion `key != ((void *)0)' failed.\ncvs [server aborted]: received abort signal\n\n\nCVS was never designed to cope with case insensitive file systems. It\n has been patched to the point where it mostly works, but there are still\n some places where it doesn't. This is one of them.\n\nYou might want to read the rest of the messages in the thread as well.\n",
"Perhaps there is some kind of pre-commit check on your repository, see here\n",
"Not sure what the issue was but I solved it by going onto the server and deleting the file Attic/newfile.v in the repository and adding it again.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"cvs",
"version_control"
] |
stackoverflow_0000070366_cvs_version_control.txt
|
Q:
Implements several interfaces with conflict in signatures
Lately, I tried to implement a hybrid structure in Java, something that looks like:
public class MapOfSet<K, V extends HasKey<K>> implements Set<V>, Map<K, Set<V>>
Where HasKey is the following interface:
public interface HasKey<K> {
public K getKey();
}
Unfortunately, there are some conflicts between method signatures of the Set interface and the Map interface in Java. I've finally chosen to implement only the Set interface and to add the Map methods without implementing that interface.
Do you see a nicer solution?
In response to the first comments, here is my goal:
Have a set structure and be able to efficiently access to a subset of values of this set, corresponding to a given key value.
At the beginning I instantiated a map and a set, but I tried to join the two structures to optimize performance.
A:
What are you trying to accomplish? Map already exposes its keys as a Set via its [keySet()](http://java.sun.com/j2se/1.5.0/docs/api/java/util/Map.html#keySet()) method. If you want a reliable iteration order, there's LinkedHashMap and TreeMap.
UPDATE: If you want to ensure that a value has only been inserted once, you can extend one of the classes I mentioned above to create something like a SingleEntryMap and override the implementation of put(K key, V value) to do a uniqueness check and throw an Exception when the value has already been inserted.
UPDATE: Will something like this work? (I don't have my editor up, so this may not compile)
public final class KeyedSets<K, V> implements Map<K,Set<V>> {
private final Map<K, Set<V>> internalMap = new TreeMap<K, Set<V>>;
// delegate methods go here
public Set<V> getSortedSuperset() {
final Set<V> superset = new TreeSet<V>();
for (final Map.Entry<K, V> entry : internalMap.entrySet()) {
superset.addAll(entry.getValue());
}
return superset;
}
}
A:
Perhaps you could add more information about which operations you really want. I guess you want to create a set which automatically groups its elements by a key, right? The question is which operations you want to be able to have. How are elements added to the Set? Can elements be deleted by removing them from a grouped view? My proposal would be an interface like this:
public interface GroupedSet<K, V extends HasKey<K>> extends Set<V>{
Set<V> havingKey(K k);
}
If you want to be able to use the Set as map you can add another method
Map<K,Set<V>> asMap();
That avoids the use of multiple interface inheritance and the resulting problems.
A:
I would say that something that is meant to be sometimes used as a Map and sometimes as a Set should implement Map, since that can be viewed as a set of keys or values as well as a mapping between keys and values. That is what the Map.containsKey() and Map.containsValue() methods are for.
|
Implements several interfaces with conflict in signatures
|
Lately, I tried to implement a hybrid structure in Java, something that looks like:
public class MapOfSet<K, V extends HasKey<K>> implements Set<V>, Map<K, Set<V>>
Where HasKey is the following interface:
public interface HasKey<K> {
public K getKey();
}
Unfortunately, there are some conflicts between method signatures of the Set interface and the Map interface in Java. I've finally chosen to implement only the Set interface and to add the Map methods without implementing that interface.
Do you see a nicer solution?
In response to the first comments, here is my goal:
Have a set structure and be able to efficiently access to a subset of values of this set, corresponding to a given key value.
At the beginning I instantiated a map and a set, but I tried to join the two structures to optimize performance.
|
[
"What are you trying to accomplish? Map already exposes its keys as a Set via its [keySet()](http://java.sun.com/j2se/1.5.0/docs/api/java/util/Map.html#keySet()) method. If you want a reliable iteratior order, there's LinkedHashMap and TreeMap.\nUPDATE: If you want to ensure that a value has only been inserted once, you can extend one of the classes I mentioned above to create something like a SingleEntryMap and override the implementation of put(K key, V value) to do a uniqueness check and throw an Exception when the value has already been inserted.\nUPDATE: Will something like this work? (I don't have my editor up, so this may not compile)\npublic final class KeyedSets<K, V> implements Map<K,Set<V>> {\n private final Map<K, Set<V>> internalMap = new TreeMap<K, Set<V>>;\n // delegate methods go here\n public Set<V> getSortedSuperset() {\n final Set<V> superset = new TreeSet<V>();\n for (final Map.Entry<K, V> entry : internalMap.entrySet()) {\n superset.addAll(entry.getValue());\n }\n return superset;\n }\n}\n\n",
"Perhaps you could add more information which operations do you really want. I guess you want to create a set which automatically groups their elements by a key, right? The question is which operations do you want to be able to have? How are elements added to the Set? Can elements be deleted by removing them from a grouped view? My proposal would be an interface like that:\npublic interface GroupedSet<K, V extends HasKey<K>> extends Set<V>{\n Set<V> havingKey(K k);\n}\n\nIf you want to be able to use the Set as map you can add another method\nMap<K,Set<V>> asMap();\n\nThat avoids the use of multiple interface inheritance and the resulting problems.\n",
"I would say that something that is meant to be sometimes used as a Map and sometimes as a Set should implement Map, since that can be viewed as a set of keys or values as well as a mapping between keys and values. That is what the Map.containsKey() and Map.containsValue() methods are for.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"collections",
"java"
] |
stackoverflow_0000070732_collections_java.txt
|
Q:
VMWare Server: Virtual Hard Drive Type
For best performance, is it better to use a virtual IDE HDD or virtual SCSI HDD?
If SCSI, does it matter whether you use a BusLogic or LSILogic?
A:
Go for the SCSI and LSILogic. IDE and BusLogic are for compatibility reasons. Like when you do physical2virtual...
There's a whitepaper from vmware showing the difference between LSILogic and BusLogic, which in my opinion is rather small:
http://www.vmware.com/pdf/ESX2_Storage_Performance.pdf
Edit after like three years:
With current ESX environments it's best to use the Paravirtual SCSI device.
A:
I don't think that your choice of Virtual Disk type in VMWare matters for performance. What matters is the following: How much memory you have (the more the better), How many CPU cores you have (the more the better), and more specifically about disks, what matters most is the speed of the physical drive (a 15K RPM SCSI drive being best). If you have, for example, 3 physical HDs and 3 virtual HDs, then I would place one virtual HD in each physical HD. This is known to improve virtual HD performance. Also keep your virtual HDs defragmented.
|
VMWare Server: Virtual Hard Drive Type
|
For best performance, is it better to use a virtual IDE HDD or virtual SCSI HDD?
If SCSI, does it matter whether you use a BusLogic or LSILogic?
|
[
"Go for the SCSI and LSILogic. IDE and BusLogic are for compatibility reasons. Like when you do physical2virtual...\nThere's a whitepaper from vmware showing the difference between LSILogic and BusLogic, which in my opinion is rather small:\nhttp://www.vmware.com/pdf/ESX2_Storage_Performance.pdf\nEdit after like three years:\nWith current ESX environments it's best to use the Paravirtual SCSI device.\n",
"I don't think that your choice of Virtual Disk type in VMWare matters for performance. What matters is the following: How much memory you have (the more the better), How many CPU cores you have (the more the better), and more specifically about disks, what matters most is the speed of the physical drive (a 15K RPM SCSI drive being best). If you have, for example, 3 physical HDs and 3 virtual HDs, then I would place one virtual HD in each physical HD. This is known to improve virtual HD performance. Also keep your virtual HDs defragmented.\n"
] |
[
6,
4
] |
[] |
[] |
[
"hard_drive",
"ide",
"performance",
"scsi",
"vmware"
] |
stackoverflow_0000070811_hard_drive_ide_performance_scsi_vmware.txt
|
Q:
Why is String.Format static?
Compare
String.Format("Hello {0}", "World");
with
"Hello {0}".Format("World");
Why did the .Net designers choose a static method over an instance method? What do you think?
A:
Because the Format method has nothing to do with a string's current value.
That's true for all string methods because .NET strings are immutable.
If it was non-static, you would need a string to begin with.
It does: the format string.
I believe this is just another example of the many design flaws in the .NET platform (and I don't mean this as a flame; I still find the .NET framework superior to most other frameworks).
A:
I don't actually know the answer but I suspect that it has something to do with the aspect of invoking methods on string literals directly.
If I recall correctly (I didn't actually verify this because I don't have an old IDE handy), early versions of the C# IDE had trouble detecting method calls against string literals in IntelliSense, and that has a big impact on the discoverability of the API. If that was the case, typing the following wouldn't give you any help:
"{0}".Format(12);
If you were forced to type
new String("{0}").Format(12);
It would be clear that there was no advantage to making the Format method an instance method rather than a static method.
The .NET libraries were designed by a lot of the same people that gave us MFC, and the String class in particular bears a strong resemblance to the CString class in MFC. MFC does have an instance Format method (that uses printf style formatting codes rather than the curly-brace style of .NET) which is painful because there's no such thing as a CString literal. So in a MFC codebase that I worked on I see a lot of this:
CString csTemp = "";
csTemp.Format("Some string: %s", szFoo);
which is painful. (I'm not saying that the code above is a great way to do things even in MFC, but that does seem to be the way that most of the developers on the project learned how to use CString::Format). Coming from that heritage, I can imagine that the API designers were trying to avoid that sort of situation again.
A:
Well I guess you have to be rather particular about it, but like people are saying, it makes more sense for String.Format to be static because of the implied semantics. Consider:
"Hello {0}".Format("World"); // this makes it sound like Format *modifies*
// the string, which is not possible as
// strings are immutable.
string[] parts = "Hello World".Split(' '); // this however sounds right,
// because it implies that you
// split an existing string into
// two *new* strings.
A:
The first thing I did when I got to upgrade to VS2008 and C#3, was to do this
public static string F( this string format, params object[] args )
{
return String.Format(format, args);
}
So I can now change my code from
String.Format("Hello {0}", Name);
to
"Hello {0}".F(Name);
which I preferred at the time.
Nowadays (2014) I don't bother because it's just another hassle to keep re-adding that to each random project I create, or link in some bag-of-utils library.
As for why the .NET designers chose it? Who knows. It seems entirely subjective.
My money is on either
Copying Java
The guy writing it at the time subjectively liked it more.
There aren't really any other valid reasons that I can find
A:
I think it is because Format doesn't take a string per se, but a "format string". Most strings are equal to things like "Bob Smith" or "1010 Main St" and not to "Hello {0}"; generally you only use those format strings when you are trying to use a template to create another string, like a factory method, and therefore it lends itself to a static method.
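To make that template/factory reading concrete, here is a minimal sketch (the class name and template text are invented for illustration):
using System;

class FormatAsFactoryDemo
{
    // A format string is a template, not ordinary data; Format acts like a
    // small factory that stamps out a brand-new string from that template.
    const string GreetingTemplate = "Hello {0}, you have {1} new messages.";

    static void Main()
    {
        // Neither the template nor the arguments are modified;
        // a new string is created and returned.
        string s = string.Format(GreetingTemplate, "World", 3);
        Console.WriteLine(s); // Hello World, you have 3 new messages.
    }
}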
A:
I think it's because it's a creator method (not sure if there's a better name). All it does is take what you give it and return a single string object. It doesn't operate on an existing object. If it was non-static, you would need a string to begin with.
A:
Maybe the .NET designers did it this way because JAVA did it this way...
Embrace and extend. :)
See: http://discuss.techinterview.org/default.asp?joel.3.349728.40
A:
.NET Strings are Immutable
Therefore having an instance method makes absolutely no sense.
By that logic the string class should have no instance methods which return modified copies of the object, yet it has plenty (Trim, ToUpper, and so on). Furthermore, lots of other objects in the framework do this too.
I agree that if they were to make it an instance method, Format seems like it would be a bad name, but that doesn't mean the functionality shouldn't be an instance method.
Why not this? It's consistent with the rest of the .NET framework
"Hello {0}".ToString("Orion");
A:
Because the Format method has nothing to do with a string's current value. The value of the string isn't used. It takes a string and returns one.
A:
Instance methods are good when you have an object that maintains some state; the process of formatting a string does not affect the string you are operating on (read: does not modify its state), it creates a new string.
With extension methods, you can now have your cake and eat it too (i.e. you can use the latter syntax if it helps you sleep better at night).
A:
I think it looks better in general to use String.Format, but I could see a point in wanting to have a non-static function for when you already have a string stored in a variable that you want to "format".
As an aside, none of the string class's functions act on the string itself; they all return a new string object, because strings are immutable.
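A small sketch of that point, assuming nothing beyond the standard string methods:
using System;

class ImmutabilityDemo
{
    static void Main()
    {
        string original = "hello";
        string upper = original.ToUpper();                    // returns a new string
        string formatted = string.Format("{0}!", original);   // also a new string

        Console.WriteLine(original);   // "hello" - unchanged
        Console.WriteLine(upper);      // "HELLO"
        Console.WriteLine(formatted);  // "hello!"
    }
}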
A:
@Jared:
Non-overloaded, non-inherited static methods (like Class.b(a,c)) that take an instance as the first variable are semantically equivalent to a method call (like a.b(c))
No, they aren't.
(Assuming it compiles to the same CIL, which it should.)
That's your mistake. The CIL produced is different. The distinction is that member methods can't be invoked on null values so the CIL inserts a check against null values. This obviously isn't done in the static variant.
However, String.Format does not allow null values so the developers had to insert a check manually. From this point of view, the member method variant would be technically superior.
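The practical side of that distinction can also be seen from plain C#; a minimal sketch (the point is only which exception surfaces, not the generated CIL itself):
using System;

class NullCheckDemo
{
    static void Main()
    {
        string s = null;

        try { s.ToUpper(); }            // instance call on a null reference
        catch (NullReferenceException) { Console.WriteLine("instance call: NullReferenceException"); }

        try { string.Format(s, 1); }    // static call with a null format argument
        catch (ArgumentNullException) { Console.WriteLine("static call: ArgumentNullException"); }
    }
}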
A:
This is to avoid confusion with .ToString() methods.
For instance:
double test = 1.54d;
//string.Format pattern
string.Format("This is a test: {0:F1}", test );
//ToString pattern
"This is a test: " + test.ToString("F1");
If Format was an instance method on string this could cause confusion, as the patterns are different.
String.Format() is a utility method to turn multiple objects into a formatted string.
An instance method on a string does something to that string.
Of course, you could do:
public static string FormatInsert( this string input, params object[] args) {
return string.Format( input, args );
}
"Hello {0}, I have {1} things.".FormatInsert( "world", 3);
A:
I don't know why they did it, but it doesn't really matter anymore:
public static class StringExtension
{
public static string FormatWith(this string format, params object[] args)
{
return String.Format(format, args);
}
}
public class SomeClass
{
public string SomeMethod(string name)
{
return "Hello, {0}".FormatWith(name);
}
}
That flows a lot easier, IMHO.
A:
Another reason for String.Format is the similarity to the printf function from C. It was supposed to let C developers have an easier time switching languages.
A:
A big design goal for C# was to make the transition from C/C++ to it as easy as possible. Using dot syntax on a string literal would look very strange to someone with only a C/C++ background, and formatting strings is something a developer will likely do on day one with the language. So I believe they made it static to make it closer to familiar territory.
A:
I see nothing wrong with it being static.
The semantics of the static method seem to make a lot more sense to me. Perhaps it is because it is a primitive. Since primitives are used so often, you want to make the utility code for working with them as light as possible. Also, I think the semantics are a lot better with String.Format over "MyString BLAH BLAH {0}".Format ...
A:
I haven't tried it yet but you could make an extension method for what you want. I wouldn't do it, but I think it would work.
Also I find String.Format() more in line with other patterned static methods like Int32.Parse(), long.TryParse(), etc.
You could also just use a StringBuilder if you want a non-static format.
StringBuilder.AppendFormat()
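For example, a minimal sketch of the StringBuilder approach:
using System;
using System.Text;

class AppendFormatDemo
{
    static void Main()
    {
        var sb = new StringBuilder();
        sb.AppendFormat("Hello {0}", "World");            // formats into the builder's buffer
        sb.AppendFormat(", the answer is {0:000}", 42);   // format specifiers work here too
        Console.WriteLine(sb.ToString());
    }
}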
A:
Non-overloaded, non-inherited static methods (like Class.b(a,c)) that take an instance as the first variable are semantically equivalent to a method call (like a.b(c)) so the platform team made an arbitrary, aesthetic choice. (Assuming it compiles to the same CIL, which it should.) The only way to know would be to ask them why.
Possibly they did it to keep the two strings close to each other lexicographically, i.e.
String.Format("Foo {0}", "Bar");
instead of
"Foo {0}".Format("bar");
You want to know what the indexes are mapped to; perhaps they thought that the ".Format" part just adds noise in the middle.
Interestingly, the ToString method (at least for numbers) is the opposite: number.ToString("000") with the format string on the right hand side.
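The two spellings side by side, as a small sketch:
using System;

class FormatPlacementDemo
{
    static void Main()
    {
        int number = 7;
        Console.WriteLine(string.Format("{0:000}", number)); // format string on the left
        Console.WriteLine(number.ToString("000"));           // format string on the right
        // Both lines print "007".
    }
}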
|
Why is String.Format static?
|
Compare
String.Format("Hello {0}", "World");
with
"Hello {0}".Format("World");
Why did the .Net designers choose a static method over an instance method? What do you think?
|
[
"\nBecause the Format method has nothing to do with a string's current value.\n\nThat's true for all string methods because .NET strings are immutable.\n\nIf it was non-static, you would need a string to begin with.\n\nIt does: the format string.\nI believe this is just another example of the many design flaws in the .NET platform (and I don't mean this as a flame; I still find the .NET framework superior to most other frameworks).\n",
"I don't actually know the answer but I suspect that it has something to do with the aspect of invoking methods on string literals directly.\nIf I recall correctly (I didn't actually verify this because I don't have an old IDE handy), early versions of the C# IDE had trouble detecting method calls against string literals in IntelliSense, and that has a big impact on the discoverability of the API. If that was the case, typing the following wouldn't give you any help:\n\"{0}\".Format(12);\n\nIf you were forced to type \nnew String(\"{0}\").Format(12);\n\nIt would be clear that there was no advantage to making the Format method an instance method rather than a static method. \nThe .NET libraries were designed by a lot of the same people that gave us MFC, and the String class in particular bears a strong resemblance to the CString class in MFC. MFC does have an instance Format method (that uses printf style formatting codes rather than the curly-brace style of .NET) which is painful because there's no such thing as a CString literal. So in a MFC codebase that I worked on I see a lot of this:\nCString csTemp = \"\";\ncsTemp.Format(\"Some string: %s\", szFoo);\n\nwhich is painful. (I'm not saying that the code above is a great way to do things even in MFC, but that does seem to be the way that most of the developers on the project learned how to use CString::Format). Coming from that heritage, I can imagine that the API designers were trying to avoid that sort of situation again.\n",
"Well I guess you have to be rather particular about it, but like people are saying, it makes more sense for String.Format to be static because of the implied semantics. Consider:\n\"Hello {0}\".Format(\"World\"); // this makes it sound like Format *modifies* \n // the string, which is not possible as \n // strings are immutable.\n\nstring[] parts = \"Hello World\".Split(' '); // this however sounds right, \n // because it implies that you \n // split an existing string into \n // two *new* strings.\n\n",
"The first thing I did when I got to upgrade to VS2008 and C#3, was to do this\npublic static string F( this string format, params object[] args )\n{\n return String.Format(format, args);\n}\n\nSo I can now change my code from\nString.Format(\"Hello {0}\", Name);\n\nto\n\"Hello {0}\".F(Name);\n\nwhich I preferred at the time. \nNowadays (2014) I don't bother because it's just another hassle to keep re-adding that to each random project I create, or link in some bag-of-utils library.\nAs for why the .NET designers chose it? Who knows. It seems entirely subjective.\nMy money is on either\n\nCopying Java\nThe guy writing it at the time subjectively liked it more.\n\nThere aren't really any other valid reasons that I can find\n",
"I think it is because Format doesn't take a string per se, but a \"format string\". Most strings are equal to things like \"Bob Smith\" or \"1010 Main St\" or what have you and not to \"Hello {0}\", generally you only put those format strings in when you are trying to use a template to create another string, like a factory method, and therefore it lends it self to a static method.\n",
"I think it's because it's a creator method (not sure if there's a better name). All it does is take what you give it and return a single string object. It doesn't operate on an existing object. If it was non-static, you would need a string to begin with.\n",
"Maybe the .NET designers did it this way because JAVA did it this way...\nEmbrace and extend. :)\nSee: http://discuss.techinterview.org/default.asp?joel.3.349728.40\n",
"\n.NET Strings are Immutable\n Therefore having an instance method makes absolutely no sense.\n\nBy that logic the string class should have no instance methods which return modified copies of the object, yet it has plenty (Trim, ToUpper, and so on). Furthermore, lots of other objects in the framework do this too.\nI agree that if they were to make it an instance method, Format seems like it would be a bad name, but that doesn't mean the functionality shouldn't be an instance method.\nWhy not this? It's consistent with the rest of the .NET framework\n\"Hello {0}\".ToString(\"Orion\");\n\n",
"Because the Format method has nothing to do with a string's current value. The value of the string isn't used. It takes a string and returns one.\n",
"Instance methods are good when you have an object that maintains some state; the process of formatting a string does not affect the string you are operating on (read: does not modify its state), it creates a new string.\nWith extension methods, you can now have your cake and eat it too (i.e. you can use the latter syntax if it helps you sleep better at night).\n",
"I think it looks better in general to use String.Format, but I could see a point in wanting to have a non-static function for when you already have a string stored in a variable that you want to \"format\". \nAs an aside, all functions of the string class don't act on the string, but return a new string object, because strings are immutable.\n",
"@Jared:\n\nNon-overloaded, non-inherited static methods (like Class.b(a,c)) that take an instance as the first variable are semantically equivalent to a method call (like a.b(c))\n\nNo, they aren't.\n\n(Assuming it compiles to the same CIL, which it should.)\n\nThat's your mistake. The CIL produced is different. The distinction is that member methods can't be invoked on null values so the CIL inserts a check against null values. This obviously isn't done in the static variant.\nHowever, String.Format does not allow null values so the developers had to insert a check manually. From this point of view, the member method variant would be technically superior.\n",
"This is to avoid confusion with .ToString() methods.\nFor instance:\ndouble test = 1.54d;\n\n//string.Format pattern\nstring.Format(\"This is a test: {0:F1}\", test );\n\n//ToString pattern\n\"This is a test: \" + test.ToString(\"F1\");\n\nIf Format was an instance method on string this could cause confusion, as the patterns are different.\nString.Format() is a utility method to turn multiple objects into a formatted string.\nAn instance method on a string does something to that string.\nOf course, you could do:\npublic static string FormatInsert( this string input, params object[] args) {\n return string.Format( input, args );\n}\n\n\"Hello {0}, I have {1} things.\".FormatInsert( \"world\", 3);\n\n",
"I don't know why they did it, but it doesn't really matter anymore:\npublic static class StringExtension\n{\n public static string FormatWith(this string format, params object[] args)\n {\n return String.Format(format, args);\n }\n}\n\npublic class SomeClass\n{\n public string SomeMethod(string name)\n {\n return \"Hello, {0}\".FormatWith(name);\n }\n}\n\nThat flows a lot easier, IMHO.\n",
"Another reason for String.Format is the similarity to function printf from C. It was supposed to let C developers have an easier time switching languages.\n",
"A big design goal for C# was to make the transition from C/C++ to it as easy as possible. Using dot syntax on a string literal would look very strange to someone with only a C/C++ background, and formatting strings is something a developer will likely do on day one with the language. So I believe they made it static to make it closer to familiar territory.\n",
"I see nothing wrong with it being static..\nThe semantics of the static method seem to make a lot more sense to me. Perhaps it is because it is a primitive. Where primitives are used to often, you want to make the utility code for working with them as light as possible.. Also, I think the semantics are a lot better with String.Format over \"MyString BLAH BLAH {0}\".Format ...\n",
"I haven't tried it yet but you could make an extension method for what you want. I wouldn't do it, but I think it would work.\nAlso I find String.Format() more in line with other patterned static methods like Int32.Parse(), long.TryParse(), etc.\nYou cloud also just use a StringBuilder if you want a non static format.\nStringBuilder.AppendFormat()\n",
"Non-overloaded, non-inherited static methods (like Class.b(a,c)) that take an instance as the first variable are semantically equivalent to a method call (like a.b(c)) so the platform team made an arbitrary, aesthetic choice. (Assuming it compiles to the same CIL, which it should.) The only way to know would be to ask them why.\nPossibly they did it to keep the two strings close to each other lexigraphically, i.e.\nString.Format(\"Foo {0}\", \"Bar\");\n\ninstead of\n\"Foo {0}\".Format(\"bar\");\n\nYou want to know what the indexes are mapped to; perhaps they thought that the \".Format\" part just adds noise in the middle.\nInterestingly, the ToString method (at least for numbers) is the opposite: number.ToString(\"000\") with the format string on the right hand side.\n"
] |
[
50,
29,
9,
8,
6,
5,
4,
4,
3,
2,
2,
2,
2,
2,
2,
2,
1,
1,
1
] |
[
"String.Format takes at least one String and returns a different String. It doesn't need to modify the format string in order to return another string, so it makes little sense to do that (ignoring your formatting of it). On the other hand, it wouldn't be that much of a stretch to make String.Format be a member function, except I don't think C# allows for const member functions like C++ does. [Please correct me and this post if it does.]\n",
"String.Format has to be a static method because strings are immutable. Making it an instance method would imply you could use it to \"format\" or modify the value of an existing string. This you can't do, and making it an instance method that returned a new string would make no sense. Hence, it's a static method.\n",
".NET Strings are Immutable\nTherefore having an instance method makes absolutely no sense.\nString foo = new String();\n\nfoo.Format(\"test {0}\",1); // Makes it look like foo should be modified by the Format method. \n\nstring newFoo = String.Format(foo, 1); // Indicates that a new string will be returned, and foo will be unaltered.\n\n"
] |
[
-1,
-1,
-2
] |
[
".net",
"string"
] |
stackoverflow_0000023228_.net_string.txt
|
Q:
What SPN do I need to set for a net.tcp service?
I have a wcf application hosted in a windows service running under a local windows account. Do I need to set an SPN for this account? If so, what's the protocol the SPN needs to be set under? I know how to do this for services over HTTP, but have never done it for net.tcp.
A:
Change the service account to an AD account and register the SPNs as shown. Use your own service name e.g. fooservice
setspn -A fooservice/servermachinename domain\serviceAccountName
setspn -A fooservice/servermachinename.fullyqualifieddomainname
domain\serviceAccountName
In the client config set:
<identity>
<serviceprincipalname value="fooservice/servermachinename" />
</identity>
A:
By default (i.e. out of the box) net.tcp services are unsecured and don't perform any authentication at all. So you won't need (and in fact can't) set a service principal name.
If you need to authenticate, then check the net.tcp security modes on MSDN. The best way to understand the different combinations is to experiment!
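If you do turn on transport security with Windows credentials and want the client to authenticate against a specific SPN, a programmatic sketch might look roughly like this (the service name, address, and contract are placeholders, and this shows only one of the possible security combinations):
using System;
using System.ServiceModel;

class NetTcpClientSketch
{
    // Placeholder contract for the sketch.
    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        string Ping(string text);
    }

    static void Main()
    {
        // One of the net.tcp security modes: transport security, Windows credentials.
        var binding = new NetTcpBinding(SecurityMode.Transport);
        binding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;

        // Tell the client which service principal name to authenticate against.
        var address = new EndpointAddress(
            new Uri("net.tcp://servermachinename:8000/fooservice"),
            EndpointIdentity.CreateSpnIdentity("fooservice/servermachinename"));

        var factory = new ChannelFactory<IMyService>(binding, address);
        IMyService proxy = factory.CreateChannel();
        // ... call the service through 'proxy' ...
    }
}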
|
What SPN do I need to set for a net.tcp service?
|
I have a wcf application hosted in a windows service running under a local windows account. Do I need to set an SPN for this account? If so, what's the protocol the SPN needs to be set under? I know how to do this for services over HTTP, but have never done it for net.tcp.
|
[
"Change the service account to an AD account and register the SPN's as shown. Use your own service name e.g. fooservice\n\nsetspn -A fooservice/servermachinename domain\\serviceAccountName\n setspn -A fooservice/servermachinename.fullyqualifieddomainname\n domain\\serviceAccountName\n\nIn the client config set:\n<identity>\n <serviceprincipalname value=\"fooservice/servermachinename\" />\n</identity>\n\n",
"By default (i.e. out of the box) net.tcp services are unsecured and don't perform any authentication at all. So you won't need (and in fact can't) set a service principal name. \nIf you need to authenticate, then check the net.tcp security modes on MSDN. The best way to understand the different combinations is to experiment!\n"
] |
[
6,
3
] |
[] |
[] |
[
"security",
"spn",
"wcf"
] |
stackoverflow_0000038779_security_spn_wcf.txt
|
Q:
Open-source radix/mtrie implementation in C?
I intend to use RADIX / MTRIE as my preferred data-structure for a routing implementation.
Is there a decent open source implementation available (apart from freebsd-net) which I can use for my purpose, or do I need to write one myself?
A:
There is a radix-tree implementation available under the GNU General Public License version 2, or (at your option) any later version:
http://www.gelato.unsw.edu.au/lxr/source/lib/radix-tree.c
A:
If you can't find anything else, you can always port this Java version from Google Code.
|
Open-source radix/mtrie implementation in C?
|
I intend to use RADIX / MTRIE as my preferred data-structure for a routing implementation.
Is there a decent open source implementation available (apart from freebsd-net) which I can use for my purpose, or do I need to write one myself?
|
[
"There is a radix-tree implementation available under the GNU General Public License version 2, or (at your option) any later version: \nhttp://www.gelato.unsw.edu.au/lxr/source/lib/radix-tree.c\n",
"If you cant find anything else, you can always port this java version from Google Code. \n"
] |
[
1,
0
] |
[] |
[] |
[
"algorithm",
"c",
"data_structures"
] |
stackoverflow_0000070753_algorithm_c_data_structures.txt
|
Q:
rake db:migrate doesn't detect new migration?
Experienced with Rails / ActiveRecord 2.1.1
You create a first version with (for example) ruby script\generate scaffold product title:string description:text image_url:string
This creates (for example) a migration file called 20080910122415_create_products.rb
You apply the migration with rake db:migrate
Now, you add a field to the product table with ruby script\generate migration add_price_to_product price:decimal
This creates a migration file called 20080910125745_add_price_to_product.rb
If you try to run rake db:migrate, it will actually revert the first migration, not apply the next one! So your product table will get destroyed!
But if you ran rake alone, it would have told you that one migration was pending
Please note that applying rake db:migrate (once the table has been destroyed) will apply all migrations in order.
The only workaround I found is to specify the version of the new migration as in:
rake db:migrate version=20080910125745
So I'm wondering: is this an expected new behavior?
A:
You should be able to use
rake db:migrate:up
to force it to go forward, but then you risk missing interleaved migrations from other people on your team
if you run
rake db:migrate
twice, it will reapply all your migrations.
I encounter the same behavior on windows with SQLite, it might be a bug specific to such an environment.
Edit -- I found why. In the railties database.rake task you have the following code:
desc "Migrate the database through scripts in db/migrate. Target specific version with VERSION=x. Turn off output with VERBOSE=false."
task :migrate => :environment do
ActiveRecord::Migration.verbose = ENV["VERBOSE"] ? ENV["VERBOSE"] == "true" : true
ActiveRecord::Migrator.migrate("db/migrate/", ENV["VERSION"] ? ENV["VERSION"].to_i : nil)
Rake::Task["db:schema:dump"].invoke if ActiveRecord::Base.schema_format == :ruby
end
Then in my environment variables I have
echo %Version% #=> V3.5.0f
in Ruby
ENV["VERSION"] # => V3.5.0f
ENV["VERSION"].to_i #=>0 not nil !
thus the rake task calls
ActiveRecord::Migrator.migrate("db/migrate/", 0)
and in ActiveRecord::Migrator we have :
class Migrator#:nodoc:
class << self
def migrate(migrations_path, target_version = nil)
case
when target_version.nil? then up(migrations_path, target_version)
when current_version > target_version then down(migrations_path, target_version)
else up(migrations_path, target_version)
end
end
Yes, rake db:migrate VERSION=0 is the long version for rake db:migrate:down
Edit - I would go update the lighthouse bug but the super company proxy forbids me from connecting there
In the meantime you may try to unset Version before you call migrate ...
A:
This is not the expected behaviour. I was going to suggest reporting this as a bug on lighthouse, but I see you've already done so! If you provide some more information (including OS/database/ruby version) I will take a look at it.
A:
I respectfully disagree, Tom! This is a bug! V3.5.0f is not a valid version for rake migrations. Rake should not use it to migrate:down just because ruby chose to consider that "V3.5.0f".to_i is 0 ...
Rake should loudly complain that VERSION is not valid so that users know what is up
(between you and me, checking that the version is a YYYYMMDD-formatted timestamp by converting it to an integer is a bit light)
[Damn IE6 that won't allow me to comment ! and no I can't change browser thanks corporate]
A:
Jean,
Thanks a lot for your investigation. You're right, and actually I think you've uncovered a more severe bug, of species 'design bug'.
What's happening is that rake will grab whatever values you pass on the command line and store them as environment variables. The rake tasks that eventually get called just pull these values from the environment variables.
When db:migrate queries ENV["VERSION"], it actually requests the version parameter which you set calling rake. When you call rake db:migrate, you don't pass any version.
But we do have an environment variable called VERSION that has been set for other purposes by some other program (I don't which one yet). And the guys behind rake (or behind database.rake) haven't figured this would happen. That's a design bug. At least, they could have used more specific variable names like "RAKE_VERSION" or "RAKE_PARAM_VERSION" instead of just "VERSION".
Tom, I will definitely not close but edit my bug report on lighthouse to reflect these new findings.
And thanks again Jean for your help. I've posted this bug on lighthouse like 5 days ago and still got no answer!
Rollo
|
rake db:migrate doesn't detect new migration?
|
Experienced with Rails / ActiveRecord 2.1.1
You create a first version with (for example) ruby script\generate scaffold product title:string description:text image_url:string
This creates (for example) a migration file called 20080910122415_create_products.rb
You apply the migration with rake db:migrate
Now, you add a field to the product table with ruby script\generate migration add_price_to_product price:decimal
This creates a migration file called 20080910125745_add_price_to_product.rb
If you try to run rake db:migrate, it will actually revert the first migration, not apply the next one! So your product table will get destroyed!
But if you ran rake alone, it would have told you that one migration was pending
Please note that applying rake db:migrate (once the table has been destroyed) will apply all migrations in order.
The only workaround I found is to specify the version of the new migration as in:
rake db:migrate version=20080910125745
So I'm wondering: is this an expected new behavior?
|
[
"You should be able to use \nrake db:migrate:up \n\nto force it to go forward, but then you risk missing interleaved migrations from other people on your team\nif you run \nrake db:migrate \n\ntwice, it will reapply all your migrations.\nI encounter the same behavior on windows with SQLite, it might be a bug specific to such an environment.\nEdit -- I found why. In the railstie database.rake task you have the following code :\ndesc \"Migrate the database through scripts in db/migrate. Target specific version with VERSION=x. Turn off output with VERBOSE=false.\"\ntask :migrate => :environment do\n ActiveRecord::Migration.verbose = ENV[\"VERBOSE\"] ? ENV[\"VERBOSE\"] == \"true\" : true\n ActiveRecord::Migrator.migrate(\"db/migrate/\", ENV[\"VERSION\"] ? ENV[\"VERSION\"].to_i : nil)\n Rake::Task[\"db:schema:dump\"].invoke if ActiveRecord::Base.schema_format == :ruby\nend\n\nThen in my environment variables I have \necho %Version% #=> V3.5.0f\n\nin Ruby\nENV[\"VERSION\"] # => V3.5.0f\nENV[\"VERSION\"].to_i #=>0 not nil !\n\nthus the rake task calls \nActiveRecord::Migrator.migrate(\"db/migrate/\", 0)\n\nand in ActiveRecord::Migrator we have : \nclass Migrator#:nodoc:\n class << self\n def migrate(migrations_path, target_version = nil)\n case\n when target_version.nil? then up(migrations_path, target_version)\n when current_version > target_version then down(migrations_path, target_version)\n else up(migrations_path, target_version)\n end\n end\n\nYes, rake db:migrate VERSION=0 is the long version for rake db:migrate:down \nEdit - I would go update the lighthouse bug but I the super company proxy forbids that I connect there\nIn the meantime you may try to unset Version before you call migrate ...\n",
"This is not the expected behaviour. I was going to suggest reporting this as a bug on lighthouse, but I see you've already done so! If you provide some more information (including OS/database/ruby version) I will take a look at it. \n",
"I respectfully disagree Tom! this is a bug !! V3.5.0f is not a valid version for rake migrations. Rake should not use it to migrate:down just because ruby chose to consider that \"V3.5.0f\".to_i is 0 ... \nRake should loudly complain that VERSION is not valid so that users know what is up\n(between you and me, checking that the version is a YYYYMMDD formated timestamp by converting to integer is a bit light)\n[Damn IE6 that won't allow me to comment ! and no I can't change browser thanks corporate]\n",
"Jean,\nThanks a lot for your investigation. You're right, and actually I think you've uncovered a more severe bug, of species 'design bug'.\nWhat's happening is that rake will grab whatever value you pass to the command line and store them as environment variables. The rake tasks that will eventually get called will just pull this values from the environment variable.\nWhen db:migrate queries ENV[\"VERSION\"], it actually requests the version parameter which you set calling rake. When you call rake db:migrate, you don't pass any version.\nBut we do have an environment variable called VERSION that has been set for other purposes by some other program (I don't which one yet). And the guys behind rake (or behind database.rake) haven't figured this would happen. That's a design bug. At least, they could have used more specific variable names like \"RAKE_VERSION\" or \"RAKE_PARAM_VERSION\" instead of just \"VERSION\".\nTom, I will definitely not close but edit my bug report on lighthouse to reflect these new findings.\nAnd thanks again Jean for your help. I've posted this bug on lighthouse like 5 days agao and still got no answer!\nRollo\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"activerecord",
"migration",
"ruby_on_rails"
] |
stackoverflow_0000070318_activerecord_migration_ruby_on_rails.txt
|
Q:
Server performance metric tools for LAMP
Any suggestions for tools to monitor page load times/errors and other performance metrics for a PHP application?
I am aware of the FireBug and YSlow tools, but this is for more server monitoring.
A:
There is the classic 'ab' (apachebench) program. More power comes from JMeter. For server health, I recommend Munin, which can painlessly capture data from several systems and aggregate it on one page.
A:
Try Nagios, it's the default tool to monitor servers. You can write plugins to report just about any data.
A:
For profiling your code, there's Xdebug. Doing regression testing with Siege can also be quite useful.
A:
You can also try httperf. It's a very flexible tool and if you want to test how your application and webserver can deal with various traffic loads you should definitely give it a go.
|
Server performance metric tools for LAMP
|
Any suggestions for tools to monitor page load times/errors and other performance metrics for a PHP application?
I am aware of the FireBug and YSlow tools, but this is for more server monitoring.
|
[
"There is the classic 'ab' (apachebench) program. More power comes from JMmeter. For server health, I recommend Munin, which can painlessly capture data from several systems and aggregate it on one page. \n",
"Try Nagios, it's the default tool to monitor servers. You can write plugins to report just about any data. \n",
"For profiling your code, there's Xdebug. Doing regression testing with Siege can also be quite useful.\n",
"You can also try httperf. It's a very flexible tool and if you want to test how your application and webserver can deal with various traffic loads you should definitely give it a go.\n"
] |
[
4,
1,
1,
1
] |
[] |
[] |
[
"metrics",
"monitoring",
"performance"
] |
stackoverflow_0000069459_metrics_monitoring_performance.txt
|
Q:
Function Overloading and UDF in Excel VBA
I'm using Excel VBA to write a UDF. I would like to overload my own UDF with a couple of different versions so that different arguments will call different functions.
As VBA doesn't seem to support this, could anyone suggest a good, non-messy way of achieving the same goal? Should I be using Optional arguments or is there a better way?
A:
Declare your arguments as Optional Variants, then you can test to see if they're missing using IsMissing() or check their type using TypeName(), as shown in the following example:
Public Function Foo(Optional v As Variant) As Variant
If IsMissing(v) Then
Foo = "Missing argument"
ElseIf TypeName(v) = "String" Then
Foo = v & " plus one"
Else
Foo = v + 1
End If
End Function
This can be called from a worksheet as =FOO(), =FOO(number), or =FOO("string").
A:
If you can distinguish by parameter count, then something like this would work:
Public Function Morph(ParamArray Args())
Select Case UBound(Args)
Case -1 '' nothing supplied
Morph = Morph_NoParams()
Case 0
Morph = Morph_One_Param(Args(0))
Case 1
Morph = Two_Param_Morph(Args(0), Args(1))
Case Else
Morph = CVErr(xlErrRef)
End Select
End Function
Private Function Morph_NoParams()
Morph_NoParams = "I'm parameterless"
End Function
Private Function Morph_One_Param(arg)
Morph_One_Param = "I has a parameter, it's " & arg
End Function
Private Function Two_Param_Morph(arg0, arg1)
Two_Param_Morph = "I is in 2-params and they is " & arg0 & "," & arg1
End Function
If the only way to distinguish the function is by types, then you're effectively going to have to do what C++ and other languages with overridden functions do, which is to call by signature. I'd suggest making the call look something like this:
Public Function MorphBySig(ParamArray args())
Dim sig As String
Dim idx As Long
Dim MorphInstance As MorphClass
For idx = LBound(args) To UBound(args)
sig = sig & TypeName(args(idx))
Next
Set MorphInstance = New MorphClass
MorphBySig = CallByName(MorphInstance, "Morph_" & sig, VbMethod, args)
End Function
and creating a class with a number of methods that match the signatures you expect. You'll probably need some error-handling though, and be warned that the types that are recognizable are limited: dates are TypeName Double, for example.
A:
VBA is messy. I'm not sure there is an easy way to do fake overloads:
In the past I've either used lots of Optionals, or used varied functions. For instance
Foo_DescriptiveName1()
Foo_DescriptiveName2()
I'd say go with Optional arguments that have sensible defaults unless the argument list is going to get stupid, then create separate functions to call for your cases.
A:
You might also want to consider using a Variant data type for your argument list, figuring out each argument's type with the TypeOf statement, and then calling the appropriate function once you know what's what...
|
Function Overloading and UDF in Excel VBA
|
I'm using Excel VBA to write a UDF. I would like to overload my own UDF with a couple of different versions so that different arguments will call different functions.
As VBA doesn't seem to support this, could anyone suggest a good, non-messy way of achieving the same goal? Should I be using Optional arguments or is there a better way?
|
[
"Declare your arguments as Optional Variants, then you can test to see if they're missing using IsMissing() or check their type using TypeName(), as shown in the following example:\nPublic Function Foo(Optional v As Variant) As Variant\n\n If IsMissing(v) Then\n Foo = \"Missing argument\"\n ElseIf TypeName(v) = \"String\" Then\n Foo = v & \" plus one\"\n Else\n Foo = v + 1\n End If\n\nEnd Function\n\nThis can be called from a worksheet as =FOO(), =FOO(number), or =FOO(\"string\").\n",
"If you can distinguish by parameter count, then something like this would work:\nPublic Function Morph(ParamArray Args())\n\n Select Case UBound(Args)\n Case -1 '' nothing supplied\n Morph = Morph_NoParams()\n Case 0\n Morph = Morph_One_Param(Args(0))\n Case 1\n Morph = Two_Param_Morph(Args(0), Args(1))\n Case Else\n Morph = CVErr(xlErrRef)\n End Select\n\nEnd Function\n\nPrivate Function Morph_NoParams()\n Morph_NoParams = \"I'm parameterless\"\nEnd Function\n\nPrivate Function Morph_One_Param(arg)\n Morph_One_Param = \"I has a parameter, it's \" & arg\nEnd Function\n\nPrivate Function Two_Param_Morph(arg0, arg1)\n Two_Param_Morph = \"I is in 2-params and they is \" & arg0 & \",\" & arg1\nEnd Function\n\nIf the only way to distinguish the function is by types, then you're effectively going to have to do what C++ and other languages with overridden functions do, which is to call by signature. I'd suggest making the call look something like this:\nPublic Function MorphBySig(ParamArray args())\n\nDim sig As String\nDim idx As Long\nDim MorphInstance As MorphClass\n\n For idx = LBound(args) To UBound(args)\n sig = sig & TypeName(args(idx))\n Next\n\n Set MorphInstance = New MorphClass\n\n MorphBySig = CallByName(MorphInstance, \"Morph_\" & sig, VbMethod, args)\n\nEnd Function\n\nand creating a class with a number of methods that match the signatures you expect. You'll probably need some error-handling though, and be warned that the types that are recognizable are limited: dates are TypeName Double, for example.\n",
"VBA is messy. I'm not sure there is an easy way to do fake overloads:\nIn the past I've either used lots of Optionals, or used varied functions. For instance \nFoo_DescriptiveName1()\n\nFoo_DescriptiveName2()\n\nI'd say go with Optional arguments that have sensible defaults unless the argument list is going to get stupid, then create separate functions to call for your cases.\n",
"You mighta also want to consider using a variant data type for your arguments list and then figure out what's what type using the TypeOf statement, and then call the appropriate functions when you figure out what's what...\n"
] |
[
60,
8,
0,
0
] |
[] |
[] |
[
"excel",
"user_defined_functions",
"vba"
] |
stackoverflow_0000064436_excel_user_defined_functions_vba.txt
|
Q:
Perl aids for regression testing
Is there a Perl module that allows me to view diffs between actual and reference output of programs (or functions)? The test fails if there are differences.
Also, in case there are differences but the output is OK (because the functionality has changed) I want to be able to commit the actual output as future reference output.
A:
Perl has excellent utilities for doing testing. The most commonly used module is probably Test::More, which provides all the infrastructure you're likely to need for writing regression tests. The prove utility provides an easy interface for running test suites and summarizing the results. The Test::Differences module (which can be used with Test::More) might be useful to you as well. It formats differences as side-by-side comparisons. As for committing the actual output as the new reference material, that will depend on how your code under test provides output and how you capture it. It should be easy if you write to files and then compare them. If that's the case you might want to use the Text::Diff module within your test suite.
A:
As mentioned, Test::Differences is one of the standard ways of accomplishing this, but I needed to mention PerlUnit: please do not use this. It's "abandonware" and does not integrate with standard Perl testing tools. Thus, for all new test modules coming out, you would have to port their functionality if you wanted to use them. (If someone has picked up the maintenance of this abandoned module, drop me a line. I need to talk to them as I maintain core testing tools I'd like to help integrate with PerlUnit).
Disclaimer: while I didn't write it, I currently maintain Test::Differences, so I might be biased.
A:
I tend to use more of the Test::Simple and Test::More functionality. I looked at PerlUnit and it seems to provide much of the functionality which is already built into the standard libraries with the Test::Simple and Test::More libraries.
A:
I question those of you who recommend the use of PerlUnit. It hasn't had a release in 3 years. If you really want xUnit-style testing, have a look at Test::Class, it does the same job, but in a more Perlish way. The fact that it's still maintained and has regular releases doesn't hurt either.
Just make sure that it makes sense for your project. Maybe good old Test::More is all you need (it usually is for me). I recommend reading the "Why you should [not] use Test::Class" sections in the docs.
A:
For testing the output of a program, there is Test::Command. It lets you easily verify the stdout and stderr (and the exit value) of programs. E.g.:
use Test::Command tests => 3;
my $echo_test = Test::Command->new( cmd => 'echo out' );
$echo_test->exit_is_num(0, 'exit normally');
$echo_test->stdout_is_eq("out\n", 'echoes out');
$echo_test->stderr_unlike( qr/something went (wrong|bad)/, 'nothing went bad' )
The module also has a functional interface, if that's more to your liking.
A:
The community standard workhorses are Test::Simple (for getting started with testing) and Test::More (for once you want more than Test::Simple can do for you). Both are built around the concept of expected versus actual output, and both will show you differences when they occur. The perldoc for these modules will get you on your way.
You might also want to check out the Perl QA wiki, and if you're really interested in perl testing, the perl-qa mailing list might be worth looking into -- though it's generally more about creation of testing systems for Perl than using those systems within the language.
Finally, using the module-starter tool (from Module::Starter) will give you a really nice "CPAN standard" layout for new work -- or for dropping existing code into -- including a readymade test harness setup.
|
Perl aids for regression testing
|
Is there a Perl module that allows me to view diffs between actual and reference output of programs (or functions)? The test fails if there are differences.
Also, in case there are differences but the output is OK (because the functionality has changed) I want to be able to commit the actual output as future reference output.
|
[
"Perl has excellent utilities for doing testing. The most commonly used module is probably Test::More, which provides all the infrastructure you're likely to need for writing regression tests. The prove utility provides an easy interface for running test suites and summarizing the results. The Test::Differences module (which can be used with Test::More) might be useful to you as well. It formats differences as side-by-side comparisons. As for committing the actual output as the new reference material, that will depend on how your code under test provides output and how you capture it. It should be easy if you write to files and then compare them. If that's the case you might want to use the Text::Diff module within your test suite.\n",
"As mentioned, Test::Differences is one of the standard ways of accomplishing this, but I needed to mention PerlUnit: please do not use this. It's \"abandonware\" and does not integrate with standard Perl testing tools. Thus, for all new test modules coming out, you would have to port their functionality if you wanted to use them. (If someone has picked up the maintenance of this abandoned module, drop me a line. I need to talk to them as I maintain core testing tools I'd like to help integrate with PerlUnit).\nDisclaimer: while Id didn't write it, I currently maintain Test::Differences, so I might be biased.\n",
"I tend to use more of the Test::Simple and Test::More functionality. I looked at PerlUnit and it seems to provide much of the functionality which is already built into the standard libraries with the Test::Simple and Test::More libraries.\n",
"I question those of you who recommend the use of PerlUnit. It hasn't had a release in 3 years. If you really want xUnit-style testing, have a look at Test::Class, it does the same job, but in a more Perlish way. The fact that it's still maintained and has regular releases doesn't hurt either.\nJust make sure that it makes sense for your project. Maybe good old Test::More is all you need (it usually is for me). I recommend reading the \"Why you should [not] use Test::Class\" sections in the docs.\n",
"For testing the output of a program, there is Test::Command. It allows to easily verify the stdout and stderr (and the exit value) of programs. E.g.:\nuse Test::Command tests => 3;\n\nmy $echo_test = Test::Command->new( cmd => 'echo out' );\n\n$echo_test->exit_is_num(0, 'exit normally');\n$echo_test->stdout_is_eq(\"out\\n\", 'echoes out');\n$echo_test->stderr_unlike( qr/something went (wrong|bad)/, 'nothing went bad' )\n\nThe module also has a functional interface too, if it's more to your liking.\n",
"The community standard workhorses are Test::Simple (for getting started with testing) and Test::More (for once you want more than Test::Simple can do for you). Both are built around the concept of expected versus actual output, and both will show you differences when they occur. The perldoc for these modules will get you on your way.\nYou might also want to check out the Perl QA wiki, and if you're really interested in perl testing, the perl-qa mailing list might be worth looking into -- though it's generally more about creation of testing systems for Perl than using those systems within the language.\nFinally, using the module-starter tool (from Module::Starter) will give you a really nice \"CPAN standard\" layout for new work -- or for dropping existing code into -- including a readymade test harness setup.\n"
] |
[
12,
6,
4,
3,
2,
2
] |
[] |
[] |
[
"perl",
"regression",
"testing"
] |
stackoverflow_0000066330_perl_regression_testing.txt
|
Q:
How do I retrieve IPIEHTMLDocument2 interface on IE Mobile
I wrote an ActiveX plugin for IE7 which implements IObjectWithSite besides some other necessary interfaces (note: no IOleClient). This interface is queried and called by IE7. During the SetSite() call I retrieve a pointer to IE7's site interface which I can use to retrieve the IHTMLDocument2 interface using the following approach:
IUnknown *site = pUnkSite; /* retrieved from IE7 during SetSite() call */
IServiceProvider *sp = NULL;
IHTMLWindow2 *win = NULL;
IHTMLDocument2 *doc = NULL;
if(site) {
site->QueryInterface(IID_IServiceProvider, (void **)&sp);
if(sp) {
sp->QueryService(IID_IHTMLWindow2, IID_IHTMLWindow2, (void **)&win);
if(win) {
win->get_document(&doc);
}
}
}
if(doc) {
/* found */
}
I tried a similar approach on PIE as well using the following code; however, even the IPIEHTMLWindow2 interface cannot be acquired, so I'm stuck:
IUnknown *site = pUnkSite; /* retrieved from PIE during SetSite() call */
IPIEHTMLWindow2 *win = NULL;
IPIEHTMLDocument1 *tmp = NULL;
IPIEHTMLDocument2 *doc = NULL;
if(site) {
site->QueryInterface(__uuidof(*win), (void **)&win);
if(win) { /* never the case */
win->get_document(&tmp);
if(tmp) {
tmp->QueryInterface(__uuidof(*doc), (void **)&doc);
}
}
}
if(doc) {
/* found */
}
Using the IServiceProvider interface doesn't work either, so I already tested this.
Any ideas?
A:
I found the following code in the Google Gears code, here. I copied the functions I think you need to here. The one you need is at the bottom (GetHtmlWindow2), but the other two are needed as well. Hopefully I didn't miss anything, but if I did the stuff you need is probably at the link.
#ifdef WINCE
// We can't get IWebBrowser2 for WinCE.
#else
HRESULT ActiveXUtils::GetWebBrowser2(IUnknown *site, IWebBrowser2 **browser2) {
CComQIPtr<IServiceProvider> service_provider = site;
if (!service_provider) { return E_FAIL; }
return service_provider->QueryService(SID_SWebBrowserApp,
IID_IWebBrowser2,
reinterpret_cast<void**>(browser2));
}
#endif
HRESULT ActiveXUtils::GetHtmlDocument2(IUnknown *site,
IHTMLDocument2 **document2) {
HRESULT hr;
#ifdef WINCE
// Follow path Window2 -> Window -> Document -> Document2
CComPtr<IPIEHTMLWindow2> window2;
hr = GetHtmlWindow2(site, &window2);
if (FAILED(hr) || !window2) { return false; }
CComQIPtr<IPIEHTMLWindow> window = window2;
CComPtr<IHTMLDocument> document;
hr = window->get_document(&document);
if (FAILED(hr) || !document) { return E_FAIL; }
return document->QueryInterface(__uuidof(*document2),
reinterpret_cast<void**>(document2));
#else
CComPtr<IWebBrowser2> web_browser2;
hr = GetWebBrowser2(site, &web_browser2);
if (FAILED(hr) || !web_browser2) { return E_FAIL; }
CComPtr<IDispatch> doc_dispatch;
hr = web_browser2->get_Document(&doc_dispatch);
if (FAILED(hr) || !doc_dispatch) { return E_FAIL; }
return doc_dispatch->QueryInterface(document2);
#endif
}
HRESULT ActiveXUtils::GetHtmlWindow2(IUnknown *site,
#ifdef WINCE
IPIEHTMLWindow2 **window2) {
// site is javascript IDispatch pointer.
return site->QueryInterface(__uuidof(*window2),
reinterpret_cast<void**>(window2));
#else
IHTMLWindow2 **window2) {
CComPtr<IHTMLDocument2> html_document2;
// To hook an event on a page's window object, follow the path
// IWebBrowser2->document->parentWindow->IHTMLWindow2
HRESULT hr = GetHtmlDocument2(site, &html_document2);
if (FAILED(hr) || !html_document2) { return E_FAIL; }
return html_document2->get_parentWindow(window2);
#endif
}
A:
Well, I was aware of the Gears code already. The mechanism Gears uses is a workaround: it performs an explicit method call into the Gears plugin from the Gears loader to set the window object, and uses that as the site interface instead of the IUnknown provided by IE Mobile in the SetSite call. Judging from the Gears code, the Google engineers are aware of the same problem I'm asking about and came up with the workaround I described.
However, I believe there must be another, more "official" way of dealing with this issue, since explicitly setting the site on an ActiveX control/plugin isn't very great. I'm going to ask the MS IE Mobile team directly now and will keep you informed once I get a solution. It might be a bug in IE Mobile, which is the most likely thing I can imagine, but who knows...
But thanks anyways for your response ;))
|
How do I retrieve IPIEHTMLDocument2 interface on IE Mobile
|
I wrote an ActiveX plugin for IE7 which implements IObjectWithSite besides some other necessary interfaces (note: no IOleClient). This interface is queried and called by IE7. During the SetSite() call I retrieve a pointer to IE7's site interface which I can use to retrieve the IHTMLDocument2 interface using the following approach:
IUnknown *site = pUnkSite; /* retrieved from IE7 during SetSite() call */
IServiceProvider *sp = NULL;
IHTMLWindow2 *win = NULL;
IHTMLDocument2 *doc = NULL;
if(site) {
site->QueryInterface(IID_IServiceProvider, (void **)&sp);
if(sp) {
sp->QueryService(IID_IHTMLWindow2, IID_IHTMLWindow2, (void **)&win);
if(win) {
win->get_document(&doc);
}
}
}
if(doc) {
/* found */
}
I tried a similar approach on PIE as well using the following code; however, even the IPIEHTMLWindow2 interface cannot be acquired, so I'm stuck:
IUnknown *site = pUnkSite; /* retrieved from PIE during SetSite() call */
IPIEHTMLWindow2 *win = NULL;
IPIEHTMLDocument1 *tmp = NULL;
IPIEHTMLDocument2 *doc = NULL;
if(site) {
site->QueryInterface(__uuidof(*win), (void **)&win);
if(win) { /* never the case */
win->get_document(&tmp);
if(tmp) {
tmp->QueryInterface(__uuidof(*doc), (void **)&doc);
}
}
}
if(doc) {
/* found */
}
Using the IServiceProvider interface doesn't work either, so I already tested this.
Any ideas?
|
[
"I found the following code in the Google Gears code, here. I copied the functions I think you need to here. The one you need is at the bottom (GetHtmlWindow2), but the other two are needed as well. Hopefully I didn't miss anything, but if I did the stuff you need is probably at the link.\n#ifdef WINCE\n// We can't get IWebBrowser2 for WinCE.\n#else\nHRESULT ActiveXUtils::GetWebBrowser2(IUnknown *site, IWebBrowser2 **browser2) {\n CComQIPtr<IServiceProvider> service_provider = site;\n if (!service_provider) { return E_FAIL; }\n\n return service_provider->QueryService(SID_SWebBrowserApp,\n IID_IWebBrowser2,\n reinterpret_cast<void**>(browser2));\n}\n#endif\n\n\nHRESULT ActiveXUtils::GetHtmlDocument2(IUnknown *site,\n IHTMLDocument2 **document2) {\n HRESULT hr;\n\n#ifdef WINCE\n // Follow path Window2 -> Window -> Document -> Document2\n CComPtr<IPIEHTMLWindow2> window2;\n hr = GetHtmlWindow2(site, &window2);\n if (FAILED(hr) || !window2) { return false; }\n CComQIPtr<IPIEHTMLWindow> window = window2;\n CComPtr<IHTMLDocument> document;\n hr = window->get_document(&document);\n if (FAILED(hr) || !document) { return E_FAIL; }\n return document->QueryInterface(__uuidof(*document2),\n reinterpret_cast<void**>(document2));\n#else\n CComPtr<IWebBrowser2> web_browser2;\n hr = GetWebBrowser2(site, &web_browser2);\n if (FAILED(hr) || !web_browser2) { return E_FAIL; }\n\n CComPtr<IDispatch> doc_dispatch;\n hr = web_browser2->get_Document(&doc_dispatch);\n if (FAILED(hr) || !doc_dispatch) { return E_FAIL; }\n\n return doc_dispatch->QueryInterface(document2);\n#endif\n}\n\n\nHRESULT ActiveXUtils::GetHtmlWindow2(IUnknown *site,\n#ifdef WINCE\n IPIEHTMLWindow2 **window2) {\n // site is javascript IDispatch pointer.\n return site->QueryInterface(__uuidof(*window2),\n reinterpret_cast<void**>(window2));\n#else\n IHTMLWindow2 **window2) {\n CComPtr<IHTMLDocument2> html_document2;\n // To hook an event on a page's window object, follow the path\n // IWebBrowser2->document->parentWindow->IHTMLWindow2\n\n HRESULT hr = GetHtmlDocument2(site, &html_document2);\n if (FAILED(hr) || !html_document2) { return E_FAIL; }\n\n return html_document2->get_parentWindow(window2);\n#endif\n}\n\n",
"Well I was aware of the gears code already. The mechanism gears uses is based on a workaround through performing an explicit method call into the gears plugin from the gears loader to set the window object and use that as site interface instead of the IUnknown provided by IE Mobile in the SetSite call. Regarding to the gears code the Google engineers are aware of the same problem I'm asking and came up with this workaround I described.\nHowever, I believe there must be another more \"official\" way of dealing with this issue since explicitely setting the site on an Active X control/plugin isn't very great. I'm going to ask the MS IE Mobile team directly now and will keep you informed once I get a solution. It might be a bug in IE Mobile which is the most likely thing I can imagine of, but who knows...\nBut thanks anyways for your response ;))\n"
] |
[
4,
2
] |
[] |
[] |
[
"c++",
"internet_explorer",
"pocketpc",
"windows_mobile"
] |
stackoverflow_0000045658_c++_internet_explorer_pocketpc_windows_mobile.txt
|
Q:
ASP.NET Custom Controls - Composites
Summary
Hi All,
OK, further into my adventures with custom controls...
In summary, here is what I have learned about the three main "classes" of custom controls. Please feel free to correct me if any of this is wrong!
UserControls - Which inherit from UserControl and are contained within an ASCX file. These are pretty limited in what they can do, but are a quick and light way to get some UI commonality with designer support.
Custom Composite Controls - These are controls that inherit from WebControl where you add pre-existing controls to the control within the CreateChildControls method. This provides great flexibility, but lack of designer support without additional coding. They are highly portable though since they can be compiled into a DLL.
Custom Rendered Controls - Similar to Custom Composite Controls, these are added to a Web Control Library project. The rendering of the control is completely controlled by the programmer by overriding the Render method.
My Thoughts..
OK, so while playing with custom composites, I found the following:
You have little/no control over the HTML output, making it difficult to "debug".
The CreateChildControls (and subsequent methods) can get real busy with Controls.Add(myControl) everywhere.
I found rendering tables (be it for layout or content) to be considerably awkward.
The Question(s)..
So, I admit, I am new to this so I could be way off-base with some of my points noted above..
Do you use Composites?
Do you have any neat tricks to control the HTML output?
Do you just say "to hell with it" and go ahead and create a custom rendered control?
It's something I am keen to get really firm in my mind since I know how much good control development can cut overall development time.
I look forward to your answers ^_^
A:
I say go ahead with the custom rendered control. I find that in most cases the composite can be easier done and used in a UserControl, but anything beyond that and you'd need to have a finer degree of control (pun unintended) to merit your own rendering strategy.
There may be controls that are simple enough to merit a composite (a textbox combined with a javascript/dhtml based datepicker, for example), but beyond that one example, it looks like custom rendered controls are the way to go.
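For reference, a bare-bones custom rendered control might look something like this; a minimal sketch with invented names, rather than anyone's production code:
using System.Web.UI;
using System.Web.UI.WebControls;

// A minimal custom rendered control: all inner markup is emitted by hand in
// RenderContents, so you keep full control over the HTML that goes out.
public class GreetingLabel : WebControl
{
    public string Who { get; set; }

    // Render the outer element as a <span> (the WebControl base emits the tag).
    protected override HtmlTextWriterTag TagKey
    {
        get { return HtmlTextWriterTag.Span; }
    }

    protected override void RenderContents(HtmlTextWriter writer)
    {
        writer.WriteEncodedText("Hello, " + (Who ?? "world"));
    }
}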
A:
Here's another extension method that I use for custom rendering:
public static void WriteControls
(this HtmlTextWriter o, string format, params object[] args)
{
const string delimiter = "<2E01A260-BD39-47d0-8C5E-0DF814FDF9DC>";
var controls = new Dictionary<string,Control>();
for(int i =0; i < args.Length; ++i)
{
var c = args[i] as Control;
if (c==null) continue;
var guid = Guid.NewGuid().ToString();
controls[guid] = c;
args[i] = delimiter+guid+delimiter;
}
var _strings = string.Format(format, args)
.Split(new string[]{delimiter},
StringSplitOptions.None);
foreach(var s in _strings)
{
if (controls.ContainsKey(s))
controls[s].RenderControl(o);
else
o.Write(s);
}
}
Then, to render a custom composite in the RenderContents() method I write this:
protected override void RenderContents(HtmlTextWriter o)
{
o.WriteControls
(@"<table>
<tr>
<td>{0}</td>
<td>{1}</td>
</tr>
</table>"
,Text
,control1);
}
A:
Rob, you are right. The approach I mentioned is kind of a hybrid. The advantage of having ascx files around is that on every project I've seen, designers feel most comfortable editing actual markup, and with the ascx you and a designer can work separately. If you don't plan on actual CSS/markup/design changes on the controls themselves later, you can go with a custom rendered control. As I said, my approach is only relevant for more complicated scenarios (and these are probably where you need a designer :))
A:
I often use composite controls. Instead of overriding Render or RenderContents, just assign each Control a CssClass and use stylesheets. For multiple Controls.Add, I use an extension method:
//Controls.Add(c1, c2, c3)
static void Add(this ControlCollection coll, params Control[] controls)
{ foreach(Control control in controls) coll.Add(control);
}
For quick and dirty rendering, I use something like this:
writer.Write(@"<table>
    <tr><td>{0}</td></tr>
    <tr>
      <td>", Text);
control1.RenderControl(writer);
writer.Write("</td></tr></table>");
For initializing control properties, I use property initializer syntax:
childControl = new Button { ID="Foo"
                          , CssClass="class1"
                          , CausesValidation=true
                          };
A:
Using custom composite controls has a point in a situation where you have a large web application and want to reuse large chunks in many places. Then you would only add child controls of the ones you are developing instead of repeating yourself.
On a large project I've worked recently what we did is the following:
Every composite control has a container, used as a wrapper for everything inside the control.
Every composite control has a template. An ascx file (without the <%Control%> directive) which only contains the markup for the template.
The container (being a control in itself) is initialized from the template.
The container exposes properties for all other controls in the template.
You only use this.Controls.Add([the_container]) in your composite control.
In fact you need a base class that takes care of initializing a container with the specified template and throws exceptions when a control is not found in the template (a rough sketch of such a base class follows). Of course this is likely to be overkill in a small application. If you don't have reused code and markup and only want to write simple controls, you're better off using User Controls.
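A rough sketch of such a base class, assuming the template is an .ascx reachable at a known virtual path (the names TemplatedControlBase, TemplatePath and FindInTemplate are hypothetical, not from the original project):
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Minimal sketch: a composite control that builds itself from an .ascx template.
public abstract class TemplatedControlBase : CompositeControl
{
    // Virtual path of the template, e.g. "~/Templates/MyControl.ascx" (assumed).
    protected abstract string TemplatePath { get; }

    // The single container added to this control's Controls collection.
    protected Control Container { get; private set; }

    protected override void CreateChildControls()
    {
        Controls.Clear();
        // LoadControl instantiates the template markup at runtime.
        Container = Page.LoadControl(TemplatePath);
        Controls.Add(Container);
    }

    // Throws instead of silently returning null when the template lacks a control.
    protected T FindInTemplate<T>(string id) where T : Control
    {
        EnsureChildControls();
        T found = Container.FindControl(id) as T;
        if (found == null)
            throw new InvalidOperationException(
                "Control '" + id + "' was not found in template " + TemplatePath);
        return found;
    }
}
A concrete composite would then expose properties that delegate to FindInTemplate<TextBox>("SomeId") and so on.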
A:
You might be able to make use of this technique to make design-time easier:
http://aspadvice.com/blogs/ssmith/archive/2007/10/19/Render-User-Control-as-String-Template.aspx
Basically you create an instance of a user control at runtime using the LoadControl method, then hand it a statebag of some kind, then attach it to the control tree. So your composite control would actually function like more of a controller, and the .ascx file would be like a view.
This would save you the trouble of having to instantiate the entire control tree and style the control in C#!
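For what it's worth, a minimal sketch of that idea from inside the composite control (the ~/Views/MyView.ascx path, the MyView code-behind type and its Bind method are all hypothetical, not from the linked article):
protected override void CreateChildControls()
{
    // Instantiate the .ascx "view" at runtime.
    MyView view = (MyView)Page.LoadControl("~/Views/MyView.ascx");

    // Hand it a state bag of some kind - here just a dictionary.
    view.Bind(new Dictionary<string, object>
    {
        { "Title", "Hello" },
        { "ShowFooter", true }
    });

    // Attach it to the control tree so it renders with the rest of the page.
    Controls.Add(view);
}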
|
ASP.NET Custom Controls - Composites
|
Summary
Hi All,
OK, further into my adventures with custom controls...
In summary, here is that I have learned of three main "classes" of custom controls. Please feel free to correct me if any of this is wrong!
UserControls - Which inherit from UserControl and are contained within an ASCX file. These are pretty limited in what they can do, but are a quick and light way to get some UI commonality with designer support.
Custom Composite Controls - These are controls that inherit from WebControl where you add pre-existing controls to the control within the CreateChildControls method. This provides great flexibility, but lack of designer support without additional coding. They are highly portable though since they can be compiled into a DLL.
Custom Rendered Controls - Similar to Custom Composite Controls, these are added to a Web Control Library project. The rendering of the control is completely controlled by the programmer by overriding the Render method.
My Thoughts..
OK, so while playing with custom composites, I found the following:
You have little/no control over the HTML output making it difficult to "debug".
The CreateChildControls (and subsequent methods) can get real busy with Controls.Add(myControl) everywhere.
I found rendering tables (be it for layout or content) to be considerably awkward.
The Question(s)..
So, I admit, I am new to this so I could be way off-base with some of my points noted above..
Do you use Composites?
Do you have any neat tricks to control the HTML output?
Do you just say "to hell with it" and go ahead and create a custom rendered control?
Its something I am keen to get really firm in my mind since I know how much good control development can cut overall development time.
I look forward to your answers ^_^
|
[
"I say go ahead with the custom rendered control. I find that in most cases the composite can be easier done and used in a UserControl, but anything beyond that and you'd need to have a finer degree of control (pun unintended) to merit your own rendering strategy.\nThere maybe controls that are simple enough to merit a composite (e.g., a textbox combined with a javascript/dhtml based datepicker, for example) but beyond that one example, it looks like custom rendered controls are the way to go.\n",
"Here's another extension method that I use for custom rendering:\n public static void WriteControls\n (this HtmlTextWriter o, string format, params object[] args)\n { \n const string delimiter = \"<2E01A260-BD39-47d0-8C5E-0DF814FDF9DC>\";\n var controls = new Dictionary<string,Control>();\n\n for(int i =0; i < args.Length; ++i)\n { \n var c = args[i] as Control; \n if (c==null) continue;\n var guid = Guid.NewGuid().ToString();\n controls[guid] = c;\n args[i] = delimiter+guid+delimiter;\n }\n\n var _strings = string.Format(format, args)\n .Split(new string[]{delimiter},\n StringSplitOptions.None);\n foreach(var s in _strings)\n { \n if (controls.ContainsKey(s)) \n controls[s].RenderControl(o);\n else \n o.Write(s);\n }\n}\n\nThen, to render a custom composite in the RenderContents() method I write this:\nprotected override void RenderContents(HtmlTextWriter o)\n{ \n o.WriteControls\n (@\"<table>\n <tr>\n <td>{0}</td>\n <td>{1}</td>\n </tr>\n </table>\"\n ,Text\n ,control1);\n }\n\n",
"Rob, you are right. The approach I mentioned is kind of a hybrid. The advantage of having ascx files around is that on every project I've seen, designers would feel most comfortable with editing actual markup and with the ascx you and a designer can work separately. If you don't plan on actual CSS/markup/design changes on the controls themselves later, you can go with a custom rendered control. As I said, my approach is only relevant for more complicated scenarios (and these are probably where you need a designer :))\n",
"I often use composite controls. Instead of overriding Render or RenderContents, just assign each Control a CssClass and use stylesheets. For multiple Controls.Add, I use an extension method:\n//Controls.Add(c1, c2, c3)\nstatic void Add(this ControlCollection coll, params Control[] controls)\n { foreach(Control control in controls) coll.Add(control);\n }\n\nFor quick and dirty rendering, I use something like this:\nwriter.Render(@\"<table>\n <tr><td>{0}</td></tr>\n <tr>\n <td>\", Text);\ncontrol1.RenderControl(writer);\nwriter.Render(\"</td></tr></table>\");\n\nFor initializing control properties, I use property initializer syntax:\nchildControl = new Control { ID=\"Foo\"\n , CssClass=\"class1\"\n , CausesValidation=true;\n };\n\n",
"Using custom composite controls has a point in a situation where you have a large web application and want to reuse large chunks in many places. Then you would only add child controls of the ones you are developing instead of repeating yourself.\nOn a large project I've worked recently what we did is the following:\n\nEvery composite control has a container. Used as a wrapped for everything inside the control.\nEvery composite control has a template. An ascx file (without the <%Control%> directive) which only contains the markup for the template.\nThe container (being a control in itself) is initialized from the template.\nThe container exposes properties for all other controls in the template.\nYou only use this.Controls.Add([the_container]) in your composite control.\n\nIn fact you need a base class that would take care of initializing a container with the specified template and also throw exceptions when a control is not found in the template. Of course this is likely to be an overkill in a small application. If you don't have reused code and markup and only want to write simple controls, you're better off using User Controls.\n",
"You might be able to make use of this technique to make design-time easier:\nhttp://aspadvice.com/blogs/ssmith/archive/2007/10/19/Render-User-Control-as-String-Template.aspx\nBasically you create an instance of a user control at runtime using the LoadControl method, then hand it a statebag of some kind, then attach it to the control tree. So your composite control would actually function like more of a controller, and the .ascx file would be like a view.\nThis would save you the trouble of having to instantiate the entire control tree and style the control in C#!\n"
] |
[
5,
3,
2,
1,
1,
0
] |
[] |
[] |
[
".net",
"asp.net",
"c#",
"controls",
"user_controls"
] |
stackoverflow_0000017532_.net_asp.net_c#_controls_user_controls.txt
|
Q:
“rusage” statistics
I'm trying to use “rusage” statistics in my program to get data similar to that of the time tool. However, I'm pretty sure that I'm doing something wrong. The values seem about right but can be a bit weird at times. I didn't find good resources online. Does somebody know how to do it better?
Sorry for the long code.
class StopWatch {
public:
void start() {
getrusage(RUSAGE_SELF, &m_begin);
gettimeofday(&m_tmbegin, 0);
}
void stop() {
getrusage(RUSAGE_SELF, &m_end);
gettimeofday(&m_tmend, 0);
timeval_sub(m_end.ru_utime, m_begin.ru_utime, m_diff.ru_utime);
timeval_sub(m_end.ru_stime, m_begin.ru_stime, m_diff.ru_stime);
timeval_sub(m_tmend, m_tmbegin, m_tmdiff);
}
void printf(std::ostream& out) const {
using namespace std;
timeval const& utime = m_diff.ru_utime;
timeval const& stime = m_diff.ru_stime;
format_time(out, utime);
out << "u ";
format_time(out, stime);
out << "s ";
format_time(out, m_tmdiff);
}
private:
rusage m_begin;
rusage m_end;
rusage m_diff;
timeval m_tmbegin;
timeval m_tmend;
timeval m_tmdiff;
static void timeval_add(timeval const& a, timeval const& b, timeval& ret) {
ret.tv_usec = a.tv_usec + b.tv_usec;
ret.tv_sec = a.tv_sec + b.tv_sec;
if (ret.tv_usec > 999999) {
ret.tv_usec -= 1000000;
++ret.tv_sec;
}
}
static void timeval_sub(timeval const& a, timeval const& b, timeval& ret) {
ret.tv_usec = a.tv_usec - b.tv_usec;
ret.tv_sec = a.tv_sec - b.tv_sec;
if (a.tv_usec < b.tv_usec) {
ret.tv_usec += 1000000;
--ret.tv_sec;
}
}
static void format_time(std::ostream& out, timeval const& tv) {
using namespace std;
long usec = tv.tv_usec;
while (usec >= 1000)
usec /= 10;
out << tv.tv_sec << '.' << setw(3) << setfill('0') << usec;
}
}; // class StopWatch
A:
What is the purpose of:
while (usec >= 1000)
usec /= 10;
I gather that you want the most significant three digits of the usec; in that case, the most straightforward way I can think of is to divide usec by 1000, and be done with that.
Test cases:
999999 ⇒ 999
99999 ⇒ 999 (should be 099)
9999 ⇒ 999 (should be 009)
999 ⇒ 999 (should be 000)
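For reference, a minimal sketch of that divide-by-1000 version in the style of the original format_time (needs <ostream> and <iomanip>; the name format_time_ms is just for illustration):
static void format_time_ms(std::ostream& out, timeval const& tv) {
    long msec = tv.tv_usec / 1000;   // 0..999, no loop needed
    out << tv.tv_sec << '.'
        << std::setw(3) << std::setfill('0') << msec;
}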
A:
I think there's probably a bug somewhere in your composition of sec and usec. I can't really say what exactly without knowing the kinds of errors you're seeing. A rough guess would be that usec can never be > 999999, so you're relying on overflow to know when to adjust sec. It could also just be a problem with your duration output format.
Anyway. Why not store the utime and stime components as float seconds rather than trying to build your own rusage on output? I'm pretty sure the following will give you proper seconds.
static int timeval_diff_us(timeval const& end, timeval const& start) {
    // Returns the difference in microseconds.
    int micro_seconds = (end.tv_sec - start.tv_sec) * 1000000
                      + end.tv_usec - start.tv_usec;

    return micro_seconds;
}

static float timeval_diff(timeval const& end, timeval const& start) {
    return (timeval_diff_us(end, start)/1000000.0f);
}
If you want to decompose this back into an rusage, you can always int-div and modulo.
|
“rusage” statistics
|
I'm trying to use “rusage” statistics in my program to get data similar to that of the time tool. However, I'm pretty sure that I'm doing something wrong. The values seem about right but can be a bit weird at times. I didn't find good resources online. Does somebody know how to do it better?
Sorry for the long code.
class StopWatch {
public:
void start() {
getrusage(RUSAGE_SELF, &m_begin);
gettimeofday(&m_tmbegin, 0);
}
void stop() {
getrusage(RUSAGE_SELF, &m_end);
gettimeofday(&m_tmend, 0);
timeval_sub(m_end.ru_utime, m_begin.ru_utime, m_diff.ru_utime);
timeval_sub(m_end.ru_stime, m_begin.ru_stime, m_diff.ru_stime);
timeval_sub(m_tmend, m_tmbegin, m_tmdiff);
}
void printf(std::ostream& out) const {
using namespace std;
timeval const& utime = m_diff.ru_utime;
timeval const& stime = m_diff.ru_stime;
format_time(out, utime);
out << "u ";
format_time(out, stime);
out << "s ";
format_time(out, m_tmdiff);
}
private:
rusage m_begin;
rusage m_end;
rusage m_diff;
timeval m_tmbegin;
timeval m_tmend;
timeval m_tmdiff;
static void timeval_add(timeval const& a, timeval const& b, timeval& ret) {
ret.tv_usec = a.tv_usec + b.tv_usec;
ret.tv_sec = a.tv_sec + b.tv_sec;
if (ret.tv_usec > 999999) {
ret.tv_usec -= 1000000;
++ret.tv_sec;
}
}
static void timeval_sub(timeval const& a, timeval const& b, timeval& ret) {
ret.tv_usec = a.tv_usec - b.tv_usec;
ret.tv_sec = a.tv_sec - b.tv_sec;
if (a.tv_usec < b.tv_usec) {
ret.tv_usec += 1000000;
--ret.tv_sec;
}
}
static void format_time(std::ostream& out, timeval const& tv) {
using namespace std;
long usec = tv.tv_usec;
while (usec >= 1000)
usec /= 10;
out << tv.tv_sec << '.' << setw(3) << setfill('0') << usec;
}
}; // class StopWatch
|
[
"What is the purpose of:\nwhile (usec >= 1000)\n usec /= 10;\n\nI gather that you want the most significant three digits of the usec; in that case, the most straightforward way I can think of is to divide usec by 1000, and be done with that.\nTest cases:\n\n999999 ⇒ 999\n99999 ⇒ 999 (should be 099)\n9999 ⇒ 999 (should be 009)\n999 ⇒ 999 (should be 000)\n\n",
"I think there's probably a bug somewhere in your composition of sec and usec. I can't really say what exactly without knowing the kinds of errors you're seeing. A rough guess would be that usec can never be > 999999, so you're relying on overflow to know when to adjust sec. It could also just be a problem with your duration output format.\nAnyway. Why not store the utime and stime components as float seconds rather than trying to build your own rusage on output? I'm pretty sure the following will give you proper seconds. \nstatic int timeval_diff_ms(timeval const& end, timeval const& start) {\n int micro_seconds = (end.tv_sec - start.tv_sec) * 1000000 \n + end.tv_usec - start.tv_usec;\n\n return micro_seconds;\n}\n\nstatic float timeval_diff(timeval const& end, timeval const& start) {\n return (timeval_diff_ms(end, start)/1000000.0f);\n}\n\nIf you want to decompose this back into an rusage, you can always int-div and modulo.\n"
] |
[
3,
2
] |
[] |
[] |
[
"c++",
"profiling",
"time",
"unix"
] |
stackoverflow_0000024207_c++_profiling_time_unix.txt
|
Q:
Alternative to VSS for a one man show (army of one?)
I've been programming for 10+ years now for the same employer, and the only source code control we've ever used is VSS. (Sorry - that's what they had when I started.) There have only ever been a few of us; two right now, and we usually work alone, so VSS has worked OK for us. So, I have two questions: 1) Should we switch to something else like Subversion, Git, TFS, etc. - what exactly, and why (please)? 2) Am I beyond all hope and destined to eternal damnation because VSS has corrupted me (as Jeff says)?
Wow - thanks for all the great responses!
It sounds like I should clarify a few things. We are an MS shop (Gold partner) and we mostly do VB, ASP.NET, SQL Server, SharePoint & BizTalk work. I have a CS degree, so I've done x86 assembly, C, and C++ on DEC Unix and Slackware Linux in a "time out of mind" ...
My concern with VSS is that I'm now working over a VPN a lot more, VSS's performance is terrible, and I'm afraid that our 10+ y/o version 5 VSS database is going to get hosed...
There's the LAN service that's supposed to speed things up, but I've never used it and I'm not sure it helps with corruption - has anyone used the VSS LAN service? (new with VSS 2005)
A:
I'd probably go with Subversion, if I were you. I'm a total Git fanatic at this point, but Subversion certainly has some advantages:
simplicity
abundance of interoperable tools
active and supportive community
portable
Has really nice Windows shell integration
integrates with visual studio (I think - but surely through a third party)
Git has many, many other advantages, but the above tend to be the ones people care about when asking general questions like the above.
Edit: the company I now work for is using VisualSVN server, which is free. It makes setting up a Subversion repository on a Windows server stupid simple, and on the client we're using TortoiseSVN (for shell integration) and AnkhSVN for Visual Studio support. It's quite good, and should be fairly easy for even VSS users to pick up.
Latter-day Edit: So....nearly eight years later, I would never recommend Subversion to anyone for any reason. I don't really recant, per se, because I think my advice was valid at the time. However, in 2016, Subversion retains almost none of the advantages it used to have over Git. The tooling for Git is superior to (and much more diverse) what it once was, and in particular, there's GitHub and other good Git hosting providers (BitBucket, Beanstalk, Visual Studio Online, just off the top of my head). Visual Studio now has Git support out-of-the-box, and it's actually pretty good. There are even PowerShell modules to give a more native Windows experience to denizens of the console. Git is even easier to set up and use than Subversion and doesn't require a server component. Git has become as ubiquitous as any single tool can be, and you really would only be cheating yourself to not use it (unless you just really want to use something not-Git). Don't misunderstand - this isn't me hating on Subversion, but rather me recognizing that it's a tool from another time, rather like a straight razor for shaving.
A:
Looks like SubVersion is the winner here. I'd do yourself a favor and use VisualSVN Server. It's free and will save you a bunch of installation headaches.
A:
If you're used to the way VSS works, check out (no pun intended) SourceGear's Vault. It's an excellent way to migrate away from VSS, as it comes with IDE integration and supports check out / check in, but when you're ready and feel comfortable you can also move to the edit-update-commit style of programming found in SVN.
It's free for single developers, runs on IIS and is built on .net so it should be a fairly familiar stack for you to switch to.
A:
Whatever you do, don't change for the sake of changing.
If it's working for you and you're not having problems with it, I don't see any reason to switch.
A:
For what it's worth, Perforce is a potential option if you truly stick to 1 or 2 users. The current Perforce docs say you can have 2 users and 5 clients without having to start purchasing licenses.
You might have reasons to switch to Perforce depending on your workflow and whether you need branching the way Perforce does it. Not being overly familiar with some of the other products mentioned here, I can't tell you how Perforce compares in the feature department for things like branching, etc.
It is speedy, and it's been rock solid for us (300+ developers on a 10+ year old codebase). We store several T of info and it's been quite responsive. With a small number of users, I doubt that you'd experience many performance troubles assuming you had good hardware for your server.
Having used VSS before, I believe that you can get so many benefits out of a better SCM system that switching should be considered regardless of whether you have corruption or not. Branching alone might be worth it for you. A true client/server model, better interfaces (programmatically and command line) are a couple of other things that could really help just improve your workflow and help somewhat with productivity.
In summary, my view of Perforce is:
It's fast and quite reliable
Plenty of cross platform client tools (windows, unix, mac, etc)
it's free for 2 users and 5 clients
Integrates into developer studio (and other tools)
Has a powerful branching system (that might or might not be right for you).
Has several scriptable interfaces (python, perl, ruby, C++)
Certainly YMMV -- I only offer this alternative up as something that might be worthwhile looking into.
A:
I've recently started using Mercurial for some of my work. It's a distributed system like Git but seems easier to use and seems far better supported on Windows, the latter of which was crucial for me.
With distributed source code control every user has a complete local copy of the repository. If you're the only person working on a project, as you say you often are, this can simplify things a lot since you just create your own repository and do all your commits etc. locally. If you want to bring on other developers later you can just push the full contents of your repository - current versions and all history - to another system, either on a shared server or directly on to another users' workstation.
If you're working only with a local repository, remember you'll also need a backup solution, as there isn't a copy of all your code on a shared server.
I think that Mercurial has lots of other advantages over Subversion, but it does have a big downside which has already been mentioned as a plus point of Subversion: there are lots of third-party tools and integrations for Subversion. As Mercurial hasn't been around nearly as long, the choice is much smaller. On Windows it seems that you either have to use the command line (my choice) or the TortoiseHg Windows Explorer integration.
A:
VSS is horrible. I may be channelling Spolsky (not sure if he's said this), but using VSS is actually worse than not using source control at all. Despite its name, it isn't safe. It creates the illusion of safety without providing it.
Without VSS, you'd probably be making regular backups of your code. With VSS, you'll think, "Mehh, it's already under source control. Why bother backing up?" Great until it corrupts your entire codebase and you lose everything. (This, incidentally, happened at a company I worked at.)
Get rid of VSS as soon as you can and switch to a real source control solution.
A:
Don't worry about VSS corrupting you, worry about VSS corrupting your data. It does not have a good track record in that department.
Back up frequently if you do not switch to a different version control system. Backups should be happening daily even with other SCMs, but it's doubly important with VSS.
A:
I like using Subversion for my personal projects. I could go down the list of features and pretend like it brings a lot to the table that other source control systems don't, but there are tons of good ones out there and the right choice is really a matter of style. If you check in after each small change (i.e. one checkin per function change), then many people can work on the same source file with very low risk of merge conflicts in practically anything but VSS (I haven't used VSS in years but from what I remember only one person at a time can work on a file.) If this isn't ever going to happen to you, I feel like the best course of action is to use what you know. VSS is better than no source control at all, but it feels restrictive to me these days.
I don't think you're beyond hope because you're asking if it would be better to switch; you're beyond hope when the answer is obvious and you ignore the evidence.
Even if you don't change source control systems, you ought to pick one like SVN or git and spend a few weeks reading about it and making a small project using it; it always helps to sharpen the saw.
A:
I don't agree with the people that say that if you don't have problems you'd better not switch.
I think that SCM is one of the disciplines a good developer should know well, and frankly, even if you master VSS you are only experiencing a small fraction of what a good SCM tool and SCM strategy can do for you and for your team.
Obviously evaluate and test the alternatives first in a non-production environment.
A:
At work we use subversion with TortoiseSVN - works very nicely but it is philosophically different to VSS (not really a problem if there's just you but worth being aware of). I really like the fact that the whole repository has a revision number.
Given a free choice I'd've probably gone with vault but at the time I had zero budget.
I'm looking at things for personal use. There are reasons to use subversion and reasons to use something completely different. The alternatives I'm considering are Vault (as before, free for single use) and Bazaar. GIT I've had to dismiss as I am, unashamedly, a Windows person and right now GIT just isn't.
The distributed nature of GIT and the option of private/temporary checkins (assuming I've understood what I've read) is attractive - hence my looking at Bazaar.
Update: I did some more digging and playing, and I actually went for Mercurial for personal use; the integrated install with TortoiseHg makes things very simple and it seems to be well regarded. I'm still trying to work out how to force an automagic mirror of commits to a server, and there appear to be some minor limitations to the ignore function, but it's doing the job nicely so far...
Murph
A:
I'd say stick with what works for you. Unless you are having issues with VSS, why switch? Subversion is swell, though a little sticky to begin using it. TFS is far better than VSS, though it is fairly expensive for such a small team. I have not used git so I can't really speak to it.
A:
I used VSS for years until switching to SVN about two years ago. My biggest complaints about VSS were the poor network performance (that problem may be solved now) and the pessimistic locking of files. SVN solved both of those, is easy to set up (I use the CollabNet server and TortoiseSVN client, although there are two good Visual Studio plugins: VisualSVN - commercial, and AnkhSVN - open source), easy to use and administer, and well documented.
It's tempting to say "if it's not broken then don't fix it", but you would get to learn a more modern source control tool and, perhaps more importantly, new ways of using source control (e.g. more frequent branching and merging) that the new tool would support.
A:
If you only have 2 people, and you mostly work independently, git is going to give you a lot more flexibility and power, and be far and away the fastest to work with.
It is however a pain in the backside to use. Using VSS you're obviously programming for windows - if you're doing Win32 API stuff in C then git will be a learning curve but will be quite interesting.
If the depths of your knowledge however only extend to ASP and Visual Basic, just use subversion. Walk before you can run.
** I'm not trying to say if you only know VB you're dumb or anything like that, but that git can be very finicky and picky to use (if you've used the WinAPI in C you know all about picky and finicky), and you may want a more gradual introduction to SCM than git provides
A:
If you are a one man show and strictly a Microsoft shop, then SourceGear Vault is definitely a prime candidate for switching to.
Features:
Free for Single User, great for you
It uses SQL Server for its backend, so data reliability is huge
It has atomic check-ins: all files checked in at the same time are arranged in a group called a changeset.
VisualStudio integration.
Has a tool for importing from SourceSafe, therefore you can keep your history
The client communicates with the server over HTTP, so accessing the source remotely from outside the office can be set up very easily and performs well, because only the deltas of the changes being submitted and received are transferred. You can use SSL to secure the connection.
I would definitely consider this as an option.
A:
If you want a full life cycle in one package then you probably want to look at Visual Studio Team System. It does require a server, but you can get an "Action Pack" from MS that includes all the licenses that you need for "Team Foundation Server Workgroup Edition" from the Partner centre.
With this you will get Bug, Risk and Issue tracking as well as many other features :)
Source Control
Work Item Tracking (Requirements, Bugs, Issues, Risks and Tasks)
Reporting on your project data (Work Item Tracking, Build, Checkins and more in one cube)
Code Analysis
Unit Testing
Load Testing
Performance Analysis
Automated Build
|
Alternative to VSS for a one man show (army of one?)
|
I've been programming for 10+ years now for the same employer and only source code control we've ever used is VSS. (Sorry - That's what they had when I started). There's only ever been a few of us; two right now and we usually work alone, so VSS has worked ok for us. So, I have two questions: 1) Should we switch to something else like subversion, git, TFS, etc what exactly and why (please)? 2) Am I beyond all hope and destined to eternal damnation because VSS has corrupted me (as Jeff says) ?
Wow - thanks for all the great responses!
It sounds like I should clearify a few things. We are a MS shop (Gold parntner) and we mostly do VB, ASP.NET, SQL Server, sharepoint & Biztalk work. I have CS degree so I've done x86 assembly C, C++ on DEC Unix and Slackware Linux in a "time out of mind" ...
My concern with VSS is that now I'm working over a VPN a lot more and VSS's performance sux and I'm afraid that our 10+ y/o version 5 VSS database is going to get hoosed...
There's the LAN service that's supposed to speed things up, but Ive never used it and I'm not sure it helps with corruption - has anyone used the VSS LAN service? (new with VSS 2005)
|
[
"I'd probably go with Subversion, if I were you. I'm a total Git fanatic at this point, but Subversion certainly has some advantages: \n\nsimplicity\nabundance of interoperable tools\nactive and supportive community\nportable\nHas really nice Windows shell integration\nintegrates with visual studio (I think - but surely through a third party)\n\nGit has many, many other advantages, but the above tend to be the ones people care about when asking general questions like the above.\nEdit: the company I now work for is using VisualSVN server, which is free. It makes setting up a Subversion repository on a Windows server stupid simple, and on the client we're using TortoiseSVN (for shell integration) and AnkhSVN for Visual Studio support. It's quite good, and should be fairly easy for even VSS users to pick up.\nLatter-day Edit: So....nearly eight years later, I would never recommend Subversion to anyone for any reason. I don't really recant, per se, because I think my advice was valid at the time. However, in 2016, Subversion retains almost none of the advantages it used to have over Git. The tooling for Git is superior to (and much more diverse) what it once was, and in particular, there's GitHub and other good Git hosting providers (BitBucket, Beanstalk, Visual Studio Online, just off the top of my head). Visual Studio now has Git support out-of-the-box, and it's actually pretty good. There are even PowerShell modules to give a more native Windows experience to denizens of the console. Git is even easier to set up and use than Subversion and doesn't require a server component. Git has become as ubiquitous as any single tool can be, and you really would only be cheating yourself to not use it (unless you just really want to use something not-Git). Don't misunderstand - this isn't me hating on Subversion, but rather me recognizing that it's a tool from another time, rather like a straight razor for shaving. \n",
"Looks like SubVersion is the winner here. I'd do yourself a favor and use VisualSVN Server. It's free and will save you a bunch of installation headaches.\n",
"If you're used to the way VSS works, check out (no pun intended) Sourcegear's vault. It's an excellent way to migrate away from VSS as it comes with IDE integration and supports check out / check in, but when you're ready and feel comfortable you can also move to the edit update commit style of programming found in SVN.\nIt's free for single developers, runs on IIS and is built on .net so it should be a fairly familiar stack for you to switch to.\n",
"Whatever you do, don't change for the sake of changing.\nIf it's working for you and you're not having problems with it, I don't see any reason to switch.\n",
"For what it's worth, Perforce is a potential option if you truly stick to 1 or 2 users. Current perforce docs says you have have 2 users and 5 clients without having to start purchasing licenses.\nYou might have reasons to switch to perforce depending on your workflow and if you have need of branching the way perforce does it. Not being overly familar with some the other products mentioned here, I can't tell you how perforce compares in the feature department for things like branching, etc.\nIt is speedy, and it's been rock solid for us (300+ developers on a 10+ year old codebase). We store several T of info and it's been quite responsive. With a small number of users, I doubt that you'd experience many performance troubles assuming you had good hardware for your server.\nHaving used VSS before, I believe that you can get so many benefits out of a better SCM system that switching should be considered regardless of whether you have corruption or not. Branching alone might be worth it for you. A true client/server model, better interfaces (programmatically and command line) are a couple of other things that could really help just improve your workflow and help somewhat with productivity.\nIn summary, my view of Perforce is:\n\nIt's fast and quite reliable\nPlenty of cross platform client tools (windows, unix, mac, etc)\nit's free for 2 users and 5 clients\nIntegrates into developer studio (and other tools)\nHas a powerful branching system (that might or might not be right for you).\nHas several scriptable interfaces (python, perl, ruby, C++)\n\nCertainly YMMV -- I only offer this alternative up as something that might be worthwhile looking into.\n",
"I've recently started using Mercurial for some of my work. It's a distributed system like Git but seems easier to use and seems far better supported on Windows, the latter of which was crucial for me.\nWith distributed source code control every user has a complete local copy of the repository. If you're the only person working on a project, as you say you often are, this can simplify things a lot since you just create your own repository and do all your commits etc. locally. If you want to bring on other developers later you can just push the full contents of your repository - current versions and all history - to another system, either on a shared server or directly on to another users' workstation.\nIf you're working only with a local repository remember you'll need a also backup solution as there isn't a copy of all your code on a shared server. \nI think that Mercurial has lots of other advantages over Subversion, but it does have a big downside which has already been mentioned as a plus point of Subversion: there a lots of third party tools and integrations for Subversion. As Mercurial hasn't been around nearly as ong the choice is much less. On Windows it seems that you either have to use the command line (my choice) or the TortoiseHg Windows Explorer integration.\n",
"VSS is horrible. I may be channelling Spolsky (not sure if he's said this), but using VSS is actually worse than not using source control at all. Despite its name, it isn't safe. It creates the illusion of safety without providing it.\nWithout VSS, you'd probably be making regular backups of your code. With VSS, you'll think, \"Mehh, it's already under source control. Why bother backing up?\" Great until it corrupts your entire codebase and you lose everything. (This, incidentally, happened at a company I worked at.)\nGet rid of VSS as soon as you can and switch to a real source control solution.\n",
"Don't worry about VSS corrupting you, worry about VSS corrupting your data. It does not have a good track record in that department.\nBack up frequently if you do not switch to a different version control system. Backups should be happening daily even with other SCMs, but it's doubly important with VSS.\n",
"I like using Subversion for my personal projects. I could go down the list of features and pretend like it brings a lot to the table that other source control systems don't, but there's tons of good ones out there and the right choices is really a matter of style. If you check in after each small change (i.e. one checkin per function change), then many people can work on the same source file with very low risk of merge conflicts in practically anything but VSS (I haven't used VSS in years but from what I remember only one person at a time can work on a file.) If this isn't ever going to happen to you, I feel like the best course of action is to use what you know. VSS is better than no source control at all, but it feels restrictive to me these days.\nI don't think you're beyond hope because you're asking if it would be better to switch; you're beyond hope when the answer is obvious and you ignore the evidence.\nEven if you don't change source control systems, you ought to pick one like SVN or git and spend a few weeks reading about it and making a small project using it; it always helps to sharpen the saw.\n",
"I don't agree with the people that say that if you don't have problems you'd better not switch.\nI think that SCM is some of the disciplines a good developer should know well, and frankly, even if you master VSS you are just experimenting a small fraction of the advantages a good SCM tool and SCM strategy can do for you and for your team.\nObviously evaluate and test the alternatives first in a non-production environment.\n",
"At work we use subversion with TortoiseSVN - works very nicely but it is philosophically different to VSS (not really a problem if there's just you but worth being aware of). I really like the fact that the whole repository has a revision number.\nGiven a free choice I'd've probably gone with vault but at the time I had zero budget. \nI'm looking at things for personal use. There are reasons to use subversion and reasons to use something completely different. The alternatives I'm considering are Vault (as before, free for single use) and Bazaar. GIT I've had to dismiss as I am, unashamedly, a Windows person and right now GIT just isn't.\nThe distributed nature of GIT and the option of private/temporary checkins (assuming I've understood what I've read) is attractive - hence my looking at Bazaar. \nUpdate: I did some more digging and playing and I actually went for Mercurial for personal use, integrated install with TortoiseHg makes things very simple and it seems to be well regarded. I'm still trying to work out how to force an automagic mirror of commits to a server and there appear to be some minor limitations to the ignore function but its doing the job nicely so far...\nMurph\n",
"I'd say stick with what works for you. Unless you are having issues with VSS, why switch? Subversion is swell, though a little sticky to begin using it. TFS is far better than VSS, though it is fairly expensive for such a small team. I have not used git so I can't really speak to it.\n",
"i used vss for years until switching to svn about two years ago. my biggest complaints about vss were the poor network performance (that problem may be solved now) and the pessimistic locking of files. svn solved both those, is easy to set up (i use collabnet server and tortoisesvn client, although there are two good visual studio plugins: visualsvn - commercial, and ankhsvn - open source), easy to use and administer, and well documented.\nit's tempting to say \"if it's not broken then don't fix it\" but you would get to learn a more modern source control tool and, perhaps more importantly, new ways of using source control (e.g. more frequent branching and merging) that the new tool would support.\n",
"If you only have 2 people, and you mostly work independantly, git is going to give you a lot more flexibility, power, and be far and away the fastest to work with.\nIt is however a pain in the backside to use. Using VSS you're obviously programming for windows - if you're doing Win32 API stuff in C then git will be a learning curve but will be quite interesting.\nIf the depths of your knowledge however only extend to ASP and Visual Basic, just use subversion. Walk before you can run.\n** I'm not trying to say if you only know VB you're dumb or anything like that, but that git can be very finicky and picky to use (if you've used the WinAPI in C you know all about picky and finicky), and you may want a more gradual introduction to SCM than git provides\n",
"If you are a one man show and strictly a Microsoft shop, then SourceGear Vault is definitely a prime candidate for switching too.\nFeatures:\n\nFree for Single User, great for you\nIt uses SQL Server for it's backend, therefore data reliability is huge\nIt has atomic check-ins, all files checked-in at the same time are arranged in a group and are called a changeset.\nVisualStudio integration.\nHas a tool for importing from SourceSafe, therefore you can keep your history\nThe client communicates with the server over HTTP, therefore accessing the source outside the office remotely can be setup very easily and performs well, because they only transfer the deltas of the changes being submitted and received. You can use SSL to secure the connection.\n\nI would definately consider this as an option.\n",
"If you want a full Life Cycle in one package then you probably want want to look at Visual Studio Team System. It does require a server, but you can get a \"Action Pack\" from MS that includes all the licencies that you need for \"Team Foundation Server Workgroup Edition\" from the Partner centre.\nWith this you will get Bug, Risk and Issue tracking as well as many other features :)\n\nSource Control\nWork Item Tracking (Requirements, Bugs, Issues, Risks and Tasks)\nReporting on your project data (Work Item Tracking, Build, Checkins and more in one qube)\nCode Analysis\nUnit Testing\nLoad Testing\nPerformance Analysis\nAutomated Build\n\n"
] |
[
28,
10,
9,
6,
6,
5,
5,
4,
3,
2,
2,
1,
1,
1,
1,
1
] |
[] |
[] |
[
"version_control",
"visual_sourcesafe"
] |
stackoverflow_0000031627_version_control_visual_sourcesafe.txt
|
Q:
Backup/Restore database for oracle 10g testing using sqlplus or rman
Using Oracle 10g with our testing server, what is the most efficient/easy way to back up and restore a database to a static point, assuming that you always want to go back to the given point once a backup has been created?
A sample use case would be the following
install and configure all software
Modify data to the base testing point
take a backup somehow (this is part of the question, how to do this)
do testing
return to step 3 state (restore back to backup point, this is the other half of the question)
Optimally this would be completed through sqlplus or rman or some other scriptable method.
A:
You do not need to take a backup at your base time. Just enable flashback database, create a guaranteed restore point, run your tests and flashback to the previously created restore point.
The steps for this would be:
Startup the instance in mount mode.
startup force mount;
Create the restore point.
create restore point before_test guarantee flashback database;
Open the database.
alter database open;
Run your tests.
Shutdown and mount the instance.
shutdown immediate;
startup mount;
Flashback to the restore point.
flashback database to restore point before_test;
Open the database (RESETLOGS is required after a flashback).
alter database open resetlogs;
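A sketch of how those steps might be wrapped into a single SQL*Plus script for repeated test runs (the file name and the use of OS authentication are assumptions, not part of the answer above):
-- reset_to_before_test.sql (hypothetical name)
-- Run with: sqlplus /nolog @reset_to_before_test.sql
-- Assumes the guaranteed restore point BEFORE_TEST already exists.
connect / as sysdba
shutdown immediate
startup mount
flashback database to restore point before_test;
alter database open resetlogs;
exit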
A:
You could use a feature in Oracle called Flashback which allows you to create a restore point, which you can easily jump back to after you've done testing.
Quoted from the site,
Flashback Database is like a 'rewind button' for your database. It provides database point in time recovery without requiring a backup of the database to first be restored. When you eliminate the time it takes to restore a database backup from tape, database point in time recovery is fast.
A:
From my experience import/export is probably the way to go. Export creates a logical snapshot of your DB so you won't find it useful for big DBs or exacting performance requirements. However it works great for making snapshots and whatnot to use on a number of machines.
I used it on a rails project to get a prod snapshot that we could swap between developers for integration testing and we did the job within rake scripts. We wrote a small sqlplus script that destroyed the DB then imported the dump file over the top.
Some articles you may want to check:
OraFAQ Cheatsheet
Oracle Wiki
Oracle apparently doesn't like imp/exp any more, in favour of Data Pump; when we used Data Pump we needed things we couldn't have (i.e. SYSDBA privileges we couldn't get in a shared environment). So take a look, but don't be disheartened if Data Pump is not your bag, the old imp/exp are still there :)
I can't recommend RMAN for this kind of thing because RMAN takes a lot of setup and will need config in the DB (it also has its own catalog DB for backups, which is a pain in the proverbial for a bare metal restore).
A:
If you are using a filesystem that supports copy-on-write snapshots, you could set up the database to the state that you want. Then shut down everything and take a filesystem snapshot. Then go about your testing and when you're ready to start over you could roll back the snapshot. This might be simpler than other options, assuming you have a filesystem which supports snapshots.
A:
@Michael Ridley's solution is perfectly scriptable, and will work with any version of Oracle.
This is exactly what I do, I have a script which runs weekly to
Rollback the file system
Apply production archive logs
Take new "Pre-Data-Masking" FS snapshot
Reset logs
Apply "preproduction" data masking.
Take new "Post-Data-Masking" snapshot (allows rollback to post masked data)
Open database
This allows us to keep our development databases close to our production database.
To do this I use ZFS.
This method can also be used for your applications, or even your entire "environment" (e.g., you could "roll back" your entire environment with a single (scripted) command).
If you are running 10g though, the first thing you'd probably want to look into is Flashback, as its built into the database.
|
Backup/Restore database for oracle 10g testing using sqlplus or rman
|
Using Oracle 10g with our testing server what is the most efficient/easy way to backup and restore a database to a static point, assuming that you always want to go back to the given point once a backup has been created.
A sample use case would be the following
install and configure all software
Modify data to the base testing point
take a backup somehow (this is part of the question, how to do this)
do testing
return to step 3 state (restore back to backup point, this is the other half of the question)
Optimally this would be completed through sqlplus or rman or some other scriptable method.
|
[
"You do not need to take a backup at your base time. Just enable flashback database, create a guaranteed restore point, run your tests and flashback to the previously created restore point.\nThe steps for this would be:\n\nStartup the instance in mount mode.\nstartup force mount;\nCreate the restore point.\ncreate restore point before_test guarantee flashback database;\nOpen the database.\nalter database open;\nRun your tests.\nShutdown and mount the instance.\nshutdown immediate;\nstartup mount;\nFlashback to the restore point.\nflashback database to restore point before_test;\nOpen the database.\nalter database open;\n\n",
"You could use a feature in Oracle called Flashback which allows you to create a restore point, which you can easily jump back to after you've done testing.\nQuoted from the site,\n\nFlashback Database is like a 'rewind\n button' for your database. It provides\n database point in time recovery\n without requiring a backup of the\n database to first be restored. When\n you eliminate the time it takes to\n restore a database backup from tape,\n database point in time recovery is\n fast.\n\n",
"From my experience import/export is probably the way to go. Export creates a logical snapshot of your DB so you won't find it useful for big DBs or exacting performance requirements. However it works great for making snapshots and whatnot to use on a number of machines.\nI used it on a rails project to get a prod snapshot that we could swap between developers for integration testing and we did the job within rake scripts. We wrote a small sqlplus script that destroyed the DB then imported the dump file over the top.\nSome articles you may want to check:\nOraFAQ Cheatsheet\nOracle Wiki\nOracle apparently don't like imp/exp any more in favour of data pump, when we used data pump we needed things we couldn't have (i.e. SYSDBA privileges we couldn't get in a shared environment). So take a look but don't be disheartened if data pump is not your bag, the old imp/exp are still there :) \nI can't recommend RMAN for this kind of thing becuase RMAN takes a lot of setup and will need config in the DB (it also has its own catalog DB for backups which is a pain in the proverbial for a bare metal restore).\n",
"If you are using a filesystem that supports copy-on-write snapshots, you could set up the database to the state that you want. Then shut down everything and take a filesystem snapshot. Then go about your testing and when you're ready to start over you could roll back the snapshot. This might be simpler than other options, assuming you have a filesystem which supports snapshots.\n",
"@Michael Ridley solution is perfectly scriptable, and will work with any version of oracle.\nThis is exactly what I do, I have a script which runs weekly to\n\nRollback the file system\nApply production archive logs\nTake new \"Pre-Data-Masking\" FS snapshot\nReset logs\nApply \"preproduction\" data masking.\nTake new \"Post-Data-Masking\" snapshot (allows rollback to post masked data)\nOpen database\n\nThis allows us to keep our development databases close to our production database.\nTo do this I use ZFS.\nThis method can also be used for your applications, or even you entire \"environment\" (eg, you could \"rollback\" your entire environment with a single (scripted) command.\nIf you are running 10g though, the first thing you'd probably want to look into is Flashback, as its built into the database.\n"
] |
[
6,
5,
2,
1,
0
] |
[] |
[] |
[
"backup",
"oracle",
"oracle10g",
"rman",
"sql"
] |
stackoverflow_0000067666_backup_oracle_oracle10g_rman_sql.txt
|
Q:
Set a UserControl Property to Not Show Up in VS Properties Window
I have a UserControl in my Asp.net project that has a public property. I do not want this property to show up in the Visual Studio Property Window when a user highlights an instance of the UserControl in the IDE. What attribute (or other method) should I use to prevent it from showing up?
class MyControl : System.Web.UI.UserControl {
// Attribute to prevent property from showing in VS Property Window?
public bool SampleProperty { get; set; }
// other stuff
}
A:
Use the following attribute ...
using System.ComponentModel;
[Browsable(false)]
public bool SampleProperty { get; set; }
In VB.net, this will be:
<System.ComponentModel.Browsable(False)>
A:
Tons of attributes out there to control how the PropertyGrid works.
[Browsable(false)]
public bool HiddenProperty {get;set;}
A:
Use the System.ComponentModel.Browsable attribute to hide it from the Properties window:
' VB
<System.ComponentModel.Browsable(False)>

or
// C#
[System.ComponentModel.Browsable(false)]
|
Set a UserControl Property to Not Show Up in VS Properties Window
|
I have a UserControl in my Asp.net project that has a public property. I do not want this property to show up in the Visual Studio Property Window when a user highlights an instance of the UserControl in the IDE. What attribute (or other method) should I use to prevent it from showing up?
class MyControl : System.Web.UI.UserControl {
// Attribute to prevent property from showing in VS Property Window?
public bool SampleProperty { get; set; }
// other stuff
}
|
[
"Use the following attribute ...\nusing System.ComponentModel;\n\n[Browsable(false)]\npublic bool SampleProperty { get; set; }\n\nIn VB.net, this will be:\n<System.ComponentModel.Browsable(False)>\n\n",
"Tons of attributes out there to control how the PropertyGrid works.\n[Browsable(false)]\npublic bool HiddenProperty {get;set;}\n\n",
"Use the System.ComponentModel.Browsable attribute to\n> ' VB\n> \n> <System.ComponentModel.Browsable(False)>\n\nor\n// C#\n [System.ComponentModel.Browsable(false)]\n\n"
] |
[
11,
3,
2
] |
[] |
[] |
[
"asp.net",
"attributes",
"c#",
"properties",
"visual_studio"
] |
stackoverflow_0000071440_asp.net_attributes_c#_properties_visual_studio.txt
|
Q:
C++ : Opening a file in non exclusive mode
I have to develop an application which parses a log file and sends specific data to a server. It has to run on both Linux and Windows.
The problem appears when I want to test the log rolling system (which appends .1 to the name of the file and creates a new one with the same name). On Windows (I haven't tested on Linux yet) I can't rename a file that I have opened with std::ifstream() (exclusive access?), even if I open it in "input mode" (ios::in).
Is there a cross-platform way to open a file in a non-exclusive way?
A:
Is there a way to open file in a non-exclusive way,
Yes, using Win32, passing the various FILE_SHARE_Xxxx flags to CreateFile.
is it cross platform?
No, it requires platform-specific code.
Due to annoying backwards compatibility concerns (DOS applications, being single-tasking, assume that nothing can delete a file out from under them, i.e. that they can fclose() and then fopen() without anything going amiss; Win16 preserved this assumption to make porting DOS applications easier, Win32 preserved this assumption to make porting Win16 applications easier, and it's awful), Windows defaults to opening files exclusively.
The underlying OS infrastructure supports deleting/renaming open files (although I believe it does have the restriction that memory-mapped files cannot be deleted, which I think isn't a restriction found on *nix), but the default opening semantics do not.
C++ has no notion of any of this; the C++ operating environment is much the same as the DOS operating environment--no other applications running concurrently, so no need to control file sharing.
A:
It's not the reading operation that's requiring the exclusive mode, it's the rename, because this is essentially the same as moving the file to a new location.
I'm not sure but I don't think this can be done. Try copying the file instead, and later delete/replace the old file when it is no longer read.
A:
Win32 filesystem semantics require that a file you rename not be open (in any mode) at the time you do the rename. You will need to close the file, rename it, and then create the new log file.
Unix filesystem semantics allow you to rename a file that's open because the filename is just a pointer to the inode.
A:
If you are only reading from the file, I know it can be done with the Windows API CreateFile. Just specify FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE as the input to dwShareMode.
Unfortunately this is not cross-platform, but there might be something similar for Linux.
See MSDN for more info on CreateFile.
EDIT: Just a quick note about Greg Hewgill's comment. I've just tested with the FILE_SHARE* stuff (to be 100% sure), and it is possible to both delete and rename files on Windows if you open read-only and specify the FILE_SHARE* parameters.
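As a rough Windows-only sketch (hypothetical file name, error handling kept minimal):
#include <windows.h>

// Open the log for reading without taking an exclusive lock, so other
// handles can still read, write, rename or delete it.
HANDLE h = CreateFileA(
    "application.log",                                    // hypothetical path
    GENERIC_READ,
    FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
    NULL,                                                 // default security
    OPEN_EXISTING,
    FILE_ATTRIBUTE_NORMAL,
    NULL);
if (h != INVALID_HANDLE_VALUE)
{
    // ... ReadFile(h, ...) as needed ...
    CloseHandle(h);
}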
A:
I'd make sure you don't keep files open. This leads to weird stuff if your app crashes for example.
What I'd do:
Abstract (reading / writing / rolling over to a new file) into one class, and arrange closing of the file when you want to roll over to a new one in that class. (this is the neatest way, and since you already have the roll-over code you're already halfway there.)
If you must have multiple read/write access points, need all features of fstreams and don't want to write that complete a wrapper then the only cross platform solution I can think of is to always close the file when you don't need it, and have the roll-over code try to acquire exclusive access to the file a few times when it needs to roll-over before giving up.
|
C++ : Opening a file in non exclusive mode
|
I have to develop an application which parses a log file and sends specific data to a server. It has to run on both Linux and Windows.
The problem appears when I want to test the log rolling system (which appends .1 to the name of the file and creates a new one with the same name). On Windows (haven't tested yet on Linux) I can't rename a file that I have opened with std::ifstream() (exclusive access?) even if I open it in "input mode" (ios::in).
Is there a cross-platform way to open a file in a non-exclusive way?
|
[
"\nIs there a way to open file in a non-exclusive way,\n\nYes, using Win32, passing the various FILE_SHARE_Xxxx flags to CreateFile.\n\nis it cross platform?\n\nNo, it requires platform-specific code.\nDue to annoying backwards compatibility concerns (DOS applications, being single-tasking, assume that nothing can delete a file out from under them, i.e. that they can fclose() and then fopen() without anything going amiss; Win16 preserved this assumption to make porting DOS applications easier, Win32 preserved this assumption to make porting Win16 applications easier, and it's awful), Windows defaults to opening files exclusively.\nThe underlying OS infrastructure supports deleting/renaming open files (although I believe it does have the restriction that memory-mapped files cannot be deleted, which I think isn't a restriction found on *nix), but the default opening semantics do not.\nC++ has no notion of any of this; the C++ operating environment is much the same as the DOS operating environment--no other applications running concurrently, so no need to control file sharing.\n",
"It's not the reading operation that's requiring the exclusive mode, it's the rename, because this is essentially the same as moving the file to a new location.\nI'm not sure but I don't think this can be done. Try copying the file instead, and later delete/replace the old file when it is no longer read.\n",
"Win32 filesystem semantics require that a file you rename not be open (in any mode) at the time you do the rename. You will need to close the file, rename it, and then create the new log file.\nUnix filesystem semantics allow you to rename a file that's open because the filename is just a pointer to the inode.\n",
"If you are only reading from the file I know it can be done with windows api CreateFile. Just specify FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE as the input to dwShareMode.\nUnfortunally this is not crossplatform. But there might be something similar for Linux.\nSee msdn for more info on CreateFile.\nEDIT: Just a quick note about Greg Hewgill comment. I've just tested with the FILE_SHARE* stuff (too be 100% sure). And it is possible to both delete and rename files in windows if you open read only and specify the FILE_SHARE* parameters.\n",
"I'd make sure you don't keep files open. This leads to weird stuff if your app crashes for example.\nWhat I'd do:\n\nAbstract (reading / writing / rolling over to a new file) into one class, and arrange closing of the file when you want to roll over to a new one in that class. (this is the neatest way, and since you already have the roll-over code you're already halfway there.)\nIf you must have multiple read/write access points, need all features of fstreams and don't want to write that complete a wrapper then the only cross platform solution I can think of is to always close the file when you don't need it, and have the roll-over code try to acquire exclusive access to the file a few times when it needs to roll-over before giving up.\n\n"
] |
[
3,
1,
1,
1,
0
] |
[] |
[] |
[
"c++",
"filesystems",
"linux",
"windows"
] |
stackoverflow_0000027700_c++_filesystems_linux_windows.txt
|
Q:
What are the best practices for the Middleware API?
We are developing a middleware SDK, both in C++ and Java to be used as a library/DLL by, for example, game developers, animation software developers, Avatar developers to enhance their products.
What I would like to know is this: Are there standard "Best Practices" for the development of these types of API?
I am thinking in terms of usability, readability, efficiency etc.
A:
My two favourite resources on the subject: http://mollyrocket.com/873 and http://video.google.com/videoplay?docid=-3733345136856180693
A:
From using third party libraries on Windows I've learned the following two things:
Try to distribute your library as a DLL rather than a static library. This gives far better compatibility between different C compilers and linkers. Another problem with static libraries in Visual C++ is that the choice of runtime library can make libraries incompatible with code using a different runtime library, and you may end up needing to distribute one version of the library for each runtime library.
Avoid C++ if possible. C++ name mangling differs a lot between compilers, and it's unlikely that a library built with Visual C++ can be linked from another build environment on Windows. When it comes to C, things are much better, in particular if you use DLLs.
If you really want the good parts of C++ (such as resource management through constructors and destructors), build a convenience layer in C++ that you distribute as source code and that hides away your C functions. Since the user has the source and compiles it locally, it won't have any name mangling or ABI issues with the local environment.
Without knowing too much about calling C/C++ code from Java, I expect it to be far easier to work with C code than C++ code because of the name mangling issues.
The book "Imperfect C++" has some discussion on library compatibility that I found very helpful.
A:
The video from Josh Bloch mentioned by yrp is a classic - I second that recommendation.
Some general guidelines:
DO define your API primarily in terms of interfaces, factories, and builders.
DO clearly specify exactly which packages and classes are part of the API.
DO provide a jar specifically used for compiling against the API.
DO NOT rely heavily on inheritance or the template method pattern - over time this becomes fragile and broken.
DO NOT use the singleton pattern or at least use it with extreme caution.
DO create package and class level javadoc explaining usage and concepts.
A:
There are lots of ways to design APIs, depending on what you are solving. I think a full answer to this question would be worthy of a whole book, such as the Gang of Four patterns book. For Java specifically, and also just OO programming in general, I would recommend Effective Java 2nd Edition. The first is general and covers a lot of popular programming patterns, when they apply and their benefits. Effective Java is Java-centered, but parts of it are general enough to apply to any programming language.
A:
Take a look at Framework Design Guidelines. I know it is .NET specific, but you can probably learn a lot of general information from it too.
|
What are the best practices for the Middleware API?
|
We are developing a middleware SDK, both in C++ and Java to be used as a library/DLL by, for example, game developers, animation software developers, Avatar developers to enhance their products.
What I would like to know is this: Are there standard "Best Practices" for the development of these types of API?
I am thinking in terms of usability, readability, efficiency etc.
|
[
"My two favourite resources on the subject: http://mollyrocket.com/873 and http://video.google.com/videoplay?docid=-3733345136856180693\n",
"From using third party libraries on Windows I've learned the following two things:\nTry to distribute your library as a DLL rather than a static library. This gives way better compatibility between different c compilers and linkers. Another problem with static libraries in visual c++ is that the choice of runtime library can make libraries incompatible with code using a different runtime library and you may end up needing to distribute one version of the library for each runtime library.\nAvoid c++ if possible. The c++ name mangling differs alot between different compilers and it's unlikely that a library built for visual c++ will be possible to link from another build environment in windows. When it comes to C, things are much better, in particular if you use dll's.\nIf you really want to get the good parts of c++ (such as resource management through constructors and destructors), build a convenience layer in c++ that you distribute as source code that hides away your c functions. Since the user has the source and compiles it locally, it won't have any name mangiling or abi issues with the local environment.\nWithout knowing too much about calling c/c++ code from Java, I expect it to be way easier to work with c code than c++ code because of the name mangling issues.\nThe book \"Imperfect C++\" has some discussion on library compatibility that I found very helpful.\n",
"The video from Josh Bloch mentioned by yrp is a classic - I second that recommendation. \nSome general guidelines:\n\nDO define your API primarily in terms of interfaces, factories, and builders.\nDO clearly specify exactly which packages and classes are part of the API.\nDO provide a jar specifically used for compiling against the API.\nDO NOT rely heavily on inheritance or the template method pattern - over time this becomes fragile and broken.\nDO NOT use the singleton pattern or at least use it with extreme caution.\nDO create package and class level javadoc explaining usage and concepts.\n\n",
"There are lots of ways to design apis, depending on what you are solving. I think a full answer to this question would be worthy off a whole book, such as the gang of four patterns book. For Java specifically, and also just OO programming in general, I would recommend Effective Java 2nd Edition. The first is general and a lot of popular programming patterns, when they apply and their benefits. Effective Java is Java centered, but parts of it is general enough to apply to any programming language.\n",
"Take a look at Framework Design Guidelines. I know it is .NET specific, but you can probably learn a lot of general information from it too.\n"
] |
[
6,
3,
1,
0,
0
] |
[] |
[] |
[
"api",
"c++",
"java",
"middleware"
] |
stackoverflow_0000062398_api_c++_java_middleware.txt
|
Q:
Loading Assemblies from the Network
This is related to this question and the answer may be the same, but
I'll ask anyway.
I understand that we can start managed executables from the network from .NET
3.5 SP1 but what about assemblies loaded from inside the executable?
Does the same thing apply?
A:
You have been able to load assemblies from the network at least since .NET 2.0. I have used this on a previous project. The only thing to watch is the size of the assembly and the number and size of the dependencies that it is loading.
If you are using a separate AppDomain, then you will need to take special consideration of the dependencies.
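As a rough C# sketch (the share and type names are hypothetical, and depending on your trust/CAS configuration extra policy setup may still be needed):
using System;
using System.Reflection;

// Load an assembly straight from a UNC path and instantiate a type from it.
Assembly asm = Assembly.LoadFrom(@"\\server\share\MyPlugin.dll");
Type entryPoint = asm.GetType("MyPlugin.EntryPoint");
object plugin = Activator.CreateInstance(entryPoint);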
A:
My understanding is yes, you're trying to load an untrusted module into your local app domain.
|
Loading Assemblies from the Network
|
This is related to this question and the answer may be the same, but
I'll ask anyway.
I understand that we can start managed executables from the network from .NET
3.5 SP1 but what about assemblies loaded from inside the executable?
Does the same thing apply?
|
[
"You have been able to load Assemblies from the network at leasst from .NET 2.0. I have used this on a previous project. The only thing to watch is the size of the assembly and the number and size of the dependancies that it is loading.\nIf you are using a seperate AppDomain, then you will need to take special consideration of the dependancies.\n",
"My understanding is yes, you're trying to load an untrusted module into your local app domain.\n"
] |
[
2,
1
] |
[] |
[] |
[
".net_3.5",
"assemblies",
"dll",
"networking",
"security"
] |
stackoverflow_0000061057_.net_3.5_assemblies_dll_networking_security.txt
|
Q:
C++ UI resources
Now that I know C++ I want to get into desktop applications that have a UI instead of Command Prompt stuff. Where should I start, and what are some good online resources?
A:
wxWidgets is a cross platform GUI library for C++ (and other languages). The main site should have enough pointers to resources to get going.
You might also want to check out this question/answer here on stack overflow if you are specifically thinking of Windows
A:
If cross-platform support is important then I would second the suggestion to look at Qt. It supports Windows, Linux and the Mac. For free software it is free (there is a GPL version on Unix but not for Windows), but for commercial software it is not particularly cheap. There are now several books on programming with Qt.
It does come with a large number of extra libraries for networking, parsing XML etc. It also has integration with Visual Studio on Windows.
One downside with Qt is that there are not as many add-on libraries as with some other GUI frameworks. It will depend on the type of applications that you wish to write whether this is important to you or not.
A:
I use CodeGear's C++ Builder. Its C++ language support is not 100%, but it more than makes up for it by having a great two-way RAD IDE and the ability to use a huge library of existing Delphi components.
A:
How about Qt? It's cross-platform and is used in a lot of commercial software.
A:
On Linux and maybe Windows, you can use Gtk+ with Glade. Gtk+ is the GUI toolkit. Glade is a GUI drag and drop GUI editor. If you came from Windows or Java and thought GUI programming is hard, this stuff is easy.
A:
If marketability is a concern, then C++/CLI with WinForms and WPF which really translates to "just learn WinForms and WPF, regardless of what specific language you use".
CodeProject has a ton of WinForms/WPF samples/tutorials to get you started.
A:
The Fox GUI Toolkit
Really decent tried-and-true toolkit with a very nice event system. I've used the Ruby port, and my Windows apps had a very native look and feel.
A:
It might lack some features, but FLTK is an incredibly simple cross-platform GUI library.
A:
If you are using Windows the traditional place to start is Petzold
There is a nice simple framework here which will help you on the way without abstracting too much away.
A:
Get Visual Studio Express, and start with a MFC "Dialog Based" application. All the window toolkits mentioned are good, but MFC will look the best on a resume!
|
C++ UI resources
|
Now that I know C++ I want to get into desktop applications that have a UI instead of Command Prompt stuff. Where should I start, and what are some good online resources?
|
[
"wxWidgets is a cross platform GUI library for C++ (and other languages). The main site should have enough pointers to resources to get going.\nYou might also want to check out this question/answer here on stack overflow if you are specifically thinking of Windows\n",
"If cross platform support is important then I would second the suggestion to look at Qt. It supports Windows, Linux and the Mac. For free software it is free (there is a GPL version on Unix but not for Windows) but for comercial software it is not particulary cheap. There are now several books on Programming with Qt. \nIt does come with a large number of extra libraries for networking, parsing XML etc. It also has integration with Visual Studio on Windows. \nOne downside with Qt is that there are not as many add on libraries as with some other GUI frameworks. Ot will depend on the type of applications that you wish to write whether this is important to you or not.\n",
"I use Codegear's C++ Builder. It's C++ language support is not 100% but it more than makes up for it by having a great two-way RAD IDE and the ability to use a huge library of existing Delphi components.\n",
"How about QT? Its cross-platform and its is used in a lot of commercial softwares.\n",
"On Linux and maybe Windows, you can use Gtk+ with Glade. Gtk+ is the GUI toolkit. Glade is a GUI drag and drop GUI editor. If you came from Windows or Java and thought GUI programming is hard, this stuff is easy.\n",
"If marketability is a concern, then C++/CLI with WinForms and WPF which really translates to \"just learn WinForms and WPF, regardless of what specific language you use\".\nCodeProject has a ton of WinForms/WPF samples/tutorials to get you started.\n",
"The Fox GUI Toolkit\nReally decent tried-and-true toolkit with a very nice event system. I've used the Ruby port, and my Windows apps had a very native look and feel.\n",
"It might lack some features, but FLTK is an incredibly simple cross-platform GUI library.\n",
"If you are using Windows the traditional place to start is Petzold \nThere is a nice simple framework here which will help you on the way without abstracting too much away.\n",
"Get Visual Studio Express, and start with a MFC \"Dialog Based\" application. All the window toolkits mentioned are good, but MFC will look the best on a resume!\n"
] |
[
8,
2,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c++",
"resources",
"user_interface"
] |
stackoverflow_0000048299_c++_resources_user_interface.txt
|
Q:
google maps traffic info
Google Maps in some regions can serve traffic information showing blocked roads and so on. I was wondering if there is any code example demonstrating how I can serve traffic information for my own region.
A:
"Google Maps Hacks" has a hack, "Hack 30. Stay Out of Traffic Jams", on that.
You can also find out how to get U.S. traffic info from John Resig's "Traffic Conditions Data" article.
A:
For your own data, you'll want to implement a custom tile overlay.
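In the v2 JavaScript API that roughly means a sketch like the one below (method names recalled from memory, so treat them as assumptions; the tile URL is hypothetical):
// Custom tile layer whose tiles come from your own traffic renderer.
var layer = new GTileLayer(null, 0, 17);
layer.getTileUrl = function (tile, zoom) {
    return '/traffic-tiles/' + zoom + '/' + tile.x + '/' + tile.y + '.png';
};
layer.isPng = function () { return true; };
layer.getOpacity = function () { return 0.6; };
map.addOverlay(new GTileLayerOverlay(layer));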
A:
Google is mum on what source they use for their traffic data. You might contact them directly to see if they want to implement something for you, but my guess is that they'd simply refer you to their provider if they really wanted your data.
Keep in mind that traffic data is available for more than just the metropolitan areas, but Google isn't using it for a variety of reasons - one of the big reasons is that the entire tile set for the traffic overlay in areas with traffic tiles has to be regenerated every 15 minutes or so. It just doesn't scale.
So even if you managed to get your data in their flow, it likely won't be rendered.
-Adam
A:
I found that Google has a class called GTrafficOverlay, and this is based on extending the GOverlay class. Now it is getting clearer that I am looking for an open implementation of GTrafficOverlay.
|
google maps traffic info
|
Google Maps in some regions can serve traffic information showing blocked roads and so on. I was wondering if there is any code example demonstrating how I can serve traffic information for my own region.
|
[
"\"Google Maps Hacks\" has a hack, \"Hack 30. Stay Out of Traffic Jams\", on that.\nYou can also find out how to get U.S. traffic info from John Resig's \"Traffic Conditions Data\" article.\n",
"For your own data, you'll want to implement a custom tile overlay.\n",
"Google is mum on what source they use for their traffic data. You might contact them directly to see if they want to implement something for you, but my guess is that they'd simply refer you to their provider if they really wanted your data.\nKeep in mind that traffic data is available for more than just the metropolitan areas, but Google isn't using it for a variety of reasons - one of the big reasons is that the entire tile set for the traffic overlay in areas with traffic tiles has to be regenerated every 15 minutes or so. It just doesn't scale.\nSo even if you managed to get your data in their flow, it likely won't be rendered.\n-Adam\n",
"I found that googl has a class called GTrafficOverlay and this is based on extending the GOverlay class. Now, it is getting clearer that I am looking for an open implementation of the GTrafficOverlay\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"google_maps"
] |
stackoverflow_0000067009_google_maps.txt
|
Q:
How to create project-specific repository post-commit actions
Presently, we've got several main projects, each in their own repository. We will have to version-control up to a dozen additional projects. VisualSVN recommends creating one repository for our company and then version-controlling all projects inside it.
It's a good practice to create one repository for the entire company or department and store all your projects in this repository. Creating separate repository for each project is not a good idea because in that case you will not be able to perform Subversion operations like copy, diff and merge cross-project. VisualSvn.com
Currently we're using post-commit hooks to update the testing server with the latest commit and do other project specific actions (such as emailing certain people for one project but not for others) depending on which project has been committed.
As post-commit runs for the whole repository, is this still possible in such a situation? How would I go about discerning which project has changes? Filter by folder structure?
A:
You can check the paths of the committed files to determine which project they belong to. Just remember that a commit can modify several files at once, and each file could theoretically belong to a different project.
A:
From the post-commit hook, run the svnlook changed command to find out which paths are affected by a commit. You could use a grep to see if they include some project path.
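A minimal post-commit sketch in Python (project paths and actions are hypothetical):
#!/usr/bin/env python
# post-commit hook: Subversion passes the repository path and revision.
import subprocess
import sys

repos, rev = sys.argv[1], sys.argv[2]
output = subprocess.check_output(['svnlook', 'changed', repos, '-r', rev]).decode('utf-8')
# Each line looks like "U   projects/website/default.aspx"; keep just the path.
paths = [line.split(None, 1)[1] for line in output.splitlines() if line.strip()]

if any(p.startswith('projects/website/') for p in paths):
    pass  # e.g. update the testing server, e-mail the web team
if any(p.startswith('projects/backend/') for p in paths):
    pass  # backend-specific actions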
A:
I'm not sure I would agree with that VisualSVN recommendation. I have always set up separate repositories per project, and I've never run into a situation where I wish I could have merged across projects or something.
If there is a chunk of common code that is shared among projects at your company, it should become a shared library project of its own (with its own repository, too).
|
How to create project-specific repository post-commit actions
|
Presently, we've got several main projects, each in their own repository. We will have to version-control up to a dozen additional projects. VisualSVN recommends creating one repository for our company and then version-controlling all projects inside it.
It's a good practice to create one repository for the entire company or department and store all your projects in this repository. Creating separate repository for each project is not a good idea because in that case you will not be able to perform Subversion operations like copy, diff and merge cross-project. VisualSvn.com
Currently we're using post-commit hooks to update the testing server with the latest commit and do other project specific actions (such as emailing certain people for one project but not for others) depending on which project has been committed.
As post-commit runs for the whole repository, is this still possible in such a situation? How would I go about discerning which project has changes? Filter by folder structure?
|
[
"You can check the paths of the commited files to determine which project they belongs to. Just remember that a commit can modify several files at once, and each file could theoretically belong to a different project.\n",
"From the post-commit hook, run the svnlook changed command to find out which paths are affected by a commit. You could use a grep to see if they include some project path.\n",
"I'm not sure I would agree with that VisualSVN recommendation. I have always set up separate repositories per project, and I've never run into a situation where I wish I could have merged across projects or something.\nIf there is a chunk of common code that is shared among projects at your company, it should become a shared library project of its own (with its own repository, too).\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"svn",
"version_control"
] |
stackoverflow_0000071365_svn_version_control.txt
|
Q:
Threads or asynch?
How do you make your application multithreaded ?
Do you use asynch functions ?
or do you spawn a new thread ?
I think that asynch functions are already spawning a thread, so if your job is doing just some file reading, being lazy and just spawning your job on a thread would just "waste" resources...
So is there some kind of design when using thread or asynch functions ?
A:
If you are talking about .NET, then don't forget the ThreadPool. The thread pool is also what asynch functions often use. Spawning too many threads can actually hurt your performance. A thread pool is designed to spawn just enough threads to do the work the fastest. So do use a thread pool instead of spawning your own threads, unless the thread pool doesn't meet your needs.
PS: And keep an eye out on the Parallel Extensions from Microsoft
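For example, a minimal C# sketch of handing a short-lived job to the ThreadPool instead of creating a dedicated thread:
using System.Threading;

// A pool thread picks the work item up as soon as one is free.
ThreadPool.QueueUserWorkItem(state =>
{
    // ... do the file reading / short background work here ...
});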
A:
Spawning threads is only going to waste resources if you start spawning tons of them; one or two extra threads aren't going to affect the platform's performance. In fact, System currently has over 70 threads for me, and MSN is using 32 (I really have no idea how a messenger can use that many threads, especially when it's minimised and not really doing anything...).
Usually a good time to spawn a thread is when something will take a long time, but you need to keep doing something else.
E.g. say a calculation will take 30 seconds. The best thing to do is spawn a new thread for the calculation, so that you can continue to update the screen and handle any user input, because users will hate it if your app freezes until it's finished doing the calculation.
On the other hand, creating threads to do something that can be done almost instantly is nearly pointless, since the overhead of creating them (or even just passing work to an existing thread using a thread pool) will be higher than just doing the job in the first place.
Sometimes you can break your app into a couple of separate parts which run in their own threads. For example, in games the updates/physics etc. may be one thread, while graphics are another, sound/music is a third, and networking is another. The problem here is you really have to think about how these parts will interact, or else you may have worse performance, bugs that happen seemingly "randomly", or it may even deadlock.
A:
I'll second Fire Lancer's answer - creating your own threads is an excellent way to process big tasks or to handle a task that would otherwise be "blocking" to the rest of a synchronous app, but you have to have a clear understanding of the problem that you must solve and develop in a way that clearly defines the task of a thread, and limits the scope of what it does.
For an example I recently worked on - a Java console app runs periodically to capture data by essentially screen-scraping urls, parsing the document with DOM, extracting data and storing it in a database.
As a single-threaded application, it, as you would expect, took an age, averaging around 1 url a second for a 50kb page. Not too bad, but when you scale out to needing to process thousands of urls in a batch, it's no good.
Profiling the app showed that most of the time the active thread was idle - it was waiting for I/O operations - opening of a socket to the remote URL, opening a connection to the database etc. It's this sort of situation that can easily be improved with multithreading. Rewriting to be multi-threaded and with just 5 threads instead of one, even on a single core cpu, gave an increase in throughput of over 20 times.
In this example, each "worker" thread was explicitly limited to what it did - open a remote url, parse the data, store it in the db. All the "high level" processing - generating the list of urls to parse, working out which is next, handling errors - remained under the control of the main thread.
A:
The use of threads makes you think more about the way your application needs threading and can in the long run make it easier to improve / control your performance.
Async methods are faster to use but they are a bit magic - a lot of things happen to make them possible - so it's probable that at some point you will need something that they can't give you. Then you can try and roll some custom threading code.
It all depends on your needs.
A:
The answer is "it depends".
It depends on what you're trying to achieve. I'm going to assume that you're aiming for more performance.
The simplest solution is to find another way to improve your performance. Run a profiler. Look for hot spots. Reduce unnecessary IO.
The next solution is to break your program into multiple processes, each of which can run in their own address space. This is easiest because there is no chance of the individual processes messing each other up.
The next solution is to use threads. At this point you're opening a major can of worms, so start small, and only multi-thread the critical path of the code.
The next solution is to use asynch IO. This is generally only recommended for people writing some very heavily loaded server, and even then I would rather re-use one of the existing frameworks that abstract away the details, e.g. the C++ framework ICE, or an EJB server under Java.
Note that each of these solutions has multiple sub-solutions - there are different breeds of threads and different kinds of asynch IO, each with slightly different performance characteristics, but again, it's generally best to let the framework handle it for you.
|
Threads or asynch?
|
How do you make your application multithreaded ?
Do you use asynch functions ?
or do you spawn a new thread ?
I think that asynch functions are already spawning a thread, so if your job is doing just some file reading, being lazy and just spawning your job on a thread would just "waste" resources...
So is there some kind of design when using thread or asynch functions ?
|
[
"If you are talking about .Net, then don't forget the ThreadPool. The thread pool is also what asynch functions often use. Spawning to much threads can actually hurt your performance. A thread pool is designed to spawn just enough threads to do the work the fastest. So do use a thread pool instead of spwaning your own threads, unless the thread pool doesn't meet your needs.\nPS: And keep an eye out on the Parallel Extensions from Microsoft\n",
"Spawning threads is only going to waste resources if you start spawning tons of them, one or two extra threads isn't going to effect the platforms proformance, infact System currently has over 70 threads for me, and msn is using 32 (I really have no idea how a messenger can use that many threads, exspecialy when its minimised and not really doing anything...)\nUseualy a good time to spawn a thread is when something will take a long time, but you need to keep doing something else.\neg say a calculation will take 30 seconds. The best thing to do is spawn a new thread for the calculation, so that you can continue to update the screen, and handle any user input because users will hate it if your app freezes untill its finished doing the calculation.\nOn the other hand, creating threads to do something that can be done almost instantly is nearly pointless, since the overhead of creating (or even just passing work to an existing thread using a thread pool) will be higher than just doing the job in the first place.\nSometimes you can break your app into a couple of seprate parts which run in their own threads. For example in games the updates/physics etc may be one thread, while grahpics are another, sound/music is a third, and networking is another. The problem here is you really have to think about how these parts will interact or else you may have worse proformance, bugs that happen seemingly \"randomly\", or it may even deadlock.\n",
"I'll second Fire Lancer's answer - creating your own threads is an excellent way to process big tasks or to handle a task that would otherwise be \"blocking\" to the rest of synchronous app, but you have to have a clear understanding of the problem that you must solve and develope in a way that clearly defines the task of a thread, and limits the scope of what it does.\nFor an example I recently worked on - a Java console app runs periodically to capture data by essentially screen-scraping urls, parsing the document with DOM, extracting data and storing it in a database.\nAs a single threaded application, it, as you would expect, took an age, averaging around 1 url a second for a 50kb page. Not too bad, but when you scale out to needing to processes thousands of urls in a batch, it's no good.\nProfiling the app showed that most of the time the active thread was idle - it was waiting for I/O operations - opening of a socket to the remote URL, opening a connection to the database etc. It's this sort of situation that can easily be improved with multithreading. Rewriting to be multi-threaded and with just 5 threads instead of one, even on a single core cpu, gave an increase in throughput of over 20 times. \nIn this example, each \"worker\" thread was explicitly limited to what it did - open the remote a remote url, parse the data, store it in the db. All the \"high level\" processing - generating the list of urls to parse, working out which next, handling errors, all remained with the control of the main thread.\n",
"The use of threads makes you think more about the way your application needs threading and can in the long run make it easier to improve / control your performance.\nAsync methods are faster to use but they are a bit magic - a lot of things happen to make them possible - so it's probable that at some point you will need something that they can't give you. Then you can try and roll some custom threading code.\nIt all depends on your needs.\n",
"The answer is \"it depends\".\nIt depends on what you're trying to achieve. I'm going to assume that you're aiming for more performance.\nThe simplest solution is to find another way to improve your performance. Run a profiler. Look for hot spots. Reduce unnecessary IO.\nThe next solution is to break your program into multiple processes, each of which can run in their own address space. This is easiest because there is no chance of the individual processes messing each other up.\nThe next solution is to use threads. At this point you're opening a major can of worms, so start small, and only multi-thread the critical path of the code.\nThe next solution is to use asynch IO. Generally only recommended for people writing some of very heavily loaded server, and even then I would rather re-use one of the existing frameworks that abstract away the details e.g. the C++ framework ICE, or an EJB server under java.\nNote that each of these solutions has multiple sub-solutions - there are different breeds of threads and different kinds of asynch IO, each with slightly different performance characteristics, but again, it's generally best to let the framework handle it for you.\n"
] |
[
7,
6,
2,
0,
0
] |
[] |
[] |
[
"language_agnostic",
"multithreading"
] |
stackoverflow_0000061342_language_agnostic_multithreading.txt
|
Q:
Help with SQL server stack dump
We're running SQL 2005 standard SP2 on a 4cpu box. Suddenly it crashdumps, after which all pooled connections are invalid and it goes into admin-only mode (only sa can connect)
The short stackdump is below. After the dump a number of errors show up like '2008-09-16 10:49:34.48 Server Resource Monitor (0xec4) Worker 0x03D1C0E8 appears to be non-yielding on Node 0. Memory freed: 232408 KB. Approx CPU Used: kernel 203 ms, user 140 ms, Interval: 250250.'
Have Googled around but couldn't find a definite answer. Anyone?
2008-09-16 10:46:24.98 Server Using 'dbghelp.dll' version '4.0.5'
2008-09-16 10:46:25.40 Server **Dump thread - spid = 0, PSS = 0x00000000, EC = 0x00000000
2008-09-16 10:46:25.40 Server ***Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0009.txt
2008-09-16 10:46:25.40 Server * *******************************************************************************
2008-09-16 10:46:25.40 Server *
2008-09-16 10:46:25.40 Server * BEGIN STACK DUMP:
2008-09-16 10:46:25.40 Server * 09/16/08 10:46:25 spid 0
2008-09-16 10:46:25.42 Server *
2008-09-16 10:46:25.42 Server * Non-yielding Resource Monitor
2008-09-16 10:46:25.42 Server *
2008-09-16 10:46:25.42 Server * *******************************************************************************
2008-09-16 10:46:25.42 Server * -------------------------------------------------------------------------------
2008-09-16 10:46:25.42 Server * Short Stack Dump
2008-09-16 10:46:25.76 Server Stack Signature for the dump is 0x00000352
2008-09-16 10:46:32.70 Server External dump process return code 0x20000001.
A:
See How It Works: Non-Yielding Resource Monitor on the PSS SQL Server Engineers blog.
If this, and the linked whitepaper, don't help, then you're probably best to contact PSS (Microsoft Product Support Services) directly.
|
Help with SQL server stack dump
|
We're running SQL 2005 standard SP2 on a 4cpu box. Suddenly it crashdumps, after which all pooled connections are invalid and it goes into admin-only mode (only sa can connect)
The short stackdump is below. After the dump a number of errors show up like '2008-09-16 10:49:34.48 Server Resource Monitor (0xec4) Worker 0x03D1C0E8 appears to be non-yielding on Node 0. Memory freed: 232408 KB. Approx CPU Used: kernel 203 ms, user 140 ms, Interval: 250250.'
Have Googled around but couldn't find a definite answer. Anyone?
2008-09-16 10:46:24.98 Server Using 'dbghelp.dll' version '4.0.5'
2008-09-16 10:46:25.40 Server **Dump thread - spid = 0, PSS = 0x00000000, EC = 0x00000000
2008-09-16 10:46:25.40 Server ***Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0009.txt
2008-09-16 10:46:25.40 Server * *******************************************************************************
2008-09-16 10:46:25.40 Server *
2008-09-16 10:46:25.40 Server * BEGIN STACK DUMP:
2008-09-16 10:46:25.40 Server * 09/16/08 10:46:25 spid 0
2008-09-16 10:46:25.42 Server *
2008-09-16 10:46:25.42 Server * Non-yielding Resource Monitor
2008-09-16 10:46:25.42 Server *
2008-09-16 10:46:25.42 Server * *******************************************************************************
2008-09-16 10:46:25.42 Server * -------------------------------------------------------------------------------
2008-09-16 10:46:25.42 Server * Short Stack Dump
2008-09-16 10:46:25.76 Server Stack Signature for the dump is 0x00000352
2008-09-16 10:46:32.70 Server External dump process return code 0x20000001.
|
[
"See How It Works: Non-Yielding Resource Monitor on the PSS SQL Server Engineers blog.\nIf this, and the linked whitepaper, don't help, then you're probably best to contact PSS (Microsoft Product Support Services) directly.\n"
] |
[
2
] |
[] |
[] |
[
"sql_server"
] |
stackoverflow_0000071166_sql_server.txt
|