Q:
Using yield to iterate over a datareader might not close the connection?
Here is some sample code to retrieve data from a database using the yield keyword, which I found in a few places while googling around:
public IEnumerable<object> ExecuteSelect(string commandText)
{
    using (IDbConnection connection = CreateConnection())
    {
        using (IDbCommand cmd = CreateCommand(commandText, connection))
        {
            connection.Open();
            using (IDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    yield return reader["SomeField"];
                }
            }
            connection.Close();
        }
    }
}
Am I correct in thinking that in this sample code, the connection would not be closed if we did not iterate over the whole datareader?
Here is an example that would not close the connection, if I understand yield correctly:
foreach (object obj in ExecuteSelect(commandText))
{
    break;
}
For a db connection that might not be catastrophic, I suppose the GC would clean it up eventually, but what if instead of a connection it was a more critical resource?
A:
The Iterator that the compiler synthesises implements IDisposable, which foreach calls when the foreach loop is exited.
The Iterator's Dispose() method will clean up the using statements on early exit.
As long as you use the iterator in a foreach loop, using() block, or call the Dispose() method in some other way, the cleanup of the Iterator will happen.
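To see why an early exit still triggers the cleanup, here is roughly the expansion the C# spec prescribes for a foreach loop (a sketch, not the exact compiler output):
IEnumerator<object> e = ExecuteSelect(commandText).GetEnumerator();
try
{
    while (e.MoveNext())
    {
        object obj = e.Current;
        break; // leaves the loop early...
    }
}
finally
{
    e.Dispose(); // ...but this still runs, unwinding the iterator's using blocks
}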
A:
The connection will be closed automatically since you're using it inside a "using" block.
A:
From the simple test I have tried, aku is right: Dispose is called as soon as the foreach block exits.
@David: However, the call stack is kept between calls, so the connection would not be closed; on the next call we would return to the next instruction after the yield, which is the while block.
My understanding is that when the iterator is disposed, the connection would also be disposed with it. I also think the Connection.Close would not be needed, because it would be taken care of when the object is disposed, thanks to the using clause.
Here is a simple program I tried to test the behavior...
class Program
{
    static void Main(string[] args)
    {
        foreach (int v in getValues())
        {
            Console.WriteLine(v);
        }
        Console.ReadKey();

        foreach (int v in getValues())
        {
            Console.WriteLine(v);
            break;
        }
        Console.ReadKey();
    }

    public static IEnumerable<int> getValues()
    {
        using (TestDisposable t = new TestDisposable())
        {
            for (int i = 0; i < 10; i++)
                yield return t.GetValue();
        }
    }
}

public class TestDisposable : IDisposable
{
    private int value;

    public void Dispose()
    {
        Console.WriteLine("Disposed");
    }

    public int GetValue()
    {
        value += 1;
        return value;
    }
}
A:
Judging from this technical explanation, your code will not work as expected, but abort on the second item, because the connection was already closed when returning the first item.
@Joel Gauvreau : Yes, I should have read on. Part 3 of this series explains that the compiler adds special handling for finally blocks to trigger only at the real end.
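In other words, a finally (or using) block wrapped around a yield return is deferred; a small sketch of the behaviour that series describes:
public static IEnumerable<int> Values()
{
    try
    {
        yield return 1; // execution pauses here; the finally below does not run yet
        yield return 2;
    }
    finally
    {
        // runs only when iteration completes or the iterator is disposed early
        Console.WriteLine("cleanup");
    }
}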
Tags: .net_2.0, yield | Answer scores: 12, 2, 2, 0 | Source: stackoverflow_0000047521_.net_2.0_yield.txt
Q:
Passing large files to WCF service
We have an encryption service that we've exposed over net.tcp. Most of the time, the service is used to encrypt/decrypt strings. However, every now and then, we need to encrypt large documents (PDF, JPG, BMP, etc.).
What are the best endpoint settings for a scenario like this? Should I accept/return a stream? I've read a lot about this, but no one gives guidance on what to do when large files are the exception rather than the rule.
A:
MSDN describes how to enable streaming over WCF rather well.
Note, if the link between client and server needs to be encrypted, then you'll need to "roll your own" encryption mechanism. The default net.tcp encryption requires X.509 certificates, which won't work with streams as this kind of encryption needs to work on an entire message in one go rather than a stream of bytes.
This, in turn, means that you won't be able to authenticate the client using the default WCF security mechanisms as authentication requires encryption. The only work-around for this that I know of is to implement your own custom behaviour extensions on client and server to handle authentication.
A really good reference on how to add custom behaviour extensions is here: this documents how to provide custom configuration, too (something that I don't think is discussed anywhere in the MSDN documents at this time).
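For the streaming route, the shape of a streamed contract and binding looks roughly like this; a sketch only, where the contract name and sizes are my own, and security is disabled per the caveat above:
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IEncryptionService
{
    // A streamed operation should carry Stream as the only message body member
    [OperationContract]
    Stream Encrypt(Stream input);
}

public static class Bindings
{
    // A net.tcp binding set up for streaming; sizes are illustrative
    public static NetTcpBinding Streamed()
    {
        return new NetTcpBinding(SecurityMode.None)
        {
            TransferMode = TransferMode.Streamed,
            MaxReceivedMessageSize = 64L * 1024 * 1024 // 64 MB
        };
    }
}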
A:
One pattern you could follow is to have an asynchronous service that works on files on a shared file system location:
Place the file to be encrypted on a shared location
Call the service and tell it to encrypt the file, passing both the location and name of the file, and the address of a callback service on the client
The service would encrypt the file and place the encrypted copy in a shared location (the same as where the unencrypted was placed or different, doesn't matter)
The service would call back to the client, giving the name and location of the encrypted file
The client can retrieve the encrypted file
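A sketch of the service-side contract for that pattern; the interface name, parameters, and the one-way choice are my assumptions, not a known design:
using System.ServiceModel;

[ServiceContract]
public interface IFileEncryptionService
{
    // Fire-and-forget: the client returns immediately; completion is reported
    // through the callback service living at clientCallbackAddress
    [OperationContract(IsOneWay = true)]
    void EncryptFile(string directory, string fileName, string clientCallbackAddress);
}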
Tags: wcf, web_services | Answer scores: 4, 1 | Source: stackoverflow_0000046277_wcf_web_services.txt
Q:
Is there a way to parse a dependency tree from a build script output?
I have an inherited project that uses a build script (not make) to build and link the project with various libraries.
When it performs a build, I would like to parse the build output to determine which static libraries are actually being linked into the final executable and where they are coming from.
The script is compiling and linking with GNU tools.
A:
You might try using the nm tool. Given the right options, it will look at a binary (archive or linked image) and tell you what objects were linked into it.
Actually, here's a one-liner I use at work:
#!/bin/sh
nm -Ag $* | sed 's/^.*\/\(.*\.a\):/\1/' | sort -k 3 | grep -v ' U '
to find the culprits for undefined symbols. Just chop off the last grep expression and it should pretty much give you what you want.
A:
Static libraries make life more difficult in this regard. In the case of dynamic libraries you could just have used ldd on the resulting executable and been done with it. The best bet would be some kind of configuration file. Alternatively, you could try to look for -l arguments to gcc/ld, which are used to specify libraries. You could write a script to extract them from the output, though I suspect you will have to do it manually, because by the time you know what the script should look for, you probably already know the answer.
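If you do go the script route, something along these lines pulls out the library flags; this assumes the build output is captured to a log and a grep with -o support (e.g. GNU grep):
# Capture the build output, then list every -l/-L argument handed to gcc/ld
./build.sh > build.log 2>&1
grep -oE -- '-l[[:alnum:]_.+-]+|-L[^ ]+' build.log | sort -u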
A:
It is probably possible to do something useful using e.g. Perl, but you would have to provide more details. On the other hand, it could be easier to simply analyze the script...
Tags: c++, gcc, gnu, linker | Answer scores: 1, 0, 0 | Source: stackoverflow_0000047780_c++_gcc_gnu_linker.txt
Q:
Tips for database design in a web application
Does someone have any tips/advice on database design for a web application? The kind of stuff that can save me a lot of time/effort in the future when/if the application I'm working on takes off and starts having a lot of usage.
To be a bit more specific, the application is a strategy game (browser based, just text) that will mostly involve players issuing "orders" that will be stored in the database and processed later, with the results also being stored there (the history of "orders" and the corresponding results will probably get quite big).
Edited to add more details (as requested):
platform: Django
database engine: I was thinking of using MySQL (unless there's a big advantage in using another)
the schema: all I have now are some Django models, and that's far too much detail to post here. And if I start posting schemas this becomes too specific, and I was looking for general tips. For example, consider that I issue "orders" that will be later processed and return a result that I have to store to display some kind of "history". In this case is it better to have a separate table for the "history" or just one that aggregates both the "orders" and the result? I guess I could cache the "history" table, but this would take more space in the database and also more database operations because I would have to constantly create new rows instead of just altering them in the aggregate table.
A:
You have probably touched on a much larger issue of designing for high scalability and performance in general.
Essentially, for your database design I would follow good practices such as adding foreign keys and indexes to data you expect to be used frequently, normalising your data by splitting it into smaller tables, and identifying which data is read frequently and which is written frequently so you can optimise accordingly.
Much more important than your database design for high-performance web applications is your effective use of caching, both at the client level through HTML page caching and at the server level through cached data or serving up static files in place of dynamic files.
The great thing about caching is that it can be added as it is needed, so that when your application does take off then you evolve accordingly.
As far as your historical data is concerned, this is a great thing to cache as you do not expect it to change frequently. If you wish to produce regular and fairly intensive reports from your data, then it is good practise to put this data into another database so as not to bring your web application to a halt whilst they run.
Of course this kind of optimisation really isn't necessary unless you think your application will warrant it.
A:
Database normalization, and giving good thought to indexes, are two things you just can't miss. Especially if you consider a game, where SELECTs happen much more frequently than UPDATEs.
For the long run, you should also take a look at memcached, as database queries can be the bottleneck whenever you have more than a few users.
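Since the platform is Django, wiring that in is only a few lines through its cache API; a sketch, where the Order model, key scheme, and timeout are made up for illustration:
# Sketch of caching the "history" lookups with Django's cache framework
# (assumes a memcached backend is configured in settings)
from django.core.cache import cache

def get_history(player_id):
    key = 'history:%d' % player_id
    history = cache.get(key)
    if history is None:
        history = list(Order.objects.filter(player=player_id))  # falls through to MySQL
        cache.set(key, history, 300)  # keep it for five minutes
    return history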
A:
Why don't you post the schema you have now? It's too broad a question to answer usefully without some detail of what platform and database you're going to use and the table structure you're proposing...
A:
You should denormalize your tables if you find yourself joining 6+ tables in one query to retrieve data for a reporting-type web page that will be hit often. Also, if you use ORM libraries like Hibernate or ActiveRecord, make sure to spend some time on the default mappings they generate and the SQL that ends up being generated. They tend to be very chatty with the database when you could have achieved the same results with one round trip.
Tags: database_design, web_applications | Answer scores: 10, 3, 1, 1 | Source: stackoverflow_0000047486_database_design_web_applications.txt
Q:
How Does gcc on Solaris Find Its Libraries?
I'm trying to install 'quadrupel', a library that relies on ffmpeg on Solaris x86.
I managed to build ffmpeg; its libraries live in /opt/gnu/lib and the includes are in /opt/gnu/include, but when I try to build quadrupel, it can't find the ffmpeg headers.
What flags/configuration is required to include those two directories in the proper search paths for libraries and includes? I'm not much of a Makefile hacker.
A:
I believe you need to add the following to the Makefile:
CFLAGS += -I/opt/gnu/include
LDFLAGS += -L/opt/gnu/lib -R/opt/gnu/lib
The -I argument tells gcc where to find the include files. The -L flag tells ld where to find the libraries while linking. The -R flag writes /opt/gnu/lib into the library search path in the quadrupel binary, so it can find its libraries when it starts.
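For a quick test without editing the Makefile, the equivalent one-off command line would look something like this; the source and library names are assumptions:
# -Wl,-R hands the runpath through gcc to the linker; some gcc builds accept plain -R
gcc -I/opt/gnu/include -L/opt/gnu/lib -Wl,-R/opt/gnu/lib quadrupel.c -lavcodec -o quadrupel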
A:
You can override the path by setting the environment variable LD_LIBRARY_PATH. However, I would suggest changing the system paths as well so you don't have to change the library path for all users. This can be done using crle.
crle -c /var/ld/ld.config -l /usr/lib:/usr/local/lib:/opt/gnu/lib
For the includes just add -I/opt/gnu/include to your CFLAGS variable.
Tags: ffmpeg, gcc, makefile, solaris | Answer scores: 5, 1 | Source: stackoverflow_0000047883_ffmpeg_gcc_makefile_solaris.txt
Q:
Upgrade database from SQL Server 2000 to 2005 -- and rebuild full-text indexes?
I'm loading a SQL Server 2000 database into my new SQL Server 2005 instance. As expected, the full-text catalogs don't come with it. How can I rebuild them?
Right-clicking my full text catalogs and hitting "rebuild indexes" just hangs for hours and hours without doing anything, so it doesn't appear to be that simple...
A:
Try it using SQL.
CREATE FULLTEXT CATALOG
ALTER FULLTEXT CATALOG
Here's an example from Microsoft.
--Change to accent insensitive
USE AdventureWorks;
GO
ALTER FULLTEXT CATALOG ftCatalog
REBUILD WITH ACCENT_SENSITIVITY=OFF;
GO
-- Check Accentsensitivity
SELECT FULLTEXTCATALOGPROPERTY('ftCatalog', 'accentsensitivity');
GO
--Returned 0, which means the catalog is not accent sensitive.
A:
Thanks, that helped because it showed what was wrong: My file paths were different. Here's how I fixed it:
1) Load database from SQL 2000 backup
2) Set compatibility mode to SQL 2005
USE mydb
GO
EXEC sp_dbcmptlevel 'mydb', 90
GO
3) Get the filegroup names
SELECT name
FROM sys.master_files mf
WHERE type = 4
AND EXISTS( SELECT *
FROM sys.databases db
WHERE db.database_id = mf.database_id
AND name = 'mydb')
4) Then for each name (I did this in a little script)
ALTER DATABASE mydb
MODIFY FILE( NAME = {full text catalog name}, FILENAME="N:\ew\path\to\wherever")
5) Then collect all the "readable" names of the catalogs:
SELECT name FROM sys.sysfulltextcatalogs
6) Finally, now you can rebuild each one:
ALTER FULLTEXT CATALOG {full text catalog name} REBUILD
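If there are many catalogs, step 6 can be scripted as well; a small sketch using the same view as step 5 to generate the statements (run the generated output afterwards):
-- Generate a REBUILD statement for every catalog in the database;
-- QUOTENAME guards against odd catalog names
SELECT 'ALTER FULLTEXT CATALOG ' + QUOTENAME(name) + ' REBUILD'
FROM sys.sysfulltextcatalogs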
Tags: full_text_search, recovery, sql_server | Answer scores: 1, 0 | Source: stackoverflow_0000047862_full_text_search_recovery_sql_server.txt
Q:
Good Stripes tutorials / examples?
The company I just started working for is using Stripes for parts of its web page development these days, and while it seems to be a nice enough web framework, hardly anyone seems to use it; it is almost non-existent on the 'net. It's not even first in its own Google search, and the result you do get is for its old home page.
So, do any of you people use Stripes? Of your own volition? Do you know of any good tutorials / examples?
A:
I recommend checking out the book referenced by jko:
a book from The Pragmatic Bookshelf called Stripes: ...and Java web development is fun again
Whilst still in 'beta' the book covers everything very well.
Another good place to start is this ONJava article.
I have used Stripes on a few projects now and have liked it a lot.
It may sound crazy but the Stripes quickstart and sample application documentation on the website does a pretty good job of covering the bases.
This is helped by the fact there is little to Stripes, probably because it is relatively new and not trying to be all things to all people. I would say give the quick-start a try and if by the end of it you are unsatisfied look elsewhere. At the end of the day you and your company have to be happy (and productive) with what you are using irrespective of how many people are using it.
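For a taste of what the quick-start walks you through, a minimal ActionBean looks roughly like this; the class name and JSP path are made up for illustration:
// A minimal Stripes ActionBean along the lines of the quickstart
package example;

import net.sourceforge.stripes.action.ActionBean;
import net.sourceforge.stripes.action.ActionBeanContext;
import net.sourceforge.stripes.action.DefaultHandler;
import net.sourceforge.stripes.action.ForwardResolution;
import net.sourceforge.stripes.action.Resolution;

public class HelloActionBean implements ActionBean {
    private ActionBeanContext context;

    public ActionBeanContext getContext() { return context; }
    public void setContext(ActionBeanContext context) { this.context = context; }

    @DefaultHandler
    public Resolution view() {
        // Forward to a JSP that renders the response
        return new ForwardResolution("/WEB-INF/jsp/hello.jsp");
    }
}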
A:
I've never used (or even heard of) Stripes.
Regardless, there's a book from The Pragmatic Bookshelf called Stripes: ...and Java web development is fun again that may be worth checking out. You could also check out the Stripes mailing list archive.
A:
It's a shame that some people perceive Stripes as a framework for which "there really just isn't much support or information for it." In reality, the Stripes community is very supportive - have a look at the mailing list and you'll see how friendly and responsive people are. In fact, some have said on the #stripes IRC channel that they have had better response for Hibernate-related questions than on #hibernate itself!
Give Stripes a good, serious look instead of dismissing it because of misconceptions.
A:
Stripes is a great framework. We converted a major project from a home grown framework to stripes and it took less than one week.
The book referenced above is a great resources, as is the mailing list.
There's also an active irc channel #stripes on freenode.
It's a very powerful framework that doesn't get in your way.
A:
We considered it when we were looking at open source frameworks. But we saw the same thing you did: there really just isn't much support or information for it. You should always weigh the community support factor surrounding open source projects before picking one (which is what you are doing here).
Tags: java, stripes | Answer scores: 9, 4, 3, 2, 0 | Source: stackoverflow_0000015695_java_stripes.txt
Q:
What's the best approach to naming classes?
Coming up with good, precise names for classes is notoriously difficult. Done right, it makes code more self-documenting and provides a vocabulary for reasoning about code at a higher level of abstraction.
Classes which implement a particular design pattern might be given a name based on the well known pattern name (e.g. FooFactory, FooFacade), and classes which directly model domain concepts can take their names from the problem domain, but what about other classes? Is there anything like a programmer's thesaurus that I can turn to when I'm lacking inspiration, and want to avoid using generic class names (like FooHandler, FooProcessor, FooUtils, and FooManager)?
A:
I'll cite some passages from Implementation Patterns by Kent Beck:
Simple Superclass Name
"[...] The names should be short and punchy.
However, to make the names precise
sometimes seems to require several
words. A way out of this dilemma is
picking a strong metaphor for the
computation. With a metaphor in mind,
even single words bring with them a
rich web of associations, connections,
and implications. For example, in the
HotDraw drawing framework, my first
name for an object in a drawing was
DrawingObject. Ward Cunningham came
along with the typography metaphor: a
drawing is like a printed, laid-out
page. Graphical items on a page are
figures, so the class became Figure.
In the context of the metaphor, Figure
is simultaneously shorter, richer, and
more precise than DrawingObject."
Qualified Subclass Name
"The names of subclasses have two jobs.
They need to communicate what class
they are like and how they are
different. [...] Unlike the names at
the roots of hierarchies, subclass
names aren’t used nearly as often in
conversation, so they can be
expressive at the cost of being
concise. [...]
Give subclasses that serve as the
roots of hierarchies their own simple
names. For example, HotDraw has a
class Handle which presents figure-
editing operations when a figure is
selected. It is called, simply, Handle
in spite of extending Figure. There is
a whole family of handles and they
most appropriately have names like
StretchyHandle and TransparencyHandle.
Because Handle is the root of its own
hierarchy, it deserves a simple
superclass name more than a qualified
subclass name.
Another wrinkle in
subclass naming is multiple-level
hierarchies. [...] Rather than blindly
prepend the modifiers to the immediate
superclass, think about the name from
the reader’s perspective. What class
does he need to know this class is
like? Use that superclass as the basis
for the subclass name."
Interface
Two styles of naming interfaces depend on how you are thinking of the interfaces. Interfaces as classes without implementations should be named as if they were classes (Simple Superclass Name, Qualified Subclass Name). One problem with this style of naming is that the good names are used up before you get to naming classes. An interface called File needs an implementation class called something like ActualFile, ConcreteFile, or (yuck!) FileImpl (both a suffix and an abbreviation). In general, communicating whether one is dealing with a concrete or abstract object is important; whether the abstract object is implemented as an interface or a superclass is less important. Deferring the distinction between interfaces and superclasses is well supported by this style of naming, leaving you free to change your mind later if that becomes necessary.
Sometimes, naming concrete classes simply is more important to communication than hiding the use of interfaces. In this case, prefix interface names with “I”. If the interface is called IFile, the class can be simply called File.
For more detailed discussion, buy the book! It's worth it! :)
A:
Always go for MyClassA, MyClassB - It allows for a nice alpha sort..
I'm kidding!
This is a good question, and something I experienced not too long ago. I was reorganising my codebase at work and was having problems of where to put what, and what to call it..
The real problem?
I had classes doing too much. If you try to adhere to the single responsibility principle, it will make everything come together much more nicely. Rather than one monolithic PrintHandler class, you could break it down into PageHandler, PageFormatter (and so on) and then have a master Printer class which brings it all together.
In my re-org, it took me time, but I ended up binning a lot of duplicate code, got my codebase much more logical and learned a hell of a lot when it comes to thinking before throwing an extra method in a class :D
I would not however recommend putting things like pattern names into the class name. The class's interface should make that obvious (like hiding the constructor for a singleton). There is nothing wrong with a generic name, if the class is serving a generic purpose.
Good luck!
A:
Josh Bloch's excellent talk about good API design has a few good bits of advice:
Classes should do one thing and do it well.
If a class is hard to name or explain then it's probably not following the advice in the previous bullet point.
A class name should instantly communicate what the class is.
Good names drive good designs.
If your problem is what to name exposed internal classes, maybe you should consolidate them into a larger class.
If your problem is naming a class that is doing a lot of different stuff, you should consider breaking it into multiple classes.
If that's good advice for a public API then it can't hurt for any other class.
A:
If you're stuck with a name, sometimes just giving it any half-sensible name with commitment to revising it later is a good strategy.
Don't get naming paralysis. Yes, names are very important but they're not important enough to waste huge amounts of time on. If you can't think up a good name in 10 minutes, move on.
A:
If a good name doesn't spring to mind, I would probably question whether there is a deeper problem - is the class serving a good purpose? If it is, naming it should be pretty straightforward.
A:
If your "FooProcessor" really does process foos, then don't be reluctant to give it that name just because you already have a BarProcessor, BazProcessor, etc. When in doubt, obvious is best. The other developers who have to read your code may not be using the same thesaurus you are.
That said, more specificity wouldn't hurt for this particular example. "Process" is a pretty broad word. Is it really a "FooUpdateProcessor" (which might become "FooUpdater"), for example? You don't have to get too "creative" about the naming, but if you wrote the code you probably have a fairly good idea of what it does and doesn't do.
Finally, remember that the bare class name isn't all that you and the readers of your code have to go on - there are usually namespaces in play as well. Those can often give readers enough context to see clearly what your class is really for, even if its bare name is fairly generic.
Tags: naming | Answer scores: 66, 40, 29, 15, 10, 3 | Source: stackoverflow_0000038019_naming.txt
Q:
Is there any way to repopulate an Html Select's Options without firing the Change event (using jQuery)?
I have multiple selects:
<select id="one">
<option value="1">one</option>
<option value="2">two</option>
<option value="3">three</option>
</select>
<select id="two">
<option value="1">one</option>
<option value="2">two</option>
<option value="3">three</option>
</select>
What I want is to select "one" from the first select, then have that option be removed from the second one.
Then if you select "two" from the second one, I want that one removed from the first one.
Here's the JS I have currently:
$(function () {
    var $one = $("#one");
    var $two = $("#two");

    var selectOptions = [];
    $("select").each(function (index) {
        selectOptions[index] = [];
        for (var i = 0; i < this.options.length; i++) {
            selectOptions[index][i] = this.options[i];
        }
    });

    $one.change(function () {
        var selectedValue = $("option:selected", this).val();
        for (var i = 0; i < selectOptions[1].length; i++) {
            var exists = false;
            for (var x = 0; x < $two[0].options.length; x++) {
                if ($two[0].options[x].value == selectOptions[1][i].value)
                    exists = true;
            }
            if (!exists)
                $two.append(selectOptions[1][i]);
        }
        $("option[value='" + selectedValue + "']", $two).remove();
    });
    $two.change(function () {
        var selectedValue = $("option:selected", this).val();
        for (var i = 0; i < selectOptions[0].length; i++) {
            var exists = false;
            for (var x = 0; x < $one[0].options.length; x++) {
                if ($one[0].options[x].value == selectOptions[0][i].value)
                    exists = true;
            }
            if (!exists)
                $one.append(selectOptions[0][i]);
        }
        $("option[value='" + selectedValue + "']", $one).remove();
    });
});
But when the elements get repopulated, it fires the change event in the select whose options are changing. I tried just setting the disabled attribute on the option I want to remove, but that doesn't work with IE6.
A:
I am not (currently) a user of jQuery, but I can tell you that you need to temporarily disconnect your event handler while you repopulate the items or, at the least, set a flag that you then test for and based on its value, handle the change.
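For the first variant (disconnecting the handler), a minimal sketch might look like this; the handler and refill bodies are placeholders, not code from the question:
// Detach the change handler before repopulating, reattach afterwards
function onTwoChanged() { /* filter #one's options here */ }
$two.change(onTwoChanged);

function refillTwoQuietly() {
    $two.unbind('change', onTwoChanged); // rebuilding fires no change handling now
    // ... append/remove options on $two here ...
    $two.change(onTwoChanged);           // listen again once the rebuild is done
}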
A:
Here's the final code that I ended up using, the flag (changeOnce) worked great, thanks @Jason.
$(function () {
    var $one = $("#one");
    var $two = $("#two");

    var selectOptions = [];
    $("select").each(function (index) {
        selectOptions[index] = [];
        for (var i = 0; i < this.options.length; i++) {
            selectOptions[index][i] = this.options[i];
        }
    });

    var changeOnce = false;
    $one.change(function () {
        if (changeOnce) return;
        changeOnce = true;
        var selectedValue = $("option:selected", this).val();
        filterSelect(selectedValue, $two, 1);
        changeOnce = false;
    });
    $two.change(function () {
        if (changeOnce) return;
        changeOnce = true;
        var selectedValue = $("option:selected", this).val();
        filterSelect(selectedValue, $one, 0);
        changeOnce = false;
    });

    function filterSelect(selectedValue, $selectToFilter, selectIndex) {
        for (var i = 0; i < selectOptions[selectIndex].length; i++) {
            var exists = false;
            for (var x = 0; x < $selectToFilter[0].options.length; x++) {
                if ($selectToFilter[0].options[x].value == selectOptions[selectIndex][i].value)
                    exists = true;
            }
            if (!exists)
                $selectToFilter.append(selectOptions[selectIndex][i]);
        }
        $("option[value='" + selectedValue + "']", $selectToFilter).remove();
        sortSelect($selectToFilter[0]);
    }

    function sortSelect(selectToSort) {
        var arrOptions = [];

        for (var i = 0; i < selectToSort.options.length; i++) {
            arrOptions[i] = [];
            arrOptions[i][0] = selectToSort.options[i].value;
            arrOptions[i][1] = selectToSort.options[i].text;
            arrOptions[i][2] = selectToSort.options[i].selected;
        }

        arrOptions.sort();

        for (var i = 0; i < selectToSort.options.length; i++) {
            selectToSort.options[i].value = arrOptions[i][0];
            selectToSort.options[i].text = arrOptions[i][1];
            selectToSort.options[i].selected = arrOptions[i][2];
        }
    }
});
A:
Or you can just hide the option you don't want to show...
function hideSelected($one, $two)
{
    $one.bind('change', function()
    {
        var val = $one.val();
        $two.find('option:not(:visible)').show().end()
            .find('option[value='+val+']').hide().end();
    })
}
hideSelected($one, $two);
hideSelected($two, $one);
EDIT: Oh sorry, this code does not work with IE6...
Tags: dom, html_select, javascript, jquery | Answer scores: 3, 3, 0 | Source: stackoverflow_0000044903_dom_html_select_javascript_jquery.txt
Q:
Organization of C files
I'm used to doing all my coding in one C file. However, I'm working on a project large enough that it becomes impractical to do so. I've been #including them together but I've run into cases where I'm #including some files multiple times, etc. I've heard of .h files, but I'm not sure what their function is (or why having 2 files is better than 1).
What strategies should I use for organizing my code? Is it possible to separate "public" functions from "private" ones for a particular file?
This question precipitated my inquiry. The tea.h file makes no reference to the tea.c file. Does the compiler "know" that every .h file has a corresponding .c file?
A:
You should regard .h files as interface files of your .c file. Every .c file represents a module with a certain amount of functionality. If functions in a .c file are used by other modules (i.e. other .c files) put the function prototype in the .h interface file. By including the interface file in your original modules .c file and every other .c file you need the function in, you make this function available to other modules.
If you only need a function in a certain .c file (not in any other module), declare its scope static. This means it can only be called from within the c file it is defined in.
The same goes for variables that are used across multiple modules. Their declarations should go in the header file, marked with the keyword 'extern', while the definition stays in exactly one .c file. Note: for functions the keyword 'extern' is optional; functions are always considered 'extern'.
The inclusion guards in header files help to not include the same header file multiple times.
For example:
Module1.c:
#include "Module1.h"
static void MyLocalFunction(void);
static unsigned int MyLocalVariable;
unsigned int MyExternVariable;
void MyExternFunction(void)
{
MyLocalVariable = 1u;
/* Do something */
MyLocalFunction();
}
static void MyLocalFunction(void)
{
/* Do something */
MyExternVariable = 2u;
}
Module1.h:
#ifndef MODULE1_H
#define MODULE1_H
extern unsigned int MyExternVariable;
void MyExternFunction(void);
#endif
Module2.c
#include "Module.1.h"
static void MyLocalFunction(void);
static void MyLocalFunction(void)
{
MyExternVariable = 1u;
MyExternFunction();
}
A:
Try to make each .c focus on a particular area of functionality. Use the corresponding .h file to declare those functions.
Each .h file should have a 'header' guard around its content. For example:
#ifndef ACCOUNTS_H
#define ACCOUNTS_H
....
#endif
That way you can include "accounts.h" as many times as you want, and the first time it's seen in a particular compilation unit will be the only one that actually pulls in its content.
A:
Compiler
You can see an example of a C 'module' at this topic - Note that there are two files - the header tea.h, and the code tea.c. You declare all the public defines, variables, and function prototypes that you want other programs to access in the header. In your main project you'll #include tea.h, and that code can now access the functions and variables of the tea module that are mentioned in the header.
It gets a little more complex after that. If you're using Visual Studio and many other IDEs that manage your build for you, then ignore this part - they take care of compiling and linking objects.
Linker
When you compile two separate C files the compiler produces individual object files - so main.c becomes main.o, and tea.c becomes tea.o. The linker's job is to look at all the object files (your main.o and tea.o), and match up the references - so when you call a tea function in main, the linker modifies that call so it actually does call the right function in tea. The linker produces the executable file.
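As a rough sketch of the two stages (assuming gcc and the tea example above):

gcc -c main.c -o main.o   # compile: each .c file becomes an object file
gcc -c tea.c -o tea.o
gcc main.o tea.o -o myapp # link: cross-file references are resolved here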
There is a great tutorial that goes into more depth on this subject, including scope and other issues you'll run into.
Good luck!
-Adam
A:
A couple of simple rules to start:
Put those declarations that you want to make "public" into the header file for the C implementation file you are creating.
Only #include the header files that are needed to implement the C file.
Include header files in a header file only if required for the declarations within that header file.
Use the include guard method described by Andrew, or use #pragma once if the compiler supports it (which does the same thing -- sometimes more efficiently)
A:
To answer your additional question:
This
question precipitated my inquiry. The
tea.h file makes no reference to the
tea.c file. Does the compiler "know"
that every .h file has a corresponding
.c file?
The compiler is not primarily concerned with header files. Each invocation of the compiler compiles a source (.c) file into an object (.o) file. Behind the scenes (i.e. in the make file or project file) a command line equivalent to this is being generated:
compiler --options tea.c
The source file #includes all the header files for the resources it references, which is how the compiler finds header files.
(I'm glossing over some details here. There is a lot to learn about building C projects.)
A:
As well as the answers supplied above, one small advantage of splitting up your code into modules (separate files) is that if you have to have any global variables, you can limit their scope to a single module by the use of the keyword 'static'. (You could also apply this to functions.) Note that this use of 'static' is different from its use inside a function.
A:
Your question makes it clear that you haven't really done much serious development. The usual case is that your code will be far too large to fit into one file. A good rule is that you should split the functionality into logical units (.c files), and each file should contain no more than what you can easily hold in your head at one time.
A given software product then generally includes the output from many different .c files. How this is normally done is that the compiler produces a number of object files (in unix systems ".o" files, VC generates .obj files). It is the purpose of the "linker" to compose these object files into the output (either a shared library or executable).
Generally your implementation (.c) files contain actual executable code, while the header files (.h) have the declarations of the public functions in those implementation files. You can quite easily have more header files than there are implementation files, and sometimes header files can contain inline code as well.
It is generally quite unusual for implementation files to include each other. A good practice is to ensure that each implementation file separates its concerns from the other files.
I would recommend you download and look at the source for the linux kernel. It is quite massive for a C program, but well organised into separate areas of functionality.
A:
The .h files should be used to declare the prototypes for your functions. This is necessary so you can include the prototypes that you need in your C file without declaring every function that you need all in one file.
For instance, when you #include <stdio.h>, this provides the prototypes for printf and other IO functions. The implementations behind those prototypes are linked in from the C standard library by default. You can look at the system's .h files under /usr/include if you're interested in the normal idioms involved with these files.
If you're only writing trivial applications with not many functions, it's not really necessary to modularize everything out into logical groupings of procedures. However, if you have the need to develop a large system, then you'll need to pay some consideration as to where to define each of your functions.
|
Organization of C files
|
I'm used to doing all my coding in one C file. However, I'm working on a project large enough that it becomes impractical to do so. I've been #including them together but I've run into cases where I'm #including some files multiple times, etc. I've heard of .h files, but I'm not sure what their function is (or why having 2 files is better than 1).
What strategies should I use for organizing my code? Is it possible to separate "public" functions from "private" ones for a particular file?
This question precipitated my inquiry. The tea.h file makes no reference to the tea.c file. Does the compiler "know" that every .h file has a corresponding .c file?
|
[
"You should regard .h files as interface files of your .c file. Every .c file represents a module with a certain amount of functionality. If functions in a .c file are used by other modules (i.e. other .c files) put the function prototype in the .h interface file. By including the interface file in your original modules .c file and every other .c file you need the function in, you make this function available to other modules. \nIf you only need a function in a certain .c file (not in any other module), declare its scope static. This means it can only be called from within the c file it is defined in. \nSame goes for variables that are used across multiple modules. They should go in the header file and there they have to marked with the keyword 'extern'. Note: For functions the keyword 'extern' is optional. Functions are always considered 'extern'.\nThe inclusion guards in header files help to not include the same header file multiple times.\nFor example:\nModule1.c:\n\n #include \"Module1.h\"\n\n static void MyLocalFunction(void);\n static unsigned int MyLocalVariable; \n unsigned int MyExternVariable;\n\n void MyExternFunction(void)\n {\n MyLocalVariable = 1u; \n\n /* Do something */\n\n MyLocalFunction();\n }\n\n static void MyLocalFunction(void)\n {\n /* Do something */\n\n MyExternVariable = 2u;\n }\n\nModule1.h:\n\n #ifndef __MODULE1.H\n #define __MODULE1.H\n\n extern unsigned int MyExternVariable;\n\n void MyExternFunction(void); \n\n #endif\n\nModule2.c\n\n #include \"Module.1.h\"\n\n static void MyLocalFunction(void);\n\n static void MyLocalFunction(void)\n {\n MyExternVariable = 1u;\n MyExternFunction();\n }\n\n",
"Try to make each .c focus on a particular area of functionality. Use the corresponding .h file to declare those functions.\nEach .h file should have a 'header' guard around it's content. For example:\n#ifndef ACCOUNTS_H\n#define ACCOUNTS_H\n....\n#endif\n\nThat way you can include \"accounts.h\" as many times as you want, and the first time it's seen in a particular compilation unit will be the only one that actually pulls in its content.\n",
"Compiler\nYou can see an example of a C 'module' at this topic - Note that there are two files - the header tea.h, and the code tea.c. You declare all the public defines, variables, and function prototypes that you want other programs to access in the header. In your main project you'll #include and that code can now access the functions and variables of the tea module that are mentioned in the header.\nIt gets a little more complex after that. If you're using Visual Studio and many other IDEs that manage your build for you, then ignore this part - they take care of compiling and linking objects.\nLinker\nWhen you compile two separate C files the compiler produces individual object files - so main.c becomes main.o, and tea.c becomes tea.o. The linker's job is to look at all the object files (your main.o and tea.o), and match up the references - so when you call a tea function in main, the linker modifies that call so it actually does call the right function in tea. The linker produces the executable file.\nThere is a great tutorial that goes into more depth on this subject, including scope and other issue you'll run into.\nGood luck!\n-Adam\n",
"A couple of simple rules to start:\n\nPut those declarations that you want to make \"public\" into the header file for the C implementation file you are creating.\nOnly #include header files in the C file that are needed to implement the C file.\ninclude header files in a header file only if required for the declarations within that header file.\n\nUse the include guard method described by Andrew OR use #pragma once if the compiler supports it (which does the same thing -- sometimes more efficiently)\n\n",
"To answer your additional question:\n\nThis\n question precipitated my inquiry. The\n tea.h file makes no reference to the\n tea.c file. Does the compiler \"know\"\n that every .h file has a corresponding\n .c file?\n\nThe compiler is not primarily concerned with header files. Each invocation of the compiler compiles a source (.c) file into an object (.o) file. Behind the scenes (i.e. in the make file or project file) a command line equivalent to this is being generated:\ncompiler --options tea.c\n\nThe source file #includes all the header files for the resources it references, which is how the compiler finds header files.\n(I'm glossing over some details here. There is a lot to learn about building C projects.)\n",
"As well as the answers supplied above, one small advantage of splinting up your code into modules (separate files) is that if you have to have any global variables, you can limit their scope to a single module by the use of the key word 'static'. (You could also apply this to functions). Note that this use of 'static' is different from its use inside a function.\n",
"Your question makes it clear that you haven't really done much serious development. The usual case is that your code will generally be far too large to fit into one file. A good rule is that you should split the functionality into logical units (.c files) and each file should contain no more than what you can easily hold in your head at one time.\nA given software product then generally includes the output from many different .c files. How this is normally done is that the compiler produces a number of object files (in unix systems \".o\" files, VC generates .obj files). It is the purpose of the \"linker\" to compose these object files into the output (either a shared library or executable).\nGenerally your implementation (.c) files contain actual executable code, while the header files (.h) have the declarations of the public functions in those implementation files. You can quite easily have more header files than there are implementation files, and sometimes header files can contain inline code as well.\nIt is generally quite unusual for implementation files to include each other. A good practice is to ensure that each implementation file separates its concerns from the other files.\nI would recommend you download and look at the source for the linux kernel. It is quite massive for a C program, but well organised into separate areas of functionality.\n",
"The .h files should be used to define the prototypes for your functions. This is necessary so you can include the prototypes that you need in your C-file without declaring every function that you need all in one file. \nFor instance, when you #include <stdio.h>, this provides the prototypes for printf and other IO functions. The symbols for these functions are normally loaded by the compiler by default. You can look at the system's .h files under /usr/include if you're interested in the normal idioms involved with these files. \nIf you're only writing trivial applications with not many functions, it's not really necessary to modularize everything out into logical groupings of procedures. However, if you have the need to develop a large system, then you'll need to pay some consideration as to where to define each of your functions. \n"
] |
[
37,
10,
8,
7,
3,
3,
2,
0
] |
[] |
[] |
[
"c",
"file_organization",
"header"
] |
stackoverflow_0000047919_c_file_organization_header.txt
|
Q:
Is there a difference between :: and . when calling class methods in Ruby?
Simple question, but one that I've been curious about...is there a functional difference between the following two commands?
String::class
String.class
They both do what I expect -- that is to say they return Class -- but what is the difference between using the :: and the .?
I notice that on those classes that have constants defined, IRB's auto-completion will return the constants as available options when you press tab after :: but not after ., but I don't know what the reason for this is...
A:
The . operator basically says "send this message to the object". In your example it is calling that particular member. The :: operator "drills down" to the scope defined to the left of the operator, and then calls the member defined on the right side of the operator.
When you use :: you have to be referencing members that are defined. When using . you are simply sending a message to the object. Because that message could be anything, auto-completion does not work for . while it does for ::.
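A quick irb illustration of the two operators (Math is just a convenient built-in module):

Math.sqrt(4)   # => 2.0  (message send)
Math::sqrt(4)  # => 2.0  (:: reaches methods too)
Math::PI       # => 3.141592653589793
Math.PI        # NoMethodError -- constants are only reachable with ::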
A:
Actually, auto-completion does work for .. The completion options are found by calling #methods on the object. You can see this for yourself by overriding Object.methods:
>> def Object.methods; ["foo", "bar"]; end
=> nil
>> Object.[TAB]
Object.foo Object.bar
>> Object.
Note that this only works when the expression to the left of the . is a literal. Otherwise, getting the object to call #methods on would involve evaluating the left-hand side, which could have side-effects. You can see this for yourself as well:
[continuing from above...]
>> def Object.baz; Object; end
=> nil
>> Object.baz.[TAB]
Display all 1022 possibilities? (y or n)
We add a method #baz to Object which returns Object itself. Then we auto-complete to get the methods we can call on Object.baz. If IRB called Object.baz.methods, it would get the same thing as Object.methods. Instead, IRB has 1022 suggestions. I'm not sure where they come from, but it's clearly a generic list which isn't actually based on context.
The :: operator is (also) used for getting a module's constants, while . is not. That's why HTTP will show up in the completion for Net::, but not for Net.. Net.HTTP isn't correct, but Net::HTTP is.
|
Is there a difference between :: and . when calling class methods in Ruby?
|
Simple question, but one that I've been curious about...is there a functional difference between the following two commands?
String::class
String.class
They both do what I expect -- that is to say they return Class -- but what is the difference between using the :: and the .?
I notice that on those classes that have constants defined, IRB's auto-completion will return the constants as available options when you press tab after :: but not after ., but I don't know what the reason for this is...
|
[
"The . operator basically says \"send this message to the object\". In your example it is calling that particular member. The :: operator \"drills down\" to the scope defined to the left of the operator, and then calls the member defined on the right side of operator.\nWhen you use :: you have to be referencing members that are defined. When using . you are simply sending a message to the object. Because that message could be anything, auto-completion does not work for . while it does for ::.\n",
"Actually, auto-completion does work for .. The completion options are found by calling #methods on the object. You can see this for yourself by overriding Object.methods:\n>> def Object.methods; [\"foo\", \"bar\"]; end\n=> nil\n>> Object.[TAB]\nObject.foo Object.bar\n>> Object.\n\nNote that this only works when the expression to the left of the . is a literal. Otherwise, getting the object to call #methods on would involve evaluating the left-hand side, which could have side-effects. You can see this for yourself as well:\n[continuing from above...]\n>> def Object.baz; Object; end\n=> nil\n>> Object.baz.[TAB]\nDisplay all 1022 possibilities? (y or n)\n\nWe add a method #baz to Object which returns Object itself. Then we auto-complete to get the methods we can call on Object.baz. If IRB called Object.baz.methods, it would get the same thing as Object.methods. Instead, IRB has 1022 suggestions. I'm not sure where they come from, but it's clearly a generic list which isn't actually based on context.\nThe :: operator is (also) used for getting a module's constants, while . is not. That's why HTTP will show up in the completion for Net::, but not for Net.. Net.HTTP isn't correct, but Net::HTTP is.\n"
] |
[
36,
12
] |
[] |
[] |
[
"ruby",
"syntax"
] |
stackoverflow_0000043134_ruby_syntax.txt
|
Q:
Programmatically determine how many comments a blog post has
What is the most efficient way to determine how many comments a particular blog post has? We want to store the data for a new web app. We have a list of permalink URLs as well as the RSS feeds.
A:
If the blog is controlled by you, a "SELECT COUNT(commentid) FROM comments WHERE postID = 2" will possibly be the best thing. If you only have the URL but it's still your blog/db, you need to create a subquery "WHERE postID = (SELECT whatever FROM posts WHERE permalink = url)" or whatever joins the comments to the posts from a URL in your schema.
If it's a remote blog, you have the problem that each blog has different HTML. Essentially, you're going to need to build a parser that parses the HTML and looks for repeating elements like "div class=comment". But that will be mostly manual labour for each different blog.
Some blogs may have better ways, like a comment count somewhere in the HTML or some interface, but I'm not aware of any standardized way.
EDIT: If you got a Comment-RSS feed, you may have luck using a mechanism that counts XML nodes, like XPath's Count.
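For instance, given a downloaded comment feed, something like this (assuming a standard RSS 2.0 layout and a libxml2 xmllint built with --xpath support) returns the count directly:

xmllint --xpath 'count(/rss/channel/item)' comments.xml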
A:
If I understand correctly, you want a heuristic to estimate the number of comments in an HTML page which is known to be a blog post, yes?
Very often, a specific blog will have some features which make it easy to work out. If you look at mine over at http://kstruct.com/ you'll see that all the pages with comments say 'X Responses', so if you were able to do some work on a per blog basis, it's probably not really difficult.
If you needed something generic, I guess there are a few common features that comments have that you might be able to detect. For one, any links in them are quite likely to have rel="nofollow" attributes, so seeing that within a block might imply that it's a comment.
The main interesting thing to look for would be changes in the structure of posts from the same site. For example, there's also a very good chance that each comment will have its own anchor so people can link directly to it, so you could look at the differing numbers of <a name="XXX"> tags in a given page on the same site to get an idea of the relative numbers of comments.
As Michael Stum pointed out, if the pages have a Comment-RSS feed, your life is made a lot easier because you can get the comment data in a structured format.
All in all, though, I think it's going to be quite a challenging problem to solve in general.
A:
Blogs almost always have an RSS feed for comments. If you have that, then you can determine the exact number of comments, since the feeds 99% of the time follow a standard. Even if the blog is your own, if you are already generating an RSS feed, then don't bother making a call to your DB. You already did that to generate the feed, so it makes sense that you would just traverse the XML nodes. That way you don't have additional overhead (depending on how often you want to get this information).
|
Programmatically determine how many comments a blog post has
|
What is the most efficient way to determine how many comments a particular blog post has? We want to store the data for a new web app. We have a list of permalink URLs as well as the RSS feeds.
|
[
"If the blog is controlled by you, a \"Select count(commentid) FROM comments WHERE postID = 2\" will possibly the best thing. If you only have the URL but still it's your blog/db, you need to create a subquery \"WHERE postID = (SELECT whatever FROM posts WHERE permalink = url)\" or whatever your way to join the comments to the posts from a URL.\nIf it's a remote blog, you have the problem that each blog has different HTML. Essentially, you're going to need to build a parser that parses the HTML and looks for repeating elements like \"div class=comment\". But that will be mostly a manual labour for each different blogs.\nSome blogs may have better ways like a comment count somewhere in the HTML or some interface, but i'm not aware of any standardized way.\nEDIT: If you got a Comment-RSS feed, you may have luck using a mechanism that counts XML nodes, like XPath's Count.\n",
"If I understand correctly, you want a heuristic to estimate the number of comments in an HTML page which is known to be a blog post, yes?\nVery often, a specific blog will have some features which make it easy to work out. If you look at mine over at http://kstruct.com/ you'll see that all the pages with comments say 'X Responses', so if you were able to do some work on a per blog basis, it's probably not really difficult.\nIf you needed something generic, I guess there are a few common features that comments have that you might be able to detect. For one, any links in them are quite likely to have rel=\"nofollow\" attributes, so seeing that within a block might imply that it's a comment. \nThe main interesting thing to look for would be changes in the structure of posts for m the same site. For example, there's also a very good chance that each comment will have its own anchor so people can link directly to it, so you could look at the differing numbers of <a name=\"XXX\"> tags in a given page on the same site to get an idea of the relative numbers of comments.\nAs Michael Stum pointed out, if the pages have a Comment-RSS feed, your life is made a lot easier because you can get the comment data in a structured format.\nAll in all, though, I think it's going to be quite a challenging problem to solve in general.\n",
"Blogs almost always have an RSS feed for comments. If you have that, then you can determine the exact number of comments, since the feeds 99% of the time follow a standard. Even if the blog is your own, if you are already generating an RSS feed, then don't bother making a call to your DB. You already did that to generate the feed, so it makes sense that you would just traverse the XML nodes. That way you don't have additional overhead (depending on how often you want to get this information).\n"
] |
[
4,
2,
0
] |
[] |
[] |
[
"comments",
"rss"
] |
stackoverflow_0000047869_comments_rss.txt
|
Q:
Standards Document
I am writing a coding standards document for a team of about 15 developers with a project load of between 10 and 15 projects a year. Amongst other sections (which I may post here as I get to them) I am writing a section on code formatting. So to start with, I think it is wise that, for whatever reason, we establish some basic, consistent code formatting/naming standards.
I've looked at roughly 10 projects written over the last 3 years from this team and I'm, obviously, finding a pretty wide range of styles. Contractors come in and out and at times, and sometimes even double the team size.
I am looking for a few suggestions for code formatting and naming standards that have really paid off ... but that can also really be justified. I think consistency and shared-patterns go a long way to making the code more maintainable ... but, are there other things I ought to consider when defining said standards?
How do you line up parentheses? Do you follow the same parentheses guidelines when dealing with classes, methods, try catch blocks, switch statements, if else blocks, etc.?
Do you line up fields on a column? Do you notate/prefix private variables with an underscore? Do you follow any naming conventions to make it easier to find particulars in a file? How do you order the members of your class?
What about suggestions for namespaces, packaging or source code folder/organization standards? I tend to start with something like:
<com|org|...>.<company>.<app>.<layer>.<function>.ClassName
I'm curious to see if there are other, more accepted, practices than what I am accustomed to -- before I venture off dictating these standards. Links to standards already published online would be great too -- even though I've done a bit of that already.
A:
First, find an automated code formatter that works with your language. Reason: whatever the document says, people will inevitably break the rules, and it's much easier to run code through a formatter than to nit-pick in a code review.
If you're using a language with an existing standard (e.g. Java, C#), it's easiest to use it, or at least start with it as a first draft. Sun put a lot of thought into their formatting rules; you might as well take advantage of it.
In any case, remember that much research has shown that varying things like brace position and whitespace use has no measurable effect on productivity or understandability or prevalence of bugs. Just having any standard is the key.
A:
Coming from the automotive industry, here's a few style standards used for concrete reasons:
Always use braces in control structures, and place them on separate lines. This eliminates problems with people adding code and mistakenly including it, or failing to include it, inside a control structure.
if(...)
{
}
All switches/selects have a default case. The default case logs an error if it's not a valid path.
For the same reason as above, any if...elseif... control structures MUST end with a default else that also logs an error if it's not a valid path. A single if statement does not require this.
In the occasional case where a loop or control structure is intentionally empty, a semicolon is always placed within to indicate that this is intentional.
while(stillwaiting())
{
;
}
Naming standards have very different styles for typedefs, defined constants, module global variables, etc. Variable names include type. You can look at the name and have a good idea of what module it pertains to, its scope, and type. This makes it easy to detect errors related to types, etc.
There are others, but these are the top off my head.
-Adam
A:
I'm going to second Jason's suggestion.
I just completed a standards document for a team of 10-12 that work mostly in perl. The document says to use "perltidy-like indentation for complex data structures." We also provided everyone with example perltidy settings that would clean up their code to meet this standard. It was very clear and very much industry-standard for the language so we had great buyoff on it by the team.
When setting out to write this document, I asked around for some examples of great code in our repository and googled a bit to find standards documents written by smarter architects than I am, to use in constructing a template. It was tough being concise and pragmatic without crossing into micro-manager territory, but very much worth it; having any standard is indeed key.
Hope it works out!
A:
It obviously varies depending on languages and technologies. By the look of your example namespace I am going to guess Java, in which case http://java.sun.com/docs/codeconv/ is a really good place to start. You might also want to look at something like Maven's standard directory structure, which will make all your projects look similar.
|
Standards Document
|
I am writing a coding standards document for a team of about 15 developers with a project load of between 10 and 15 projects a year. Amongst other sections (which I may post here as I get to them) I am writing a section on code formatting. So to start with, I think it is wise that, for whatever reason, we establish some basic, consistent code formatting/naming standards.
I've looked at roughly 10 projects written over the last 3 years from this team and I'm, obviously, finding a pretty wide range of styles. Contractors come in and out and at times, and sometimes even double the team size.
I am looking for a few suggestions for code formatting and naming standards that have really paid off ... but that can also really be justified. I think consistency and shared-patterns go a long way to making the code more maintainable ... but, are there other things I ought to consider when defining said standards?
How do you line up parentheses? Do you follow the same parentheses guidelines when dealing with classes, methods, try catch blocks, switch statements, if else blocks, etc.?
Do you line up fields on a column? Do you notate/prefix private variables with an underscore? Do you follow any naming conventions to make it easier to find particulars in a file? How do you order the members of your class?
What about suggestions for namespaces, packaging or source code folder/organization standards? I tend to start with something like:
<com|org|...>.<company>.<app>.<layer>.<function>.ClassName
I'm curious to see if there are other, more accepted, practices than what I am accustomed to -- before I venture off dictating these standards. Links to standards already published online would be great too -- even though I've done a bit of that already.
|
[
"First find a automated code-formatter that works with your language. Reason: Whatever the document says, people will inevitably break the rules. It's much easier to run code through a formatter than to nit-pick in a code review.\nIf you're using a language with an existing standard (e.g. Java, C#), it's easiest to use it, or at least start with it as a first draft. Sun put a lot of thought into their formatting rules; you might as well take advantage of it.\nIn any case, remember that much research has shown that varying things like brace position and whitespace use has no measurable effect on productivity or understandability or prevalence of bugs. Just having any standard is the key.\n",
"Coming from the automotive industry, here's a few style standards used for concrete reasons:\nAlways used braces in control structures, and place them on separate lines. This eliminates problems with people adding code and including it or not including it mistakenly inside a control structure.\nif(...)\n{\n\n}\n\nAll switches/selects have a default case. The default case logs an error if it's not a valid path.\nFor the same reason as above, any if...elseif... control structures MUST end with a default else that also logs an error if it's not a valid path. A single if statement does not require this.\nIn the occasional case where a loop or control structure is intentionally empty, a semicolon is always placed within to indicate that this is intentional.\nwhile(stillwaiting())\n{\n ;\n}\n\nNaming standards have very different styles for typedefs, defined constants, module global variables, etc. Variable names include type. You can look at the name and have a good idea of what module it pertains to, its scope, and type. This makes it easy to detect errors related to types, etc.\nThere are others, but these are the top off my head.\n-Adam\n",
"I'm going to second Jason's suggestion.\nI just completed a standards document for a team of 10-12 that work mostly in perl. The document says to use \"perltidy-like indentation for complex data structures.\" We also provided everyone with example perltidy settings that would clean up their code to meet this standard. It was very clear and very much industry-standard for the language so we had great buyoff on it by the team.\nWhen setting out to write this document, I asked around for some examples of great code in our repository and googled a bit to find other standards documents that smarter architects than I to construct a template. It was tough being concise and pragmatic without crossing into micro-manager territory but very much worth it; having any standard is indeed key.\nHope it works out!\n",
"It obviously varies depending on languages and technologies. By the look of your example name space I am going to guess java, in which case http://java.sun.com/docs/codeconv/ is a really good place to start. You might also want to look at something like maven's standard directory structure which will make all your projects look similar. \n"
] |
[
3,
3,
2,
1
] |
[] |
[] |
[
"coding_style"
] |
stackoverflow_0000047658_coding_style.txt
|
Q:
Free text search integrated with code coverage
Is there any tool which will allow me to perform a free text search over a system's code, but only over the code which was actually executed during a particular invocation?
To give a bit of background, when learning my way around a new system, I frequently find myself wanting to discover where some particular value came from, but searching the entire code base turns up far more matches than I can reasonably assess individually.
For what it's worth, I've wanted this in Perl and Java at one time or another, but I'd love to know if any languages have a system supporting this feature.
A:
You can generally twist a code coverage tool's arm and get a report that shows the paths that have been executed during a given run. This report should show the code itself, with the first few columns marked up according to the coverage tool's particular notation on whether a given path was executed.
You might be able to use this straight up, or you might have to preprocess it and either remove the code that was not executed, or add a new notation on each line that tells whether it was executed (most tools will only show path information at control points):
So from a coverage tool you might get a report like this:
T- if(sometest)
{
x somecode;
}
else
{
- someother_code;
}
The notation T- indicates that the if statement only ever evaluated to true, and so only the first part of the code executed. The later notation 'x' indicates that this line was executed.
You should be able to form a regex that matches only when the first column contains a T, F, or x so you can capture all the control statements executed and lines executed.
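A minimal sketch, assuming the marker is the first character of each report line:

grep -nE '^[TFx]' coverage_report.txt

The -n flag keeps the line numbers, so the matches can be fed straight into a text search or editor.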
Sometimes you'll only get coverage information at each control point, which then requires you to parse the C file and mark the executed lines yourself. Not as easy, but not impossible either.
Still, this sounds like an interesting question where the solution is probably more work than it's worth...
-Adam
|
Free text search integrated with code coverage
|
Is there any tool which will allow me to perform a free text search over a system's code, but only over the code which was actually executed during a particular invocation?
To give a bit of background, when learning my way around a new system, I frequently find myself wanting to discover where some particular value came from, but searching the entire code base turns up far more matches than I can reasonably assess individually.
For what it's worth, I've wanted this in Perl and Java at one time or another, but I'd love to know if any languages have a system supporting this feature.
|
[
"You can generally twist a code coverage tool's arm and get a report that shows the paths that have been executed during a given run. This report should show the code itself, with the first few columns marked up according to the coverage tool's particular notation on whether a given path was executed.\nYou might be able to use this straight up, or you might have to preprocess it and either remove the code that was not executed, or add a new notation on each line that tells whether it was executed (most tools will only show path information at control points):\nSo from a coverage tool you might get a report like this:\nT- if(sometest)\n {\nx somecode;\n }\n else\n {\n- someother_code;\n }\n\nThe notation T- indicates that the if statement only ever evaluated to true, and so only the first part of the code executed. The later notation 'x' indicates that this line was executed.\nYou should be able to form a regex that matches only when the first column contains a T, F, or x so you can capture all the control statements executed and lines executed.\nSometimes you'll only get coverage information at each control point, which then requires you to parse the C file and mark the execute lines yourself. Not as easy, but not impossible either.\nStill, this sounds like an interesting question where the solution is probably more work than it's worth...\n-Adam\n"
] |
[
1
] |
[] |
[] |
[
"code_coverage",
"debugging",
"language_agnostic",
"search"
] |
stackoverflow_0000048110_code_coverage_debugging_language_agnostic_search.txt
|
Q:
How can I allow incoming connections to a server inside of VirtualBox?
I have a NAT configured to run when loading up my favorite Linux distribution in VirtualBox. This allows outgoing connections to work successfully.
How do I allow incoming connections to this box, like, say, Web traffic? The IP address is 10.0.2.15. A ping request from my main box results in a Timeout.
A:
VirtualBox (after version 1.3.8, anyway) will let you map incoming connections in the NAT configuration. There's an excellent tutorial on Aviran's Place that describes the steps to configure port mapping.
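On newer VirtualBox releases the same mapping can also be scripted; a rough sketch, assuming a VM named "MyLinuxVM" and forwarding host port 8080 to the guest web server on port 80:

VBoxManage modifyvm "MyLinuxVM" --natpf1 "guestweb,tcp,,8080,,80"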
|
How can I allow incoming connections to a server inside of VirtualBox?
|
I have a NAT configured to run when loading up my favorite Linux distribution in VirtualBox. This allows outgoing connections to work successfully.
How do I allow incoming connections to this box, like, say, Web traffic? The IP address is 10.0.2.15. A ping request from my main box results in a Timeout.
|
[
"VirtualBox (after version 1.3.8, anyway) will let you map incoming connections in the NAT configuration. There's an excellent tutorial on Aviran's Place that describes the steps to configure port mapping.\n"
] |
[
5
] |
[] |
[] |
[
"virtualbox"
] |
stackoverflow_0000048135_virtualbox.txt
|
Q:
Configure static routes on Windows
There is a netsh and a route command on Windows. From their help text it looks like both can be used to configure static routes. When should you use one and not the other? Is IPv6 a distinguishing factor here?
A:
route is a very old and basic tool for displaying and modifying the entries in the local IP routing table while netsh is the newer, more robust command-line scripting utility that allows you to, either locally or remotely, manipulate the network configuration.
netsh has a zillion more features than route; it can even save your current settings as a script that another instance of netsh can parse. Check out Using netsh to see the giant feature set and compare it to how very basic and simple route is.
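For a concrete comparison, both of the following add a persistent static route to 10.0.0.0/8 via a 192.168.1.1 gateway (the interface name is a placeholder, and the netsh ipv4 context requires Vista or later; IPv6 routes go through the matching ipv6 context):

route -p add 10.0.0.0 mask 255.0.0.0 192.168.1.1
netsh interface ipv4 add route 10.0.0.0/8 "Local Area Connection" 192.168.1.1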
|
Configure static routes on Windows
|
There is a netsh and a route command on Windows. From their help text it looks like both can be used to configure static routes. When should you use one and not the other? Is IPv6 a distinguishing factor here?
|
[
"route is a very old and basic tool for displaying and modifying the entries in the local IP routing table while netsh is the newer, more robust command-line scripting utility that allows you to, either locally or remotely, manipulate the network configuration.\nnetsh has a zillion more features than route; it can even save your current settings as a script that another instance of netsh can parse. Check out Using netsh to see the giant feature set and compare it to how very basic and simple routes is.\n"
] |
[
4
] |
[] |
[] |
[
"networking",
"windows"
] |
stackoverflow_0000048157_networking_windows.txt
|
Q:
Where's the Win32 resource for the mouse cursor for dragging splitters?
I am building a custom win32 control/widget and would like to change the cursor to a horizontal "splitter" symbol when hovering over a particular vertical line in the control. IE: I want to drag this vertical line (splitter bar) left and right (WEST and EAST).
Of the system cursors (OCR_*), the only cursor that makes sense is the OCR_SIZEWE. Unfortunately, that is the big, awkward cursor the system uses when resizing a window. Instead, I am looking for the cursor that is about 20 pixels tall and around 3 or 4 pixels wide, with two small arrows pointing left and right.
I can easily draw this and include it as a resource in my application but the cursor itself is so prevalent that I wanted to be sure it wasn't missing something.
For example: when you use the COM drag and drop mechanism (CLSID_DragDropHelper, IDropTarget, etc) you implicitly have access to the "drag" icon (little box under the pointer). I didn't see an explicit OCR_* constant for this guy ... so likewise, if I can't find this splitter cursor outright, I am wondering if it is part of a COM object or something else in the win32 lib.
A:
There are all sorts of icons, cursors, and images in use throughout the Windows UI which are not publicly available to 3rd-party software. Of course, you could still load up the module in which they reside and use them, but there's really no guarantee your program will keep working after a system update / upgrade.
Include your own. The last thing you want is adding an extra dependency over a tiny little cursor.
A:
I had this exact problem. When I looked back over some old code for a vertical splitter, thinking I had an easy answer, it turned out that I had built and loaded my own resource:
SetCursor( LoadCursor( ghInstance, "IDC_SPLITVERT" ));
I vaguely remember investing some considerable time and effort into finding the system way of doing it, so my guess is that there is not a system ICON readily available to do the job, and you are better off rolling your own.
This is one of those times when I would like to be wrong, as I would have liked there to be a system icon for this job.
|
Where's the Win32 resource for the mouse cursor for dragging splitters?
|
I am building a custom win32 control/widget and would like to change the cursor to a horizontal "splitter" symbol when hovering over a particular vertical line in the control. IE: I want to drag this vertical line (splitter bar) left and right (WEST and EAST).
Of the system cursors (OCR_*), the only cursor that makes sense is the OCR_SIZEWE. Unfortunately, that is the big, awkward cursor the system uses when resizing a window. Instead, I am looking for the cursor that is about 20 pixels tall and around 3 or 4 pixels wide, with two small arrows pointing left and right.
I can easily draw this and include it as a resource in my application but the cursor itself is so prevalent that I wanted to be sure it wasn't missing something.
For example: when you use the COM drag and drop mechanism (CLSID_DragDropHelper, IDropTarget, etc) you implicitly have access to the "drag" icon (little box under the pointer). I didn't see an explicit OCR_* constant for this guy ... so likewise, if I can't find this splitter cursor outright, I am wondering if it is part of a COM object or something else in the win32 lib.
|
[
"There are all sorts of icons, cursors, and images in use throughout the Windows UI which are not publicly available to 3rd-party software. Of course, you could still load up the module in which they reside and use them, but there's really no guarantee your program will keep working after a system update / upgrade. \nInclude your own. The last thing you want is adding an extra dependency over a tiny little cursor.\n",
"I had this exact problem. When I looked back over some old code for a vertical splitter thinking I had an easy answer, it turned out that I had build and loaded my own resource:\n\nSetCursor( LoadCursor( ghInstance, \"IDC_SPLITVERT\" ));\n\nI vaguely remember investing some considerable time and effort into find the system way of doing it, so (my guess) is that there is not a system ICON readily available to do the job, so you are better off rolling your own.\nThis is one of those times when I would like to be wrong, as I would have liked there to be a system icon for this job.\n"
] |
[
5,
0
] |
[] |
[] |
[
"c++",
"native",
"winapi"
] |
stackoverflow_0000047538_c++_native_winapi.txt
|
Q:
How to use webclient in a secure site?
I need to automate a process involving a website that is using a login form. I need to capture some data in the pages following the login page.
I know how to screen-scrape normal pages, but not those behind a secure site.
Can this be done with the .NET WebClient class?
How would I automatically login?
How would I keep logged in for the other pages?
A:
One way would be through automating a browser -- you mentioned WebClient, so I'm guessing you might be referring to WebClient in .NET. Two main points:
There's nothing special about https as far as WebClient is concerned - it just works.
Cookies are typically used to carry authentication -- you'll need to capture and replay them.
Here are the steps I'd follow:
1. GET the login form; capture the cookie in the response.
2. Using XPath and HtmlAgilityPack, find the "input type=hidden" field names and values.
3. POST to the login form's action with user name, password, and hidden field values in the request body. Include the cookie in the request headers. Again, capture the cookie in the response.
4. GET the pages you want, again with the cookie in the request headers.
On step 2, I mention a somewhat complicated method for automating the login. Usually, you can post with username and password directly to the known login form action without getting the initial form or relaying the hidden fields. Some sites have form validation (different from field validation) on their forms which makes this method not work. HtmlAgilityPack is a .NET library that allows you to turn ill-formed HTML into an XmlDocument so you can XPath over it. Quite useful. Finally, you may run into a situation where the form relies on client script to alter the form values before submitting. You may need to simulate this behavior. Using a tool to view the HTTP traffic for this type of work is extremely helpful - I recommend ieHttpHeaders, Fiddler, or FireBug (Net tab).
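As a minimal C# sketch of steps 3 and 4 (the URLs and field names are hypothetical; note that WebClient has no cookie support of its own, so HttpWebRequest with a shared CookieContainer is used instead):

using System;
using System.IO;
using System.Net;
using System.Text;

class LoginScrape
{
    static void Main()
    {
        var cookies = new CookieContainer();

        // POST the credentials; the container captures the auth cookie
        var login = (HttpWebRequest)WebRequest.Create("https://example.com/login");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;
        byte[] body = Encoding.ASCII.GetBytes("Username=me&Password=secret");
        using (Stream s = login.GetRequestStream())
            s.Write(body, 0, body.Length);
        login.GetResponse().Close();

        // GET a protected page, replaying the captured cookies
        var page = (HttpWebRequest)WebRequest.Create("https://example.com/data");
        page.CookieContainer = cookies;
        using (var reader = new StreamReader(page.GetResponse().GetResponseStream()))
            Console.WriteLine(reader.ReadToEnd());
    }
}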
A:
You can easily simulate user input. You can submit a form on the web page from your program by sending a POST/GET request to the website.
Typical login form looks like:
<form name="loginForm" method="post" Action="target_page.html">
<input type="Text" name="Username">
<input type="Password" name="Password">
</form>
You can send a POST request to the website providing values for the Username & Password fields. What happens after you send your request largely depends on the website; usually you will be redirected to some page. Your authorization info will be stored in the session/cookie. So if your scraping client can maintain a web session and understands cookies, you will be able to access protected pages.
It's not clear from your question which language/framework you're going to use. For example, there is a framework for screen scraping (including login functionality) written in Perl - WWW::Mechanize.
Note that you can face some problems if the site you're trying to log in to uses JavaScript or some kind of CAPTCHA.
A:
Can you please clarify? Is the WebClient class you speak of the one in HTTPUnit/Java?
If so, your session should be saved automatically.
A:
It isn't clear from your question which WebClient class (or language) you are referring to.
If you have a Java runtime, you can use the Apache HttpClient class; here's an example I wrote using Groovy that accesses the delicious API over SSL:
def client = new HttpClient()
def credentials = new UsernamePasswordCredentials( "username", "password" )
def authScope = new AuthScope("api.del.icio.us", 443, AuthScope.ANY_REALM)
client.getState().setCredentials( authScope, credentials )
def url = "https://api.del.icio.us/v1/posts/get"
def method = new PostMethod( url )
method.addParameter( "tag", tag )
client.executeMethod( method )
|
How to use webclient in a secure site?
|
I need to automate a process involving a website that is using a login form. I need to capture some data in the pages following the login page.
I know how to screen-scrape normal pages, but not those behind a secure site.
Can this be done with the .NET WebClient class?
How would I automatically login?
How would I keep logged in for the other pages?
|
[
"One way would be through automating a browser -- you mentioned WebClient, so I'm guessing you might be referring to WebClient in .NET.Two main points:There's nothing special about https related to WebClient - it just worksCookies are typically used to carry authentication -- you'll need to capture and replay them\nHere's the steps I'd follow:GET the login form, capture the the cookie in the response.Using Xpath and HtmlAgilityPack, find the \"input type=hidden\" field names and values.POST to login form's action with user name, password, and hidden field values in the request body. Include the cookie in the request headers. Again, capture the cookie in the response.GET the pages you want, again, with the cookie in the request headers.\nOn step 2, I mention a somewhat complicated method for automating the login. Usually, you can post with username and password directly to the known login form action without getting the initial form or relaying the hidden fields. Some sites have form validation (different from field validation) on their forms which makes this method not work.HtmlAgilityPack is a .NET library that allows you to turn ill-formed html into an XmlDocument so you can XPath over it. Quite useful.Finally, you may run into a situation where the form relies on client script to alter the form values before submitting. You may need to simulate this behavior.Using a tool to view the http traffic for this type of work is extremely helpful - I recommend ieHttpHeaders, Fiddler, or FireBug (net tab).\n",
"You can easily simulate user input. You can submit form on the web page from you program by sending post\\get request to a website.\nTypical login form looks like:\n<form name=\"loginForm\" method=\"post\" Action=\"target_page.html\">\n <input type=\"Text\" name=\"Username\">\n <input type=\"Password\" name=\"Password\">\n</form>\n\nYou can send a post request to the website providing values for Username & Password fields. What happens after you send your request is largely depends on a website, usually you will be redirected to some page. You authorization info will be stored in the sessions\\cookie. So if you scrape client can maintain web session\\understands cookies you will be able to access protected pages.\nIt's not clear from your question what language\\framework you're going to use. For example there is a framework for screen scraping (including login functionality) written in perl - WWW::Mechanize \nNote, that you can face some problems if site you're trying to login to uses java scripts or some kind of CAPTCHA.\n",
"Can you please clarify? Is the WebClient class you speak of the one in HTTPUnit/Java?\nIf so, your session should be saved automatically.\n",
"It isn't clear from your question which WebClient class (or language) you are referring to. \nIf have a Java Runtime you can use the Apache HttpClient class; here's an example I wrote using Groovy that accesses the delicious API over SSL:\n def client = new HttpClient()\n\n def credentials = new UsernamePasswordCredentials( \"username\", \"password\" )\n def authScope = new AuthScope(\"api.del.icio.us\", 443, AuthScope.ANY_REALM)\n client.getState().setCredentials( authScope, credentials )\n\n def url = \"https://api.del.icio.us/v1/posts/get\"\n\n def method = new PostMethod( url )\n method.addParameter( \"tag\", tag )\n client.executeMethod( method )\n\n"
] |
[
9,
1,
0,
0
] |
[] |
[] |
[
".net",
"screen_scraping"
] |
stackoverflow_0000048224_.net_screen_scraping.txt
|
Q:
Deleting messages from Exchange IMAP mailbox on iPhone
I have a secondary Exchange mailbox configured on my iPhone using IMAP. This all appears to work fine except when a message is deleted on the phone, it still shows normally in Outlook. It does not seem to matter what I set the "remove deleted messages" setting to on the phone.
I understand this is due to a combination of the phone not expunging the deleted messages and Exchange showing deleted but not expunged messages in Outlook.
I'm looking for an automated solution to this that does not have a large delay between deleting the message on the phone and it disappearing in Outlook. The message should also show in the Deleted Items when deleted from the phone.
I've thought about creating a background process which connects to the mailbox via IMAP and sits in IDLE mode until there's a deleted message in the folder. It will then expunge the folder and return to IDLE mode. This wouldn't work with more than one folder (without multiple instances) but it would probably do the job.
Any recommendations on an easily scriptable tool or library that supports IMAP IDLE?
A:
I can wholeheartedly recommend writing such a process as a simple Perl client using the Mail::IMAPClient module.
#!/usr/bin/perl -w
use strict;
use Mail::IMAPClient;
# server, user and password could be taken from the command line:
my ($host, $id, $pass) = @ARGV;
# new() connects and logs in when Server/User/Password are supplied:
my $imap = Mail::IMAPClient->new(
Server => $host,
User => $id,
Password=> $pass,
) or die "Cannot connect to $host as $id: $@";
# select the folder, then expunge the messages flagged \Deleted:
$imap->select("INBOX")
or die "Cannot select INBOX: $@";
$imap->expunge();
This can then be run from the crontab or some other scheduler.
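For example, a crontab entry like the following (the script path is hypothetical) would expunge every five minutes:

*/5 * * * * /usr/local/bin/expunge_mailbox.pl mail.example.com user password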
|
Deleting messages from Exchange IMAP mailbox on iPhone
|
I have a secondary Exchange mailbox configured on my iPhone using IMAP. This all appears to work fine except when a message is deleted on the phone, it still shows normally in Outlook. It does not seem to matter what I set the "remove deleted messages" setting to on the phone.
I understand this is due to a combination of the phone not expunging the deleted messages and Exchange showing deleted but not expunged messages in Outlook.
I'm looking for an automated solution to this that does not have a large delay between deleting the message on the phone and it disappearing in Outlook. The message should also show in the Deleted Items when deleted from the phone.
I've thought about creating a background process which connects to the mailbox via IMAP and sits in IDLE mode until there's a deleted message in the folder. It will then expunge the folder and return to IDLE mode. This wouldn't work with more than one folder (without multiple instances) but it would probably do the job.
Any recommendations on an easily scriptable tool or library that supports IMAP IDLE?
|
[
"I can wholeheartedly recommend writing such a process with a simple Perl client using the Mail::MAPClient module.\n#!/usr/bin/perl -w\nuse strict;\nuse Mail::IMAPClient;\n\n# returns an unconnected Mail::IMAPClient object:\nmy $imap = Mail::IMAPClient->new( \n Server => $host,\n User => $id,\n Password=> $pass,\n) or die \"Cannot connect to $host as $id: $@\";\n$imap->expunge();\n\nThis can then be run from the crontab or some other scheduler.\n"
] |
[
2
] |
[] |
[] |
[
"exchange_server",
"imap",
"ios"
] |
stackoverflow_0000036019_exchange_server_imap_ios.txt
|
Q:
jQuery & Objects, trying to make a lightweight widget
Trying to make a generic select "control" that I can dynamically add elements to, but I am having trouble getting functions to work right.
This is what I started with.
$select = $("<select></select>");
$select.addOption = function(value,text){
$(this).append($("<option></option>").val(value).text(text));
};
This worked fine alone but anytime $select is .clone(true)'ed the addOption() function is lost.
This is my object approach but still the function does not work.
function $selectX() {
return $("<select></select>");
}
$selectX.prototype.addOption() = function(value,text){
$(this).append($("<option></option>").val(value).text(text));
};
Hack solution is to add the function manually after creation:
$nameSelect= new $selectX;
$nameSelect.addOption = function(value,text){
$(this).append($("<option></option>").val(value).text(text));
};
Am I barking up the wrong tree?
A:
To add a new method to jQuery you need to use the jQuery.fn.methodName attribute, so in this case it will be:
jQuery.fn.addOption = function (value, text) {
jQuery(this).append(jQuery('<option></option>').val(value).text(text));
};
But keep in mind that this addOption will be accessible from the result of any $() call.
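As a rough sketch of how the plugin method would then be used (the values here are illustrative, not from the original post):
var $select = $("<select></select>");
$select.addOption("1", "one");
// the method survives .clone(true) because it lives on jQuery.fn
// rather than on the individual wrapped object
var $copy = $select.clone(true);
$copy.addOption("2", "two");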
|
jQuery & Objects, trying to make a lightweight widget
|
Trying to make a generic select "control" that I can dynamically add elements to, but I am having trouble getting functions to work right.
This is what I started with.
$select = $("<select></select>");
$select.addOption = function(value,text){
$(this).append($("<option></option>").val(value).text(text));
};
This worked fine alone but anytime $select is .clone(true)'ed the addOption() function is lost.
This is my object approach but still the function does not work.
function $selectX() {
return $("<select></select>");
}
$selectX.prototype.addOption() = function(value,text){
$(this).append($("<option></option>").val(value).text(text));
};
Hack solution is to add the function manually after creation:
$nameSelect= new $selectX;
$nameSelect.addOption = function(value,text){
$(this).append($("<option></option>").val(value).text(text));
};
Am I barking up the wrong tree?
|
[
"To add new method to jQuery You need to use jQuery.fn.methodName attribute, so in this case it will be:\njQuery.fn.addOption = function (value, text) {\n jQuery(this).append(jQuery('<option></option>').val(value).text(text));\n};\n\nBut keep in mind that this addOption will be accessible from result of any $() call.\n"
] |
[
7
] |
[] |
[] |
[
"javascript",
"jquery"
] |
stackoverflow_0000048215_javascript_jquery.txt
|
Q:
What is the best calendar pop-up to populate a web form?
I want to be able to make an HTTP call updating some select boxes after a date is selected. I would like to be in control of updating the textbox so I know when there has been a "true" change (in the event the same date was selected). Ideally, I would call a function to pop-up the calendar and be able to evaluate the date before populating the text box...so I can do my validation before making a server call.
A:
JQuery's datepicker is an extremely flexible tool. With the ability to attach handlers prior to opening or after date selection, themes, range selection and a variety of other incredibly useful options, I've found that it meets all my needs.
The fact that I sit next to one of its maintainers here at work is also fairly useful...
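As an illustration of the "evaluate the date before populating the text box" requirement, here is a minimal sketch using the jQuery UI flavour of the datepicker (the field id is hypothetical):
$("#dateField").datepicker({
    onSelect: function (dateText) {
        // validate the selected date here, before updating the
        // textbox and making the server call
    }
});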
A:
I've been playing with the jquery datePicker script - you should be able to do everything you need to with this.
A:
YUI and ExtJs both have very nice looking and flexible calendars.
A:
If you ever end up considering a JavaScript library/toolkit, Dijit, a widget system which layers on top of Dojo, has a calendar (Dijit calendar test page). I found it relatively simple to implement.
//Disclaimer: I'm in the middle of a love-hate relationship w/ Dojo at the moment, as I am in the process of learning and using it better.
A:
I don't like the MS ASP.NET ajax, but their datepicker is superb. Otherwise, jQuery datepicker.
A:
Check out the ASP.NET AJAX Calendar Extender or Steve Orr's drop down Calendar control.
A:
I'm using the JSCalendar from Dynarch for a project I'm currently working on. It is LGPL licensed and really flexible (easy to customize to your needs). It has lots of features and looks good, too.
http://www.dynarch.com/projects/calendar/
|
What is the best calendar pop-up to populate a web form?
|
I want to be able to make an HTTP call updating some select boxes after a date is selected. I would like to be in control of updating the textbox so I know when there has been a "true" change (in the event the same date was selected). Ideally, I would call a function to pop-up the calendar and be able to evaluate the date before populating the text box...so I can do my validation before making a server call.
|
[
"JQuery's datepicker is an extremely flexible tool. With the ability to attach handlers prior to opening or after date selection, themes, range selection and a variety of other incredibly useful options, I've found that it meets all my needs.\nThe fact that I sit next to one of its maintainers here at work is also fairly useful... \n",
"I've been playing with the jquery datePicker script - you should be able to do everything you need to with this.\n",
"YUI and ExtJs both have very nice looking and flexible calendars.\n",
"If you ever end up considering a JavaScript library/toolkit, Dijit, a widget system which layers on top of Dojo, has a calendar (Dijit calendar test page). I found it relatively simple to implement. \n//Disclaimer: I'm in the middle of a love-hate relationship w/ Dojo at the moment, as I am in the process of learning and using it better. \n",
"I don't like the MS ASP.NET ajax, but their datepicker is superb. Otherwise, jQuery datepicker.\n",
"Check out the ASP.NET AJAX Calendar Extender or Steve Orr's drop down Calendar control.\n",
"I'm using the JSCalendar from Dynarch for a project I'm currently working on. It is LGPL licensed and really flexible (easy to customize to your needs). It has lots of features and looks good, too.\nhttp://www.dynarch.com/projects/calendar/\n"
] |
[
9,
3,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"calendar",
"javascript"
] |
stackoverflow_0000025367_calendar_javascript.txt
|
Q:
Unexpected behaviour of Process.MainWindowHandle
I've been trying to understand Process.MainWindowHandle.
According to MSDN; "The main window is the window that is created when the process is started. After initialization, other windows may be opened, including the Modal and TopLevel windows, but the first window associated with the process remains the main window." (Emphasis added)
But while debugging I noticed that MainWindowHandle seemed to change value... which I wasn't expecting, especially after consulting the documentation above.
To confirm the behaviour I created a standalone WinForms app with a timer to check the MainWindowHandle of the "DEVENV" (Visual Studio) process every 100ms.
Here's the interesting part of this test app...
IntPtr oldHWnd = IntPtr.Zero;
void GetMainwindowHandle()
{
Process[] processes = Process.GetProcessesByName("DEVENV");
if (processes.Length!=1)
return;
IntPtr newHWnd = processes[0].MainWindowHandle;
if (newHWnd != oldHWnd)
{
oldHWnd = newHWnd;
textBox1.AppendText(processes[0].MainWindowHandle.ToString("X")+"\r\n");
}
}
private void timer1Tick(object sender, EventArgs e)
{
GetMainwindowHandle();
}
You can see the value of MainWindowHandle changing when you (for example) click on a drop-down menu inside VS.
Perhaps I've misunderstood the documentation.
Can anyone shed light?
A:
@edg,
I guess it's an error in MSDN. You can clearly see in Reflector that the "Main window" check in .NET looks like:
private bool IsMainWindow(IntPtr handle)
{
return (!(NativeMethods.GetWindow(new HandleRef(this, handle), 4) != IntPtr.Zero)
&& NativeMethods.IsWindowVisible(new HandleRef(this, handle)));
}
When .NET code enumerates windows, it's pretty obvious that the first visible window (i.e. top-level window) will match this criterion.
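For reference, the magic 4 in that call is GW_OWNER; a rough plain-P/Invoke equivalent of the check (my sketch, not the actual framework source) would be:
using System.Runtime.InteropServices;

[DllImport("user32.dll")]
static extern IntPtr GetWindow(IntPtr hWnd, uint uCmd);

[DllImport("user32.dll")]
static extern bool IsWindowVisible(IntPtr hWnd);

const uint GW_OWNER = 4;

static bool LooksLikeMainWindow(IntPtr handle)
{
    // "main" here just means: visible and not owned by another window
    return GetWindow(handle, GW_OWNER) == IntPtr.Zero && IsWindowVisible(handle);
}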
A:
Actually, Process.MainWindowHandle is a handle to the top-most window; it's not really the "Main Window Handle".
|
Unexpected behaviour of Process.MainWindowHandle
|
I've been trying to understand Process.MainWindowHandle.
According to MSDN; "The main window is the window that is created when the process is started. After initialization, other windows may be opened, including the Modal and TopLevel windows, but the first window associated with the process remains the main window." (Emphasis added)
But while debugging I noticed that MainWindowHandle seemed to change value... which I wasn't expecting, especially after consulting the documentation above.
To confirm the behaviour I created a standalone WinForms app with a timer to check the MainWindowHandle of the "DEVENV" (Visual Studio) process every 100ms.
Here's the interesting part of this test app...
IntPtr oldHWnd = IntPtr.Zero;
void GetMainwindowHandle()
{
Process[] processes = Process.GetProcessesByName("DEVENV");
if (processes.Length!=1)
return;
IntPtr newHWnd = processes[0].MainWindowHandle;
if (newHWnd != oldHWnd)
{
oldHWnd = newHWnd;
textBox1.AppendText(processes[0].MainWindowHandle.ToString("X")+"\r\n");
}
}
private void timer1Tick(object sender, EventArgs e)
{
GetMainwindowHandle();
}
You can see the value of MainWindowHandle changing when you (for example) click on a drop-down menu inside VS.
Perhaps I've misunderstood the documentation.
Can anyone shed light?
|
[
"@edg,\nI guess it's an error in MSDN. You can clearly see in Relfector, that \"Main window\" check in .NET looks like:\nprivate bool IsMainWindow(IntPtr handle)\n{\n return (!(NativeMethods.GetWindow(new HandleRef(this, handle), 4) != IntPtr.Zero) \n && NativeMethods.IsWindowVisible(new HandleRef(this, handle)));\n}\n\nWhen .NET code enumerates windows, it's pretty obvious that first visible window (i.e. top level window) will match this criteria.\n",
"Actually Process.MainWindowHandle is a handle of top-most window, it's not really the \"Main Window Handle\"\n"
] |
[
11,
5
] |
[] |
[] |
[
".net",
"c#",
"msdn"
] |
stackoverflow_0000048288_.net_c#_msdn.txt
|
Q:
Does Visual Studio Server Explorer support custom database providers?
I had used Server Explorer and related tools for graphical database development with Microsoft SQL Server in some of my learning projects - and it was a great experience. However, in my work I deal with Oracle DB and SQLite and my hobby projects use MySQL (because they are hosted on Linux).
Is there a way to leverage the database-related tools in Visual Studio with other database providers?
A:
Here are instructions on how to connect to your MySQL database from Visual Studio:
To make the connection in server
explorer you need to do the following:
first of all you need to install the MyODBC connector 3.51 (or latest) on
the development machine (NB. you can
find this at
http://www.mysql.com/products/connector/odbc/
)
Create a datasource in Control Panel/Administrative Tools with a
connection to your database. This data
source is going to be used purely for
Server Manager and you don't need to
worry about creating the same data
source on your clients PC when you
have made your VS.NET application
(Unless you want to) - I don't want to
cover this in this answer, too long.
For the purpose of this explanation I
will pretend that you created a MyODBC
data source called 'AADSN' to database
'noddy' on mysqlserver 'SERVER01' and
have a root password of 'fred'. The
server can be either the Computer Name
(found in Control
Panel/System/Computer Name), or
alternatively it can be the IP
Address. NB. Make sure that you test
this connection before continuing with
this explanation.
open your VS.NET project
go to server explorer
right-click on 'Data Connections'
select 'Add Connection'
In DataLink Properties, go to the provider tab and select "Microsoft OLE
DB Provider For ODBC drivers"
Click Next
If you previously created an ODBC data source then you could just select
that. The disadvantage of this is that
when you install your project
application on the client machine, the
same data source needs to be there. I
prefer to use a connection string.
This should look something like:
DSN=AADSN;DESC=MySQL ODBC 3.51 Driver
DSN;DATABASE=noddy;SERVER=SERVER01;UID=root;PASSWORD=fred;PORT=3306;SOCKET=;OPTION=11;STMT=;
If you omit the password from the
connection string then you must make
sure that the datasource you created
(AADSN) contains a password. I am not
going to describe what these mean, you
can look in the documentation for
myodbc for that, just ensure that you
get a "Connection Succeeded" message
when you test the datasource.
A:
I found this during my research on Sqlite. I haven't had the chance to use it though. Let us know if this works for you.
http://sqlite.phxsoftware.com/
System.Data.SQLite System.Data.SQLite is the original
SQLite database engine and a complete
ADO.NET 2.0 provider all rolled into a
single mixed mode assembly.
...
Visual Studio 2005/2008 Design-Time
Support
You can add a SQLite connection to the
Server Explorer, create queries with
the query designer, drag-and-drop
tables onto a Typed DataSet and more!
SQLite's designer works on full
editions of Visual Studio 2005/2008,
including VS2005 Express Editions.
NEW You can create/edit views, tables, indexes, foreign keys,
constraints and triggers interactively
within the Visual Studio Server
Explorer!
A:
The Server Explorer should support any database system that provides an ODBC driver. In the case of Oracle there is a built-in driver with Visual Studio.
In the Add Connection dialog, click the Change button on the data source; you should then get a list of the providers you have drivers for.
A:
Oracle has a set of tools that integrates with Visual Studio. It's packaged with their data access libraries.
http://www.oracle.com/technology/software/tech/windows/odpnet/index.html
|
Does Visual Studio Server Explorer support custom database providers?
|
I had used Server Explorer and related tools for graphical database development with Microsoft SQL Server in some of my learning projects - and it was a great experience. However, in my work I deal with Oracle DB and SQLite and my hobby projects use MySQL (because they are hosted on Linux).
Is there a way to leverage the database-related tools in Visual Studio with other database providers?
|
[
"Here is instructions on how to connect to your MySQL database from Visual Studio:\n\nTo make the connection in server\n explorer you need to do the following:\n\nfirst of all you need to install the MyODBC connector 3.51 (or latest) on\n the development machine (NB. you can\n find this at\n http://www.mysql.com/products/connector/odbc/\n )\nCreate a datasource in Control Panel/Administrative Tools with a\n connection to your database. This data\n source is going to be used purely for\n Server Manager and you dont need to\n worry about creating the same data\n source on your clients PC when you\n have made your VS.NET application\n (Unless you want to) - I dont want to\n cover this in this answer, too long.\n For the purpose of this explanation I\n will pretend that you created a MyODBC\n data source called 'AADSN' to database\n 'noddy' on mysqlserver 'SERVER01' and\n have a root password of 'fred'. The\n server can be either the Computer Name\n (found in Control\n Panel/System/Computer Name), or\n alternatively it can be the IP\n Address. NB. Make sure that you test\n this connection before continuing with\n this explanation.\nopen your VS.NET project\ngo to server explorer\nright-click on 'Data Connections'\nselect 'Add Connection'\nIn DataLink Properties, go to the provider tab and select \"Microsoft OLE\n DB Provider For ODBC drivers\"\nClick Next\nIf you previously created an ODBC data source then you could just select\n that. The disadvantage of this is that\n when you install your project\n application on the client machine, the\n same data source needs to be there. I\n prefer to use a connection string.\n This should look something like:\n\nDSN=AADSN;DESC=MySQL ODBC 3.51 Driver\n DSN;DATABASE=noddy;SERVER=SERVER01;UID=root;PASSWORD=fred;PORT=3306;SOCKET=;OPTION=11;STMT=;\nIf you omit the password from the\n connection string then you must make\n sure that the datasource you created\n (AADSN) contains a password. I am not\n going to describe what these mean, you\n can look in the documentation for\n myodbc for that, just ensure that you\n get a \"Connection Succeeded\" message\n when you test the datasource.\n\n",
"I found this during my research on Sqlite. I haven't had the chance to use it though. Let us know if this works for you.\nhttp://sqlite.phxsoftware.com/\n\nSystem.Data.SQLite System.Data.SQLite is the original\nSQLite database engine and a complete\nADO.NET 2.0 provider all rolled into a\nsingle mixed mode assembly.\n...\nVisual Studio 2005/2008 Design-Time\nSupport\nYou can add a SQLite connection to the\nServer Explorer, create queries with\nthe query designer, drag-and-drop\ntables onto a Typed DataSet and more!\nSQLite's designer works on full\neditions of Visual Studio 2005/2008,\nincluding VS2005 Express Editions.\nNEW You can create/edit views, tables, indexes, foreign keys,\nconstraints and triggers interactively\nwithin the Visual Studio Server\nExplorer!\n\n",
"The Server Explorer should support any database system that provides an ODBC driver. In the case of Oracle there is a built in driver with Visual Studio.\nIn the Add Connection Dialog click the change button on the data source you should then get a list of the providers you have drivers for.\n",
"Oracle has a set of tools that integrates with Visual Studio. It's packaged with their data access libraries. \nhttp://www.oracle.com/technology/software/tech/windows/odpnet/index.html\n"
] |
[
9,
4,
1,
1
] |
[] |
[] |
[
"c#",
"mysql",
"oracle",
"sqlite",
"visual_studio"
] |
stackoverflow_0000031885_c#_mysql_oracle_sqlite_visual_studio.txt
|
Q:
Useful browser plugins for openid authentication?
I've read https://stackoverflow.com/questions/41354/is-the-stackoverflow-login-situation-bearable and must agree to a certain point that openid (for me) makes it more difficult to log in. Not a show stopper, but I'm used to opening the front page of the site, there's a small login form, Firefox's password manager already filled in the correct values, submit, done. One click.
Here - and it's currently the only site with openid I use - the password/form manager doesn't even fill in my "login id". I often close all browser windows and all cookies are erased - and I would like to keep it this way.
Are there any firefox plugins you would recommend that make the login process easier? Maybe something that checks my status at myOpenId and performs the login if necessary.
Edit:
Unfortunately RichQ is right and I can't use Seatbelt. And Sxipper ...not quite what I had in mind ;) Anyway, both solutions would take away some of the "pain", so upvotes for both of you.
I've also tried the ssl certificate. But that only adds more steps. Hopefully I did something wrong and some of those steps can be eliminated:
Click "login" at stackoverflow
Click on the "select provider" Button.
Click on MyOpenId
Enter Username
Click "Login" (Sxipper could reduce the previous 4 steps to a single mouseclick)
MyOpenId login page is loaded
Click "Sign in with an SSL certificate"
Choose Certificate (grrr)
Click "Login" (GRRR)
Back to stackoverflow, finally.
What I really would like is:
Click "login" at stackoverflow
My (only) LoginId is filled in
Click "Login"
If necessary the certificate is chosen automagically, ssl login performed
Back to stackoverflow without any further user interaction.
That would be more or less what I'm used to - and I'm a creature of habit :)
A:
VeriSign (ick)'s SeatBelt plugin: https://pip.verisignlabs.com/seatbelt.do
Ideally, the plugin would allow a higher level of authentication. I know something like this was planned for the OLPC.
A:
You could try Sxipper. It provides intelligent automatic form-fill, including auto-login.
From the Sxipper FAQ:
How does Sxipper support OpenID?
Sxipper remembers your OpenIDs and presents an overlay. You choose the one you want to use and login with one click. Sxipper also helps protect you against phishing.
|
Useful browser plugins for openid authentication?
|
I've read https://stackoverflow.com/questions/41354/is-the-stackoverflow-login-situation-bearable and must agree to a certain point that openid (for me) makes it more difficult to log in. Not a show stopper, but I'm used to opening the front page of the site, there's a small login form, Firefox's password manager already filled in the correct values, submit, done. One click.
Here - and it's currently the only site with openid I use - the password/form manager doesn't even fill in my "login id". I often close all browser windows and all cookies are erased - and I would like to keep it this way.
Are there any firefox plugins you would recommend that make the login process easier? Maybe something that checks my status at myOpenId and performs the login if necessary.
Edit:
Unfortunately RichQ is right and I can't use Seatbelt. And Sxipper ...not quite what I had in mind ;) Anyway, both solutions would take away some of the "pain", so upvotes for both of you.
I've also tried the ssl certificate. But that only adds more steps. Hopefully I did something wrong and some of those steps can be eliminated:
Click "login" at stackoverflow
Click on the "select provider" Button.
Click on MyOpenId
Enter Username
Click "Login" (Sxipper could reduce the previous 4 steps to a single mouseclick)
MyOpenId login page is loaded
Click "Sign in with an SSL certificate"
Choose Certificate (grrr)
Click "Login" (GRRR)
Back to stackoverflow, finally.
What I really would like is:
Click "login" at stackoverflow
My (only) LoginId is filled in
Click "Login"
If necessary the certificate is chosen automagically, ssl login performed
Back to stackoverflow without any further user interaction.
That would be more or less what I'm used to - and I'm a creature of habit :)
|
[
"VeriSign (ick)'s SeatBelt plugin: https://pip.verisignlabs.com/seatbelt.do\nIdeally, the plugin would allow a higher-level of authentication. I know something like this was planned for the OLPC.\n",
"You could try Sxipper. It provides intelligent automatic form-fill, including auto-login. \nFrom the Sxipper FAQ:\n\nHow does Sxipper support OpenID?\n Sxipper remembers your OpenIDs and presents an overlay. You choose the one you want to use and login with one click. Sxipper also helps protect you against phishing.\n\n"
] |
[
8,
1
] |
[] |
[] |
[
"firefox",
"openid"
] |
stackoverflow_0000048344_firefox_openid.txt
|
Q:
Installing Team Foundation Server
What are the best practices in setting up a new instance of TFS 2008 Workgroup edition?
Specifically, the constraints are as follows:
Must install on an existing Windows Server 2008 64 bit
TFS application layer is 32 bit only
Should I install SQL Server 2008, Sharepoint and the app layer in a virtual instance of Windows Server 2008 or 2003 (I am already running Hyper-V), or split the layers with a database on the host OS and the app layer in a virtual machine?
Edit: Apparently, splitting the layers is not recommended
A:
This is my recipe for installing TFS 2008 SP1.
There is no domain controller in this scenario; we are only a couple of users. If I were to do it again, I would consider changing our environment to use an Active Directory domain.
Host Server running Windows Server 2008 with 8GB RAM and quad processor
Fresh install of Windows Server 2008 32bit in a VM under Hyper-V
Install Application Server role with IIS
Install SQL Server 2008 Standard edition
Use a user account for Reporting Services and Analysis Services
Create a slipstreamed image of TFS 2008 with SP1 and install TFS
Install VSTS 2008
Install Team System Explorer
Install VSTS 2008 SP1
Install TFS Web Access Power tool
After installing everything, reports were not generated. Found this forum post that helped resolve the problem.
Open http://localhost:8080/Warehouse/v1.0/warehousecontroller.asmx
Run the webservice (see above link for details), it will take a little while, the tfsWarehouse will be rebuilt
It is very important to do things in order, download the installation guide and follow it to the letter. I forgot to install the Team System Explorer until after installing SP1 and ventured into all sorts of problems. Installing SP1 once more fixed that.
A:
One critical thing you have to keep in mind about TFS is that it likes to have the machine all to itself. So if you have to create a separate instance on Hyper-V, do it using the proven Windows Server 2003 platform with SQL Server 2005.
I am sure Microsoft has done a great job getting it to work under Windows Server 2008 and SQL Server 2008, however you don't get any additional features with this newer install and it is currently unproven in the wild.
So my recommendation is to stick with what is known until the next release of TFS comes out.
Also splitting the layers is definitely not recommended, especially in the workgroup edition where you will only be allowed to have 5 licensed users. Those 5 users will never exceed the server's needs. Also my recommendation is to not update Sharepoint if you don't need to. In my environment, we don't really use Sharepoint all that much, so I left it alone. Sharepoint is usually, in my experience, where most of the problems come from with TFS.
A:
I just upgraded our team to TFS 2008, from TFS 2005. The hardest part was upgrading SharePoint 2.0 to 3.0, so I would make sure to do that first, if you have not already installed TFS 2008. We had a couple of other difficulties, but they were all either related to the SharePoint upgrade, or to the fact that we were using an aftermarket Policy package - Scrum for TeamSystem. We are on SQL Server 2005, so I cannot address SQL Server 2008. As for splitting the layers, we did not do this either, as we are running on Windows Server 2003 and everything ran under the host OS.
A:
Splitting the layers is only needed for more than 450 users.
I would also recommend having the Build Server on a completely separate machine. Building is very file system intensive. SQL Server performs best when it has complete control of a file system - so having build and TFS on the same machine may create performance issues while builds are executing.
Perhaps this can be alleviated with proper tuning and separate physical drives - but I'd think in the long run it would be a lot simpler to just either use some old hardware - or spin up a small virtual machine on a separate host for your builds.
|
Installing Team Foundation Server
|
What are the best practices in setting up a new instance of TFS 2008 Workgroup edition?
Specifically, the constraints are as follows:
Must install on an existing Windows Server 2008 64 bit
TFS application layer is 32 bit only
Should I install SQL Server 2008, Sharepoint and the app layer in a virtual instance of Windows Server 2008 or 2003 (I am already running Hyper-V), or split the layers with a database on the host OS and the app layer in a virtual machine?
Edit: Apparently, splitting the layers is not recommended
|
[
"This is my recipe for installing TFS 2008 SP1. \nThere is no domain controller in this scenario, we are only a couple of users. If I was to do it again, I would consider changing our environement to use a active directory domain.\n\nHost Server running Windows Server 2008 with 8GB RAM and quad processor\nFresh install of Windows Server 2008 32bit in a VM under Hyper-V\nInstall Application Server role with IIS\nInstall SQL Server 2008 Standard edition\n\n\nUse a user account for Reporting Services and Analysis Services\n\nCreate a slipstreamed image of TFS 2008 with SP1 and install TFS\nInstall VSTS 2008\nInstall Team System Explorer\nInstall VSTS 2008 SP1\nInstall TFS Web Access Power tool\n\nAfter installing everything, reports were not generated. Found this forum post that helped resolve the problem.\n\nOpen p://localhost:8080/Warehouse/v1.0/warehousecontroller.asmx\nRun the webservice (see above link for details), it will take a little while, the tfsWarehouse will be rebuilt\n\nIt is very important to do things in order, download the installation guide and follow it to the letter. I forgot to install the Team System Explorer until after installing SP1 and ventured into all sorts of problems. Installing SP1 once more fixed that.\n",
"One critical thing you has to keep in mind about TFS, is that it likes to have the machine all to it self. So if you have to create a separate instance on Hyper-V do it using the proven Windows Server 2003 platform with SQL Server 2005. \nI am sure Microsoft has done a great job getting it to work under Windows Server 2008 and SQL Server 2008, however you don't get any additional features with this newer install and it is currently unproven in the wild.\nSo my recommendation is to stick with what is known until the next release of TFS comes out.\nAlso splitting the layers is definitely not recommended, especially in the workgroup edition where you will only be allowed to have 5 licensed users. Those 5 users will never exceed the server's needs. Also my recommendation is to not update Sharepoint if you don't need to. In my environment, we don't really use Sharepoint all that much, so I left it alone. Sharepoint is usually, in my experience, where most of the problems come from with TFS. \n",
"I just upgraded our team to TFS 2008, from TFS 2005. The hardest part was upgrading SharePoint 2.0 to 3.0, so I would make sure to do that first, if you have not already installed TFS 2008. We had a couple of other difficulties, but they were all either related to the SharePoint upgrade, or to the fact that we were using an aftermarket Policy package - Scrum for TeamSystem. We are on SQL Server 2005, so I cannot address SQL Server 2008. As for splitting the layers, we did not do this either, as we are running on Windows Server 2003 and everything ran under the host OS.\n",
"Splitting the layers is only needed for more than 450 users.\nI would also recommend having the Build Server on a completely seperate machine. Building is very file system intensive. SQL Server performs best when it has complete control of a file system - so having build and TFS on the same machine may create performance issues while builds are executing.\nPerhaps this can be alleviated with proper tuning and seperate physical drives - but I'd think in the long run it would be a lot simpler to just either use some old hardware - or spin up a small virtual machine on a seperate host for your builds\n"
] |
[
8,
2,
1,
1
] |
[] |
[] |
[
"hyper_v",
"tfs",
"version_control",
"visual_studio",
"visual_studio_2008"
] |
stackoverflow_0000032597_hyper_v_tfs_version_control_visual_studio_visual_studio_2008.txt
|
Q:
Running a regular background event in Java web app
In podcast #15, Jeff mentioned he twittered about how to run a regular event in the background as if it was a normal function - unfortunately I can't seem to find that through Twitter. Now I need to do a similar thing and am going to throw the question to the masses.
My current plan is when the first user (probably me) enters the site it starts a background thread that waits until the allotted time (hourly on the hour) and then kicks off the event, blocking the others (I am a Windows programmer by trade so I think in terms of events and WaitOnMultipleObjects) until it completes.
How did Jeff do it in Asp.Net and is his method applicable to the Java web-app world?
A:
I think developing a custom solution for running background tasks isn't always worth it, so I recommend using the Quartz Scheduler in Java.
In your situation (need to run background tasks in a web application) you could use the ServletContextListener included in the distribution to initialize the engine at the startup of your web container.
After that you have a number of possibilities to start (trigger) your background tasks (jobs), e.g. you can use Calendars or cron-like expressions. In your situation most probably you should settle on SimpleTrigger, which lets you run jobs at fixed, regular intervals.
The jobs themselves can be described easily in Quartz too; however, you haven't provided any details about what you need to run, so I can't provide a suggestion in that area.
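Still, as a rough sketch of the wiring (Quartz 1.x API; the job class and the hourly interval are assumptions on my part):
import java.util.Date;
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class HourlyJob implements Job {
    public void execute(JobExecutionContext ctx) throws JobExecutionException {
        // the actual background work goes here
    }

    // e.g. called once from ServletContextListener.contextInitialized:
    public static void schedule() throws SchedulerException {
        Scheduler sched = new StdSchedulerFactory().getScheduler();
        JobDetail job = new JobDetail("hourlyJob", Scheduler.DEFAULT_GROUP, HourlyJob.class);
        // fire now, no end date, repeat forever, once an hour
        SimpleTrigger trigger = new SimpleTrigger("hourlyTrigger", Scheduler.DEFAULT_GROUP,
                new Date(), null, SimpleTrigger.REPEAT_INDEFINITELY, 60L * 60L * 1000L);
        sched.scheduleJob(job, trigger);
        sched.start();
    }
}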
A:
As mentioned, Quartz is one standard solution. If you don't care about clustering or persistence of background tasks across restarts, you can use the built-in thread pool support (in Java 5 and 6). If you use a ScheduledExecutorService you can put Runnables into the background thread pool that wait a specific amount of time before executing.
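A minimal sketch of that built-in approach (the hourly interval is an assumption; seconds are used to stay compatible with Java 5's TimeUnit):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BackgroundTasks {
    private static final ScheduledExecutorService pool =
            Executors.newScheduledThreadPool(1);

    public static void start() {
        pool.scheduleAtFixedRate(new Runnable() {
            public void run() {
                // the hourly work goes here
            }
        }, 0, 3600, TimeUnit.SECONDS); // run immediately, then once an hour
    }
}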
If you do care about clustering and/or persistence, you can use JMS queues for asynchronous execution, though you will still need some way of delaying background tasks (you can use Quartz or the ScheduledExecutorService to do this).
A:
Jeff's mechanism was to create some sort of cached object which ASP.Net would automatically recreate at some sort of interval - it seemed to be an ASP.Net-specific solution, so it probably won't help you (or me) much in the Java world.
See https://stackoverflow.fogbugz.com/default.asp?W13117
Atwood: Well, I originally asked on Twitter, because I just wanted something light weight. I really didn't want to like write a windows service. I felt like that was out of band code. Plus the code that actually does the work is a web page in fact, because to me that is a logical unit of work on a website is a web page. So, it really is like we are calling back into the web site, it's just like another request in the website, so I viewed it as something that should stay inline, and the little approach that we came up that was recommended to me on Twitter was to essentially to add something to the application cache with a fixed expiration, then you have a call back so when that expires it calls a certain function which does the work then you add it back in to the cache with the same expiration. So, it's a little bit, maybe "ghetto" is the right word.
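In C# terms, the trick Atwood describes looks roughly like this (a sketch, not the actual StackOverflow code; the task name and interval are placeholders):
using System;
using System.Web;
using System.Web.Caching;

public static class CacheTasks
{
    public static void RegisterTask(string name, int seconds)
    {
        HttpRuntime.Cache.Insert(name, seconds, null,
            DateTime.Now.AddSeconds(seconds), Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable, CacheItemRemoved);
    }

    private static void CacheItemRemoved(string key, object value,
        CacheItemRemovedReason reason)
    {
        // do the background work here, then put the item back
        // in the cache so the cycle repeats
        RegisterTask(key, Convert.ToInt32(value));
    }
}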
My approach has always been to have the OS (i.e. cron or the Windows task scheduler) load a specific URL at some interval, and then set up a page at that URL to check its queue and perform whatever tasks were required, but I'd be interested to hear if there's a better way.
From the transcript, it looks like FogBugz uses the windows service loading a URL approach also.
Spolsky: So we have this special page called heartbeat.asp. And that page, whenever you hit it, and anybody can hit it at anytime: doesn't hurt. But when that page runs it checks a queue of waiting tasks to see if there's anything that needs to be done. And if there's anything that needs to be done, it does one thing and then looks in that queue again and if there's anything else to be done it returns a plus, and the entire web page that it returns is just a single character with a plus in it. And if there's nothing else to be done, the queue is now empty, it returns a minus. So, anybody can call this and hit it as many times, you can load up heartbeat.asp in your web browser you hit Ctrl-R Ctrl-R Ctrl-R Ctrl-R until you start getting minuses instead of pluses. And when you've done that FogBugz will have completed all of its maintenance work that it needs to do. So that's the first part, and the second part is a very, very simple Windows service which runs, and its whole job is to call heartbeat.asp and if it gets a plus, call it again soon, and if it gets a minus call it again, but not for a while. So basically there's this Windows service that's always running, that has a very, very, very simple task of just hitting a URL, and looking to see if it gets a plus or a minus and, and then scheduling when it runs again based on whether it got a plus or a minus. And obviously you can do any kind of variation you want on this theme, like for example, uh you could actually, instead of returning just a plus or minus you could say "Okay call me back in 60 seconds" or "Call me back right away I have more work to be done." And that's how it works... so that maintenance service it just runs, you know, it's like, you know, a half page of code that runs that maintenance service, and it never has to change, and it doesn't have any of the logic in there, it just contains the tickling that causes these web pages to get called with a certain guaranteed frequency. And inside that web page at heartbeat.asp there's code that maintains a queue of tasks that need to be done and looks at how much time has elapsed and does, you know, late-night maintenance and every seven days delete all the older messages that have been marked as spam and all kinds of just maintenance background tasks. And uh, that's how that does that.
A:
We use jtcron for our scheduled background tasks.
It works well, and if you understand cron it should make sense to you.
A:
Here is how they do it on StackOverflow.com:
https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
|
Running a regular background event in Java web app
|
In podcast #15, Jeff mentioned he twittered about how to run a regular event in the background as if it was a normal function - unfortunately I can't seem to find that through Twitter. Now I need to do a similar thing and am going to throw the question to the masses.
My current plan is when the first user (probably me) enters the site it starts a background thread that waits until the allotted time (hourly on the hour) and then kicks off the event, blocking the others (I am a Windows programmer by trade so I think in terms of events and WaitOnMultipleObjects) until it completes.
How did Jeff do it in Asp.Net and is his method applicable to the Java web-app world?
|
[
"I think developing a custom solution for running background tasks doesn't always worth, so I recommend to use the Quartz Scheduler in Java.\nIn your situation (need to run background tasks in a web application) you could use the ServletContextListener included in the distribution to initialize the engine at the startup of your web container.\nAfter that you have a number of possibilities to start (trigger) your background tasks (jobs), e.g. you can use Calendars or cron-like expressions. In your situation most probably you should settle with SimpleTrigger that lets you run jobs in fixed, regular intervals.\nThe jobs themselves can be described easily too in Quartz, however you haven't provided any details about what you need to run, so I can't provide a suggestion in that area.\n",
"As mentioned, Quartz is one standard solution. If you don't care about clustering or persistence of background tasks across restarts, you can use the built in ThreadPool support (in Java 5,6). If you use a ScheduledExecutorService you can put Runnables into the background thread pool that wait a specific amount of time before executing. \nIf you do care about clustering and/or persistence, you can use JMS queues for asynchronous execution, though you will still need some way of delaying background tasks (you can use Quartz or the ScheduledExecutorService to do this).\n",
"Jeff's mechanism was to create some sort of cached object which ASP.Net would automatically recreate at some sort of interval - It seemed to be an ASP.Net specific solution, so probably won't help you (or me) much in Java world.\nSee https://stackoverflow.fogbugz.com/default.asp?W13117\n\nAtwood: Well, I originally asked on Twitter, because I just wanted something light weight. I really didn't want to like write a windows service. I felt like that was out of band code. Plus the code that actually does the work is a web page in fact, because to me that is a logical unit of work on a website is a web page. So, it really is like we are calling back into the web site, it's just like another request in the website, so I viewed it as something that should stay inline, and the little approach that we came up that was recommended to me on Twitter was to essentially to add something to the application cache with a fixed expiration, then you have a call back so when that expires it calls a certain function which does the work then you add it back in to the cache with the same expiration. So, it's a little bit, maybe \"ghetto\" is the right word.\n\nMy approach has always been to have to OS (i.e. Cron or the Windows task scheduler) load a specific URL at some interval, and then setup a page at that URL to check it's queue, and perform whatever tasks were required, but I'd be interested to hear if there's a better way.\nFrom the transcript, it looks like FogBugz uses the windows service loading a URL approach also.\n\nSpolsky: So we have this special page called heartbeat.asp. And that page, whenever you hit it, and anybody can hit it at anytime: doesn't hurt. But when that page runs it checks a queue of waiting tasks to see if there's anything that needs to be done. And if there's anything that needs to be done, it does one thing and then looks in that queue again and if there's anything else to be done it returns a plus, and the entire web page that it returns is just a single character with a plus in it. And if there's nothing else to be done, the queue is now empty, it returns a minus. So, anybody can call this and hit it as many times, you can load up heartbeat.asp in your web browser you hit Ctrl-R Ctrl-R Ctrl-R Ctrl-R until you start getting minuses instead of pluses. And when you've done that FogBugz will have completed all of its maintenance work that it needs to do. So that's the first part, and the second part is a very, very simple Windows service which runs, and its whole job is to call heartbeat.asp and if it gets a plus, call it again soon, and if it gets a minus call it again, but not for a while. So basically there's this Windows service that's always running, that has a very, very, very simple task of just hitting a URL, and looking to see if it gets a plus or a minus and, and then scheduling when it runs again based on whether it got a plus or a minus. And obviously you can do any kind of variation you want on this theme, like for example, uh you could actually, instead of returning just a plus or minus you could say \"Okay call me back in 60 seconds\" or \"Call me back right away I have more work to be done.\" And that's how it works... so that maintenance service it just runs, you know, it's like, you know, a half page of code that runs that maintenance service, and it never has to change, and it doesn't have any of the logic in there, it just contains the tickling that causes these web pages to get called with a certain guaranteed frequency. 
And inside that web page at heartbeat.asp there's code that maintains a queue of tasks that need to be done and looks at how much time has elapsed and does, you know, late-night maintenance and every seven days delete all the older messages that have been marked as spam and all kinds of just maintenance background tasks. And uh, that's how that does that.\n\n",
"We use jtcron for our scheduled background tasks.\nIt works well, and if you understand cron it should make sense to you.\n",
"Here is how they do it on StackOverflow.com:\nhttps://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/\n"
] |
[
12,
5,
3,
2,
1
] |
[] |
[] |
[
"events",
"java"
] |
stackoverflow_0000048293_events_java.txt
|
Q:
How to do C++ style destructors in C#?
I've got a C# class with a Dispose function via IDisposable. It's intended to be used inside a using block so the expensive resource it handles can be released right away.
The problem is that a bug occurred when an exception was thrown before Dispose was called, and the programmer neglected to use using or finally.
In C++, I never had to worry about this. The call to a class's destructor would be automatically inserted at the end of the object's scope. The only way to avoid that happening would be to use the new operator and hold the object behind a pointer, but that required extra work for the programmer and isn't something they would do by accident, like forgetting to use using.
Is there any way for a using block to be automatically used in C#?
Many thanks.
UPDATE:
I'd like to explain why I'm not accepting the finalizer answers. Those answers are technically correct in themselves, but they are not C++ style destructors.
Here's the bug I found, reduced to the essentials...
try
{
PleaseDisposeMe a = new PleaseDisposeMe();
throw new Exception();
a.Dispose();
}
catch (Exception ex)
{
Log(ex);
}
// This next call will throw a time-out exception unless the GC
// runs a.Dispose in time.
PleaseDisposeMe b = new PleaseDisposeMe();
Using FXCop is an excellent suggestion, but if that's my only answer, my question would have to become a plea to the C# people, or use C++. Twenty nested using statements anyone?
A:
Where I work we use the following guidelines:
Each IDisposable class must have a finalizer
Whenever using an IDisposable object, it must be used inside a "using" block. The only exception is if the object is a member of another class, in which case the containing class must be IDisposable and must call the member's 'Dispose' method in its own implementation of 'Dispose'. This means 'Dispose' should never be called by the developer except for inside another 'Dispose' method, eliminating the bug described in the question.
The code in each Finalizer must begin with a warning/error log notifying us that the finalizer has been called. This way you have an extremely good chance of spotting such bugs as described above before releasing the code, plus it might be a hint for bugs occurring in your system.
To make our lives easier, we also have a SafeDispose method in our infrastructure, which calls the Dispose method of its argument within a try-catch block (with error logging), just in case (although Dispose methods are not supposed to throw exceptions).
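As a sketch of such a helper (the console write stands in for whatever error logging infrastructure you have):
using System;

public static class DisposeHelper
{
    public static void SafeDispose(IDisposable d)
    {
        if (d == null) return;
        try
        {
            d.Dispose();
        }
        catch (Exception ex)
        {
            // Dispose isn't supposed to throw, but log it just in case
            Console.Error.WriteLine(ex);
        }
    }
}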
See also: Chris Lyon's suggestions regarding IDisposable
Edit:
@Quarrelsome: One thing you ought to do is call GC.SuppressFinalize inside 'Dispose', so that if the object was disposed, it wouldn't be "re-disposed".
It is also usually advisable to hold a flag indicating whether the object has already been disposed or not. The following pattern is usually pretty good:
class MyDisposable: IDisposable {
public void Dispose() {
lock(this) {
if (disposed) {
return;
}
disposed = true;
}
GC.SuppressFinalize(this);
// Do actual disposing here ...
}
private bool disposed = false;
}
Of course, locking is not always necessary, but if you're not sure if your class would be used in a multi-threaded environment or not, it is advisable to keep it.
A:
Unfortunately there isn't any way to do this directly in the code. If this is an issue in house, there are various code analysis solutions that could catch these sort of problems. Have you looked into FxCop? I think that this will catch these situations and in all cases where IDisposable objects might be left hanging. If it is a component that people are using outside of your organization and you can't require FxCop, then documentation is really your only recourse :).
Edit: In the case of finalizers, this doesn't really guarantee when the finalization will happen. So this may be a solution for you but it depends on the situation.
A:
~ClassName()
{
}
EDIT (bold):
It will get called when the object is moved out of scope and is tidied by the garbage collector; however, this is not deterministic and is not guaranteed to happen at any particular time.
This is called a Finalizer. All objects with a finaliser get put on a special finalise queue by the garbage collector where the finalise method is invoked on them (so it's technically a performance hit to declare empty finalisers).
The "accepted" dispose pattern as per the Framework Guidelines is as follows with unmanaged resources:
public class DisposableFinalisableClass : IDisposable
{
~DisposableFinalisableClass()
{
Dispose(false);
}
public void Dispose()
{
    Dispose(true);
    // part of the standard pattern: the finalizer is now redundant
    GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
// tidy managed resources
}
// tidy unmanaged resources
}
}
So the above means that if someone calls Dispose the unmanaged resources are tidied. However in the case of someone forgetting to call Dispose or an exception preventing Dispose from being called the unmanaged resources will still be tidied away, only slightly later on when the GC gets its grubby mitts on it (which includes the application closing down or unexpectedly ending).
A:
@Quarrelsome
It will get called when the object is moved out of scope and is tidied by the garbage collector.
This statement is misleading and, as I read it, incorrect: there is absolutely no guarantee when the finalizer will be called. You are absolutely correct that billpg should implement a finalizer; however, it will not be called automatically when the object goes out of scope like he wants. Evidence: the first bullet point under "Finalize operations have the following limitations".
In fact Microsoft gave a grant to Chris Sells to create an implementation of .NET that used reference counting instead of garbage collection (link). As it turned out, there was a considerable performance hit.
A:
The best practice is to use a finaliser in your class and always use using blocks.
There isn't really a direct equivalent though; finalisers look like C++ destructors, but behave differently.
You're supposed to nest using blocks, that's why the C# code layout defaults to putting them on the same line...
using (SqlConnection con = new SqlConnection("DB con str") )
using (SqlCommand com = new SqlCommand( con, "sql query") )
{
//now code is indented one level
//technically we're nested twice
}
When you're not using using you can just do what it does under the hood anyway:
PleaseDisposeMe a = null; // initialise so the finally block compiles
try
{
    a = new PleaseDisposeMe();
    throw new Exception();
}
catch (Exception ex) { Log(ex); }
finally {
    //this always executes, even with the exception
    if (a != null) a.Dispose();
}
With managed code C# is very very good at looking after its own memory, even when stuff is poorly disposed. If you're dealing with unmanaged resources a lot it's not so strong.
A:
This is no different from a programmer forgetting to use delete in C++, except that at least here the garbage collector will still eventually catch up with it.
And you never need to use IDisposable if the only resource you're worried about is memory. The framework will handle that on its own. IDisposable is only for unmanaged resources like database connections, filestreams, sockets, and the like.
A:
A better design is to make this class release the expensive resource on its own, before it's disposed.
For example, if it's a database connection, only connect when needed and release it immediately, long before the actual class gets disposed.
|
How to do C++ style destructors in C#?
|
I've got a C# class with a Dispose function via IDisposable. It's intended to be used inside a using block so the expensive resource it handles can be released right away.
The problem is that a bug occurred when an exception was thrown before Dispose was called, and the programmer neglected to use using or finally.
In C++, I never had to worry about this. The call to a class's destructor would be automatically inserted at the end of the object's scope. The only way to avoid that happening would be to use the new operator and hold the object behind a pointer, but that required extra work for the programmer and isn't something they would do by accident, like forgetting to use using.
Is there any way for a using block to be automatically used in C#?
Many thanks.
UPDATE:
I'd like to explain why I'm not accepting the finalizer answers. Those answers are technically correct in themselves, but they are not C++ style destructors.
Here's the bug I found, reduced to the essentials...
try
{
PleaseDisposeMe a = new PleaseDisposeMe();
throw new Exception();
a.Dispose();
}
catch (Exception ex)
{
Log(ex);
}
// This next call will throw a time-out exception unless the GC
// runs a.Dispose in time.
PleaseDisposeMe b = new PleaseDisposeMe();
Using FXCop is an excellent suggestion, but if that's my only answer, my question would have to become a plea to the C# people, or use C++. Twenty nested using statements anyone?
|
[
"Where I work we use the following guidelines:\n\nEach IDisposable class must have a finalizer\nWhenever using an IDisposable object, it must be used inside a \"using\" block. The only exception is if the object is a member of another class, in which case the containing class must be IDisposable and must call the member's 'Dispose' method in its own implementation of 'Dispose'. This means 'Dispose' should never be called by the developer except for inside another 'Dispose' method, eliminating the bug described in the question.\nThe code in each Finalizer must begin with a warning/error log notifying us that the finalizer has been called. This way you have an extremely good chance of spotting such bugs as described above before releasing the code, plus it might be a hint for bugs occuring in your system.\n\nTo make our lives easier, we also have a SafeDispose method in our infrastructure, which calls the the Dispose method of its argument within a try-catch block (with error logging), just in case (although Dispose methods are not supposed to throw exceptions).\nSee also: Chris Lyon's suggestions regarding IDisposable\nEdit:\n@Quarrelsome: One thing you ought to do is call GC.SuppressFinalize inside 'Dispose', so that if the object was disposed, it wouldn't be \"re-disposed\".\nIt is also usually advisable to hold a flag indicating whether the object has already been disposed or not. The follwoing pattern is usually pretty good:\nclass MyDisposable: IDisposable {\n public void Dispose() {\n lock(this) {\n if (disposed) {\n return;\n }\n\n disposed = true;\n }\n\n GC.SuppressFinalize(this);\n\n // Do actual disposing here ...\n }\n\n private bool disposed = false;\n}\n\nOf course, locking is not always necessary, but if you're not sure if your class would be used in a multi-threaded environment or not, it is advisable to keep it.\n",
"Unfortunately there isn't any way to do this directly in the code. If this is an issue in house, there are various code analysis solutions that could catch these sort of problems. Have you looked into FxCop? I think that this will catch these situations and in all cases where IDisposable objects might be left hanging. If it is a component that people are using outside of your organization and you can't require FxCop, then documentation is really your only recourse :).\nEdit: In the case of finalizers, this doesn't really guarantee when the finalization will happen. So this may be a solution for you but it depends on the situation.\n",
"~ClassName()\n{\n}\n\nEDIT (bold):\nIf will get called when the object is moved out of scope and is tidied by the garbage collector however this is not deterministic and is not guaranteed to happen at any particular time.\nThis is called a Finalizer. All objects with a finaliser get put on a special finalise queue by the garbage collector where the finalise method is invoked on them (so it's technically a performance hit to declare empty finalisers).\nThe \"accepted\" dispose pattern as per the Framework Guidelines is as follows with unmanaged resources:\n public class DisposableFinalisableClass : IDisposable\n {\n ~DisposableFinalisableClass()\n {\n Dispose(false);\n }\n\n public void Dispose()\n {\n Dispose(true);\n }\n\n protected virtual void Dispose(bool disposing)\n {\n if (disposing)\n {\n // tidy managed resources\n }\n\n // tidy unmanaged resources\n }\n }\n\nSo the above means that if someone calls Dispose the unmanaged resources are tidied. However in the case of someone forgetting to call Dispose or an exception preventing Dispose from being called the unmanaged resources will still be tidied away, only slightly later on when the GC gets its grubby mitts on it (which includes the application closing down or unexpectedly ending).\n",
"@Quarrelsome \n\nIf will get called when the object is moved out of scope and is tidied by the garbage collector.\n\nThis statement is misleading and how I read it incorrect: There is absolutely no guarantee when the finalizer will be called. You are absolutely correct that billpg should implement a finalizer; however it will not be called automaticly when the object goes out of scope like he wants. Evidence, the first bullet point under Finalize operations have the following limitations.\nIn fact Microsoft gave a grant to Chris Sells to create an implementation of .NET that used reference counting instead of garbage collection Link. As it turned out there was a considerable performance hit.\n",
"The best practice is to use a finaliser in your class and always use using blocks.\nThere isn't really a direct equivalent though, finalisers look like C destructors, but behave differently.\nYou're supposed to nest using blocks, that's why the C# code layout defaults to putting them on the same line...\nusing (SqlConnection con = new SqlConnection(\"DB con str\") )\nusing (SqlCommand com = new SqlCommand( con, \"sql query\") )\n{\n //now code is indented one level\n //technically we're nested twice\n}\n\nWhen you're not using using you can just do what it does under the hood anyway:\nPleaseDisposeMe a;\ntry\n{\n a = new PleaseDisposeMe();\n throw new Exception();\n}\ncatch (Exception ex) { Log(ex); } \nfinally { \n //this always executes, even with the exception\n a.Dispose(); \n}\n\nWith managed code C# is very very good at looking after its own memory, even when stuff is poorly disposed. If you're dealing with unmanaged resources a lot it's not so strong.\n",
"This is no different from a programmer forgetting to use delete in C++, except that at least here the garbage collector will still eventually catch up with it.\nAnd you never need to use IDisposable if the only resource you're worried about is memory. The framework will handle that on it's own. IDisposable is only for unmanaged resources like database connections, filestreams, sockets, and the like.\n",
"A better design is to make this class release the expensive resource on its own, before its disposed.\nFor example, If its a database connection, only connect when needed and release immediately, long before the actual class gets disposed.\n"
] |
[
6,
3,
2,
2,
2,
0,
0
] |
[] |
[] |
[
"c#",
"dispose",
"idisposable",
"using"
] |
stackoverflow_0000047612_c#_dispose_idisposable_using.txt
|
Q:
Play button in browser
I want to put songs on a web page and have a little play button, like you can see on Last.fm or Pandora. There can be multiple songs listed on the site, and if you start playing a different song with one already playing, it will pause the first track and begin playing the one you just clicked on. I think they use Flash for this, and I could probably implement it in a few hours, but is there already code I could use for this? Maybe just a flash swf file that you stick hidden on a web page with a basic Javascript API that I can use to stream mp3 files?
Also, what about WMA or AAC files? Is there a universal solution that will play these 3 file types?
http://musicplayer.sourceforge.net/
A:
There are many flash mp3 players that you can use that do this. Usually, you just have to edit a text file to point at the mp3s you want to have available.
Here is the first one that showed up on a google search for flash mp3 player: http://www.flashmp3player.org/demo.html
A:
This is fairly simple: if you want to embed the WMP, you can use all of its controls via JavaScript. There is a great MSDN section on it but I can't seem to find it now. Edit: I found this on MSDN; it contains the properties that an embedded WMP will accept, then all you have to do is call the methods via javascript.
<OBJECT id="VIDEO" width="320" height="240"
style="position:absolute; left:0;top:0;"
CLASSID="CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6"
type="application/x-oleobject">
<PARAM NAME="URL" VALUE="your file or url">
<PARAM NAME="SendPlayStateChangeEvents" VALUE="True">
<PARAM NAME="AutoStart" VALUE="True">
<PARAM name="uiMode" value="none">
<PARAM name="PlayCount" value="9999">
</OBJECT>
Then for the javascript
<script type="javascript">
obj = document.getElementById("VIDEO"); //Where video is the id of the object above.
obj.URL="filename"; //You can use this to both start and change the current file.
obj.controls.stop(); //Will stop
obj.controls.pause(); //Pause
</script>
Somewhere around here I have code to even control the volume.
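Until I dig that up, a minimal sketch of the idea (same VIDEO id as above; the settings object's volume runs 0-100):
obj = document.getElementById("VIDEO");
obj.settings.volume = 50; // half volume
obj.settings.mute = true; // or silence it entirely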
A while ago I built a custom (looking) player for a client purely in HTML and JavaScript.
A:
Something I bookmarked long ago, but never got to test so far: http://www.schillmania.com/projects/soundmanager2/
A:
I second superjoe30's suggestion: I had great success with musicplayer. The only (slight) negative is that it's a little older project and not as well skinnable as some of the alternatives (although you have the full source code, so - given some time - you can make it look exactly as you need it to).
|
Play button in browser
|
I want to put songs on a web page and have a little play button, like you can see on Last.fm or Pandora. There can be multiple songs listed on the site, and if you start playing a different song with one already playing, it will pause the first track and begin playing the one you just clicked on. I think they use Flash for this, and I could probably implement it in a few hours, but is there already code I could use for this? Maybe just a flash swf file that you stick hidden on a web page with a basic Javascript API that I can use to stream mp3 files?
Also, what about WMA or AAC files? Is there a universal solution that will play these 3 file types?
http://musicplayer.sourceforge.net/
|
[
"There are many flash mp3 players that you can use that do this. Usually, you just have to edit a text file to point at the mp3s you want to have available.\nHere is the first one that showed up on a google search for flash mp3 player: http://www.flashmp3player.org/demo.html\n",
"This is fairly simple if you want to embed the WMP you can use all the controls via JavaScript. There is a great MSDN section on it but I cant seem to find it now.Edit: I found this on MSDN it contains the properties that an embeded WMP will accept then all you have to do is call the methods via javascript.\n<OBJECT id=\"VIDEO\" width=\"320\" height=\"240\" \n style=\"position:absolute; left:0;top:0;\"\n CLASSID=\"CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6\"\n type=\"application/x-oleobject\">\n\n <PARAM NAME=\"URL\" VALUE=\"your file or url\">\n <PARAM NAME=\"SendPlayStateChangeEvents\" VALUE=\"True\">\n <PARAM NAME=\"AutoStart\" VALUE=\"True\">\n <PARAM name=\"uiMode\" value=\"none\">\n <PARAM name=\"PlayCount\" value=\"9999\">\n</OBJECT>\n\nThen for the javascript\n<script type=\"javascript\">\nobj = document.getElementById(\"VIDEO\"); //Where video is the id of the object above.\nobj.URL=\"filename\"; //You can use this to both start and change the current file.\nobj.controls.stop(); //Will stop\nobj.controls.Pause(); //Pause\n</script>\n\nSomewhere around here I have code to even control the volume.\nA while ago I built a custom (looking) player for a client purely in HTML and JavaScript.\n",
"Something I bookmarked long ago, but never got to test so far: http://www.schillmania.com/projects/soundmanager2/\n",
"I second superjoe30's suggestion: I had great success with musicplayer. The only (slight) negative is that it's a little older project and not as well skinnable as some of the alternatives (although you have the full source code, so - given some time - you can make it look exactly as you need it to).\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"browser",
"flash",
"mp3",
"streaming"
] |
stackoverflow_0000048225_browser_flash_mp3_streaming.txt
|
Q:
How do you create a virtual network interface on Windows?
On Linux, it's possible to create a tun interface using a tun driver which provides a "network interface pseudo-device" that can be treated as a regular network interface. Is there a way to do this programmatically on Windows? Is there a way to do this without writing my own driver?
A:
You can do this on XP with the Microsoft Loopback Adapter which is a driver for a virtual network card.
On newer Windows version: Installing the Microsoft Loopback Adapter in Windows 8 and Windows Server 2012
A:
@Tim
Depending on the licensing you might be able to use the TUN/TAP driver that is part of OpenVPN, see here for details.
A:
In the Singularity project, Microsoft research communicates with the singularity VM through a "loopback" adapter. Maybe that'd help?
Running it is easy so it may be something fun to do anyway. :)
http://research.microsoft.com/os/Singularity/
|
How do you create a virtual network interface on Windows?
|
On Linux, it's possible to create a tun interface using a tun driver which provides a "network interface pseudo-device" that can be treated as a regular network interface. Is there a way to do this programmatically on Windows? Is there a way to do this without writing my own driver?
|
[
"You can do this on XP with the Microsoft Loopback Adapter which is a driver for a virtual network card.\nOn newer Windows version: Installing the Microsoft Loopback Adapter in Windows 8 and Windows Server 2012\n",
"@Tim\nDepending on the licensing you might be able to use the TUN/TAP driver that is part of OpenVPN, see here for details.\n",
"In the Singularity project, Microsoft research communicates with the singularity VM through a \"loopback\" adapter. Maybe that'd help? \nRunning it is easy so it may be something fun to do anyway. :)\nhttp://research.microsoft.com/os/Singularity/ \n"
] |
[
16,
4,
0
] |
[] |
[] |
[
"networking",
"windows"
] |
stackoverflow_0000047854_networking_windows.txt
|
Q:
How to make your website look the same on Linux
I have a fairly standards compliant XHTML+CSS site that looks great on all browsers on PC and Mac. The other day I saw it on FF3 on Linux and the letter spacing was slightly larger, throwing everything out of whack and causing unwanted wrapping and clipping of text. The CSS in question has
font-size: 11px;
font-family: Arial, Helvetica, sans-serif;
I know it's going with the generic sans-serif, whatever that maps to. If I add the following, the text scrunches up enough to be close to what I get on the other platforms:
letter-spacing: -1.5px;
but this would involve some nasty server-side OS sniffing. If there's a pure CSS solution to this I'd love to hear it.
The system in question is Ubuntu 7.04 but that is irrelevant as I'm looking to fix it for at least the majority of, if not all, Linux users. Of course asking the user to install a font is not an option!
A:
A List Apart has a pretty comprehensive article on sizing fonts in CSS. Their conclusion is to use "ems" to size text, since it generally gives the most consistent sizing across browsers. They make no direct mention of different OSes, but you should try using ems. It might solve your problem.
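For example, a minimal sketch of the em-based approach (0.6875em assumes the common 16px browser default, so it only approximates the 11px target):
body { font-size: 100%; } /* respect the user's configured default */
p { font-size: 0.6875em; } /* 11 / 16 = 0.6875, i.e. roughly 11px */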
A:
Have you tried it in FF3 on Windows?
Personally, I find a good CSS Reset goes a long way in making your page look the same in all browsers.
A:
I find the easiest way to solve font sizing problems between browsers is to simply leave room for error. Make divs slightly larger or fonts slightly smaller so that platform variation doesn't change wrapping or clipping considerably.
A:
Sizing/spacing differences are usually difficult to catch. What you can do is create a Linux-specific CSS file that will contain these values adjusted for Linux, then do a simple JS-based detect to inject that CSS if the User agent is a Linux one.
This is probably not the cleanest approach, but it will work, and with the least intrusion into your otherwise clean HTML/CSS.
A:
Unless your site is expecting an above-normal amount of Linux-based traffic, you're probably going to adversely affect more people if you "sacrifice the user’s ability to adjust his or her reading environment" as opposed to just not caring about the Linux experience.
Having said that, if you do want a nice Linux experience, you should address the reasons behind why your design breaks under small variations in font spacing, given that these issues are difficult to control under current CSS implementations.
|
How to make your website look the same on Linux
|
I have a fairly standards compliant XHTML+CSS site that looks great on all browsers on PC and Mac. The other day I saw it on FF3 on Linux and the letter spacing was slightly larger, throwing everything out of whack and causing unwanted wrapping and clipping of text. The CSS in question has
font-size: 11px;
font-family: Arial, Helvetica, sans-serif;
I know it's going with the generic sans-serif, whatever that maps to. If I add the following, the text scrunches up enough to be close to what I get on the other platforms:
letter-spacing: -1.5px;
but this would involve some nasty server-side OS sniffing. If there's a pure CSS solution to this I'd love to hear it.
The system in question is Ubuntu 7.04 but that is irrelevant as I'm looking to fix it for at least the majority of, if not all, Linux users. Of course asking the user to install a font is not an option!
|
[
"A List Apart has a pretty comprehensive article on sizing fonts in CSS. Their conclusion is to use \"ems\" to size text, since it generally gives the most consistent sizing across browsers. They make no direct mention of different OSes, but you should try using ems. It might solve your problem.\n",
"Have you tried it in FF3 on windows?\nPersonally, I find a good CSS Reset goes a long way in making your page look the same in all browsers.\n",
"I find the easiest way to solve font sizing problems between browsers is to simply leave room for error. Make divs slightly larger or fonts slightly smaller so that platform variation doesn't change wrapping or clipping considerably.\n",
"Sizing/spacing differences are usually difficult to catch. What you can do is create a Linux-specific CSS file that will contain these values adjusted for Linux, then do a simple JS-based detect to inject that CSS if the User agent is a Linux one.\nThis is probably not the cleanest approach, but it will work, and with the least intrusion into your otherwise clean HTML/CSS.\n",
"Unless your site is expecting an above-normal amount of Linux-based traffic, you're probably going to adversely affect more people if you \"sacrifice the user’s ability to adjust his or her reading environment\" as opposed to just not caring about the Linux experience.\nHaving said that, if you do want a nice Linux experience, you should address the reasons behind why your design breaks under small variations in font spacing, given that these issues are difficult to control under current CSS implementations.\n"
] |
[
5,
1,
0,
0,
0
] |
[] |
[] |
[
"css",
"linux"
] |
stackoverflow_0000047519_css_linux.txt
|
Q:
Change user for running windows forms program
I wrote a simple Windows Forms program in C#. I want to be able to input a Windows user name and password, and when I click a login button, run code as the user whose credentials I entered.
A:
You can use the WindowsIdentity.Impersonate method to
achieve this. This method allows code to impersonate a different Windows
user. Here is a link for more information on this method with a good sample:
http://msdn.microsoft.com/en-us/library/system.security.principal.windowsidentity.impersonate.aspx
Complete example:
// This sample demonstrates the use of the WindowsIdentity class to impersonate a user.
// IMPORTANT NOTES:
// This sample can be run only on Windows XP. The default Windows 2000 security policy
// prevents this sample from executing properly, and changing the policy to allow
// proper execution presents a security risk.
// This sample requests the user to enter a password on the console screen.
// Because the console window does not support methods allowing the password to be masked,
// it will be visible to anyone viewing the screen.
// The sample is intended to be executed in a .NET Framework 1.1 environment. To execute
// this code in a 1.0 environment you will need to use a duplicate token in the call to the
// WindowsIdentity constructor. See KB article Q319615 for more information.
using System;
using System.Runtime.InteropServices;
using System.Security.Principal;
using System.Security.Permissions;
using System.Windows.Forms;
[assembly:SecurityPermissionAttribute(SecurityAction.RequestMinimum, UnmanagedCode=true)]
[assembly:PermissionSetAttribute(SecurityAction.RequestMinimum, Name = "FullTrust")]
public class ImpersonationDemo
{
[DllImport("advapi32.dll", SetLastError=true, CharSet = CharSet.Unicode)]
public static extern bool LogonUser(String lpszUsername, String lpszDomain, String lpszPassword,
int dwLogonType, int dwLogonProvider, ref IntPtr phToken);
[DllImport("kernel32.dll", CharSet=System.Runtime.InteropServices.CharSet.Auto)]
private unsafe static extern int FormatMessage(int dwFlags, ref IntPtr lpSource,
int dwMessageId, int dwLanguageId, ref String lpBuffer, int nSize, IntPtr *Arguments);
[DllImport("kernel32.dll", CharSet=CharSet.Auto)]
public extern static bool CloseHandle(IntPtr handle);
[DllImport("advapi32.dll", CharSet=CharSet.Auto, SetLastError=true)]
public extern static bool DuplicateToken(IntPtr ExistingTokenHandle,
int SECURITY_IMPERSONATION_LEVEL, ref IntPtr DuplicateTokenHandle);
// Test harness.
// If you incorporate this code into a DLL, be sure to demand FullTrust.
[PermissionSetAttribute(SecurityAction.Demand, Name = "FullTrust")]
public static void Main(string[] args)
{
IntPtr tokenHandle = new IntPtr(0);
IntPtr dupeTokenHandle = new IntPtr(0);
try
{
string userName, domainName;
// Get the user token for the specified user, domain, and password using the
// unmanaged LogonUser method.
// The local machine name can be used for the domain name to impersonate a user on this machine.
Console.Write("Enter the name of the domain on which to log on: ");
domainName = Console.ReadLine();
Console.Write("Enter the login of a user on {0} that you wish to impersonate: ", domainName);
userName = Console.ReadLine();
Console.Write("Enter the password for {0}: ", userName);
const int LOGON32_PROVIDER_DEFAULT = 0;
//This parameter causes LogonUser to create a primary token.
const int LOGON32_LOGON_INTERACTIVE = 2;
tokenHandle = IntPtr.Zero;
// Call LogonUser to obtain a handle to an access token.
bool returnValue = LogonUser(userName, domainName, Console.ReadLine(),
LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT,
ref tokenHandle);
Console.WriteLine("LogonUser called.");
if (false == returnValue)
{
int ret = Marshal.GetLastWin32Error();
Console.WriteLine("LogonUser failed with error code : {0}", ret);
throw new System.ComponentModel.Win32Exception(ret);
}
Console.WriteLine("Did LogonUser Succeed? " + (returnValue? "Yes" : "No"));
Console.WriteLine("Value of Windows NT token: " + tokenHandle);
// Check the identity.
Console.WriteLine("Before impersonation: "
+ WindowsIdentity.GetCurrent().Name);
// Use the token handle returned by LogonUser.
WindowsIdentity newId = new WindowsIdentity(tokenHandle);
WindowsImpersonationContext impersonatedUser = newId.Impersonate();
// Check the identity.
Console.WriteLine("After impersonation: "
+ WindowsIdentity.GetCurrent().Name);
// Stop impersonating the user.
impersonatedUser.Undo();
// Check the identity.
Console.WriteLine("After Undo: " + WindowsIdentity.GetCurrent().Name);
// Free the tokens.
if (tokenHandle != IntPtr.Zero)
CloseHandle(tokenHandle);
}
catch(Exception ex)
{
Console.WriteLine("Exception occurred. " + ex.Message);
}
}
}
A:
Impersonate will change the Thread context. If you want to change the identity and launch a separate process, you will have to use runas command.
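As a rough sketch of that second approach (on .NET 2.0 or later, ProcessStartInfo accepts credentials directly, which amounts to a programmatic runas; the user name, domain, and executable here are placeholders):
using System.Diagnostics;
using System.Security;

// inside whatever method handles your login button
SecureString password = new SecureString();
// in a real app, collect this from a masked input, one char at a time
foreach (char c in "p@ssw0rd") password.AppendChar(c);

ProcessStartInfo psi = new ProcessStartInfo("notepad.exe");
psi.UserName = "someUser"; // placeholder credentials
psi.Domain = "SOMEDOMAIN";
psi.Password = password;
psi.UseShellExecute = false; // required when supplying credentials
Process.Start(psi);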
The .NET Developer's Guide to Windows Security by Keith Brown is an excellent read which describes all the security scenarios.
Online version is also available.
|
Change user for running windows forms program
|
I wrote a simple Windows Forms program in C#. I want to be able to input a Windows user name and password, and when I click a login button, run code as the user whose credentials I entered.
|
[
"You can use the WindowsIdentity.Impersonate method to \nachieve this. This method allows code to impersonate a different Windows \nuser. Here is a link for more information on this method with a good sample:\nhttp://msdn.microsoft.com/en-us/library/system.security.principal.windowsidentity.impersonate.aspx\nComplete example:\n// This sample demonstrates the use of the WindowsIdentity class to impersonate a user.\n// IMPORTANT NOTES:\n// This sample can be run only on Windows XP. The default Windows 2000 security policy\n// prevents this sample from executing properly, and changing the policy to allow\n// proper execution presents a security risk.\n// This sample requests the user to enter a password on the console screen.\n// Because the console window does not support methods allowing the password to be masked,\n// it will be visible to anyone viewing the screen.\n// The sample is intended to be executed in a .NET Framework 1.1 environment. To execute\n// this code in a 1.0 environment you will need to use a duplicate token in the call to the\n// WindowsIdentity constructor. See KB article Q319615 for more information.\n\nusing System;\nusing System.Runtime.InteropServices;\nusing System.Security.Principal;\nusing System.Security.Permissions;\nusing System.Windows.Forms;\n\n[assembly:SecurityPermissionAttribute(SecurityAction.RequestMinimum, UnmanagedCode=true)]\n[assembly:PermissionSetAttribute(SecurityAction.RequestMinimum, Name = \"FullTrust\")]\npublic class ImpersonationDemo\n{\n [DllImport(\"advapi32.dll\", SetLastError=true, CharSet = CharSet.Unicode)]\n public static extern bool LogonUser(String lpszUsername, String lpszDomain, String lpszPassword,\n int dwLogonType, int dwLogonProvider, ref IntPtr phToken);\n\n [DllImport(\"kernel32.dll\", CharSet=System.Runtime.InteropServices.CharSet.Auto)]\n private unsafe static extern int FormatMessage(int dwFlags, ref IntPtr lpSource,\n int dwMessageId, int dwLanguageId, ref String lpBuffer, int nSize, IntPtr *Arguments);\n\n [DllImport(\"kernel32.dll\", CharSet=CharSet.Auto)]\n public extern static bool CloseHandle(IntPtr handle);\n\n [DllImport(\"advapi32.dll\", CharSet=CharSet.Auto, SetLastError=true)]\n public extern static bool DuplicateToken(IntPtr ExistingTokenHandle,\n int SECURITY_IMPERSONATION_LEVEL, ref IntPtr DuplicateTokenHandle);\n\n // Test harness.\n // If you incorporate this code into a DLL, be sure to demand FullTrust.\n [PermissionSetAttribute(SecurityAction.Demand, Name = \"FullTrust\")]\n public static void Main(string[] args)\n {\n IntPtr tokenHandle = new IntPtr(0);\n IntPtr dupeTokenHandle = new IntPtr(0);\n try\n {\n string userName, domainName;\n // Get the user token for the specified user, domain, and password using the\n // unmanaged LogonUser method.\n // The local machine name can be used for the domain name to impersonate a user on this machine.\n Console.Write(\"Enter the name of the domain on which to log on: \");\n domainName = Console.ReadLine();\n\n Console.Write(\"Enter the login of a user on {0} that you wish to impersonate: \", domainName);\n userName = Console.ReadLine();\n\n Console.Write(\"Enter the password for {0}: \", userName);\n\n const int LOGON32_PROVIDER_DEFAULT = 0;\n //This parameter causes LogonUser to create a primary token.\n const int LOGON32_LOGON_INTERACTIVE = 2;\n\n tokenHandle = IntPtr.Zero;\n\n // Call LogonUser to obtain a handle to an access token.\n bool returnValue = LogonUser(userName, domainName, Console.ReadLine(),\n LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT,\n 
ref tokenHandle);\n\n Console.WriteLine(\"LogonUser called.\");\n\n if (false == returnValue)\n {\n int ret = Marshal.GetLastWin32Error();\n Console.WriteLine(\"LogonUser failed with error code : {0}\", ret);\n throw new System.ComponentModel.Win32Exception(ret);\n }\n\n Console.WriteLine(\"Did LogonUser Succeed? \" + (returnValue? \"Yes\" : \"No\"));\n Console.WriteLine(\"Value of Windows NT token: \" + tokenHandle);\n\n // Check the identity.\n Console.WriteLine(\"Before impersonation: \"\n + WindowsIdentity.GetCurrent().Name);\n // Use the token handle returned by LogonUser.\n WindowsIdentity newId = new WindowsIdentity(tokenHandle);\n WindowsImpersonationContext impersonatedUser = newId.Impersonate();\n\n // Check the identity.\n Console.WriteLine(\"After impersonation: \"\n + WindowsIdentity.GetCurrent().Name);\n\n // Stop impersonating the user.\n impersonatedUser.Undo();\n\n // Check the identity.\n Console.WriteLine(\"After Undo: \" + WindowsIdentity.GetCurrent().Name);\n\n // Free the tokens.\n if (tokenHandle != IntPtr.Zero)\n CloseHandle(tokenHandle);\n\n }\n catch(Exception ex)\n {\n Console.WriteLine(\"Exception occurred. \" + ex.Message);\n }\n\n }\n}\n\n",
"Impersonate will change the Thread context. If you want to change the identity and launch a separate process, you will have to use runas command.\nThe .NET Developer's Guide to Windows Security by Keith Brown is an excellent read which describes all the security scenarios.\nOnline version is also available.\n"
] |
[
4,
2
] |
[] |
[] |
[
".net",
"authentication",
"windows",
"winforms"
] |
stackoverflow_0000048567_.net_authentication_windows_winforms.txt
|
Q:
Best way to get user input for a menu-driven CLI in C
What's the best way to get user input in a C program where the choices are of a limited number?
Say for example the choices are:
A) Print the list.
B) Add 99 to the end of the list.
C) Delete all duplicates.
5) Reset 5 times.
Entering "A" and then Enter is OK.
Or, just a single keystroke would work as well.
A:
getchar(), or cgetc(), depending on the platform
A:
I have used something simple like the following:
int handle_input(void)
{
char input = 0;
read(0, &input, 1); /* read() is declared in <unistd.h> */
switch(input) {
case 'c':
// Do c
break;
case 'p':
// Do p
break;
case 'd':
// Do d
break;
case 'q':
quit = 1;
break;
case '?':
PRINT(ENABLE, "c - connect\n");
PRINT(ENABLE, "p - ping\n");
PRINT(ENABLE, "d - disconnect\n");
PRINT(ENABLE, "q - quit\n");
PRINT(ENABLE, "? - this message\n");
break;
}
return 0;
}
A:
Instead of using
switch(input)
use...
switch (toupper(input)) /* toupper() is declared in <ctype.h> */
{
case 'A':
This will allow the user to enter 'a' or 'A' and saves you having to check for upper and lower case.
|
Best way to get user input for a menu-driven CLI in C
|
What's the best way to get user input in a C program where the choices are of a limited number?
Say for example the choices are:
A) Print the list.
B) Add 99 to the end of the list.
C) Delete all duplicates.
5) Reset 5 times.
Entering "A" and then Enter is OK.
Or, just a single keystroke would work as well.
|
[
"getchar(), or cgetc(), depending on the platform\n",
"I have used something simple like the following:\nint intput()\n{\n char input = 0;\n int ret_val = 0;\n\n read(0, &input, 1);\n\n switch(input) {\n case 'c':\n // Do c\n break;\n\n case 'p':\n // Do p\n break;\n\n case 'd':\n // Do d\n break;\n\n case 'q':\n quit = 1;\n break;\n\n case '?':\n PRINT(ENABLE, \"c - connect\\n\");\n PRINT(ENABLE, \"p - ping\\n\");\n PRINT(ENABLE, \"d - disconnect\\n\");\n PRINT(ENABLE, \"q - quit\\n\");\n PRINT(ENABLE, \"? - this message\\n\");\n break;\n }\n\n return 0;\n}\n\n",
"Instead of using\nswitch(input)\n\nuse...\nswitch (toupper(input))\n{\n case 'A':\n\nThis will allow the user to enter 'a' or 'A' and saves you having to check for upper and lower case.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"c"
] |
stackoverflow_0000042693_c.txt
|
Q:
How do you send and receive UDP packets in Java on a multihomed machine?
I have a machine with VmWare installed which added two extra network interfaces. The OS is Vista. I have two Java applications, one which broadcasts datagrams, and one which receives those datagrams. The problem I'm having is that unless I disable both VmWare network interfaces, the receiver can't receive the datagrams.
What is the best way to make that work without disabling the interfaces?
A:
Look at the alternate constructor for DatagramSocket:
DatagramSocket(int port, InetAddress laddr)
Creates a datagram socket, bound to the specified local address.
I'm guessing you're only specifying the port.
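For example, a minimal receiver bound to one interface (192.168.1.10 and port 4445 are placeholders for your physical NIC and port):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class Receiver {
    public static void main(String[] args) throws Exception {
        // bind to the physical interface instead of the wildcard address,
        // so the VMware adapters never see (or swallow) the datagrams
        InetAddress nic = InetAddress.getByName("192.168.1.10");
        DatagramSocket socket = new DatagramSocket(4445, nic);
        byte[] buf = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet); // blocks until a datagram arrives
        System.out.println(new String(packet.getData(), 0, packet.getLength()));
        socket.close();
    }
}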
|
How do you send and receive UDP packets in Java on a multihomed machine?
|
I have a machine with VmWare installed which added two extra network interfaces. The OS is Vista. I have two Java applications, one which broadcasts datagrams, and one which receives those datagrams. The problem I'm having is that unless I disable both VmWare network interfaces, the receiver can't receive the datagrams.
What is the best way to make that work without disabling the interfaces?
|
[
"Look at the alternate constructor for DatagramSocket:\nDatagramSocket(int port, InetAddress laddr)\nCreates a datagram socket, bound to the specified local address.\n\nI'm guessing you're only specifying the port.\n"
] |
[
9
] |
[] |
[] |
[
"java",
"sockets"
] |
stackoverflow_0000048659_java_sockets.txt
|
Q:
Call Visitors web stat program from PHP
I've been looking into different web statistics programs for my site, and one promising one is Visitors. Unfortunately, it's a C program and I don't know how to call it from the web server. I've tried using PHP's shell_exec, but my web host (NFSN) has PHP's safe mode on and it's giving me an error message.
Is there a way to execute the program within safe mode? If not, can it work with CGI? If so, how? (I've never used CGI before)
A:
Visitors looks like a log analyzer and report generator. It's probably best set up as a cron job to create static HTML pages once a day or so.
If you don't have shell access to your hosting account, or some sort of control panel that lets you set up cron jobs, you'll be out of luck.
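If you do have cron access, the entry itself is a one-liner; a sketch (paths are illustrative, and the -A flag matches the invocation shown in the last answer below):
# regenerate the stats page every night at 04:00
0 4 * * * visitors -A /home/logs/access_log > /home/public/stats.html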
A:
Is there any reason not to just use Google Analytics? It's free, and you don't have to write it yourself. I use it, and it gives you a lot of information.
Sorry, I know it's not a "programming" answer ;)
A:
I second the answer of Jonathan: this is a log analyzer, meaning that you must feed it as input the logfile of the webserver and it generates a summarization of it. Given that you are on a shared host, it is improbable that you can access that file, and even if you could access it, it is probable that it contains the entries for all the websites hosted on the given machine (setting up separate logging for each VirtualHost is certainly possible with Apache, but I don't know if it is a common practice).
One possible workaround would be for you to write out a logfile from your pages. However this is rather difficult and can have a severe performance impact (you have to serialize the writes to the logfile for one, if you don't want to get garbage from time to time). All in all, I would suggest going with an online analytics service, like Google Analytics.
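If you did go that route, the serialization concern can be handled with an exclusive lock; a minimal sketch (the path and the roughly-common-log format are illustrative, not necessarily what Visitors expects):
<?php
$line = sprintf('%s - - [%s] "%s %s" 200 -' . "\n",
    $_SERVER['REMOTE_ADDR'],
    date('d/M/Y:H:i:s O'),
    $_SERVER['REQUEST_METHOD'],
    $_SERVER['REQUEST_URI']);
$fp = fopen('/home/logs/access_log', 'ab'); // append, binary-safe
if ($fp) {
    if (flock($fp, LOCK_EX)) { // exclusive lock serializes concurrent writers
        fwrite($fp, $line);
        flock($fp, LOCK_UN);
    }
    fclose($fp);
}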
A:
As fortune would have it I do have access to the log file for my site. I've been able to generate the HTML page on the server manually - I've just been looking for a way to get it to happen automatically. All I need is to execute a shell command and get the output to display as the page.
Sounds like a good job for an intern.
=)
Call your host and see if you can work out a deal for doing a shell execute.
A:
I managed to solve this problem on my own. I put the following lines in a file named visitors.cgi:
#!/bin/sh
printf "Content-type: text/html\n\n"
exec visitors -A /home/logs/access_log
|
Call Visitors web stat program from PHP
|
I've been looking into different web statistics programs for my site, and one promising one is Visitors. Unfortunately, it's a C program and I don't know how to call it from the web server. I've tried using PHP's shell_exec, but my web host (NFSN) has PHP's safe mode on and it's giving me an error message.
Is there a way to execute the program within safe mode? If not, can it work with CGI? If so, how? (I've never used CGI before)
|
[
"Visitors looks like a log analyzer and report generator. Its probably best setup as a chron job to create static HTML pages once a day or so.\nIf you don't have shell access to your hosting account, or some sort of control panel that lets you setup up chron jobs, you'll be out of luck.\n",
"Is there any reason not to just use Google Analytics? It's free, and you don't have to write it yourself. I use it, and it gives you a lot of information.\nSorry, I know it's not a \"programming\" answer ;)\n",
"I second the answer of Jonathan: this is a log analyzer, meaning that you must feed it as input the logfile of the webserver and it generates a summarization of it. Given that you are on a shared host, it is improbable that you can access to that file, and even if you would access it, it is probable that it contains then entries for all the websites hosted on the given machine (setting up separate logging for each VirtualHost is certainly possible with Apache, but I don't know if it is a common practice).\nOne possible workaround would be for you to write out a logfile from your pages. However this is rather difficult and can have a severe performance impact (you have to serialize the writes to the logfile for one, if you don't want to get garbage from time to time). All in all, I would suggest going with an online analytics service, like Google Analytics.\n",
"\nAs fortune would have it I do have access to the log file for my site. I've been able to generate the HTML page on the server manually - I've just been looking for a way to get it to happen automatically. All I need is to execute a shell command and get the output to display as the page.\n\nSounds like a good job for an intern.\n=)\nCall your host and see if you can work out a deal for doing a shell execute.\n",
"I managed to solve this problem on my own. I put the following lines in a file named visitors.cgi:\n#!/bin/sh\n\nprintf \"Content-type: text/html\\n\\n\"\nexec visitors -A /home/logs/access_log\n\n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
"analytics",
"cgi",
"php_safe_mode",
"statistics",
"visitors"
] |
stackoverflow_0000048526_analytics_cgi_php_safe_mode_statistics_visitors.txt
|
Q:
Stream data (such as music) using PHP (or another language)
For years, I've been investigating how to create music streams from my computer. I've seen programs, but anything useful I've seen is Windows only (I use a Mac).
Eventually, I got interested in how streams work. Is there any way I could create my own stream, possibly using socket functions in PHP? Is there a PHP library for this?
A:
Take a look at Ampache. It is a Web-based Open Source Audio file manager. It is implemented with MySQL, and PHP. It allows you to view, edit, and play your audio files via the web.
A:
In the end it all boils down to the protocol you'd want to use. Shoutcast IMHO is plain HTTP, so to make your own stream, you just output the stream's content.
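At its most bare-bones, such a stream is just the right header plus bytes; a minimal PHP sketch (song.mp3 stands in for whatever source file or transcoder pipe you have):
<?php
header('Content-Type: audio/mpeg');
$fp = fopen('song.mp3', 'rb'); // placeholder source
while (!feof($fp)) {
    echo fread($fp, 8192); // push 8 KB chunks
    flush(); // hand each chunk to the client instead of buffering it all
}
fclose($fp);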
To make an ogg based webradio work with my Sonos system, I have created a little transcoding wrapper around sox which is actually written in PHP, so it may be helpful to you to serve as an example.
You'll find it here: http://www.gnegg.ch/ogg2mp3/
If you are after implementing your very own streaming protocol - maybe even UDP based, then, I'm afraid, PHP may not be the right solution for the problem - at least not as long as it has its share of problems when used for long running processes (which 5.3 may bring some help for with its integrated garbage collection)
|
Stream data (such as music) using PHP (or another language)
|
For years, I've been investigating how to create music streams from my computer. I've seen programs, but anything useful I've seen is Windows only (I use a Mac).
Eventually, I got interested in how streams work. Is there any way I could create my own stream, possibly using socket functions in PHP? Is there a PHP library for this?
|
[
"Take a look at Ampache. It is a Web-based Open Source Audio file manager. It is implemented with MySQL, and PHP. It allows you to view, edit, and play your audio files via the web.\n",
"In the end it all boils down to the protocol you'd want to use. Shoutcast IMHO is plain HTTP, so to make your own stream, you just output the streams content.\nTo make an ogg based webradio work with my Sonos system, I have created a little transcoding wrapper around sox which is is actually written in PHP, so it may be helpful to you to serve as an example.\nYou'll find it here: http://www.gnegg.ch/ogg2mp3/\nIf you are after implementing your very own streaming protocol - maybe even UDP based, then, I'm afraid, PHP may not be the right solution for the problem - at least not as long as it has its share of problems when used for long running processes (which 5.3 may bring some help for with its integrated garbage collection)\n"
] |
[
3,
1
] |
[] |
[] |
[
"mp3",
"php",
"sockets",
"stream"
] |
stackoverflow_0000045166_mp3_php_sockets_stream.txt
|
Q:
Python: No module named core.exceptions
I'm trying to get Google AppEngine to work on my Debian box and am getting the following error when I try to access my page:
<type 'exceptions.ImportError'>: No module named core.exceptions
The same app works fine for me when I run it on my other Ubuntu box, so I know it's not a problem with the app itself. However, I need to get it working on this Debian box. It originally had python 2.4 but after AppEngine complained about it I installed the python2.5 and python2.5-dev packages (to no avail).
I saw on this Google Group post that it may be due to the version of AppEngine and just to reinstall it, but that didn't work. Any ideas?
Edit 1: Also tried uninstalling python2.4 and 2.5 then reinstalling 2.5, which also didn't work.
Edit 2: Turns out when I made AppEngine into a CVS project it didn't add the core directory into my project, so when I checked it out there literally was no module named core.exceptions. Re-downloading that folder resolved the problem.
A:
core.exceptions is part of django; what version of django do you have installed? The AppEngine comes with the appropriate version for whatever release you've downloaded (in the lib/django directory). It can be installed by going to that directory and running python setup.py install
|
Python: No module named core.exceptions
|
I'm trying to get Google AppEngine to work on my Debian box and am getting the following error when I try to access my page:
<type 'exceptions.ImportError'>: No module named core.exceptions
The same app works fine for me when I run it on my other Ubuntu box, so I know it's not a problem with the app itself. However, I need to get it working on this Debian box. It originally had python 2.4 but after AppEngine complained about it I installed the python2.5 and python2.5-dev packages (to no avail).
I saw on this Google Group post that it may be due to the version of AppEngine and just to reinstall it, but that didn't work. Any ideas?
Edit 1: Also tried uninstalling python2.4 and 2.5 then reinstalling 2.5, which also didn't work.
Edit 2: Turns out when I made AppEngine into a CVS project it didn't add the core directory into my project, so when I checked it out there literally was no module named core.exceptions. Re-downloading that folder resolved the problem.
|
[
"core.exceptions is part of django; what version of django do you have installed? The AppEngine comes with the appropriate version for whatever release you've downloaded (in the lib/django directory). It can be installed by going to that directory and running python setup.py install\n"
] |
[
6
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000048777_google_app_engine_python.txt
|
Q:
Do you know of any "best practice" or "what works" vi tutorial for programmers?
There are thousands of vi tutorials on the web, most of them generically listing all the commands. There are even videos on youtube which show basic functionality.
But does anyone know of a vi tutorial which focuses on the needs of programmers?
For example when I program in Perl with vi, moving to the "next paragraph" is meaningless.
I want to know which commands seasoned vi users combine to, e.g.:
copy everything inside of parentheses
copy a function
copy and paste a variable (e.g. 2yw)
etc.
I am sure there are lots of functions using multiple-file capability, and the maps, macros, reading in of files for template code, regular expression search, jumping to functions, perhaps minimal code completion, or other features that emulate what programmers have gotten used to in Visual Studio and Eclipse, etc.
A:
A nice collection of vimtips.
And the best Vim cheatsheet around.
A:
I just ended up reading the vim manual a few times, over the years, picking up useful features on each iteration.
One thing that really made vim work for me as a perl IDE was starting to use tags, as explained here: http://www.vim.org/tips/tip.php?tip_id=94. Using the pltags script that ships with vim, you can jump around between modules to find your functions, methods, etc.
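In practice that boils down to a handful of commands; a quick sketch (assuming pltags.pl, which lives under vim's tools directory, is on your PATH):
:!pltags.pl *.pm           " build a tags file from your Perl modules
:set tags=./tags,tags      " tell vim where to look for it
:tag some_function         " jump straight to a definition by name
" or press Ctrl-] on an identifier to jump, Ctrl-T to come back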
A:
If you are a beginner, vimtutor would be a good way to start with. (Type vimtutor on your shell and get going).
And once you get hold of the basics of vim, you can look around web and figure out things for yourself.
This and this may be an interesting read.
|
Do you know of any "best practice" or "what works" vi tutorial for programmers?
|
There are thousands of vi tutorials on the web, most of them generically listing all the commands. There are even videos on youtube which show basic functionality.
But does anyone know of a vi tutorial which focuses on the needs of programmers?
For example when I program in Perl with vi, moving to the "next paragraph" is meaningless.
I want to know which commands seasoned vi users combine to, e.g.:
copy everything inside of parentheses
copy a function
copy and paste a variable (e.g. 2yw)
etc.
I am sure there are lots of functions using multiple-file capability, and the maps, macros, reading in of files for template code, regular expression search, jumping to functions, perhaps minimal code completion, or other features that emulate what programmers have gotten used to in Visual Studio and Eclipse, etc.
|
[
"A nice collection of vimtips.\nAnd the best Vim cheatsheet around.\n",
"I just ended up reading the vim manual a few times, over the years, picking up useful features on each iteration.\nOne thing that really made vim work for me as a perl IDE was starting to use tags, as explained here: http://www.vim.org/tips/tip.php?tip_id=94. Using the pltags script that ships with vim, you can jump around between modules to find your functions, methods, etc.\n",
"If you are a beginner, vimtutor would be a good way to start with. (Type vimtutor on your shell and get going).\nAnd once you get hold of the basics of vim, you can look around web and figure out things for yourself.\nThis and this may be an interesting read.\n"
] |
[
10,
1,
1
] |
[] |
[] |
[
"editor",
"keyboard_shortcuts",
"text_editor",
"unix",
"vim"
] |
stackoverflow_0000047167_editor_keyboard_shortcuts_text_editor_unix_vim.txt
|
Q:
Unhandled exceptions filter in a windows service
I am creating a Windows service and want to know best practices for this. In all my Windows programs I have a form that asks the user if he wants to report the error, and if he answers yes, I create a case in FogBugz. What should I do in a Windows service?
A:
Since you're not going to have a user interacting with the program, I'd say make a configuration variable (in an app.config file) responsible for sending/not sending the data. That way users who don't want to report errors can just change a flag in a config file. I'd personally have it turned on by default and then give them guidance on how to turn it off if they wanted to.
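A minimal sketch of what that could look like (the ReportErrors key is a made-up name for illustration):
<configuration>
  <appSettings>
    <!-- set to "false" to opt out of error reporting -->
    <add key="ReportErrors" value="true" />
  </appSettings>
</configuration>
Then, in the service, with a reference to System.Configuration:
bool report = bool.Parse(ConfigurationManager.AppSettings["ReportErrors"] ?? "true");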
A:
You could also have a system tray representation of the service which would show a small notification about any errors and ask the user whether they want it reported or not. I think that it is still better to be able to give the user the choice whenever you are sending 'out' data from their computer.
|
Unhandled exceptions filter in a windows service
|
I am creating a Windows service and want to know best practices for this. In all my Windows programs I have a form that asks the user if he wants to report the error, and if he answers yes, I create a case in FogBugz. What should I do in a Windows service?
|
[
"Since you're not going to have a user interacting with the program, I'd say make configuration variable (in an app.config file) responsible for sending/not sending the data. That way users who don't want to report errors can just change a flag in a config file. I'd personally have it turned on by default and then give them guidance on how to turn it off it they wanted to.\n",
"You could also have a system tray representation of the service which would show a small notification about any errors and ask the user whether they want it reported or not. I think that it is still better to be able to give the user the choice whenever you are sending 'out' data from their computer.\n"
] |
[
4,
1
] |
[] |
[] |
[
".net",
"exception",
"windows_services"
] |
stackoverflow_0000048757_.net_exception_windows_services.txt
|
Q:
Detecting Client Disconnects in Web Services
I'm using the Apache CXF Web Services stack. When a client times out or disconnects from the server before the operation is complete, the server keeps running the operation until it is complete. I would like to have the server detect when the client disconnects and handle that accordingly.
Is there a way to detect when a client disconnects using Apache CXF? What about using other Java web-services stacks?
A:
I am not familiar with Apache CXF, but the following should be applicable to any Java Servlet based framework.
In order to determine if a user has disconnected (stop button, closed browser, etc.), the server must attempt to send a packet. If the TCP/IP connection has been closed, an IOException will be thrown.
In theory, a Java application could send a space character at various points during processing. An IOException would signal that the client has gone away and processing can be aborted.
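A rough servlet-style sketch of that idea (doExpensiveStep is a hypothetical unit of work; note that PrintWriter swallows IOExceptions, so you poll checkError() rather than catching):
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LongTaskServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        PrintWriter out = resp.getWriter();
        for (int step = 0; step < 100; step++) {
            doExpensiveStep(step); // hypothetical unit of work
            out.write(' ');        // probe: push one byte to the client
            out.flush();           // force it onto the wire now
            if (out.checkError())  // true once the client has disconnected
                return;            // abort the remaining work
        }
        out.println("done");
    }

    private void doExpensiveStep(int step) { /* ... */ }
}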
However, there may be a few issues with this technique:
Sending characters during processing will cause the response to be "committed", so it may be impossible to set HTTP headers, cookies, etc. based on the result of the long-running serverside processing.
If the output stream is buffered, the space characters will not be sent immediately, thereby not performing an adequate test. It may be possible to use flush() as a workaround.
It may be difficult to implement this technique for a given framework or view technology (JSP, etc.). For example, the page rendering code will not be able to set the content type after the response has been committed.
|
Detecting Client Disconnects in Web Services
|
I'm using the Apache CXF Web Services stack. When a client times out or disconnects from the server before the operation is complete, the server keeps running the operation until it is complete. I would like to have the server detect when the client disconnects and handle that accordingly.
Is there a way to detect when a client disconnects using Apache CXF? What about using other Java web-services stacks?
|
[
"I am not familiar with Apache CXF, but the following should be applicable to any Java Servlet based framework.\nIn order to determine if a user has disconnected (stop button, closed browser, etc.) the server must attempt to send a packet. If the TCP/IP connection has been closed, an IOException will be thrown.\nIn theory, a Java application could send a space character at various points during processing. An IOException would signal that the client has gone away and processing can be aborted.\nHowever, there may be a few issues with this technique:\n\nSending characters during processing will cause the response to be \"committed\", so it may be impossible to set HTTP headers, cookies, etc. based on the result of the long-running serverside processing.\nIf the output stream is buffered, the space characters will not be sent immediately, thereby not performing an adequate test. It may be possible to use flush() as a workaround.\nIt may be difficult to implement this technique for a given framework or view technology (JSP, etc.) For example, the page rendering code will not be able to sent the content type after the response has been committed.\n\n"
] |
[
2
] |
[] |
[] |
[
"cxf",
"java",
"web_services",
"webservice_client"
] |
stackoverflow_0000048859_cxf_java_web_services_webservice_client.txt
|
Q:
What is this Icarus thing that comes with MbUnit?
I've had to install MbUnit multiple times now and it keeps coming with something called the
Gallio Icarus GUI Test Runner
I have tried using it thinking it was just an update to the MbUnit GUI but it won't detect my MbUnit tests and sometimes won't even open the assemblies properly.
Perhaps I'm just overlooking it but I haven't been able to find much of an answer on their website either except that it has something to do with a new testing platform.
Can someone give me a better explanation of what this is?
A:
According to a blog entry MbUnit v3 and Gallio alpha 1,
So whats going on here, Gallio is a
neutral test platform that is an off
shoot from the work we had done on
MbUnit v3. Gallio is both a common
framework and a set of runners for
testing tools. MbUnit v3 uses Gallio
as its native test platform, Gallio
can also as Jeff mentions run MbUnit,
NUnit and XUnit.net tests. For both
migration purposes and to help improve
how you are using your existing test
framework we hope this will prove
useful. We still have a lot of work to
do but make no secrets of what we are
up to, check out our road map. I do
want to draw attention to the work we
are doing with our new runners.
Starting with Icarus, our new GUI.
So, "Gallio is a neutral test platform" and "Icarus, [their] new GUI."
|
What is this Icarus thing that comes with MbUnit?
|
I've had to install MbUnit multiple times now and it keeps coming with something called the
Gallio Icarus GUI Test Runner
I have tried using it thinking it was just an update to the MbUnit GUI but it won't detect my MbUnit tests and sometimes won't even open the assemblies properly.
Perhaps I'm just overlooking it but I haven't been able to find much of an answer on their website either except that it has something to do with a new testing platform.
Can someone give me a better explanation of what this is?
|
[
"According to a blog entry MbUnit v3 and Gallio alpha 1,\n\nSo whats going on here, Gallio is a\n neutral test platform that is an off\n shoot from the work we had done on\n MbUnit v3. Gallio is both a common\n framework and a set of runners for\n testing tools. MbUnit v3 uses Gallio\n as its native test platform, Gallio\n can also as Jeff mentions run MbUnit,\n NUnit and XUnit.net tests. For both\n migration purposes and to help improve\n how you are using your existing test\n framework we hope this will prove\n useful. We still have a lot of work to\n do but make no secrets of what we are\n up to, check out our road map. I do\n want to draw attention to the work we\n are doing with our new runners.\n Starting with Icarus, our new GUI.\n\nSo, \"Gallio is a neutral test platform\" and \"Icarus, [their] new GUI.\"\n"
] |
[
2
] |
[] |
[] |
[
"automated_tests",
"mbunit",
"tdd",
"testing",
"unit_testing"
] |
stackoverflow_0000048864_automated_tests_mbunit_tdd_testing_unit_testing.txt
|
Q:
Python descriptor protocol analog in other languages?
Is there something like the Python descriptor protocol implemented in other languages? It seems like a nice way to increase modularity/encapsulation without bloating your containing class' implementation, but I've never heard of a similar thing in any other languages. Is it likely absent from other languages because of the lookup overhead?
A:
I've not heard of a direct equivalent either. You could probably achieve the same effect with macros, especially in a language like Lisp which has extremely powerful macros.
I wouldn't be at all surprised if other languages start to incorporate something similar because it is so powerful.
A:
Ruby and C# both easily let you create accessors by specifying getter/setter methods for an attribute, much like in Python. However, this isn't designed to naturally let you write the code for these methods in another class the way that Python allows. In practice, I'm not sure how much this matters, since every time I've seen an attribute defined through the descriptor protocol it's been implemented in the same class.
EDIT: Darn my dyslexia (by which I mean careless reading). For some reason I've always read "descriptor" as "decorator" and vice versa, even when I'm the one typing both of them. I'll leave my post intact since it has valid information, albeit information which has absolutely nothing to do with the question.
The term "decorator" itself is actually the name of a design pattern described in the famous "Design Patterns" book. The Wikipedia article contains many examples in different programming languages of decorator usage: http://en.wikipedia.org/wiki/Decorator_pattern
However, the decorators in that article are object-oriented; they have classes implementing a predefined interface which lets another existing class behave differently somehow, etc. Python decorators act in a functional way by replacing a function at runtime with another function, allowing you to effectively modify/replace that function, insert code, etc.
This is known in the Java world as Aspect-Oriented programming, and the AspectJ Java compiler lets you do these kinds of things and compile your AspectJ code (which is a superset of Java) into Java bytecode.
I'm not familiar enough with C# or Ruby to know what their version of decorators would be.
|
Python descriptor protocol analog in other languages?
|
Is there something like the Python descriptor protocol implemented in other languages? It seems like a nice way to increase modularity/encapsulation without bloating your containing class' implementation, but I've never heard of a similar thing in any other languages. Is it likely absent from other languages because of the lookup overhead?
|
[
"I've not heard of a direct equivalent either. You could probably achieve the same effect with macros, especially in a language like Lisp which has extremely powerful macros.\nI wouldn't be at all surprised if other languages start to incorporate something similar because it is so powerful.\n",
"Ruby and C# both easily let you create accessors by specifying getter/setter methods for an attribute, much like in Python. However, this isn't designed to naturally let you write the code for these methods in another class the way that Python allows. In practice, I'm not sure how much this matters, since every time I've seen an attribute defined through the descriptor protocol its been implemented in the same class.\nEDIT: Darn my dyslexia (by which I mean careless reading). For some reason I've always read \"descriptor\" as \"decorator\" and vice versa, even when I'm the one typing both of them. I'll leave my post intact since it has valid information, albeit information which has absolutely nothing to do with the question.\nThe term \"decorator\" itself is actually the name of a design pattern described in the famous \"Design Patterns\" book. The Wikipedia article contains many examples in different programming languages of decorator usage: http://en.wikipedia.org/wiki/Decorator_pattern\nHowever, the decorators in that article object-oriented; they have classes implementing a predefined interface which lets another existing class behave differently somehow, etc. Python decorators act in a functional way by replacing a function at runtime with another function, allowing you to effectively modify/replace that function, insert code, etc.\nThis is known in the Java world as Aspect-Oriented programming, and the AspectJ Java compiler lets you do these kinds of things and compile your AspectJ code (which is a superset of Java) into Java bytecode.\nI'm not familiar enough with C# or Ruby to know what their version of decorators would be.\n"
] |
[
4,
0
] |
[] |
[] |
[
"encapsulation",
"language_features",
"python"
] |
stackoverflow_0000034243_encapsulation_language_features_python.txt
|
Q:
Is the Mono Developer Support from Novell worth it?
My company are thinking about using Mono for an upcoming product, so we were thinking about the $12,995 Mono Kickstart support from Novell.
Anybody here used it, is it worth it?
A:
If I were you I'd probably start the project and then only buy the product if I needed support for Mono. That way, if you don't need it, you won't be wasting the $13k.
|
Is the Mono Developer Support from Novell worth it?
|
My company are thinking about using Mono for an upcoming product, so we were thinking about the $12,995 Mono Kickstart support from Novell.
Anybody here used it, is it worth it?
|
[
"if i were you i'd probably start the project and then only if i needed support for mono buy the product. that way if you dont need it you wont be wasting the $13k.\n"
] |
[
4
] |
[] |
[] |
[
"mono"
] |
stackoverflow_0000048844_mono.txt
|
Q:
Best practices for integrating third-party modules into your app
We have a few projects that involve building an application that is composed of maybe 50% custom functionality, but then pulls in, say, a wiki, a forum, and other components that are "wheels" that have already been invented that we do not wish to re-write from scratch.
These third-party applications usually have their own databases, themes, and authentication systems. Getting things to work like a single-sign-on, or a common theme, or tagging/searching across entities in multiple sub-apps are pretty challenging problems, in my experience. What are some of the best practices for this kind of integration project?
Our approach, so far, has been to try to pick components carefully, choosing ones that have a clearly defined API, preferably via HTTP (like REST or SOAP), though that isn't always possible (we haven't found a decent forum that works that way). Are there suggestions folks can give to anyone trying to do this, as I suspect many of us are doing this more and more frequently these days?
A:
If you are going with open source libraries, pick ones with a good license. I have found out the hard way (when trying to OEM an application) that many companies shy away from licenses like LGPL. I won't go into the details on why but they prefer Apache, BSD or MIT style licenses.
Pick tools that have been around for a while. Check out the community and make sure it is active. See what other people are using and use those tools.
Pick technologies that work well together. I've put together an application that uses ORM and Web Services. Spring Framework + Apache CXF + JPA for the ORM created a nice technology stack. All of the tools I use tie together easily in Spring, making it simple to use them in combination. The last thing you would want is to pick tools that force you to write a bunch of code just to get them working together.
Pick technologies that are based on standards. That way if the library or tool dies, you can easily switch to another that uses the same standard.
A:
Make sure that the interface between your application and the third-party application or library is such that you can replace it easily with something else just in case. In some cases the third-party software may just be an implementation of a standard API (Java does this a lot with JDBC, JMS, JNDI, ...). In other cases this means wrapping the third-party library in some API that you come up with.
Of course there are times to throw that idea out the window and have things tightly integrated with the third-party software. Just be sure that you REALLY want to bind your application to that third-party. Once you go down this road it's REALLY hard to go back and change your mind.
A:
Donald Knuth said that even better than reusable code is modifiable code, so if there is no API, you should look for an open source app that is well written and therefore possible to customize.
As for databases, login systems, and other programming parts (I don't see how e.g. theming could benefit), you can also try, depending on circumstances, wrapping things so that the module believes it's on its own but actually talks to your code.
A:
My approach has been to use third-party code for some core functionality. For example, I use Subsonic for my data access, Devexpress components for the UI, and Peter Blum's Data Entry Suite for data entry and validation. Subsonic is open source; the Devexpress and Peter Blum controls have source code available for an extra fee. It would be impossible for me to get the functionality of these controls if I tried to write them myself.
This approach allows me to focus on the custom functionality of my application without having to worry about how I will access the database or how I make an editable treelist that looks pretty. Sure, I don't have a fully configured and working forum, but I know that I'll be using a SQL database for my app and I won't have to try to get different data storage components to work together. I don't have a wiki, but I know how to use the Devexpress UI components, and formatting and validating data entry is a breeze with Peter Blum's controls. In other words, learn tools (and of course choose them carefully) that will speed development of all of your projects; then you can focus on the portions of your application that have to be customized.
I'm not too concerned if it's open source or not as long as source code is available. If it's open source, I donate to the project. If it's a commercial component, I will pay a fair price. In any case, the tools help to make programming fun, and the results have data integrity and are great looking. If I develop a wiki or forum, I know that I can get them to work together seamlessly. Finally, all of the tools I have mentioned have been around for a long time and are written by outstanding developers with great reputations.
|
Best practices for integrating third-party modules into your app
|
We have a few projects that involve building an application that is composed of maybe 50% custom functionality, but then pulls in, say, a wiki, a forum, and other components that are "wheels" that have already been invented that we do not wish to re-write from scratch.
These third-party applications usually have their own databases, themes, and authentication systems. Getting things to work like a single-sign-on, or a common theme, or tagging/searching across entities in multiple sub-apps are pretty challenging problems, in my experience. What are some of the best practices for this kind of integration project?
Our approach, so far, has been to try to pick components carefully, choosing ones that have a clearly defined API, preferably via HTTP (like REST or SOAP), though that isn't always possible (we haven't found a decent forum that works that way). Are there suggestions folks can give to anyone trying to do this, as I suspect many of us are doing this more and more frequently these days?
|
[
"If you are going with open source libraries, pick ones with a good license. I have found out the hard way (when trying to OEM an application) that many companies shy away from licenses like LGPL. I won't go into the details on why but they prefer Apache, BSD or MIT style licenses.\nPick tools that have been around for a while. Check out the community and make sure it is active. See what other people are using and use those tools.\nPick technologies that work well together. I've put together an application that uses ORM and Web Services. Spring Framework + Apache CXF + JPA for the ORM created a nice technology stack. All of the tools I use easily tie together in Spring making it easy to use them together. The last thing you would want to do is pick tools that you have to write a bunch of code just to use them together.\nPick technologies that are based on standards. That way if the library or tool dies, you can easily switch to another that uses the same standard.\n",
"Make sure that the interface between your application and the third-party application or library is such that you can replace it easily with something else just in case. In some cases the third-party software may just be an implementation of an standard API (Java does this a lot with JDBC, JMS, JNDI, ...). In other cases this means wrapping the third-party library in some API that you come up with. \nOf course there are times to throw that idea out the window and have things tightly integrated with the third-party software. Just be sure that you REALLY want to bind your application to that third-party. Once you go down this road it's REALLY hard to go back and change your mind. \n",
"Donald Knuth said that even better then reusable code is modifiable code, so if there is no API, you should seek for an open source app that is written well and therefore possible to customize. \nAs for databases and login systems and other programming parts (i don't see how e.g. theming could benefit), you can also try, depending on circumstances wrapping stuff so that module believes it's on it's own, but actually talks to your code.\n",
"My approach has been to use third party code for some core functionality. For example, I use Subsonic for my data access, Devexpress components for UI and Peter Blum Data Entry Suite for data entry and validation. Subsonic is open source, Devexpress Peter Blum's controls have source code available for an extra fee. It would be impossible for me to get the functionality of these control if I tried to write them myself. \nThis approach allows me to focus on the custom functionality of my application without having to worry about how I will access the database or how I make an editable treelist that looks pretty. Sure I don't have a fully configured and working forum but I know that I'll be using a SQL database for my app and I won't have to try and get different data storage components to work together. I don't have a wiki but I know how to use the devexpress ui components and formatting and validating data entry is breeze with Peter Blum's controls. In other words, learn tools (and of course choose them carefully) that will speed development of all of your projects then you can focus on portions of your application that have to be customized. \nI'm not too concerned if it's open source or not as long as source code is available. If it's open source I donate to the project. If it's a commercial component I will pay a fair price. In any case, the tools help to make programming fun and the results have data integrity and are great looking. If I develop a wiki or forum I know that I can get them to work together seamlesslessly. Finally, all of the tools I have mentioned have been around for a long time and are written by outstanding developers with great reputations. \n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"architecture",
"system_integration"
] |
stackoverflow_0000048322_architecture_system_integration.txt
|
Q:
Make a CD run once
We're doing an "Amazing Race" kind of event, and thought it would be cool to have CDs that could only play once... like a "this message will self destruct in 5 seconds..."
Any thoughts on how to do this? I was thinking it could be a compiled HTML website that would write a cookie and only play once. I don't want to write to the registry (don't want to depend on windows, don't want to install anything, etc).
I also don't care if it's hackable... This is a one-time fun event, and I don't really care too much if people could remove the cookie or something.
Any other ideas?
A:
If the content is HTML and run inside a normal browser window, then a cookie may work but there are caveats:
User runs the CD once when IE is the default browser. User runs it at a later time, when Firefox is the default browser, so the cookie cannot be checked.
The browser's security settings may be locked down to prevent use of script so the cookie cannot be set (more of an IE problem).
An alternative might be Flash's equivalent of cookies, but if script was locked down then the same may be true for Flash.
A:
If you used Flash to create the content, you could then use a wrapper program such as Zinc to produce cross-platform executables of the content.
Then, the Flash could use its version of cookies (local shared objects) to determine when the content has been displayed - write to the LSO the first time and read from it thereafter.
A:
I would say encrypt (part of) the contents of the disc with a unique one-time pad that you request from a server that does a read directly followed by a delete of the decryption key. You could write an identifier on each disc so you can use multiple discs, each with a unique key.
This requires network access and some encryption tools, but a very simple implementation would do what you want, is feasible, and would be 'unbreakable' unless the one-time pad is captured and stored.
If just for fun, this should be secure enough.
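A minimal C# sketch of the client side of that scheme, assuming a hypothetical key server at keyserver.example that returns the pad for a given disc ID exactly once and deletes it on read (the URL, disc ID, and file names here are all illustrative):

using System;
using System.IO;
using System.Net;

class OneTimeDisc
{
    // Fetch the one-time pad; the server is assumed to delete the key
    // as soon as it has been read, so a second run gets nothing back.
    static byte[] FetchPad(string discId)
    {
        using (WebClient client = new WebClient())
        {
            return client.DownloadData(
                "http://keyserver.example/key?disc=" + discId);
        }
    }

    // XOR the encrypted payload with the pad to recover the content.
    // A true one-time pad must be at least as long as the payload.
    static byte[] Decrypt(byte[] cipher, byte[] pad)
    {
        byte[] plain = new byte[cipher.Length];
        for (int i = 0; i < cipher.Length; i++)
        {
            plain[i] = (byte)(cipher[i] ^ pad[i]);
        }
        return plain;
    }

    static void Main()
    {
        byte[] cipher = File.ReadAllBytes("payload.bin"); // shipped on the CD
        byte[] pad = FetchPad("DISC-0001");               // works exactly once
        File.WriteAllBytes("payload.html", Decrypt(cipher, pad));
    }
}

The security rests entirely on the server deleting the key after the first read, which is exactly the read-then-delete behaviour described above.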
A:
You can create a volatile registry entry. It will only exist until the computer is restarted. This solution is very much "hackable", but it is simple and may suffice for what you want to do.
Take a look at the REG_OPTION_VOLATILE here.
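For illustration, here is a minimal C# sketch of creating such a key via P/Invoke, since the managed RegistryKey API of the time didn't expose volatile keys (the Software\CdRunOnce path is made up for the example):

using System;
using System.Runtime.InteropServices;

class VolatileKeyDemo
{
    // Win32 constants; REG_OPTION_VOLATILE makes the key vanish at reboot.
    const int REG_OPTION_VOLATILE = 0x0001;
    const int KEY_ALL_ACCESS = 0xF003F;
    static readonly UIntPtr HKEY_CURRENT_USER = new UIntPtr(0x80000001u);

    [DllImport("advapi32.dll", CharSet = CharSet.Auto)]
    static extern int RegCreateKeyEx(
        UIntPtr hKey, string lpSubKey, int reserved, string lpClass,
        int dwOptions, int samDesired, IntPtr lpSecurityAttributes,
        out UIntPtr phkResult, out int lpdwDisposition);

    [DllImport("advapi32.dll")]
    static extern int RegCloseKey(UIntPtr hKey);

    static void Main()
    {
        UIntPtr hKey;
        int disposition;
        // Create a key that only lives until the next reboot.
        int rc = RegCreateKeyEx(HKEY_CURRENT_USER, @"Software\CdRunOnce",
            0, null, REG_OPTION_VOLATILE, KEY_ALL_ACCESS, IntPtr.Zero,
            out hKey, out disposition);
        if (rc == 0)
        {
            RegCloseKey(hKey);
        }
    }
}

A later run could check whether the key still exists (e.g. Registry.CurrentUser.OpenSubKey returning non-null) and refuse to play; after a reboot the key, and with it the lock, is gone.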
A:
Will the computers this is run on have internet access? You can easily load up a remote URL (execute 'start http://yoururl.com' from autorun.inf), store the cookie, and prevent it from being loaded again if the cookie exists.
A:
If it's allowed to be hackable, then I'd just go with a simple solution of HTML + JavaScript, requiring (say) a GUID to enter, with some silly obfuscation in the code to validate the GUID.
What I mean by silly obfuscated validation is something like putting together a big array of ROT13'ed GUIDs, then adding code to only accept the Math.floor(PI * E + 32/(new DateTime()).getYear())'th GUID in the array, and ROT13 it again using sufficiently uncommented/unclear code, then check the user input against the result. Do it all in one line for kicks, or generate the GUIDs in some pseudo-random manner using a known seed... you get the idea :).
The only snag might be if IE doesn't allow local JavaScript? Hmm, looks like they'd need to deal with the InfoBar thing :(.
A:
You could also set a registry key that would prevent playing, though this could be bypassed.
A:
I think your best bet is to use rewritable media for this. You can create your application easily, like an HTML site or something similar, and after the last link or last page, however you decide to end it, you could execute a script with some command-line burner that erases the rewritable media, or even writes an ISO that you keep around back to the CD, containing a text file or a Flash animation explaining that the CD is lost forever.
Take a look at some command-line burners. Linux has several that aren't worth listing here; for Windows you can use Cheetah CommandLine Burner, among several others.
If you wish to make a CD that doesn't depend on the installed OS, you should take a look at live CDs. FreeDOS is a choice for "DOS-compatible applications", or, my suggestion, use a Linux live CD.
You will also have several options for small HTTP servers, like lighttpd, and even browsers in several flavors, from text interfaces to graphical ones.
Good luck on the race :D. Great idea BTW!
A:
Make a Java Swing application. That will not require Internet access and it runs on Mac, Windows, and Linux. You can write to the file system for the lock. System.getProperty("user.home") gives you the home equivalent for the platform. You might have to include a JRE on your CD.
A:
Not quite what you're looking for, but you could put it on re-writable media and have an executable over-write itself (or part of itself).
I don't know if a CD-RW could do that automatically, or if you would have to look at cheap USB sticks.
|
Make a CD run once
|
We're doing an "Amazing Race" kind of event, and thought it would be cool to have CDs that could only play once... like a "this message will self destruct in 5 seconds..."
Any thoughts on how to do this? I was thinking it could be a compiled HTML website that would write a cookie and only play once. I don't want to write to the registry (don't want to depend on windows, don't want to install anything, etc).
I also don't care if it's hackable... This is a one-time fun event, and I don't really care too much if people could remove the cookie or something.
Any other ideas?
|
[
"If the content is HTML and run inside a normal browser window, then a cookie may work but there are caveats:\n\nUser runs the CD once when IE is the default browser. User runs at a later time, when Firefox is the default browser so cookie cannot be checked.\nThe browser's security settings may be locked down to prevent use of script so the cookie cannot be set (more of an IE problem).\n\nAn alternative might be Flash's equivalent of cookies, but if script was locked down then the same may be true for Flash.\n",
"If you used Flash to create the content, you could then use a wrapper program such as Zinc to produce cross-platform executables of the content. \nThen, the Flash could use its version of cookies (local shared objects) to determine when the content has been displayed - write to the LSO the first time and read from it thereafter.\n",
"I would say encrypt (part of) the contents of the disc with a unique one time pad, that you request from a server that does a read directly followed by a delete of the decryption key. You could write an identifier on each disk so you can use multiple disks, each with a unique key.\nThis requires network access and some encryption tools, but a very simple implementation would do what you want it to do, is feasible, and it would be 'unbreakable' unless the one time pad is captured and stored.\nIf just for fun, this should be secure enough.\n",
"You can create a volatile registry entry. It will only exist untill the computer is restarted. This solution is very much \"hackable\", but it is simple and may suffice for what you want to do.\nTake a look at the REG_OPTION_VOLATILE here.\n",
"Will the computers this is run on have internet access? You can easily load up a remote url (execute 'start http://yoururl.com' from autorun.inf), store the cookie and prevent it from being loaded again if the cookie exists.\n",
"If it's allowed to be hackable, then I'd just go with a simple solution of HTML + JavaScript, requiring (say) a GUID to enter, with some silly obfuscation in the code to validate the GUID.\nWhat I mean by silly obfuscated validation is something like putting together a big array of ROT13'ed GUIDs, then adding code to only accept the Math.floor(PI * E + 32/(new DateTime()).getYear())'th GUID in the array, and ROT13 it again using sufficiently uncommented/unclear code, then check the user input against the result. Do it all in one line for kicks, or generate the GUIDs in some pseudo-random manner using a known seed... you get the idea :).\nThe only snag might be if IE doesn't allow local JavaScript? Hmm, looks like they'd need to deal with the InfoBar thing :(.\n",
"You could also set a registry key that would prevent playing, though this could be bypassed.\n",
"I think your best bet is to use Rewritable media for this. You can create your application easily, like HTML site or something like that, and after the last link or last page, however you decide to do you could execute a script with some command-line burner that would erase the rewritable media, or even write an ISO that you keep in CD with a text file or a flash that explains that the CD is lost forever.\nGive a look at some Command Line Burners. Linux have several, that isn't worth to mention here, for windows you can use Cheetah CommandLine Burner among several others. \nIf you wish to do a CD without depending on the installed OS you should give a look at LIVE CDs. FreeDOS is a choice for \"DOS Compatible applications\" or my suggestioon you use a Linux live CD.\nAlso you will have several options for small HTTP servers, like lighthttpd and even browsers in several flavors from text interfaces to the graphical ones.\nGood luck on the race :D. Great idea BTW!\n",
"Make a Java Swing application. That will not require Internet and it runs on Mac, Windows, and Linux. You can write to the file system for the lock. System.getProperty(\"user.home\") gives you the home equivalent for the platform. You might have to include a jre in your CD.\n",
"Not quite what you're looking for, but you could put in on re-writable media and have an executable over-write itself (or part of itself).\nI don't know if a CD-RW could do that automatically, or if you would have to look at cheap USB sticks.\n"
] |
[
4,
4,
3,
3,
2,
2,
2,
2,
2,
1
] |
[] |
[] |
[
"cd",
"embeddedwebserver"
] |
stackoverflow_0000042239_cd_embeddedwebserver.txt
|
Q:
Choosing between Ajax, Flex and Silverlight
Ajax, Flex and Silverlight are a few ways to make more interactive web applications. What kinds of factors would you consider when deciding which to use for a new web application?
Does any one of them offer better cross-platform compatibility, performance, developer tools or community support?
A:
Here's a quick rundown of each area (with lots of helpful links):
Cross-platform compatibility
Ajax works in any modern browser that can run JavaScript. Flex requires Flash or anything else that can handle SWFs but, once that's installed, it's a total freeride as far as compatibility. Silverlight is tricky and misunderstood so carefully consider your userbase before going with this MS foray into the rich web applications arms race. Also keep in mind that Silverlight is still in Beta, so it may become more widely used and installed in the future as it is developed.
Performance
I'm fearful of making too many statements about performance because it really depends on how much you are willing to optimize and the exact nature of your application. Also, some technology stacks are just never going to be very fast. Some people out there have been making comparisons, but again, it depends on a great many factors (even the version of the browser from which you are testing!). It's probably best to choose based on other factors and optimize once you've started to develop.
Developer tools
There are the "golden standard" dev tools for each of the three:
Ajax has basically unlimited options, depending on the rest of your technology and architecture choices. The big questions you're actually faced with are which libraries to rely upon, and Google has voiced a pretty well adopted answer with things like Web Toolkit. When you get right down to it, it's just XML and JavaScript, right?
Flex is from Adobe and, just like with Flash development, you'd better stick with their homegrown tools because--well--they're making the standards as they go along.
Microsoft has positioned Microsoft Expression Blend versions 2.0 and 2.5 for designing the UI of Silverlight 1.0 and 2 applications respectively. Visual Studio 2008 can be used to develop and debug Silverlight applications (from Wikipedia).
Community support
There is both official and unofficial community, corporate, and open-source support for all three options. Whichever you are already integrated with and which makes you feel most at home are very individual things, but I'll offer this advice: stick with what you know. If you are a MS developer and have MSDN as your homepage, you are probably going to think the Silverlight documentation and forums are really helpful. Flex has a very similar story; the forums are pretty good and if you're a Flash person already, you're going to be right at home with their documentation and user community.
On the other hand, Ajax is basically all over the place because you can implement it in so many different ways and use so many widely-varied libraries. Each library can have its own forums to visit and mailing lists to lurk within for answers.
Once again, all three have corporate giants trying to foster their communities and to get the best support possible to the developers that will give them greater market share in the future.
A:
The choice should in my opinion be mostly based on the nature of the application you'll be building (for example, if you need to manipulate vector graphics, Ajax is pretty much out), but here are some general guidelines:
Ubiquity
Ajax -- Supported by all modern browsers across platforms
Flex -- Runtime (Flash Player) has very wide installed base for Windows, Mac OS, Linux. Linux version was a bit buggy the last time I checked, though
Silverlight -- Runtime has quite low installed base (and no Linux support) at the moment
Choice of programming language
(Unordered because of subjectivity, but note that Silverlight offers the most choice. Also note that the existing language experience of developers in your team should be taken into account.)
Silverlight: Any .NET language (C#, Visual Basic, IronPython(?), IronRuby(?)) (and XAML for UI definition)
Ajax: JavaScript (and XHTML for UI definition)
Flex: ActionScript 3 (and MXML for UI definition)
API Stability and compatibility
Flex -- Runtime is the same across platforms and browsers, more mature and stable at the moment than Silverlight
Silverlight -- Runtime is the same across platforms and browsers, less mature than Flex/Flash, v2.0 is still in beta
Ajax -- Compatibility problems across browsers (may be mitigated via Ajax libraries, though)
Web/Browser Integration
Ajax -- Content is native inside browser, based on standards: searchable by browser and search engine crawlers, subject to any standard UI practices the browser and operating system have established
Flex and Silverlight -- Content not native to browser (i.e. runs in its own little "sandbox/rectangle"): not necessarily subject to established UI practices for the given platform
Developer Tools
Ajax -- Your favorite code editor, browser and debugging toolkit for the browser
Flex -- Flex SDK is available for Windows, Mac OS and Linux for free and can be used with your favorite code editor. A command-line debugger is included, but the Adobe-provided profiler is only available in the commercial Flex Builder IDE
Silverlight -- AFAIK, the SDK is available for Windows for free and can be used with your favorite .NET development tools
A:
The web runtimes like Flex and Silverlight all offer enticing things, but come with two big costs:
They run only within a rectangle on the page, and don't interact with normal web widgets
They are only available to people who have that plug-in installed
Even the nearly-ubiquitous Flash isn't installed on every web browser, so by choosing to use a plug-in runtime you're excluding part of your audience.
In contrast, JavaScript (or Ajax) is available on pretty much every browser, and interacts better with normal web pages, but is a more primitive and restricting language. Using it for complex animations can be tricky, and you'll need to test your applications in more versions on more platforms to make sure it works.
Cross-platform compatibility isn't the issue it used to be, so the issue is this: Will you gain more in the features of a plug-in library than you'll lose in the audience you exclude?
My own answer has so far always been JavaScript/Ajax, but I'd re-evaluate that in any new project.
A:
What is your audience: public web site or an intranet business app? Adoption rates are not relevant if you have a controlled audience who will install what is needed to run your app. However, if you need the largest possible audience to make your web startup a success then it may be critical.
What is your goal? Building something for the lowest cost? Learning new technology?
Can you leverage your existing skills? If you already know .NET then Silverlight gets a boost. Learning Flex may be interesting and useful, but is it more useful to you than more experience with .NET technologies? Remember to consider the opportunity cost of learning something totally new.
I don't see a clear technology winner at this point, and likely there won't be one for a long time, so the choice will come down to fairly subjective factors.
A:
Other than what's already been mentioned here, another huge thing to consider is what your UI is going to be.
If you're going to be using a lot of advanced UI controls like trees, lists, tab controls, etc then consider the following:
JavaScript/HTML - No native support for anything beyond basic drop-down boxes, buttons, and text fields. If you want something like a tree control or tab control then you'll have to roll your own or find a third-party library.
Adobe ActionScript - Native support for a wide array of advanced UI controls
Silverlight - 1.0 had very limited UI controls, but 2.0 will be adding many more and I'm sure we'll continue to see controls added in future releases.
|
Choosing between Ajax, Flex and Silverlight
|
Ajax, Flex and Silverlight are a few ways to make more interactive web applications. What kinds of factors would you consider when deciding which to use for a new web application?
Does any one of them offer better cross-platform compatibility, performance, developer tools or community support?
|
[
"Here's a quick rundown of each area (with lots of helpful links):\nCross-platform compatibility\nAjax works in any modern browser that can run JavaScript. Flex requires Flash or anything else that can handle SWFs but, once that's installed, it's a total freeride as far as compatibility. Silverlight is tricky and misunderstood so carefully consider your userbase before going with this MS foray into the rich web applications arms race. Also keep in mind that Silverlight is still in Beta, so it may become more widely used and installed in the future as it is developed.\nPerformance\nI'm fearful of making too many statements about performance because it really depends on how much you are willing to optimize and the exact nature of your application. Also, some technology stacks are just never going to be very fast. Some people out there have been making comparisons, but again, it depends on a great many factors (even the version of the browser from which you are testing!). It's probably best to choose based on other factors and optimize once you've started to develop.\nDeveloper tools\nThere are the \"golden standard\" dev tools for each of the three:\n\nAjax has basically unlimited options, depending on the rest of your technology and architecture choices. The big questions you're actually faced with are which libraries to rely upon, and Google has voiced a pretty well adopted answer with things like Web Toolkit. When you get right down to it, it's just XML and JavaScript, right?\nFlex is from Adobe and, just like with Flash development, you'd better stick with their homegrown tools because--well--they're making the standards as they go along.\nMicrosoft has positioned Microsoft Expression Blend versions 2.0 and 2.5 for designing the UI of Silverlight 1.0 and 2 applications respectively. Visual Studio 2008 can be used to develop and debug Silverlight applications (from Wikipedia). \n\nCommunity support\nThere is both official and unofficial community, corporate, and open-source support for all three options. Whichever you are already integrated with and which makes you feel most at home are very individual things, but I'll offer this advice: stick with what you know. If you are a MS developer and have MSDN as your homepage, you are probably going to think the Silverlight documentation and forums are really helpful. Flex has a very similar story; the forums are pretty good and if you're a Flash person already, you're going to be right at home with their documentation and user community.\nOn the other hand, Ajax is basically all over the place because you can implement so many different ways and use so many widely-varied libraries. Each library can have it's own forums to visit and mailing lists to lurk within for answers.\nOnce again, all three have corporate giants trying to foster their communities and to get the best support possible to the developers that will give them greater market share in the future.\n",
"The choice should in my opinion be mostly based on the nature of the application you'll be building (for example, if you need to manipulate vector graphics, Ajax is pretty much out), but here are some general guidelines:\nUbiquity\n\nAjax -- Supported by all modern browsers across platforms\nFlex -- Runtime (Flash Player) has very wide installed base for Windows, Mac OS, Linux. Linux version was a bit buggy the last time I checked, though\nSilverlight -- Runtime has quite low installed base (and no Linux support) at the moment\n\nChoice of programming language\n(Unordeded because of subjectivity, but note that Silverlight offers the most choice. Also note that the existing language experience of developers in your team should be taken into account.)\n\nSilverlight: Any .NET language (C#, Visual Basic, IronPython(?), IronRuby(?)) (and XAML for UI definition)\nAjax: JavaScript (and XHTML for UI definition)\nFlex: ActionScript 3 (and MXML for UI definition)\n\nAPI Stability and compatibility\n\nFlex -- Runtime is the same across platforms and browsers, more mature and stable at the moment than Silverlight\nSilverlight -- Runtime is the same across platforms and browsers, less mature than Flex/Flash, v2.0 is still in beta\nAjax -- Compatibility problems across browsers (may be mitigated via Ajax libraries, though)\n\nWeb/Browser Integration\n\nAjax -- Content is native inside browser, based on standards: searchable by browser and search engine crawlers, subject to any standard UI practices the browser and operating system have established\nFlex and Silverlight -- Content not native to browser (i.e. runs in its own little \"sandbox/rectangle\"): not necessarily subject to established UI practices for the given platform\n\nDeveloper Tools\n\nAjax -- Your favorite code editor, browser and debugging toolkit for the browser\nFlex -- Flex SDK is available for Windows, Mac OS and Linux for free and can be used with your favorite code editor. A Command-line debugger is included, but the Adobe-provided profiler is only available in the commercial Flex Builder IDE\nSilverlight -- AFAIK, The SDK is available for Windows for free and can be used with your favorite .NET development tools\n\n",
"The web runtimes like Flex and Silverlight all offer enticing things, but come with two big costs:\n\nThey run only within a rectangle on the page, and don't interact with normal web widgets\nThey are only available to people who have that plug-in installed\n\nEven the nearly-ubiquitous Flash isn't installed on every web browser, so by choosing to use a plug-in runtime you're excluding part of your audience.\nIn contrast, JavaScript (or Ajax) is available on pretty much every browser, and interacts better with normal web pages, but is a more primitive and restricting language. Using it for complex animations can be tricky, and you'll need to test your applications in more versions on more platforms to make sure it works.\nCross-platform compatibility isn't the issue it used to be, so the issue is this: Will you gain more in the features of a plug-in library than you'll lose in the audience you exclude?\nMy own answer has so far always been JavaScript/Ajax, but I'd re-evaluate that in any new project.\n",
"What is your audience: public web site or an intranet business app? Adoption rates are not relevant if you have a controlled audience who will install what is needed to run your app. However, if you need the largest possible audience to make your web startup a success then it may be critical.\nWhat is your goal? Building something for the lowest cost? Learning new technology?\nCan you leverage your existing skills? If you already know .NET then Silverlight gets a boost. Learning Flex may be interesting and useful, but is it more useful to you than more experience with .NET technologies? Remember to consider the opportunity cost of learning something totally new.\nI don't see a clear technology winner at this point, and likely there won't be one for a long time, so the choice will come down to fairly subjective factors.\n",
"Other than what's already been mentioned here, another huge thing to consider is what your UI is going to be.\nIf you're going to be using a lot of advanced UI controls like trees, lists, tab controls, etc then consider the following:\n\nJavaScript/HTML - No native support for anything beyond things basic drop down boxes, buttons, and text fields. If you want something like a tree control or tab control then you'll have to roll your own or find a third party library.\nAdobe ActionScript - Native support for a wide array of advanced UI controls\nSilverlight - 1.0 had very limited UI controls, but 2.0 will be adding many more and I'm sure we'll continue to see controls added in future releases.\n\n"
] |
[
13,
5,
2,
1,
0
] |
[] |
[] |
[
"ajax",
"apache_flex",
"silverlight"
] |
stackoverflow_0000048877_ajax_apache_flex_silverlight.txt
|
Q:
How Do Sockets Work in C?
I am a bit confused about socket programming in C.
You create a socket, bind it to an interface and an IP address and get it to listen. I found a couple of web resources on that, and understood it fine. In particular, I found an article Network programming under Unix systems to be very informative.
What confuses me is the timing of data arriving on the socket.
How can you tell when packets arrive and how big the packet is? Do you have to do all the heavy lifting yourself?
My basic assumption here is that packets can be of variable length, so once binary data starts appearing down the socket, how do you begin to construct packets from that?
A:
Short answer is that you have to do all the heavy lifting yourself. You can be notified that there is data available to be read, but you won't know how many bytes are available. In most IP protocols that use variable length packets, there will be a header with a known fixed length prepended to the packet. This header will contain the length of the packet. You read the header, get the length of the packet, then read the packet. You repeat this pattern (read header, then read packet) until communication is complete.
When reading data from a socket, you request a certain number of bytes. The read call may block until the requested number of bytes are read, but it can return fewer bytes than what was requested. When this happens, you simply retry the read, requesting the remaining bytes.
Here's a typical C function for reading a set number of bytes from a socket:
/* buffer points to a memory block that is bigger than the number of bytes to be read */
/* socket is open socket that is connected to a sender */
/* bytesToRead is the number of bytes expected from the sender */
/* bytesRead is a pointer to an integer variable that will hold the number of bytes */
/* actually received from the sender. */
/* The function returns either the number of bytes read, */
/* 0 if the socket was closed by the sender, and */
/* -1 if an error occurred while reading from the socket */
int readBytes(int socket, char *buffer, int bytesToRead, int *bytesRead)
{
*bytesRead = 0;
while(*bytesRead < bytesToRead)
{
int ret = read(socket, buffer + *bytesRead, bytesToRead - *bytesRead);
if(ret <= 0)
{
/* either connection was closed or an error occurred */
return ret;
}
else
{
*bytesRead += ret;
}
}
return *bytesRead;
}
A:
So, the answer to your question depends a fair bit on whether you are using UDP or TCP as your transport.
For UDP, life gets a lot simpler, in that you can call recv/recvfrom/recvmsg with the packet size you need (you'd likely send fixed-length packets from the source anyway), and make the assumption that if data is available, it's there in multiples of packet-length sizes. (I.e., you call recv* with the size of your sending-side packet, and you're set.)
For TCP, life gets a bit more interesting - for the purpose of this explanation, I will assume that you already know how to use socket(), bind(), listen() and accept() - the latter being how you get the file descriptor (FD) of your newly made connection.
There are two ways of doing the I/O for a socket - blocking, in which you call read(fd, buf, N) and the read sits there and waits until you've read N bytes into buf - or non-blocking, in which you have to check (using select() or poll()) whether the FD is readable, and THEN do your read().
When dealing with TCP-based connections, the OS doesn't pay attention to the packet sizes, since it's considered a continuous stream of data, not separate packet-sized chunks.
If your application uses "packets" (packed or unpacked data structures that you're passing around), you ought to be able to call read() with the proper size argument, and read an entire data structure off the socket at a time. The only caveat you have to deal with is remembering to properly byte-order any data that you're sending, in case the source and destination systems are of different endianness. This applies to both UDP and TCP.
As far as *NIX socket programming is concerned, I highly recommend W. Richard Stevens' "Unix Network Programming, Vol. 1" (UNPv1) and "Advanced Programming in the UNIX Environment" (APUE). The first is a tome regarding network-based programming, regardless of the transport, and the latter is a good all-around programming book as it applies to *NIX based programming. Also, look for "TCP/IP Illustrated", Volumes 1 and 2.
A:
When you do a read on the socket, you tell it how many maximum bytes to read, but if it doesn't have that many, it gives you however many it's got. It's up to you to design the protocol so you know whether you've got a partial packet or not. For instance, in the past when sending variable length binary data, I would put an int at the beginning that said how many bytes to expect. I'd do a read requesting a number of bytes greater than the largest possible packet in my protocol, and then I'd compare the first int against however many bytes I'd received, and either process it or try more reads until I'd gotten the full packet, depending.
A:
Sockets operate at a higher level than raw packets - it's like a file you can read/write from. Also, when you try to read from a socket, the operating system will block (put on hold) your process until it has data to fulfill the request.
|
How Do Sockets Work in C?
|
I am a bit confused about socket programming in C.
You create a socket, bind it to an interface and an IP address and get it to listen. I found a couple of web resources on that, and understood it fine. In particular, I found an article Network programming under Unix systems to be very informative.
What confuses me is the timing of data arriving on the socket.
How can you tell when packets arrive and how big the packet is? Do you have to do all the heavy lifting yourself?
My basic assumption here is that packets can be of variable length, so once binary data starts appearing down the socket, how do you begin to construct packets from that?
|
[
"Short answer is that you have to do all the heavy lifting yourself. You can be notified that there is data available to be read, but you won't know how many bytes are available. In most IP protocols that use variable length packets, there will be a header with a known fixed length prepended to the packet. This header will contain the length of the packet. You read the header, get the length of the packet, then read the packet. You repeat this pattern (read header, then read packet) until communication is complete.\nWhen reading data from a socket, you request a certain number of bytes. The read call may block until the requested number of bytes are read, but it can return fewer bytes than what was requested. When this happens, you simply retry the read, requesting the remaining bytes.\nHere's a typical C function for reading a set number of bytes from a socket:\n/* buffer points to memory block that is bigger than the number of bytes to be read */\n/* socket is open socket that is connected to a sender */\n/* bytesToRead is the number of bytes expected from the sender */\n/* bytesRead is a pointer to a integer variable that will hold the number of bytes */\n/* actually received from the sender. */\n/* The function returns either the number of bytes read, */\n/* 0 if the socket was closed by the sender, and */\n/* -1 if an error occurred while reading from the socket */\nint readBytes(int socket, char *buffer, int bytesToRead, int *bytesRead)\n{\n *bytesRead = 0;\n while(*bytesRead < bytesToRead)\n {\n int ret = read(socket, buffer + *bytesRead, bytesToRead - *bytesRead);\n if(ret <= 0)\n {\n /* either connection was closed or an error occurred */\n return ret;\n }\n else\n {\n *bytesRead += ret;\n }\n }\n return *bytesRead;\n}\n\n",
"So, the answer to your question depends a fair bit on whether you are using UDP or TCP as your transport.\nFor UDP, life gets a lot simpler, in that you can call recv/recvfrom/recvmsg with the packet size you need (you'd likely send fixed-length packets from the source anyway), and make the assumption that if data is available, it's there in multiples of packet-length sizes. (I.E. You call recv* with the size of your sending side packet, and you're set.)\nFor TCP, life gets a bit more interesting - for the purpose of this explanation, I will assume that you already know how to use socket(), bind(), listen() and accept() - the latter being how you get the file descriptor (FD) of your newly made connection.\nThere are two ways of doing the I/O for a socket - blocking, in which you call read(fd, buf, N) and the read sits there and waits until you've read N bytes into buf - or non-blocking, in which you have to check (using select() or poll()) whether the FD is readable, and THEN do your read().\nWhen dealing with TCP-based connections, the OS doesn't pay attention to the packet sizes, since it's considered a continual stream of data, not seperate packet-sized chunks.\nIf your application uses \"packets\" (packed or unpacked data structures that you're passing around), you ought to be able to call read() with the proper size argument, and read an entire data structure off the socket at a time. The only caveat you have to deal with, is to remember to properly byte-order any data that you're sending, in case the source and destination system are of different byte endian-ness. This applies to both UDP and TCP.\nAs far as *NIX socket programming is concerned, I highly recommend W. Richard Stevens' \"Unix Network Programming, Vol. 1\" (UNPv1) and \"Advanced Programming in an Unix Environment\" (APUE). The first is a tome regarding network-based programming, regardless of the transport, and the latter is a good all-around programming book as it applies to *NIX based programming. Also, look for \"TCP/IP Illustrated\", Volumes 1 and 2.\n",
"When you do a read on the socket, you tell it how many maximum bytes to read, but if it doesn't have that many, it gives you however many it's got. It's up to you to design the protocol so you know whether you've got a partial packet or not. For instance, in the past when sending variable length binary data, I would put an int at the beginning that said how many bytes to expect. I'd do a read requesting a number of bytes greater than the largest possible packet in my protocol, and then I'd compare the first int against however many bytes I'd received, and either process it or try more reads until I'd gotten the full packet, depending.\n",
"Sockets operate at a higher level than raw packets - it's like a file you can read/write from. Also, when you try to read from a socket, the operating system will block (put on hold) your process until it has data to fulfill the request.\n"
] |
[
17,
13,
3,
1
] |
[] |
[] |
[
"c",
"network_programming",
"sockets"
] |
stackoverflow_0000048908_c_network_programming_sockets.txt
|
Q:
What is a MUST COVER in my Groovy presentation?
I'm working on getting an Introduction to Groovy presentation ready for my local Java User's Group and I've pretty much got it together. What I'd like to see is what you all think I just have to cover.
Remember, this is an introductory presentation. Most of the people are experienced Java developers, but I'm pretty sure they have little to no Groovy knowledge. I won't poison the well by mentioning what I've already got down to cover as I want to see what the community has to offer.
What are the best things I can cover (in a 1 hour time frame) that will help me effectively communicate to these Java developers how useful Groovy could be to them?
p.s. I'll share my presentation here later for anyone interested.
As promised, now that my presentation has been presented, here it is.
A:
I don't know anything about Groovy, so in a sense I'm qualified to answer this...
I would want you to:
Tell me why I would want to use Scripting (in general) as opposed to Java-- what does it let me do quicker (as in development time), what does it make more readable. Give tantalising examples of ways I can use chunks of scripting in my mostly Java app. You want to make this relevant to Java devs more so than tech-junkies.
With that out of the way, why Groovy? Why not Ruby, Python or whatever (which are all runnable on the JVM).
Don't show me syntax that Java can already do (if statements, loops, etc.) or, if you do, make it quick. It's as boring as hell to watch someone walk through language syntax 101 for 20 min.
For syntax that has a comparable feature in Java, maybe show them side by side quickly.
For syntax that is not in Java (closures etc) you can talk to them in a bit more detail.
Remember those examples from the first point. Show me one, fully working (or at least looking like it is).
At the end have question time. That is crazy important, and with that comes a burden on you to be a pseudo-guru :P.
I'm not sure about how the Java6 scripting support works but I'm fairly sure it can be made secure. I remember something about defining the API the script can use before it's run.
If this is the case then an example you could show would be some thick-client application (e.g. a music player) where users can write their own scripts with an API you provide them in Groovy which allows them to script their app in interesting and secure ways (e.g. creating custom columns in the playlist)
A:
I'd go for:
Closures
Duck typing
Builders (XML builder and slurper)
GStrings
Grails
A:
I'd mention the following things in addition to what has already been stated:
GDK - extensions/additions to existing JDK classes
Interaction between Groovy and Java code (basically a non-issue)
Compiling Groovy code to Java .class files
XML parsing and mechanisms for accessing document content
One thing I like doing with Groovy is implementing an interface defined in Java as a map from method names to closures. It's a cool thing you can do with Groovy, but probably well beyond an introductory presentation though.
A:
Include an example of how making Java code more groovy takes away soooo much code. Wait for them to pick their jaws up off of the floor before continuing. Scott Davis has a simple example at the beginning of Groovy Recipes that takes 35 lines of Java or 3 lines of Groovy.
A:
You should definitely show them how to create a quick Grails application. Two domain classes that are related. Build a basic CRUD app. Explain that tables are being created behind the scenes using GORM (Hibernate). Then explain that you can create a war file and deploy it as you would any other Java war file. You can also add Grails/Groovy to an existing Java/JSP project, so it doesn't require a huge commitment or paradigm change.
Groovy/Grails is simply Ruby/Rails for Java people. I'd cover the plugins for NetBeans/Eclipse too. Groovy/Grails are just now getting full support in the major IDEs.
Finally, if you can find a good diagram that shows how Grails is built on top of Spring, Hibernate, Quartz, Sitemesh and Groovy, I think people will understand that there is a treasure chest waiting to be unlocked.
|
What is a MUST COVER in my Groovy presentation?
|
I'm working on getting an Introduction to Groovy presentation ready for my local Java User's Group and I've pretty much got it together. What I'd like to see is what you all think I just have to cover.
Remember, this is an introductory presentation. Most of the people are experienced Java developers, but I'm pretty sure they have little to no Groovy knowledge. I won't poison the well by mentioning what I've already got down to cover as I want to see what the community has to offer.
What are the best things I can cover (in a 1 hour time frame) that will help me effectively communicate to these Java developers how useful Groovy could be to them?
p.s. I'll share my presentation here later for anyone interested.
As promised, now that my presentation has been presented, here it is.
|
[
"I don't know anything about groovy so in a sense I've qualified to answer this...\nI would want you to:\n\nTell me why I would want to use Scripting (in general) as opposed to Java-- what does it let me do quicker (as in development time), what does it make more readable. Give tantalising examples of ways I can use chunks of scripting in my mostly Java app. You want to make this relevant to Java devs moreso than tech-junkies.\nWith that out of the way, why Groovy? Why not Ruby, Python or whatever (which are all runnable on the JVM).\nDon't show me syntax that Java can already do (if statements, loops etc) or if you do make it quick. It's as boring as hell to watch someone walk through language syntax 101 for 20min.\n\n\nFor syntax that has a comparible feature in Java maybe show them side by side quickly.\nFor syntax that is not in Java (closures etc) you can talk to them in a bit more detail.\n\nRemember those examples from the first point. Show me one, fully working (or at least looking like it is).\nAt the end have question time. That is crazy important, and with that comes a burden on you to be a psuedo-guru :P.\n\nI'm not sure about how the Java6 scripting support works but I'm fairly sure it can be made secure. I remember something about defining the API the script can use before it's run.\nIf this is the case then an example you could show would be some thick-client application (e.g. a music player) where users can write their own scripts with an API you provide them in Groovy which allows them to script their app in interesting and secure ways (e.g. creating custom columns in the playlist)\n",
"I'd go for:\n\nClosures\nDuck typing\nBuilders (XML builder and slurper)\nGStrings\nGrails \n\n",
"I'd mention the following things in addition to what has already been stated:\n\nGDK - extensions/additions to existing JDK classes\nInteraction between Groovy and Java code (basically a non-issue)\nCompiling Groovy code to Java .class files\nXML parsing and mechanisms for accessing document content\n\nOne thing I like doing with Groovy is implementing an interface defined in Java as a map from method names to closures. It's a cool thing you can do with Groovy, but probably well beyond an introductory presentation though.\n",
"Include an example of how making Java code more groovy takes away soooo much code. Wait for them to pick their jaws up off of the floor before continuing. Scott Davis has a simple example at the beginning of Groovy Recipes that takes 35 lines of Java or 3 lines of Groovy.\n",
"You should definitely show them how to create a quick Grails application. Two domain classes that are related. Build a basic CRUD app. Explain that tables are being created behind the scenes using GORM(Hibernate). Then explain that you can create a war file and deploy it as you would any other Java war file. You can also add Grails/Groovy to an existing Java/JSP project so it doesn't require a huge commitment or paradigm change. \nGroovy/Grails is simply Ruby/Rails for Java people. I'd cover the plugins for Netbeans/Eclipse too. Groovy/Grails are just now getting full support in the major IDE's.\nFinally, if you can find a good diagram that shows how Grails is built on top of Spring, Hibernate, Quartz, Sitemesh and Groovy, I think people will understand that there is a treasure chest waiting to be unlocked.\n"
] |
[
8,
8,
3,
2,
1
] |
[] |
[] |
[
"groovy",
"java"
] |
stackoverflow_0000029461_groovy_java.txt
|
Q:
Use the routing engine for form submissions in ASP.NET MVC Preview 4
I'm using ASP.NET MVC Preview 4 and would like to know how to use the routing engine for form submissions.
For example, I have a route like this:
routes.MapRoute(
"TestController-TestAction",
"TestController.mvc/TestAction/{paramName}",
new { controller = "TestController", action = "TestAction", id = "TestTopic" }
);
And a form declaration that looks like this:
<% using (Html.Form("TestController", "TestAction", FormMethod.Get))
{ %>
<input type="text" name="paramName" />
<input type="submit" />
<% } %>
which renders to:
<form method="get" action="/TestController.mvc/TestAction">
<input type="text" name="paramName" />
<input type="submit" />
</form>
The resulting URL of a form submission is:
localhost/TestController.mvc/TestAction?paramName=value
Is there any way to have this form submission route to the desired URL of:
localhost/TestController.mvc/TestAction/value
The only solutions I can think of are to create a separate action that just checks the request parameters, or to use Javascript.
A:
Solution:
public ActionResult TestAction(string paramName)
{
if (!String.IsNullOrEmpty(Request["paramName"]))
{
return RedirectToAction("TestAction", new { paramName = Request["paramName"]});
}
/* ... */
}
A:
In your route, get rid of the {paramName} part of the URL. It should be:
TestController.mvc/TestAction
As that is the URL you want the request to route to. Your form will then post to that URL.
Posted form values are mapped to parameters of an action method automatically, so don't worry about not having that data passed to your action method.
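Concretely, the route registration from the question would then shrink to something like this (same names as the question, just without the {paramName} token):

routes.MapRoute(
    "TestController-TestAction",
    "TestController.mvc/TestAction",
    new { controller = "TestController", action = "TestAction" }
);

With the token gone, the form's GET submission to /TestController.mvc/TestAction still reaches the action, and model binding picks paramName up from the query string.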
A:
My understanding is that this is how HTML works. If you do a <form action="foo" method="get"> and submit the form, then the form will request foo?
param1=value1&...¶mn=valuen
It has nothing to do with MVC.
Besides, what part of REST does that URL violate? It's not a pretty URL, but by a strict definition of REST, it can be RESTful. REST doesn't specify that query parameters have to be in a URL segment. And in this case, those are query parameters.
|
Use the routing engine for form submissions in ASP.NET MVC Preview 4
|
I'm using ASP.NET MVC Preview 4 and would like to know how to use the routing engine for form submissions.
For example, I have a route like this:
routes.MapRoute(
"TestController-TestAction",
"TestController.mvc/TestAction/{paramName}",
new { controller = "TestController", action = "TestAction", id = "TestTopic" }
);
And a form declaration that looks like this:
<% using (Html.Form("TestController", "TestAction", FormMethod.Get))
{ %>
<input type="text" name="paramName" />
<input type="submit" />
<% } %>
which renders to:
<form method="get" action="/TestController.mvc/TestAction">
<input type="text" name="paramName" />
<input type="submit" />
</form>
The resulting URL of a form submission is:
localhost/TestController.mvc/TestAction?paramName=value
Is there any way to have this form submission route to the desired URL of:
localhost/TestController.mvc/TestAction/value
The only solutions I can think of are to create a separate action that just checks the request parameters, or to use Javascript.
|
[
"Solution:\npublic ActionResult TestAction(string paramName)\n{\n if (!String.IsNullOrEmpty(Request[\"paramName\"]))\n {\n return RedirectToAction(\"TestAction\", new { paramName = Request[\"paramName\"]});\n }\n /* ... */\n}\n\n",
"In your route, get rid of the {paramName} part of the URL. It should be:\nTestController.mvc/TestAction\nAs that is the URL you want the request to route to. Your form will then post to that URL. \nPosted form values are mapped to parameters of an action method automatically, so don't worry about not having that data passed to your action method.\n",
"My understanding is that this is how HTML works. If you do a <form url=\"foo\" method=\"get\"> and post the form, then the form will post foo?\nparam1=value1&...¶mn=valuen\n\nIt has nothing to do with MVC.\nBesides, what part of REST does that URL violate? It's not a pretty URL, but by strict definition of REST, it can be RESTful. REST doesn't specify that query parameters have to be in an URL segment. And in this case, those are query parameters.\n"
] |
[
9,
3,
3
] |
[] |
[] |
[
"asp.net_mvc",
"asp.net_mvc_routing",
"forms"
] |
stackoverflow_0000008485_asp.net_mvc_asp.net_mvc_routing_forms.txt
|
Q:
Connecting to registry remotely, and getting exceptions
I've inherited a hoary old piece of code (by hoary, I mean warty with lots of undocumented bug fixes rather than WTF-y) and there's one part that's giving me a bit of trouble. Here's how it connects to the remote registry to get the add/remove programs key:
try
{
remoteKey = RegistryKey.OpenRemoteBaseKey(
RegistryHive.LocalMachine, addr.Value).OpenSubKey(
"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall");
return 1;
}
catch (IOException e)
{
IOException myexception = e;
//Console.WriteLine("{0}: {1}: {2}",
// e.GetType().Name, e.Message, addr.Value);
return 2;
}
catch (UnauthorizedAccessException e)
{
UnauthorizedAccessException myexception = e;
//Console.WriteLine("{0}: {1}: {2}",
// e.GetType().Name, e.Message, addr.Value);
return 3;
}
catch (System.Security.SecurityException e)
{
System.Security.SecurityException myexception = e;
//Console.WriteLine("{0}: {1}: {2}",
// e.GetType().Name, e.Message, addr.Value);
return 4;
}
Now, I have two problems:
I know why the IOException - if it's a non-Windows machine it'll throw that. The difference between UnauthorizedAccessException and SecurityException I'm not so clear on. Anyone got any ideas?
This entire bit of code was designed before anyone had thought you might not use your local logon for everything. I can't work out how you do authentication for remotely connecting to the registry, anyway, and this code looks like it's only used in one case, when it can't get this information from WMI.
Any help with either would be great.
A:
You probably have to use impersonation to change the credentials of the thread that calls the remote registry methods. See here (linky) for some information on MSDN. Basically, your thread has a security context that is used to make managed and unmanaged calls.
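For what it's worth, a minimal sketch of that idea, assuming .NET 2.0-era impersonation via LogonUser with the LOGON32_LOGON_NEW_CREDENTIALS type (so the supplied credentials are only used for outbound remote access). The method shape and credential handling are placeholders, not the original code:
using System;
using System.Runtime.InteropServices;
using System.Security.Principal;
using Microsoft.Win32;

static class RemoteRegistry
{
    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool LogonUser(string user, string domain, string password,
        int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    const int LOGON32_LOGON_NEW_CREDENTIALS = 9; // credentials used for network access only
    const int LOGON32_PROVIDER_WINNT50 = 3;

    public static void ReadUninstallKey(string machine, string user, string domain, string password)
    {
        IntPtr token;
        if (!LogonUser(user, domain, password,
                LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
            throw new InvalidOperationException("LogonUser failed: " + Marshal.GetLastWin32Error());

        try
        {
            // everything inside the impersonation context runs under the supplied credentials
            using (WindowsImpersonationContext ctx = new WindowsIdentity(token).Impersonate())
            using (RegistryKey remote = RegistryKey.OpenRemoteBaseKey(RegistryHive.LocalMachine, machine))
            using (RegistryKey key = remote.OpenSubKey(
                "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall"))
            {
                // ... enumerate key.GetSubKeyNames() here ...
            }
        }
        finally
        {
            CloseHandle(token);
        }
    }
}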
A:
According to MSDN, UnauthorizedAccessException is not thrown by OpenSubKey. So I think it's not needed.
A:
John's pointer to MSDN answered what UnauthorizedAccessException is for - it only appears when you try to access a key remotely, using OpenRemoteBaseKey.
We're a little wary about changing the security context on the computer - I've found a reference here about using WMI (which we're already using for the vast majority of the heavy lifting) to access the registry, so I might try that instead.
|
Connecting to registry remotely, and getting exceptions
|
I've inherited a hoary old piece of code (by hoary, I mean warty with lots of undocumented bug fixes rather than WTF-y) and there's one part that's giving me a bit of trouble. Here's how it connects to the remote registry to get the add/remove programs key:
try
{
remoteKey = RegistryKey.OpenRemoteBaseKey(
RegistryHive.LocalMachine, addr.Value).OpenSubKey(
"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall");
return 1;
}
catch (IOException e)
{
IOException myexception = e;
//Console.WriteLine("{0}: {1}: {2}",
// e.GetType().Name, e.Message, addr.Value);
return 2;
}
catch (UnauthorizedAccessException e)
{
UnauthorizedAccessException myexception = e;
//Console.WriteLine("{0}: {1}: {2}",
// e.GetType().Name, e.Message, addr.Value);
return 3;
}
catch (System.Security.SecurityException e)
{
System.Security.SecurityException myexception = e;
//Console.WriteLine("{0}: {1}: {2}",
// e.GetType().Name, e.Message, addr.Value);
return 4;
}
Now, I have two problems:
I know why the IOException - if it's a non-Windows machine it'll throw that. The difference between UnauthorizedAccessException and SecurityException I'm not so clear on. Anyone got any ideas?
This entire bit of code was designed before anyone had thought you might not use your local logon for everything. I can't work out how you do authentication for remotely connecting to the registry, anyway, and this code looks like it's only used in one case, when it can't get this information from WMI.
Any help with either would be great.
|
[
"You probably have to use impersonation to change the credentials of the thread that calls the remote registry methods. See here (linky) for some information on MSDN. Basically, your thread has a security context that is used to make managed and unmanaged calls.\n",
"According to MSDN, UnauthorizedAccessException is not thrown by OpenSubKey. So I think it's not needed.\n",
"John's pointer to MSDN answered what UnauthorizedAccessException is for - it only appears when you try to access a key remotely, using OpenRemoteBaseKey.\nWe're a little wary about changing the security context on the computer - I've found a reference here about using WMI (which we're already using for the vast majority of the heavy lifting) to access the registry, so I might try that instead.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"c#",
"exception",
"windows"
] |
stackoverflow_0000049166_c#_exception_windows.txt
|
Q:
Importing JavaScript in JSP tags
I have a .tag file that requires a JavaScript library (as in a .js file).
Currently I am just remembering to import the .js file in every JSP that uses the tag but this is a bit cumbersome and prone to error.
Is there a way to do the importing of the .js inside the JSP tag?
(for caching reasons I would want the .js to be a script import)
A:
There is no reason you cannot have a script tag in the body, even though it is preferable for it to be in the head. Just emit the script tag before you emit your tag's markup. The only thing to consider is that you do not want to include the script more than once if you use the jsp tag on the page more than once. The way to solve that is to remember that you have already included the script, by adding an attribute to the request object.
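As a rough sketch of that pattern in a .tag file (the attribute key and script path here are invented for the example):
<%-- mywidget.tag --%>
<%@ tag body-content="scriptless" %>
<% // emit the script import only the first time this tag renders in a request
   if (request.getAttribute("myLib.js.included") == null) {
       request.setAttribute("myLib.js.included", Boolean.TRUE); %>
<script type="text/javascript" src="<%= request.getContextPath() %>/js/mylib.js"></script>
<% } %>
<div class="my-widget">
    <jsp:doBody/>
</div>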
A:
Short of just including the js in every page automatically, I do not think so. It really would not be something that tags are designed to do.
Without knowing what your tag is actually doing (presumably it's outputting something in the body section) there is no way that it will be able to get at the head to put the declaration there.
One solution that might (in my head) work would be to have an include that copies verbatim what you have in the head after the place in the head to import tags right up to where you want to use the tag. This is really not something that you would want to do. You would have to have multiple 'header' files to import depending on the content and where you want to use the tag. Maintenance nightmare. Just a bad idea all round. Any solution I can think of would require more work than manually just adding in the declaration.
I think you are out of luck and stuck with manually putting it in.
edit: Just import it in every page. It will be cached and then this problem goes away.
|
Importing JavaScript in JSP tags
|
I have a .tag file that requires a JavaScript library (as in a .js file).
Currently I am just remembering to import the .js file in every JSP that uses the tag but this is a bit cumbersome and prone to error.
Is there a way to do the importing of the .js inside the JSP tag?
(for caching reasons I would want the .js to be a script import)
|
[
"There is no reason you cannot have a script tag in the body, even though it is preferable for it to be in the head. Just emit the script tag before you emit your tag's markup. The only thing to consider is that you do not want to include the script more than once if you use the jsp tag on the page more than once. The way to solve that is to remember that you have already included the script, by addng an attribute to the request object.\n",
"Short of just including the js in every page automatically, I do not think so. It really would not be something that tags are designed to to.\nWithout knowing what your tag is actually doing (presumably its its outputting something in the body section) then there is no way that it will be able to get at the head to put the declaration there.\nOne solution that might (in my head) work would be to have an include that copies verbatim what you have in the head after the place in the head to import tags right up to where you want to use the tag. This is really not something that you would want to do. You would have to have multiple 'header' files to import depending on the content and where you want to use the tag. Maintenance nightmare. Just a bad idea all round. Any solution I can think of would require more work than manually just adding in the declaration.\nI think you are out of luck and stuck with manually putting it in.\nedit: Just import it in every page. It will be cached and then this problem goes away.\n"
] |
[
6,
3
] |
[] |
[] |
[
"java",
"javascript",
"jsp",
"jsp_tags"
] |
stackoverflow_0000049156_java_javascript_jsp_jsp_tags.txt
|
Q:
Cycle count measurement
I have a MS Visual Studio 2005 application solution. All the code is in C. I want to measure the number of cycles taken for execution by particular functions. Is there any Win32 API which I can use to get the cycle count?
I have used gettimeofday() to get time in microseconds, but I want to know the cycles consumed.
A:
Both Intel and AMD offer windows libraries and tools to access the performance counters on their cpus. These give access not only to cycle counts, but also to cache line hits and misses and TLB flush counts. The Intel tools are marketed under the name VTune, while AMD calls theirs CodeAnalyst.
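If you just want a raw cycle count without the vendor tools, the Visual C++ time-stamp-counter intrinsic is one option. A minimal sketch, with the usual caveats: out-of-order execution, frequency scaling and thread migration all add noise, so treat the numbers as approximate.
#include <intrin.h> /* declares __rdtsc, available in Visual Studio 2005 */

#pragma intrinsic(__rdtsc)

unsigned __int64 cycles_for(void (*fn)(void))
{
    unsigned __int64 start = __rdtsc(); /* read the CPU time-stamp counter */
    fn();
    return __rdtsc() - start;           /* elapsed cycles, approximately */
}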
|
Cycle count measurement
|
I have a MS Visual Studio 2005 application solution. All the code is in C. I want to measure the number of cycles taken for execution by particular functions. Is there any Win32 API which I can use to get the cycle count?
I have used gettimeofday() to get time in microseconds, but I want to know the cycles consumed.
|
[
"Both Intel and AMD offer windows libraries and tools to access the performance counters on their cpus. These give access not only to cycle counts, but also to cache line hits and misses and TLB flush counts. The Intel tools are marketed under the name VTune, while AMD calls theirs CodeAnalyst. \n"
] |
[
2
] |
[] |
[] |
[
"profiling"
] |
stackoverflow_0000049207_profiling.txt
|
Q:
How can I map a list of strings to my entity using NHibernate?
I've got two tables in my database: Articles and Tags
The Tags tables consist of ArticleID (foreign key) and a Tag (varchar).
Now I need to map an articles tags into a readonly collection on Article entity, either using IList Tags or ReadOnlyCollection Tags.
I've consulted the NHibernate reference material, but I can't seem to figure when to use Set, Bag and the other Nhibernate collections. I've seen examples using the ISet collection, but I really don't like to tie my entity classes to a NHibernate type.
How can I do the mapping in NHibernate?
edit: I ended up using a <bag> instead, as it doesn't require an index:
<bag name="Tags" table="Tag" access="nosetter.camelcase" lazy="false">
<key column="ArticleId" />
<element column="Tag" type="System.String" />
</bag>
A:
The type of collection to use in your mapping depends on how you want to represent the collection in code. The settings map like so:
The <list> maps directly to an IList.
The <map> maps directly to an IDictionary.
The <bag> maps to an IList. A <bag> does not completely comply with the IList interface because the Add() method is not guaranteed to return the correct index. An object can be added to a <bag> without initializing the IList. Make sure to either hide the IList from the consumers of your API or make it well documented.
The <set> maps to an Iesi.Collections.ISet. That interface is part of the Iesi.Collections assembly distributed with NHibernate.
so if you want an IList to be returned, then you would use the <list> mapping. In your case, I'd probably map using the <list> mapping.
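For reference, a minimal sketch of an entity shaped to fit the <bag> mapping from the question; with access="nosetter.camelcase" NHibernate writes the camelCase field directly and only reads the property:
using System.Collections.Generic;

public class Article
{
    // populated by NHibernate through the field, never through a setter
    private IList<string> tags = new List<string>();

    public virtual IList<string> Tags
    {
        get { return tags; }
    }
}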
|
How can I map a list of strings to my entity using NHibernate?
|
I've got two tables in my database: Articles and Tags
The Tags tables consist of ArticleID (foreign key) and a Tag (varchar).
Now I need to map an articles tags into a readonly collection on Article entity, either using IList Tags or ReadOnlyCollection Tags.
I've consulted the NHibernate reference material, but I can't seem to figure when to use Set, Bag and the other Nhibernate collections. I've seen examples using the ISet collection, but I really don't like to tie my entity classes to a NHibernate type.
How can I do the mapping in NHibernate?
edit: I ended up using a <bag> instead, as it doesn't require an index:
<bag name="Tags" table="Tag" access="nosetter.camelcase" lazy="false">
<key column="ArticleId" />
<element column="Tag" type="System.String" />
</bag>
|
[
"The type of collection to use in your mapping depends on how you want to represent the collection in code. The settings map like so:\n\nThe <list> maps directly to an\nIList.\nThe <map> maps directly to an IDictionary.\nThe <bag> maps to an IList. A does not completely comply\nwith the IList interface because the\nAdd() method is not guaranteed to\nreturn the correct index. An object\ncan be added to a <bag> without\ninitializing the IList. Make sure to\neither hide the IList from the\nconsumers of your API or make it\nwell documented.\nThe <set> maps to an Iesi.Collections.ISet. That\ninterface is part of the\nIesi.Collections assembly\ndistributed with NHibernate.\n\nso if you want an IList to be returned, then you would use the <list> mapping. In your case, I'd probably map using the <list> mapping.\n"
] |
[
5
] |
[] |
[] |
[
"nhibernate"
] |
stackoverflow_0000049220_nhibernate.txt
|
Q:
ASP.NET MVC Preview 4 - Stop Url.RouteUrl() etc. using existing parameters
I have an action like this:
public class News : System.Web.Mvc.Controller
{
public ActionResult Archive(int year)
{
    /* ... */
}
}
With a route like this:
routes.MapRoute(
"News-Archive",
"News.mvc/Archive/{year}",
new { controller = "News", action = "Archive" }
);
The URL that I am on is:
News.mvc/Archive/2008
I have a form on this page like this:
<form>
<select name="year">
<option value="2007">2007</option>
</select>
</form>
Submitting the form should go to News.mvc/Archive/2007 if '2007' is selected in the form.
This requires the form 'action' attribute to be "News.mvc/Archive".
However, if I declare a form like this:
<form method="get" action="<%=Url.RouteUrl("News-Archive")%>">
it renders as:
<form method="get" action="/News.mvc/Archive/2008">
Can someone please let me know what I'm missing?
A:
You have a couple problems, I think.
First, your route doesn't have a default value for "year", so the URL "/News.mvc/Archive" is actually not valid for routing purposes.
Second, you're expecting form values to show up as route parameters, but that's not how HTML works. If you use a plain form with a select and a submit, your URLs will end up having "?year=2007" on the end of them. This is just how GET-method forms are designed to work in HTML.
So you need to come to some conclusion about what's important.
If you want the user to be able to select something from the dropdown and that changes the submission URL, then you're going to have to use Javascript to achieve this (by intercepting the form submit and formulating the correct URL).
If you're okay with /News.mvc/Archive?year=2007 as your URL, then you should remove the {year} designator from the route entirely. You can still leave the "int year" parameter on your action, since form values will also populate action method parameters.
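A minimal sketch of the Javascript option mentioned above, using the question's element names; returning false cancels the normal GET submission so no ?year= query string is appended:
<form method="get" action="/News.mvc/Archive"
      onsubmit="window.location = this.action + '/' + this.year.value; return false;">
    <select name="year">
        <option value="2007">2007</option>
    </select>
    <input type="submit" />
</form>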
A:
I think I've worked out why - the route includes {year} so the generated routes always will too..
If anyone can confirm this?
A:
Solution
Okay here is the solution, (thanks to Brad for leading me there).
1) Require default value in route:
routes.MapRoute(
"News-Archive",
"News.mvc/Archive/{year}",
new { controller = "News", action = "Archive", year = 0 }
);
2) Add a redirect to parse GET parameters as though they are URL segments.
public ActionResult Archive(int year)
{
if (!String.IsNullOrEmpty(Request["year"]))
{
return RedirectToAction("Archive", new { year = Request["year"] });
}
}
3) Make sure you have your redirect code for Request params before any code for redirecting "default" year entries. i.e.
public ActionResult Archive(int year)
{
if (!String.IsNullOrEmpty(Request["year"]))
{
return RedirectToAction("Archive", new { year = Request["year"] });
}
if (year == 0)
{
/* ... */
}
/* ... */
}
4) Explicitly specify the default value for year in the Url.RouteUrl() call:
Url.RouteUrl("News-Archive", new { year = 0 })
|
ASP.NET MVC Preview 4 - Stop Url.RouteUrl() etc. using existing parameters
|
I have an action like this:
public class News : System.Web.Mvc.Controller
{
public ActionResult Archive(int year)
{
    /* ... */
}
}
With a route like this:
routes.MapRoute(
"News-Archive",
"News.mvc/Archive/{year}",
new { controller = "News", action = "Archive" }
);
The URL that I am on is:
News.mvc/Archive/2008
I have a form on this page like this:
<form>
<select name="year">
<option value="2007">2007</option>
</select>
</form>
Submitting the form should go to News.mvc/Archive/2007 if '2007' is selected in the form.
This requires the form 'action' attribute to be "News.mvc/Archive".
However, if I declare a form like this:
<form method="get" action="<%=Url.RouteUrl("News-Archive")%>">
it renders as:
<form method="get" action="/News.mvc/Archive/2008">
Can someone please let me know what I'm missing?
|
[
"You have a couple problems, I think.\nFirst, your route doesn't have a default value for \"year\", so the URL \"/News.mvc/Archive\" is actually not valid for routing purposes.\nSecond, you're expect form values to show up as route parameters, but that's not how HTML works. If you use a plain form with a select and a submit, your URLs will end up having \"?year=2007\" on the end of them. This is just how GET-method forms are designed to work in HTML.\nSo you need to come to some conclusion about what's important.\n\nIf you want the user to be able to select something from the dropdown and that changes the submission URL, then you're going to have to use Javascript to achieve this (by intercepting the form submit and formulating the correct URL).\nIf you're okay with /News.mvc/Archive?year=2007 as your URL, then you should remove the {year} designator from the route entirely. You can still leave the \"int year\" parameter on your action, since form values will also populate action method parameters.\n\n",
"I think I've worked out why - the route includes {year} so the generated routes always will too..\nIf anyone can confirm this?\n",
"Solution\nOkay here is the solution, (thanks to Brad for leading me there).\n1) Require default value in route:\nroutes.MapRoute(\n \"News-Archive\", \n \"News.mvc/Archive/{year}\", \n new { controller = \"News\", action = \"Archive\", year = 0 }\n );\n\n2) Add a redirect to parse GET parameters as though they are URL segments.\npublic ActionResult Archive(int year)\n{\n if (!String.IsNullOrEmpty(Request[\"year\"]))\n {\n return RedirectToAction(\"Archive\", new { year = Request[\"year\"] });\n }\n}\n\n3) Make sure you have your redirect code for Request params before any code for redirecting \"default\" year entries. i.e.\npublic ActionResult Archive(int year)\n{\n if (!String.IsNullOrEmpty(Request[\"year\"]))\n {\n return RedirectToAction(\"Archive\", new { year = Request[\"year\"] });\n }\n if (year == 0)\n {\n /* ... */\n }\n /* ... */\n}\n\n3) Explicitly specify the default value for year in the Url.RouteUrl() call:\nUrl.RouteUrl(\"News-Archive\", new { year = 0 })\n\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"asp.net_mvc",
"c#",
"forms",
"routing"
] |
stackoverflow_0000049194_asp.net_mvc_c#_forms_routing.txt
|
Q:
REST how to handle query parameters when put to resource?
I have a REST data service where I want to allow the users to create new items with HTTP PUT using different formats like json,xml,csv. I'm unsure how to best handle the format specification in the url:
PUT /ressource/ID/json
PUT /ressource/ID/xml
or
PUT /ressource/ID?format=json
PUT /ressource/ID?format=xml
So what is the best way to specify a format indicator?
If I specify the format with a query parameter and want to do a PUT how can I do this with curl?
curl -T test/data.json -d "format=json" http://localhost:5000/resource/33
does not work.
curl -T test/data.json http://localhost:5000/update?format=json
works, but I would rather let curl build the query parameters instead of adding them by myself.
A:
A general principle of RESTful web services is to use the features built-in to HTTP, when applicable. In this case, you can indicate the format of your PUT request's content by setting the Content-Type header to application/json or application/xml.
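With curl that just means adding a header to the PUT from the question (-T uploads the file with PUT):
curl -T test/data.json -H "Content-Type: application/json" http://localhost:5000/resource/33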
|
REST how to handle query parameters when put to resource?
|
I have a REST data service where I want to allow the users to create new items with HTTP PUT using different formats like json,xml,csv. I'm unsure how to best handle the format specification in the url:
PUT /ressource/ID/json
PUT /ressource/ID/xml
or
PUT /ressource/ID?format=json
PUT /ressource/ID?format=xml
So what is the best way to specify a format indicator?
If I specify the format with a query parameter and want to do a PUT how can I do this with curl?
curl -T test/data.json -d "format=json" http://localhost:5000/resource/33
does not work.
curl -T test/data.json http://localhost:5000/update?format=json
works, but I would rather let curl build the query parameters instead of adding them by myself.
|
[
"A general principle of RESTful web services is to use the features built-in to HTTP, when applicable. In this case, you can indicate the format of your PUT request's content by setting the Content-Type header to application/json or application/xml.\n"
] |
[
15
] |
[] |
[] |
[
"html",
"rest"
] |
stackoverflow_0000049011_html_rest.txt
|
Q:
MFC: MessageBox during a Drag-Drop
I need to display an error message on rejecting a drop in my application.
I tried this in the OnDrop() but then the source application hangs until my message box is dismissed.
How can I do that?
A:
You can always call PostMessage with a private message in the WM_APP range and in the message handler show the error.
That way you show the error after the drag and drop operation is really over and there is no danger of messing up anything.
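A rough sketch of that pattern (the class name and the CanAcceptDrop helper are invented for the example):
// a private message id in the WM_APP range
#define WM_SHOW_DROP_ERROR (WM_APP + 1)

BOOL CMyView::OnDrop(COleDataObject* pDataObject, DROPEFFECT dropEffect, CPoint point)
{
    if (!CanAcceptDrop(pDataObject))      // hypothetical validation helper
    {
        PostMessage(WM_SHOW_DROP_ERROR);  // defer the UI until the drag-drop is over
        return FALSE;                     // reject the drop immediately
    }
    // ... handle the accepted drop ...
    return TRUE;
}

// message map entry: ON_MESSAGE(WM_SHOW_DROP_ERROR, OnShowDropError)
LRESULT CMyView::OnShowDropError(WPARAM, LPARAM)
{
    AfxMessageBox(_T("This item cannot be dropped here."));
    return 0;
}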
A:
You're right. But all the data I need to report in the message box is in the OnDrop.
A:
If you need data you can copy it in the OnDrop, store it in some temporary location, then pass the index of the data in that temporary location with the WM_APP-range message. The handler for the WM_APP message can clean up the temporary data after showing the message box.
|
MFC: MessageBox during a Drag-Drop
|
I need to display an error message on rejecting a drop in my application.
I tried this in the OnDrop() but then the source application hangs until my message box is dismissed.
How can I do that?
|
[
"You can always call PostMessage with a private message in the WM_APP range and in the message handler show the error.\nThat way you show the error after the drag and drop operation is really over and there is no danger of messing up anything.\n",
"You're right. But all the data I need to report in the message box is in the OnDrop.\n",
"If you need data you can copy it in the OnDrop, store it in some temporary location, then in the WM_APP range message pass the index to the data in temporary location. The handler for the WM_APP message can clean up the temporary data after showing the message box.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"drag_and_drop",
"mfc"
] |
stackoverflow_0000049183_drag_and_drop_mfc.txt
|
Q:
Custom WPF command pattern example
I've done some WPF programming and one thing I never got was the command pattern. Every example seems to be for built in ones, edit, cut, paste. Anyone have an example or suggestion of best practice for custom commands?
A:
Ah ha! A question I can answer! Firstly, I should mention that I have personally found it easier to define and hook up commands in code rather than in XAML. It allows me to hook up the handlers for the commands a little more flexibly than an all XAML approach does.
You should work out what commands you want to have and what they relate to. In my application, I currently have a class for defining important application commands like so:
public static class CommandBank
{
/// Command definition for Closing a window
public static RoutedUICommand CloseWindow { get; private set; }
/// Static private constructor, sets up all application wide commands.
static CommandBank()
{
CloseWindow = new RoutedUICommand();
CloseWindow.InputGestures.Add(new KeyGesture(Key.F4, ModifierKeys.Alt));
// ...
}
Now, because I wanted to keep the code all together, using a code only approach to Commands lets me put the following methods in the class above:
/// Closes the window provided as a parameter
public static void CloseWindowExecute(object sender, ExecutedRoutedEventArgs e)
{
((Window)e.Parameter).Close();
}
/// Allows a Command to execute if the CommandParameter is not a null value
public static void CanExecuteIfParameterIsNotNull(object sender, CanExecuteRoutedEventArgs e)
{
e.CanExecute = e.Parameter != null;
e.Handled = true;
}
The second method there can even be shared with other Commands without me having to repeat it all over the place.
Once you have defined the commands like this, you can add them to any piece of UI. In the following, once the Window has Loaded, I add command bindings to both the Window and MenuItem and then add an input binding to the Window using a loop to do this for all command bindings. The parameter that is passed is the Window itself so the code above knows what Window to try and close.
public partial class SimpleWindow : Window
{
private void WindowLoaded(object sender, RoutedEventArgs e)
{
// ...
this.CommandBindings.Add(
new CommandBinding(
CommandBank.CloseWindow,
CommandBank.CloseWindowExecute,
CommandBank.CanExecuteIfParameterIsNotNull));
foreach (CommandBinding binding in this.CommandBindings)
{
RoutedCommand command = (RoutedCommand)binding.Command;
if (command.InputGestures.Count > 0)
{
foreach (InputGesture gesture in command.InputGestures)
{
var iBind = new InputBinding(command, gesture);
iBind.CommandParameter = this;
this.InputBindings.Add(iBind);
}
}
}
// menuItemExit is defined in XAML
menuItemExit.Command = CommandBank.CloseWindow;
menuItemExit.CommandParameter = this;
// ...
}
// ....
}
I then also later have event handlers for the WindowClosing and WindowClosed events, I do recommend you make the actual implementation of commands as small and generic as possible. As in this case, I didn't try to put code that tries to stop the Window closing if there is unsaved data, I kept that code firmly inside the WindowClosing event.
Let me know if you have any follow up questions. :)
A:
I blogged about a bunch of resources on WPF Commands along with an example last year at http://blogs.vertigo.com/personal/alanl/Blog/archive/2007/05/31/commands-in-wpf.aspx
Pasting here:
Adam Nathan’s sample chapter on Important New Concepts in WPF: Commands
MSDN article: The Command Pattern In WPF
Keyvan Nayyeri: How to Add Commands to Custom WPF Control
Ian Griffiths: Avalon Input, Commands, and Handlers
Wikipedia: Command Pattern
MSDN Library: Commanding Overview
MSDN Library: CommandBinding Class
MSDN Library: Input and Commands How-to Topics
MSDN Library: EditingCommands Class
MSDN Library: MediaCommands Class
MSDN Library: ApplicationCommands Class
MSDN Library: NavigationCommands Class
MSDN Library: ComponentCommands Class
Also buried in the WPF SDK samples, there's a nice sample on RichTextBox editing which I've extended. You can find it here: RichTextEditor.zip
A:
In the September 2008 edition of the MSDN magazine, Brian Noyes has an excellent article about the RoutedCommand/RoutedEvents!!!
Here is the link:
http://msdn.microsoft.com/en-us/magazine/cc785480.aspx
A:
The thing about XAML is that it is fine for 'simple' programs, but sadly, it doesn't work well when you want to do things like share functions. Say you have several classes and UI's all of which had commands that were never disabled, you'd have to write a 'CanAlwaysExecute' method for each Window or UserControl! That's just not very DRY.
Having read several blogs and through trying several things, I've made the choice to make XAML purely about looks, styles, animation and triggers. All my hooking up of event handlers and commanding is now down in the code-behind. :)
Another gotcha, by the way, is input bindings: in order for them to be caught, focus must be on the object that contains the input bindings. For example, to have a shortcut you can use at any time (say, F1 to open help), that input binding must be set on the Window object, since that always has focus when your app is active. Using the code method should make that easier, even when you start using UserControls which might want to add input bindings to their parent Window.
|
Custom WPF command pattern example
|
I've done some WPF programming and one thing I never got was the command pattern. Every example seems to be for built in ones, edit, cut, paste. Anyone have an example or suggestion of best practice for custom commands?
|
[
"Ah ha! A question I can answer! Firstly, I should mention that I have personally found it easier to define and hook up commands in code rather than in XAML. It allows me to hook up the handlers for the commands a little more flexibly than an all XAML approach does.\nYou should work out what commands you want to have and what they relate to. In my application, I currently have a class for defining important application commands like so:\npublic static class CommandBank\n{\n /// Command definition for Closing a window\n public static RoutedUICommand CloseWindow { get; private set; }\n\n /// Static private constructor, sets up all application wide commands.\n static CommandBank()\n {\n CloseWindow = new RoutedUICommand();\n CloseWindow.InputGestures.Add(new KeyGesture(Key.F4, ModifierKeys.Alt));\n // ...\n }\n\nNow, because I wanted to keep the code all together, using a code only approach to Commands lets me put the following methods in the class above:\n/// Closes the window provided as a parameter\npublic static void CloseWindowExecute(object sender, ExecutedRoutedEventArgs e)\n{\n ((Window)e.Parameter).Close();\n}\n\n/// Allows a Command to execute if the CommandParameter is not a null value\npublic static void CanExecuteIfParameterIsNotNull(object sender, CanExecuteRoutedEventArgs e)\n{\n e.CanExecute = e.Parameter != null;\n e.Handled = true;\n}\n\nThe second method there can even be shared with other Commands without me having to repeat it all over the place.\nOnce you have defined the commands like this, you can add them to any piece of UI. In the following, once the Window has Loaded, I add command bindings to both the Window and MenuItem and then add an input binding to the Window using a loop to do this for all command bindings. The parameter that is passed is the Window its self so the code above knows what Window to try and close.\npublic partial class SimpleWindow : Window\n{\n private void WindowLoaded(object sender, RoutedEventArgs e)\n {\n // ...\n this.CommandBindings.Add(\n new CommandBinding(\n CommandBank.CloseWindow,\n CommandBank.CloseWindowExecute,\n CommandBank.CanExecuteIfParameterIsNotNull));\n\n foreach (CommandBinding binding in this.CommandBindings)\n {\n RoutedCommand command = (RoutedCommand)binding.Command;\n if (command.InputGestures.Count > 0)\n {\n foreach (InputGesture gesture in command.InputGestures)\n {\n var iBind = new InputBinding(command, gesture);\n iBind.CommandParameter = this;\n this.InputBindings.Add(iBind);\n }\n }\n }\n\n // menuItemExit is defined in XAML\n menuItemExit.Command = CommandBank.CloseWindow;\n menuItemExit.CommandParameter = this;\n // ...\n }\n\n // ....\n}\n\nI then also later have event handlers for the WindowClosing and WindowClosed events, I do recommend you make the actual implementation of commands as small and generic as possible. As in this case, I didn't try to put code that tries to stop the Window closing if there is unsaved data, I kept that code firmly inside the WindowClosing event.\nLet me know if you have any follow up questions. :)\n",
"I blogged about a bunch of resources on WPF Commands along with an example last year at http://blogs.vertigo.com/personal/alanl/Blog/archive/2007/05/31/commands-in-wpf.aspx\nPasting here:\nAdam Nathan’s sample chapter on Important New Concepts in WPF: Commands\nMSDN article: The Command Pattern In WPF\nKeyvan Nayyeri: How to Add Commands to Custom WPF Control\nIan Griffiths: Avalon Input, Commands, and Handlers\nWikipedia: Command Pattern\nMSDN Library: Commanding Overview\nMSDN Library: CommandBinding Class\nMSDN Library: Input and Commands How-to Topics\nMSDN Library: EditingCommands Class\nMSDN Library: MediaCommands Class\nMSDN Library: ApplicationCommands Class\nMSDN Library: NavigationCommands Class\nMSDN Library: ComponentCommands Class\nAlso buried in the WPF SDK samples, there's a nice sample on RichTextBox editing which I've extended. You can find it here: RichTextEditor.zip\n",
"In the September 2008 edition of the MSDN magazine, Brian Noyes has a excellent article about the RoutedCommand/RoutedEvents!!!\nHere is the link:\nhttp://msdn.microsoft.com/en-us/magazine/cc785480.aspx\n",
"The thing about XAML is that it is fine for 'simple' programs, but sadly, it doesn't work well when you want to do things like share functions. Say you have several classes and UI's all of which had commands that were never disabled, you'd have to write a 'CanAlwaysExecute' method for each Window or UserControl! That's just not very DRY.\nHaving read several blogs and through trying several things, I've made the choice to make XAML purely about looks, styles, animation and triggers. All my hooking up of event handlers and commanding is now down in the code-behind. :)\nAnother gotcha by the way is Input binding, in order for them to be caught, focus must be on the object that contains the Input bindings. For example, to have a short cut you can use at any time (say, F1 to open help), that input binding must be set on the Window object, since that always has focus when your app is Active. Using the code method should make that easier, even when you start using UserControls which might want to add input bindings to their parent Window.\n"
] |
[
38,
8,
4,
3
] |
[] |
[] |
[
"command",
"design_patterns",
"wpf"
] |
stackoverflow_0000008452_command_design_patterns_wpf.txt
|
Q:
Customize the Sharepoint add list column page
I have defined a custom Sharepoint list for special attributes related to a software application inventory and installed it as a feature. I also want to group these attributes in categories. How could I change the Sharepoint page that allows the user to add a column to a list, so that when the user adds a column to my custom list type (column = attribute) he'll have a dropdown to choose the category?
A:
From what I understand you want to add a choice column data type that's already prepopulated so that users can then add it to their own content types?
Have a look here, this is probably what you want to do: http://www.sharethispoint.com/archive/2006/08/07/23.aspx
|
Customize the Sharepoint add list column page
|
I have defined a custom Sharepoint list for special attributes related to a software application inventory and installed it as a feature. I also want to group these attributes in categories. How could I change the Sharepoint page that allows the user to add a column to a list, so that when the user adds a column to my custom list type (column = attribute) he'll have a dropdown to choose the category?
|
[
"From what I understand you want to add a choice column data type thats already prepopulated so that users can then add it to their own content types?\nhave a look here, this is probably what you want to do: http://www.sharethispoint.com/archive/2006/08/07/23.aspx \n"
] |
[
1
] |
[] |
[] |
[
"list",
"sharepoint"
] |
stackoverflow_0000045741_list_sharepoint.txt
|
Q:
Simplest way to change listview and treeview colours
I'm trying to find a simple way to change the colour of the text and background in listview and treeview controls in WTL or plain Win32 code.
I really don't want to have to implement full owner drawing for these controls, simply change the colours used.
I want to make sure that the images are still drawn with proper transparency.
Any suggestions?
A:
Have a look at the following macros:
ListView_SetBkColor
ListView_SetTextColor
TreeView_SetBkColor
TreeView_SetTextColor
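For example, a minimal sketch assuming hwndList and hwndTree are your existing control handles (ListView_SetTextBkColor is usually needed too, so the text background matches the control background):
#include <windows.h>
#include <commctrl.h>

void ApplyColours(HWND hwndList, HWND hwndTree)
{
    ListView_SetBkColor(hwndList, RGB(255, 255, 224));
    ListView_SetTextBkColor(hwndList, RGB(255, 255, 224));
    ListView_SetTextColor(hwndList, RGB(0, 0, 128));

    TreeView_SetBkColor(hwndTree, RGB(255, 255, 224));
    TreeView_SetTextColor(hwndTree, RGB(0, 0, 128));

    /* force a repaint so the new colours show immediately */
    InvalidateRect(hwndList, NULL, TRUE);
    InvalidateRect(hwndTree, NULL, TRUE);
}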
A:
There are also appropriate methods of the CListViewCtrl and CTreeViewCtrl wrapper classes:
GetBkColor
SetBkColor
A:
You may also want to take a look at WTL's CCustomDraw::OnItemPrePaint (that's if you need to control the drawing of individual items)
A good article that describes this process is here
A:
It's been a while since I've used the Win32 API directly, but I believe that if you handle the WM_ERASEBKGND message for your control, you can use FillRect() in your handler to paint the background using whatever color you like.
|
Simplest way to change listview and treeview colours
|
I'm trying to find a simple way to change the colour of the text and background in listview and treeview controls in WTL or plain Win32 code.
I really don't want to have to implement full owner drawing for these controls, simply change the colours used.
I want to make sure that the images are still drawn with proper transparency.
Any suggestions?
|
[
"Have a look at the following macros:\nListView_SetBkColor\nListView_SetTextColor \nTreeView_SetBkColor\nTreeView_SetTextColor\n",
"There are also appropriate methods of the CListViewCtrl and CTreeViewCtrl wrapper classes:\n\nGetBkColor\nSetBkColor\n\n",
"You may also want to take a look at WTL's CCustomDraw::OnItemPrePaint (that's if you need to control the drawing of individual items)\nA good article that describes this process is here\n",
"It's been a while since I've use the win32 API directly, but I believe that if you handle the WM_ERASEBACKGROUND message for your control, you can use FillRect() in your handler to paint the background using whatever color you like.\n"
] |
[
3,
2,
2,
0
] |
[] |
[] |
[
"c++",
"winapi",
"wtl"
] |
stackoverflow_0000045528_c++_winapi_wtl.txt
|
Q:
128 bit data encryption using Java
I need to store some sensitive data by encrypting it with at least a 128 bit key. I investigated the javax.crypto package and found that there are certain Cipher names, like
PBEWithMD5AndDES or PBEWithSHA1AndDESede, which provide encryption up to 56 bit and 80 bit (http://en.wikipedia.org/wiki/DESede).
I referred to other people's posts, but those mainly use RSA, and in my understanding RSA is generally suitable for encrypting the communication data (with private-public key pair). My need is different, I just want to store the data and retrieve it back by decrypting it. Therefore I don't need any private-public key pairs.
Please let me know if you have any idea about this.
A:
Use Advanced Encryption Standard (AES). It supports Key lengths of 128, 192, or 256 bits.
The algorithm is simple. The Sun Java website has a section explaining how to do AES encryption in Java.
From Wikipedia...
... the Advanced
Encryption Standard (AES), also known
as Rijndael, is a block cipher adopted
as an encryption standard by the U.S.
government. It has been analyzed
extensively and is now used worldwide,
as was the case with its
predecessor, the Data Encryption
Standard (DES)...
So as a rule of thumb you are not supposed to use DES or its variants because it is being phased out.
As of now, it is better to use AES. There are other options like Twofish, Blowfish etc also. Note that Twofish can be considered as an advanced version of Blowfish.
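A minimal sketch with the standard JCE API; note that the plain "AES" transformation resolves to ECB mode with PKCS5 padding, so for real data you would choose an explicit mode and IV:
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class AesExample {
    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128); // 128-bit keys work without the unlimited-strength policy files

        SecretKey key = keyGen.generateKey();

        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("sensitive data".getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, key);
        System.out.println(new String(cipher.doFinal(ciphertext), "UTF-8"));
    }
}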
A:
I have had good success in the past with http://www.bouncycastle.org/ (they have a C# version as well).
A:
You need to download and install the unlimited strength JCE policy file for your JDK. For JDK 6, it is on http://java.sun.com/javase/downloads/index.jsp at the very bottom.
A:
Combining 3 different replies gives what I think is the correct answer.
Download encryption libraries from Bouncycastle then you need to download the "Unlimited Strength Jurisdiction Policy" from Oracle (the files are at the bottom of the download page). Make sure you read the Readme-file on how to install it.
Once you have done this, and using the sample code supplied with the Bouncycastle package, you should be able to encrypt your data. You can go with a triple DES implementation, which will give you a 112 bit key (often referred to as 128 bit, but only 112 of them are actually secure), or as previously stated, you can use AES. My money would be on AES.
A:
I'm not a crypto expert by any means (so take this suggestion with a grain of salt), but I have used Blowfish before, and I think you can use it for what you need. There is also a newer algorithm by the same guy called Twofish.
Here is a website with a Java implementation, but be careful of the license (it says free for non-commercial use). You can find that link also from Bruce Schneier's website (the creator of both algorithms).
A:
Thanks Michael, after trying out many things in JCE, I finally settled on Bouncycastle.
JCE supports AES for encryption and PBE for password-based encryption, but it does not support a combination of both. That is exactly what I wanted, and I found it in Bouncycastle.
The example is at: http://forums.sun.com/thread.jspa?messageID=4164916
|
128 bit data encryption using Java
|
I need to store some sensitive data by encrypting it with at least a 128 bit key. I investigated the javax.crypto package and found that there are certain Cipher names, like
PBEWithMD5AndDES or PBEWithSHA1AndDESede, which provide encryption up to 56 bit and 80 bit (http://en.wikipedia.org/wiki/DESede).
I referred to other people's posts, but those mainly use RSA, and in my understanding RSA is generally suitable for encrypting the communication data (with private-public key pair). My need is different, I just want to store the data and retrieve it back by decrypting it. Therefore I don't need any private-public key pairs.
Please let me know if you have any idea about this.
|
[
"Use Advanced Encryption Standard (AES). It supports Key lengths of 128, 192, or 256 bits.\nThe algorithm is simple. The Sun Java website has a section explaining how to do AES encryption in Java.\nFrom Wikipedia...\n\n... the Advanced\n Encryption Standard (AES), also known\n as Rijndael, is a block cipher adopted\n as an encryption standard by the U.S.\n government. It has been analyzed\n extensively and is now used worldwide,\n as was the case with its\n predecessor, the Data Encryption\n Standard (DES)...\n\nSo as a rule of thumb you are not supposed to use DES or its variants because it is being phased out.\nAs of now, it is better to use AES. There are other options like Twofish, Blowfish etc also. Note that Twofish can be considered as an advanced version of Blowfish.\n",
"I have had good success in the past with http://www.bouncycastle.org/ (they have a C# version as well). \n",
"You need to download and install the unlimited strength JCE policy file for your JDK. For JDK 6, it is on http://java.sun.com/javase/downloads/index.jsp at the very bottom.\n",
"Combining 3 different replies gives what I think is the correct answer.\nDownload encryption libraries from Bouncycastle then you need to download the \"Unlimited Strength Jurisdiction Policy\" from Oracle (the files are at the bottom of the download page). Make sure you read the Readme-file on how to install it. \nOnce you have done this, and using the sample code supplied with the Bountycastle package you should be able to encrypt your data. You can go with a tripple DES implementation, which will give you 112 bits key (often referred to as 128 bit, but only 112 of them are actually secure), or as previously stated, you can use AES. My money would be on AES.\n",
"I'm not a crypto expert by any means (so take this suggestion with a grain of salt), but I have used Blowfish before, and I think you can use it for what you need. There is also a newer algorithm by the same guy called Twofish.\nHere is a website with a Java implementation, but be careful of the license (it says free for non-commercial use). You can find that link also from Bruce Schneier's website (the creator of both algorithms).\n",
"Thanks Michael, after trying out many things in JCE, I finally settled for bouncycastle. \nJCE supports AES for encryption and PBE for password based encryption but it does not support combination of both. I wanted the same thing and that I found in bouncycastle.\nThe example is at : http://forums.sun.com/thread.jspa?messageID=4164916\n"
] |
[
8,
4,
3,
2,
0,
0
] |
[] |
[] |
[
"cryptography",
"java"
] |
stackoverflow_0000049226_cryptography_java.txt
|
Q:
Using ASP.NET AJAX PageMethods and Validators
I have a basic CRUD form that uses PageMethods to update the user details, however the Validators don't fire off, I think I need to manually initialize the validators and check whether the validation has passed in my javascript save method. Any ideas on how to do this?
A:
Ok so I finally solved this: You need to call Page_ClientValidate() in your Save javascript method and If it returns true continue with the save, the Page_ClientValidate() initiates the client side validators, See code below:
function Save()
{
var clientValidationPassed =Page_ClientValidate();
if(clientValidationPassed)
{
//Save Data
PageMethods.SaveUser(UserName,Role,SaveCustomerRequestComplete, RequestError);
$find('editPopupExtender').hide();
}
else
{
//Do Nothing as CLient Validation messages are now displayed
}
return false;
}
A:
what are you using for development? VS 2008 supposedly has better JS debugging, haven't tried it yet.
For Ajax you can use the Sys.Debug obj
A:
If you use Firefox, you can use the FireBug plugin. It has great javascript debugging support.
|
Using ASP.NET AJAX PageMethods and Validators
|
I have a basic CRUD form that uses PageMethods to update the user details, however the Validators don't fire off, I think I need to manually initialize the validators and check whether the validation has passed in my javascript save method. Any ideas on how to do this?
|
[
"Ok so I finally solved this: You need to call Page_ClientValidate() in your Save javascript method and If it returns true continue with the save, the Page_ClientValidate() initiates the client side validators, See code below:\n function Save()\n {\n var clientValidationPassed =Page_ClientValidate();\n if(clientValidationPassed)\n {\n //Save Data\n PageMethods.SaveUser(UserName,Role,SaveCustomerRequestComplete, RequestError);\n $find('editPopupExtender').hide();\n }\n else\n {\n //Do Nothing as CLient Validation messages are now displayed\n }\n return false;\n }\n\n",
"what are you using for development? VS 2008 supposedly has better JS debugging, haven't tried it yet. \nFor Ajax you can use the Sys.Debug obj\n",
"If you use Firefox, you can use the FireBug plugin. It has great javascript debugging support.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"asp.net_ajax",
"validation"
] |
stackoverflow_0000045626_asp.net_ajax_validation.txt
|
Q:
Sharepoint scheduling with SSRS issue
I have some scheduled SSRS reports (integrated mode) that get emailed by subscription.
All of a sudden the reports have stopped being emailed. I get the error:
Failure sending mail: Report Server has encountered a SharePoint error.
I don't even know where to start to look as I can't get into SSRS and my Sharepoint knowledge is lacking.
Can you help?
A:
Did you enable trace logging in SharePoint? You can activate it by going to the Central Administration website > Operations > Diagnostic Logging > Trace Logging. Perhaps we can get a more detailed error from there...
|
Sharepoint scheduling with SSRS issue
|
I have some scheduled SSRS reports (integrated mode) that get emailed by subscription.
All of a sudden the reports have stopped being emailed. I get the error:
Failure sending mail: Report Server has encountered a SharePoint error.
I don't even know where to start to look as I can't get into SSRS and my Sharepoint knowledge is lacking.
Can you help?
|
[
"Did you enable trace logging in SharePoint? You can activate it by going to the Central Administration website > Operations > Diagnostic Logging > Trace Logging. Perhaps we can get a more detailed error from there...\n"
] |
[
2
] |
[] |
[] |
[
"moss",
"reporting_services",
"sharepoint"
] |
stackoverflow_0000049316_moss_reporting_services_sharepoint.txt
|
Q:
How-to: Ranking Search Results
I have a webapp development problem that I've developed one solution for, but am trying to find other ideas that might get around some performance issues I'm seeing.
problem statement:
a user enters several keywords/tokens
the application searches for matches to the tokens
need one result for each token
ie, if an entry has 3 tokens, i need the entry id 3 times
rank the results
assign X points for token match
sort the entry ids based on points
if point values are the same, use date to sort results
What I want to be able to do, but have not figured out, is to send 1 query that returns something akin to the results of an in(), but returns a duplicate entry id for each token that matches, for each entry id checked.
Is there a better way to do this than what I'm doing, of using multiple, individual queries running one query per token? If so, what's the easiest way to implement those?
edit
I've already tokenized the entries, so, for example, "see spot run" has an entry id of 1, and three tokens, 'see', 'spot', 'run', and those are in a separate token table, with entry ids relevant to them so the table might look like this:
'see', 1
'spot', 1
'run', 1
'run', 2
'spot', 3
A:
You could achieve this in one query using 'UNION ALL' in MySQL.
Just loop through the tokens in PHP creating a UNION ALL for each token:
e.g if the tokens are 'x', 'y' and 'z' your query may look something like this
SELECT * FROM `entries`
WHERE token like "%x%" union all
SELECT * FROM `entries`
WHERE token like "%y%" union all
SELECT * FROM `entries`
WHERE token like "%z%" ORDER BY score ect...
The order clause should operate on the entire result set as one, which is what you need.
In terms of performance it won't be all that fast (I'm guessing), however with databases the main overhead in terms of speed is often sending the query to the database engine from PHP and receiving the results. With this technique this only happens once instead of once per token, so performance will increase, I just don't know if it'll be enough.
A:
I know this isn't strictly an answer to the question you're asking but if your table is thousands rather than millions of rows, then a FULLTEXT solution might be the best way to go here.
In MySQL when you use MATCH on your indexed column, each keyword you supply will be given a relevance score (calculated roughly by the number of times each keyword was mentioned) that will be more accurate than your method and certainly more efficient for multiple keywords.
See here:
http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html
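A sketch of what that could look like, assuming the original entries table keeps the searchable text in a column called body and a created_at date for tie-breaking (FULLTEXT also requires a MyISAM table in MySQL of this vintage):
ALTER TABLE entries ADD FULLTEXT (body);

SELECT id,
       MATCH (body) AGAINST ('see spot run') AS score
FROM entries
WHERE MATCH (body) AGAINST ('see spot run')
ORDER BY score DESC, created_at DESC;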
A:
If you're using the UNION ALL pattern you may also want to include the following parts to your query:
SELECT COUNT(*) AS C
...
GROUP BY ID
ORDER BY c DESC
While this is a really trivial example it does get you the frequency of the matches for each result and this could be a pseudo rank to start with.
|
How-to: Ranking Search Results
|
I have a webapp development problem that I've developed one solution for, but am trying to find other ideas that might get around some performance issues I'm seeing.
problem statement:
a user enters several keywords/tokens
the application searches for matches to the tokens
need one result for each token
ie, if an entry has 3 tokens, i need the entry id 3 times
rank the results
assign X points for token match
sort the entry ids based on points
if point values are the same, use date to sort results
What I want to be able to do, but have not figured out, is to send 1 query that returns something akin to the results of an in(), but returns a duplicate entry id for each token that matches, for each entry id checked.
Is there a better way to do this than what I'm doing, of using multiple, individual queries running one query per token? If so, what's the easiest way to implement those?
edit
I've already tokenized the entries, so, for example, "see spot run" has an entry id of 1, and three tokens, 'see', 'spot', 'run', and those are in a separate token table, with entry ids relevant to them so the table might look like this:
'see', 1
'spot', 1
'run', 1
'run', 2
'spot', 3
|
[
"you could achive this in one query using 'UNION ALL' in MySQL.\nJust loop through the tokens in PHP creating a UNION ALL for each token:\ne.g if the tokens are 'x', 'y' and 'z' your query may look something like this\nSELECT * FROM `entries` \nWHERE token like \"%x%\" union all \n SELECT * FROM `entries` \n WHERE token like \"%y%\" union all \n SELECT * FROM `entries` \n WHERE token like \"%z%\" ORDER BY score ect...\n\nThe order clause should operate on the entire result set as one, which is what you need.\nIn terms of performance it won't be all that fast (I'm guessing), however with databases the main overhead in terms of speed is often sending the query to the database engine from PHP and receiving the results. With this technique this only happens once instead of once per token, so performance will increase, I just don't know if it'll be enough.\n",
"I know this isn't strictly an answer to the question you're asking but if your table is thousands rather than millions of rows, then a FULLTEXT solution might be the best way to go here.\nIn MySQL when you use MATCH on your indexed column, each keyword you supply will be given a relevance score (calculated roughly by the number of times each keyword was mentioned) that will be more accurate than your method and certainly more effecient for multiple keywords.\nSee here:\nhttp://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html\n",
"If you're using the UNION ALL pattern you may also want to include the following parts to your query:\nSELECT COUNT(*) AS C\n...\nGROUP BY ID\nORDER BY c DESC\n\nWhile this is a really trivial example it does get you the frequency of the matches for each result and this could be a pseudo rank to start with.\n"
] |
[
6,
3,
1
] |
[
"You'll probably get much better performance if you used a data structure designed for search tasks rather than a database. For example, you might try looking at building an inverted index. Rather than writing it youself, however, you might also want to look into something like Lucene which does most of the work for you.\n"
] |
[
-1
] |
[
"mysql",
"php",
"search"
] |
stackoverflow_0000047762_mysql_php_search.txt
|
Q:
How to make cruisecontrol only build one project at a time
I have just set up cruise control.net on our build server, and I am unable to find a setting to tell it to only build one project at a time.
Any ideas?
A:
If you are using CruiseControl 1.3 or later you can use an Integration Queue
These allow you to control which projects can be built concurrently and which must be serialized.
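A minimal ccnet.config sketch (project and queue names are illustrative); projects that share a named queue are built one at a time:
<cruisecontrol>
  <project name="ServerBuild" queue="NightlyQueue" queuePriority="1">
    <!-- source control, triggers, tasks, etc. -->
  </project>
  <project name="ClientBuild" queue="NightlyQueue" queuePriority="2">
    <!-- source control, triggers, tasks, etc. -->
  </project>
</cruisecontrol>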
|
How to make cruisecontrol only build one project at a time
|
I have just set up cruise control.net on our build server, and I am unable to find a setting to tell it to only build one project at a time.
Any ideas?
|
[
"If you are using CruiseControl 1.3 or later you can use an Integration Queue\nThese allow you to control which projects can be built concurrently and which must be serialized.\n"
] |
[
5
] |
[] |
[] |
[
"build_automation",
"cruisecontrol.net"
] |
stackoverflow_0000049352_build_automation_cruisecontrol.net.txt
|
Q:
How to shortcut time before data after first hit in browser
We have a couple of large solutions, each with about 40 individual projects (class libraries and nested websites). It takes about 2 minutes to do a full rebuild all.
A couple of specs on the system:
Visual Studio 2005, C#
Primary project is a Web Application Project
40 projects in total (4 Web projects)
We use the internal VS webserver
We extensively use user controls, right down to a user control which contains a textbox
We have a couple of inline web projects that allows us to do partial deployment
About 120 user controls
About 200,000 lines of code (incl. HTML)
We use Source Safe
What I would like to know is how to bring down the time it takes when hitting the site with a browser for the first time. And, I'm not talking about post full deployment - I'm talking about doing a small change in the code, build, refresh browser.
This first hit, takes about 1 minute 15 seconds before data gets back.
To speed things up, I have experimented a little with RAM disks, specifically setting the tempDirectory attribute of the <compilation> element in web.config to point at my RAM disk.
This does speed things up a bit. Interestingly though, this totally removed ALL IO access during first hit from the browser.
Remarks
We never do a full compile during development, only partial. For example, the class library being worked on is compiled and then the main site is compiled which then copies the binaries from the class library to the bin directory.
I understand that the asp.net engine needs to parse all the ascx/aspx files after critical files have been changed (bin dir for example) but, what I don't understand is why it needs to do that when only one library dll has been modified.
So, anybody know of a way to either:
Sub segment the solutions to provide faster first hit or fine tune settings in config files or something.
And, again: I'm only talking about development, NOT production deployment, so doing the pre-built compile option is not applicable.
Thanks, Ruvan
A:
Wow, 120 user controls, some of which only contain a single TextBox? This sounds like a lot of code.
When you change a library project, all projects that depend on that library project then need to be recompiled, and also every project that depends on them, etc, all the way up the stack. You know you've only made a 1 line change to a function which doesn't affect all of your user controls, but the compiler doesn't know that.
And as you're probably aware ASPX and ASCX files are only compiled when the web application is first hit.
A possible speed improvement might be gained by changing your ASCX files into composite controls instead, inside another library project. These would then be compiled at compile time rather than at web application load time.
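A minimal sketch of that composite-control idea (class name and text are illustrative); because it lives in a precompiled class library, it skips the ASCX parse/compile step on first hit:
using System.Web.UI;
using System.Web.UI.WebControls;

// Compiled at build time in a class library, unlike an .ascx user control.
public class LabelledTextBox : CompositeControl
{
    protected override void CreateChildControls()
    {
        Label label = new Label();
        label.Text = "Name: ";
        TextBox box = new TextBox();
        box.ID = "NameBox";
        Controls.Add(label);
        Controls.Add(box);
    }
}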
|
How to shortcut time before data after first hit in browser
|
We have a couple of large solutions, each with about 40 individual projects (class libraries and nested websites). It takes about 2 minutes to do a full rebuild all.
A couple of specs on the system:
Visual Studio 2005, C#
Primary project is a Web Application Project
40 projects in total (4 Web projects)
We use the internal VS webserver
We extensively use user controls, right down to a user control which contains a textbox
We have a couple of inline web projects that allows us to do partial deployment
About 120 user controls
About 200,000 lines of code (incl. HTML)
We use Source Safe
What I would like to know is how to bring down the time it takes when hitting the site with a browser for the first time. And, I'm not talking about post full deployment - I'm talking about doing a small change in the code, build, refresh browser.
This first hit, takes about 1 minute 15 seconds before data gets back.
To speed things up, I have experimented a little with RAM disks, specifically setting the tempDirectory attribute of the <compilation> element in web.config to point at my RAM disk.
This does speed things up a bit. Interestingly though, this totally removed ALL IO access during first hit from the browser.
Remarks
We never do a full compile during development, only partial. For example, the class library being worked on is compiled and then the main site is compiled which then copies the binaries from the class library to the bin directory.
I understand that the asp.net engine needs to parse all the ascx/aspx files after critical files have been changed (bin dir for example) but, what I don't understand is why it needs to do that when only one library dll has been modified.
So, anybody know of a way to either:
Sub segment the solutions to provide faster first hit or fine tune settings in config files or something.
And, again: I'm only talking about development, NOT production deployment, so doing the pre-built compile option is not applicable.
Thanks, Ruvan
|
[
"Wow, 120 user controls, some of which only contain a single TextBox? This sounds like a lot of code.\nWhen you change a library project, all projects that depend on that library project then need to be recompiled, and also every project that depends on them, etc, all the way up the stack. You know you've only made a 1 line change to a function which doesn't affect all of your user controls, but the compiler doesn't know that.\nAnd as you're probably aware ASPX and ASCX files are only compiled when the web application is first hit.\nA possible speed omprovement might be gained by changing your ASCX files into Composite Controls instead, inside another Library Project. These would then be compiled at compile time (if you will) rather than at web application load time.\n"
] |
[
2
] |
[] |
[] |
[
"asp.net",
"visual_studio_2005"
] |
stackoverflow_0000049421_asp.net_visual_studio_2005.txt
|
Q:
How do you manage your app when the database goes offline?
Take a .NET WinForms app... mix in a flaky wireless network connection, stir with a few users who like to simply pull the blue plug out occasionally, and for good measure add a systems admin who decides to reboot the SQL Server box without warning now and again, just to keep everyone on their toes.
What are the suggestions and strategies for handling this sort of scenario with respect to:
Error handling - for example, do you wrap every call to the server with a try/catch, or do you rely on some form of generic error handling to manage this? If so, what does it look like?
Application management - for example, do you disable the app and not allow users to interact with it until a connection is detected again? What would you do?
A:
The answer depends on the type of your application. There are applications that can work offline - Microsoft Outlook, for example. Such applications don't treat connectivity exceptions as critical; they can save your work locally and synchronize it later. Other applications, such as online games, will treat a communication problem as a critical exception and quit if the connection gets lost.
As for error handling, I think you should control exceptions on all layers rather than relying on some general exception-handling piece of code. Your business layer should understand what happened on the lower layer (the data access layer in our case) and respond correspondingly. A lost connection should not be treated as an unexpected exception, in my opinion. For good practices of exception management I recommend taking a look at the Exception Handling Application Block.
Concerning application behavior, you should ask yourself the following question: "Does my application have business value for the customer in a disconnected state?" In many cases it would be beneficial to end users to be able to continue their work in a disconnected state. However, such behavior is tremendously hard to implement.
Especially for your scenario, Microsoft developed the Disconnected Service Agent Application Block.
A:
I have not touched WinForms and .NET for years now, so I cannot give you any technical details, but here is the larger-picture answer:
First and foremost - do not bind your form data directly to a database.
Create a separate data/model layer that you bind your form widgets to.
From there on, you have several options available to you depending on the level of stability and availability you need to provide.
Probably one of the simplest solutions here would be to just enable/disable the parts of the application that need to interact with the database, based on the connection state.
The next level of protection would involve caching part of the data model locally and, while the database connection is down, using the local cache for viewing and disabling any functions that require an explicit database connection.
Probably the trickiest approach (and the one that may provide the most stable experience to the end user) is to replicate the database locally and use some sort of synchronization scheme to keep your copy of the database in sync with the remote db.
A:
We have this in our Main() method which traps all unhandled exceptions...
Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(UnhandledExceptionCatcher);
Thread.GetDomain().UnhandledException += new UnhandledExceptionEventHandler(Application_UnhandledException);
and then Application_UnhandledException and UnhandledExceptionCatcher display user friendly messages.
In addition the application then emails data such as the stack trace to the developers which can be very useful.
It depends on the app of course but for the kind of failures that you describe I would close the app down.
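For illustration, a sketch of what one of those handlers might look like (the message text is illustrative, and the mailing step is only indicated in a comment):
static void UnhandledExceptionCatcher(object sender, System.Threading.ThreadExceptionEventArgs e)
{
    // Show a friendly message instead of the default crash dialog.
    MessageBox.Show("Sorry - something went wrong. The details have been sent to the developers.");
    // e.Exception.ToString() contains the stack trace; pass it to your
    // mail-sending code here (e.g. System.Net.Mail.SmtpClient).
}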
A:
This may be a little too much support for the offline scenario, but have you considered the "Microsoft Sync Framework"? Included in the framework is the "Sync Services for ADO.NET 2.0", which allows your application to hit a local SQL Server CE instance. This can easily be synchronized with a central SQL Server via a variety of methods.
This framework handles the permanent offline scenario, and as I said, it may not be appropriate for your specific requirements, however it will give your application solid offline support.
A:
In our application, we give the user the option to connect to another server, e.g., if the database connection fails, a dialog box displays saying that the server is unavailable, and they could input another IP address to try.
A:
Use something like SQLite to store data offline until a connection is available.
Update: I believe SQLite is the back end for Google Gears, which from my understanding does what you're looking for in web apps... though I don't know if it can be used in a non-web context.
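A sketch of that local-queue idea, assuming the System.Data.SQLite ADO.NET provider and a hypothetical pending_work table; a background job would replay the rows against SQL Server once the connection returns:
using System.Data.SQLite;  // third-party ADO.NET provider for SQLite

string payload = "serialized work item";  // whatever you need to replay later
using (SQLiteConnection con = new SQLiteConnection("Data Source=offline.db"))
{
    con.Open();
    using (SQLiteCommand cmd = con.CreateCommand())
    {
        cmd.CommandText = "INSERT INTO pending_work (payload) VALUES (@p)";
        cmd.Parameters.AddWithValue("@p", payload);
        cmd.ExecuteNonQuery();
    }
}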
|
How do you manage your app when the database goes offline?
|
Take a .NET WinForms app... mix in a flaky wireless network connection, stir with a few users who like to simply pull the blue plug out occasionally, and for good measure add a systems admin who decides to reboot the SQL Server box without warning now and again, just to keep everyone on their toes.
What are the suggestions and strategies for handling this sort of scenario with respect to:
Error handling - for example, do you wrap every call to the server with a try/catch, or do you rely on some form of generic error handling to manage this? If so, what does it look like?
Application management - for example, do you disable the app and not allow users to interact with it until a connection is detected again? What would you do?
|
[
"Answer depends on type of your application. There are applications that can work offline - Microsoft Outlook for example. Such applications doesn't treat connectivity exceptions as critical, they can save your work locally and synchronize it later. Another applications such as online games will treat communication problem as critical exception and will quit if connection gets lost.\nAs of error handling, I think that you should control exceptions on all layers rather than relying on some general exception handling piece of code. Your business layer should understand what happened on lower layer (data access layer in our case) and respond correspondingly. Connection lost should not be treated as unexpected exception in my opinion. For good practices of exceptions management I recommend to take a look at Exception Handling Application Block.\nConcerning application behavior, you should answer yourself on the following question \"Does my application have business value for customer in disconnected state?\" In many cases it would be beneficial to end user to be able to continues their work in disconnected state. However such behavior tremendously hard to implement.\nEspecially for your scenario Microsoft developed Disconnected Service Agent Application Block\n",
"I have not touched WinForms and .NET for years now, so I can not give you any technical details, but there is the lager picture answer:\nFirst and foremost - do not bind your form data directly to a database.\nCreate a separate data/model layer that you bind your form widgets to.\nFrom there on, you have several options available to you depending on the level of stability and availability you need to provide.\nProbably one of the simplest solutions here would be to just enable/disable the parts of application that need to interact with a database based on the connection state.\nNext level of protection would include caching the part of the data model locally and while the database connection is down, using local cache for viewing and disabling any functions that require explicit database connection.\nProbably the trickiest thing (that may also provide the most stable experience to the end user) is to replicate the database locally and use some sort of synchronization schema to keep your copy of the database in sync with remote db.\n",
"We have this in our Main() method which traps all unhandled exceptions...\nApplication.ThreadException += new \nSystem.Threading.ThreadExceptionEventHandler(UnhandledExceptionCatcher);\n\nThread.GetDomain().UnhandledException += new \nUnhandledExceptionEventHandler(Application_UnhandledException);\n\nand then Application_UnhandledException and UnhandledExceptionCatcher display user friendly messages.\nIn addition the application then emails data such as the stack trace to the developers which can be very useful.\nIt depends on the app of course but for the kind of failures that you describe I would close the app down.\n",
"This may be a little too much support for the offline scenario, but have you considered the \"Microsoft Sync Framework\"? Included in the framework is the \"Sync Services for ADO.NET 2.0\", which allows your application to hit a local SQL Server CE instance. This can easily be synchronized with a central SQL Server via a variety of methods.\nThis framework handles the permanent offline scenario, and as I said, it may not be appropriate for your specific requirements, however it will give your application solid offline support.\n",
"In our application, we give the user the option to connect to another server, e.g., if the database connection fails, a dialog box displays saying that the server is unavailable, and they could input another IP address to try.\n",
"Use something like SQLite to store data offline until a connection is available.\nUpdate: I believe SQLite is the back end for Google Gears, which from my understanding does what you're looking for in web apps... though I don't know if it can be used in a non web context.\n"
] |
[
3,
2,
1,
1,
0,
0
] |
[] |
[] |
[
".net",
"error_handling",
"sql_server"
] |
stackoverflow_0000049426_.net_error_handling_sql_server.txt
|
Q:
Targeting with VS 2008 after installing SP1 of .NET 3.5
How do I target .NET 3.5 alone after installing SP1 in VS2008? This is because VS 2008 lists only .NET 3.5, .NET 3.0 & .NET 2.0 and does not specifically show .NET 3.5 SP1.
A:
I think that you cannot specify SP1, only different versions of the framework. It does make sense; otherwise you could have a lot of problems with an application specifically compiled for a given SP. You can also have problems with the current situation, but I think it's less of a headache.
A:
I think if you reference SP1 assemblies, it should automatically target SP1.
|
Targeting with VS 2008 after installing SP1 of .NET 3.5
|
How do I target .NET 3.5 alone after installing SP1 in VS2008? This is because VS 2008 lists only .NET 3.5, .NET 3.0 & .NET 2.0 and does not specifically show .NET 3.5 SP1.
|
[
"I think that you cannot specify SP1, only different versions of the framework. It does make sense, otherwise you could have a lot of problems with an application specifically compiled for a given SP. You can also have problems with the current situation, but I think it's less headache.\n",
"I think if you reference SP1 assemblies, it should automatically target SP1.\n"
] |
[
0,
0
] |
[] |
[] |
[
".net_3.5",
"installation",
"visual_studio_2008"
] |
stackoverflow_0000049469_.net_3.5_installation_visual_studio_2008.txt
|
Q:
Trigger UpdatePanel on mouse over (as tooltip)
I need to display additional information, like a tooltip, but it's a lot of info (about 500-600 characters) on the items in a RadioButtonList.
I now trigger the update on an UpdatePanel when the user selects an item in the RadioButtonList, using OnSelectedIndexChanged and AutoPostBack. What I would like to do is trigger this on mouse hover (i.e. the user holds the mouse a second or two over the item) rather than on mouse click, but I cannot find a way to do this.
A:
You could try setting an AsyncPostBackTrigger on the updatePanel to watch the value of a hidden field. Then in the javascript onMouseHover event, increment the hidden value. This would fire the AsyncPostBackTrigger, updating the UpdatePanel.
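One possible wiring of that (IDs are illustrative, and it assumes a server-side HoverCount_ValueChanged handler that fills the label): the hover script bumps the hidden value and forces a postback, and the trigger refreshes the panel:
<asp:UpdatePanel ID="TipPanel" runat="server">
  <ContentTemplate>
    <asp:Label ID="TipText" runat="server" />
  </ContentTemplate>
  <Triggers>
    <asp:AsyncPostBackTrigger ControlID="HoverCount" EventName="ValueChanged" />
  </Triggers>
</asp:UpdatePanel>
<asp:HiddenField ID="HoverCount" runat="server" Value="0" OnValueChanged="HoverCount_ValueChanged" />
<script type="text/javascript">
function onItemHover(field) {
    field.value = parseInt(field.value, 10) + 1;    // increment the watched value
    __doPostBack('<%= HoverCount.UniqueID %>', ''); // fire the async postback
}
</script>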
|
Trigger UpdatePanel on mouse over (as tooltip)
|
I need to display additional information, like a tooltip, but it's a lot of info (about 500-600 characters) on the items in a RadioButtonList.
I now trigger the update on an UpdatePanel when the user selects an item in the RadioButtonList, using OnSelectedIndexChanged and AutoPostBack. What I would like to do is trigger this on mouse hover (i.e. the user holds the mouse a second or two over the item) rather than on mouse click, but I cannot find a way to do this.
|
[
"You could try setting an AsyncPostBackTrigger on the updatePanel to watch the value of a hidden field. Then in the javascript onMouseHover event, increment the hidden value. This would fire the AsyncPostBackTrigger, updating the UpdatePanel.\n"
] |
[
1
] |
[] |
[] |
[
"asp.net",
"asp.net_ajax",
"javascript"
] |
stackoverflow_0000049431_asp.net_asp.net_ajax_javascript.txt
|
Q:
What are the preferred conventions in naming attributes, methods and classes in different languages?
Are the naming conventions similar in different languages? If not, what are the differences?
A:
Each language has a specific style. At least one.
Each project adopts a specific style. At least, they should. This can sometimes be a different style to the canonical style your language uses - probably based on the dev leaders preferences.
Which style to use?
If your language ships with a good standard library, try to adopt the conventions in that library.
If your language has a canonical book (The C Programming language, The Camel Book, Programming Ruby etc.) use that.
Sometimes the language designers (C#, Java spring to mind) actually write a bunch of guidelines. Use those, especially if the community adopts them too.
If you use multiple languages remember to stay flexible and adjust your preferred coding style to the language you are using - when coding in Python use a different style to coding in C# etc.
A:
As others have said, things vary a lot, but here's a rough overview of the most commonly used naming conventions in various languages:
lowercase, lowercase_with_underscores:
Commonly used for local variables and function names (typical C syntax).
UPPERCASE, UPPERCASE_WITH_UNDERSCORES:
Commonly used for constants and variables that never change. Some (older) languages like BASIC also have a convention for using all upper case for all variable names.
CamelCase, javaCamelCase:
Typically used for function names and variable names. Some use it only for functions and combine it with lowercase or lowercase_with_underscores for variables. When javaCamelCase is used, it's typically used both for functions and variables.
This syntax is also quite common for external APIs, since this is how the Win32 and Java APIs do it. (Even if a library uses a different convention internally they typically export with the (java)CamelCase syntax for function names.)
prefix_CamelCase, prefix_lowercase, prefix_lowercase_with_underscores:
Commonly used in languages that don't support namespaces (i.e. C). The prefix will usually denote the library or module to which the function or variable belongs. Usually reserved to global variables and global functions. Prefix can also be in UPPERCASE. Some conventions use lowercase prefix for internal functions and variables and UPPERCASE prefix for exported ones.
There are of course many other ways to name things, but most conventions are based on one of the ones mentioned above or a variety on those.
BTW: I forgot to mention Hungarian notation on purpose.
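For illustration, the same kinds of names rendered in a few of the styles above (all hypothetical):
#define MAX_RETRIES 5                      /* UPPERCASE_WITH_UNDERSCORES: constant */
static int retry_count = 0;                /* lowercase_with_underscores: variable */
int ParseHeader(const char *buf);          /* CamelCase: function */
int netlib_ParseHeader(const char *buf);   /* prefix_CamelCase: exported C function */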
A:
Of course there are some common guidelines, but there are also differences due to differences in language syntax and design.
For .NET (C#, VB, etc.) I would recommend the following resources:
Framework Design Guidelines - the definitive book on .NET coding guidelines, including naming conventions
Naming Guidelines - guidelines from Microsoft
General Naming Conventions - another set of MS guidelines (C#, C++, VB)
A:
G'day,
One of the best recommendations I can make is to read the relevant section(s) of Steve McConnell's Code Complete (Amazon Link). He has an excellent discussion on naming techniques.
HTH
cheers,
Rob
A:
I think that most naming conventions vary by developer. For example, I name variables like multiwordVarName; however, some of the devs I have worked with used something like multiword_var_name or multiwordvarname or aj5g54ag or... I think it really depends on your preference.
A:
Years ago a wise old programmer taught me the evils of Hungarian notation; this was on a real legacy system. Microsoft adopted it somewhat in the Windows SDK, and later in MFC. It was designed around loosely typed languages like C, not for strongly typed languages like C++. At the time I was programming Windows 3.0 using Borland's Turbo Pascal 1.0 for Windows, which later became Delphi.
Anyway, long story short, at that time the team I was working on developed our own standard - very simple and applicable to almost all languages - based on simple prefixes:
a - argument
l - local
m - member
g - global
The emphasis here is on scope; rely on the compiler to check type. All you need care about is scope - where the data lives. This has many advantages over nasty old Hungarian notation in that if you change the type of something via refactoring, you don't have to search and replace all instances of it.
Nearly 16 years later I still promote the use of this practice, and have found it applicable to almost every language I have developed in.
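For illustration, the prefixes applied in a small (hypothetical) C# class:
class Connection
{
    static int gOpenCount;          // g - global (shared) state
    int mTimeout;                   // m - member

    int Open(int aPort)             // a - argument
    {
        int lAttempts = 0;          // l - local
        gOpenCount++;
        return lAttempts + aPort + mTimeout;
    }
}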
|
What are the preferred conventions in naming attributes, methods and classes in different languages?
|
Are the naming conventions similar in different languages? If not, what are the differences?
|
[
"Each language has a specific style. At least one.\nEach project adopts a specific style. At least, they should. This can sometimes be a different style to the canonical style your language uses - probably based on the dev leaders preferences.\nWhich style to use? \nIf your language ships with a good standard library, try to adopt the conventions in that library.\nIf your language has a canonical book (The C Programming language, The Camel Book, Programming Ruby etc.) use that.\nSometimes the language designers (C#, Java spring to mind) actually write a bunch of guidelines. Use those, especially if the community adopts them too.\nIf you use multiple languages remember to stay flexible and adjust your preferred coding style to the language you are using - when coding in Python use a different style to coding in C# etc.\n",
"As others have said, things vary a lot, but here's a rough overview of the most commonly used naming conventions in various languages:\nlowercase, lowercase_with_underscores:\nCommonly used for local variables and function names (typical C syntax).\nUPPERCASE, UPPERCASE_WITH_UNDERSCORES:\nCommonly used for constants and variables that never change. Some (older) languages like BASIC also have a convention for using all upper case for all variable names.\nCamelCase, javaCamelCase:\nTypically used for function names and variable names. Some use it only for functions and combine it with lowercase or lowercase_with_underscores for variables. When javaCamelCase is used, it's typically used both for functions and variables.\nThis syntax is also quite common for external APIs, since this is how the Win32 and Java APIs do it. (Even if a library uses a different convention internally they typically export with the (java)CamelCase syntax for function names.)\nprefix_CamelCase, prefix_lowercase, prefix_lowercase_with_underscores:\nCommonly used in languages that don't support namespaces (i.e. C). The prefix will usually denote the library or module to which the function or variable belongs. Usually reserved to global variables and global functions. Prefix can also be in UPPERCASE. Some conventions use lowercase prefix for internal functions and variables and UPPERCASE prefix for exported ones.\nThere are of course many other ways to name things, but most conventions are based on one of the ones mentioned above or a variety on those.\nBTW: I forgot to mention Hungarian notation on purpose.\n",
"Of course there are some common guidelines but there are also differences due to difference in language syntax\\design.\nFor .NET (C#, VB, etc) I would recommend following resource:\n\nFramework Design Guidelines -\ndefinitive book on .NET coding\nguidelines including naming\nconventions\nNaming Guidelines - guidelines from Microsoft\nGeneral Naming Conventions - another set of MS guidelines (C#, C++, VB)\n\n",
"G'day,\nOne of the best recommendations I can make is to read the relevant section(s) of Steve McConnell's Code Complete (Amazon Link). He has an excellent discussion on naming techniques.\nHTH\ncheers,\nRob\n",
"I think that most naming conventions will vary but the developer, for example I name variables like: mulitwordVarName, however some of the dev I have worked with used something like mulitword_var_name or multiwordvarname or aj5g54ag or... I think it really depends on your preference.\n",
"Years ago an wise old programmer taught me the evils of Hungarian notation, this was a real legacy system, Microsoft adopted it some what in the Windows SDK, and later in MFC. It was designed around loose typed languages like C, and not for strong typed languages like C++. At the time I was programming Windows 3.0 using Borland's Turbo Pascal 1.0 for Windows, which later became Delphi. \nAnyway long story short at this time the team I was working on developed our own standards very simple and applicable to almost all languages, based on simple prefixes -\n\na - argument \nl - local \nm - member \ng - global\n\nThe emphasis here is on scope, rely on the compiler to check type, all you need care about is scope, where the data lives. This has many advantages over nasty old Hungarian notation in that if you change the type of something via refactoring you don't have to search and replace all instances of it.\nNearly 16 years later I still promote the use of this practice, and have found it applicable to almost every language I have developed in. \n"
] |
[
4,
2,
1,
1,
1,
0
] |
[] |
[] |
[
"naming",
"programming_languages"
] |
stackoverflow_0000049382_naming_programming_languages.txt
|
Q:
How to pass enumerated values to a web service
My dilemma is, basically, how to share an enumeration between two applications.
The users upload documents through a front-end application that is on the web. This application calls a web service of the back-end application and passes the document to it. The back-end app saves the document and inserts a row in the Document table.
The document type (7 possible document types: Invoice, Contract etc.) is passed as a parameter to the web service's UploadDocument method. The question is, what should the type (and possible values) of this parameter be?
Since you need to hardcode these values in both applications, I think it is O.K. to use a descriptive string (Invoice, Contract, WorkOrder, SignedWorkOrder).
Is it maybe a better approach to create a DocumentTypes enumeration in the first application, and to reproduce it also in the second application, and then pass the corresponding integer value to the web service between them?
A:
I'd suggest against passing an integer between them, simply for purposes of readability and debugging. Say you're going through your logs and you see a bunch of 500 errors for DocumentType=4. Now you've got to go look up which DocumentType is 4. Or if one of the applications refers to a number that doesn't exist in the other, perhaps due to mismatched versions.
It's a bit more code, and it rubs the static typing part of the brain a bit raw, but in protocols on top of HTTP the received wisdom is to side with legible strings over opaque enumerations.
A:
I would still use enumeration internally but would expect consumers to pass me only the name, not the numeric value itself.
just some silly example to illustrate:
public enum DocumentType
{
Invoice,
Contract,
WorkOrder,
SignedWorkOrder
}
[WebMethod]
public void UploadDocument(string type, byte[] data)
{
DocumentType docType = (DocumentType)Enum.Parse(typeof(DocumentType), type);
}
A:
I can only speak about .net, but if you have an ASP.net Webservice, you should be able to add an enumeration directly to it.
When you then use the "Add Web Reference" in your Client Application, the resulting Class should include that enum
But this is off the top of my head; I'm pretty sure I've done it in the past, but I can't say for sure.
A:
In .NET, enumeration values are (by default) serialized into xml with the name. For instances where you can have multiple values (flags), then it puts a space between the values. This works because the enumeration doesn't contain spaces, so you can get the value again by splitting the string (ie. "Invoice Contract SignedWorkOrder", using lubos's example).
You can control the serialization of values of in asp.net web services using the XmlEnumAttribute, or using the EnumMember attribute when using WCF.
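A short sketch of both attributes (the serialized names are illustrative):
using System.Runtime.Serialization;
using System.Xml.Serialization;

public enum DocumentType
{
    [XmlEnum("invoice")]   // name emitted by XmlSerializer (ASMX web services)
    Invoice,
    [XmlEnum("contract")]
    Contract
}

[DataContract]
public enum WcfDocumentType
{
    [EnumMember(Value = "invoice")]   // name emitted by the DataContractSerializer (WCF)
    Invoice,
    [EnumMember(Value = "contract")]
    Contract
}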
A:
If you are consuming your Web service from a .NET page/application, you should be able to access the enumeration after you add your Web reference to the project that is consuming the service.
A:
If you are not working with .NET to .NET SOAP, you can still define an enumerator provided both endpoints are using WSDL.
<s:simpleType name="MyEnum">
<s:restriction base="s:string">
<s:enumeration value="Wow"/>
<s:enumeration value="This"/>
<s:enumeration value="Is"/>
<s:enumeration value="Really"/>
<s:enumeration value="Simple"/>
</s:restriction>
</s:simpleType>
It's up to the WSDL -> proxy generator tool to parse that into an enum equivalent in the client language.
A:
There are some fairly good reasons for not using enums on an interface boundary like that. Consider Dare's post on the subject.
A:
I've noticed that when using "Add Service Reference" as opposed to "Add Web Reference" from VS.net, the actual enum values come across as well as the enum names. This is really annoying as I need to support both 2.0 and 3.5 clients. I end up having to go into the 2.0 generated web service proxy code and manually adding the enum values every time I make a change!
|
How to pass enumerated values to a web service
|
My dilemma is, basically, how to share an enumeration between two applications.
The users upload documents through a front-end application that is on the web. This application calls a web service of the back-end application and passes the document to it. The back-end app saves the document and inserts a row in the Document table.
The document type (7 possible document types: Invoice, Contract etc.) is passed as a parameter to the web service's UploadDocument method. The question is, what should the type (and possible values) of this parameter be?
Since you need to hardcode these values in both applications, I think it is O.K. to use a descriptive string (Invoice, Contract, WorkOrder, SignedWorkOrder).
Is it maybe a better approach to create a DocumentTypes enumeration in the first application, and to reproduce it also in the second application, and then pass the corresponding integer value to the web service between them?
|
[
"I'd suggest against passing an integer between them, simply for purposes of readability and debugging. Say you're going through your logs and you see a bunch of 500 errors for DocumentType=4. Now you've got to go look up which DocumentType is 4. Or if one of the applications refers to a number that doesn't exist in the other, perhaps due to mismatched versions.\nIt's a bit more code, and it rubs the static typing part of the brain a bit raw, but in protocols on top of HTTP the received wisdom is to side with legible strings over opaque enumerations.\n",
"I would still use enumeration internally but would expect consumers to pass me only the name, not the numeric value itself.\njust some silly example to illustrate:\npublic enum DocumentType\n{\n Invoice,\n Contract,\n WorkOrder,\n SignedWorkOrder\n}\n\n[WebMethod]\npublic void UploadDocument(string type, byte[] data)\n{\n DocumentType docType = (DocumentType)Enum.Parse(typeof(DocumentType), type);\n}\n\n",
"I can only speak about .net, but if you have an ASP.net Webservice, you should be able to add an enumeration directly to it.\nWhen you then use the \"Add Web Reference\" in your Client Application, the resulting Class should include that enum\nBut this is from the top of my head, i'm pretty sure i've done it in the past, but I can't say for sure.\n",
"In .NET, enumeration values are (by default) serialized into xml with the name. For instances where you can have multiple values (flags), then it puts a space between the values. This works because the enumeration doesn't contain spaces, so you can get the value again by splitting the string (ie. \"Invoice Contract SignedWorkOrder\", using lubos's example).\nYou can control the serialization of values of in asp.net web services using the XmlEnumAttribute, or using the EnumMember attribute when using WCF.\n",
"If you are consuming your Web service from a .NET page/application, you should be able to access the enumeration after you add your Web reference to the project that is consuming the service.\n",
"If you are not working with .NET to .NET SOAP, you can still define an enumerator provided both endpoints are using WSDL.\n<s:simpleType name=\"MyEnum\"> \n <s:restriction base=\"s:string\">\n <s:enumeration value=\"Wow\"/>\n <s:enumeration value=\"This\"/>\n <s:enumeration value=\"Is\"/>\n <s:enumeration value=\"Really\"/>\n <s:enumeration value=\"Simple\"/>\n </s:restriction>\n</s:simpleType>\n\nIts up to the WSDL -> Proxy generator tool to parse that into a enum equivalent in the client language.\n",
"There are some fairly good reasons for not using enums on an interface boundary like that. Consider Dare's post on the subject.\n",
"I've noticed that when using \"Add Service Reference\" as opposed to \"Add Web Reference\" from VS.net, the actual enum values come across as well as the enum names. This is really annoying as I need to support both 2.0 and 3.5 clients. I end up having to go into the 2.0 generated web service proxy code and manually adding the enum values every time I make a change!\n"
] |
[
6,
5,
3,
3,
3,
3,
3,
0
] |
[] |
[] |
[
"application_integration",
"web_services"
] |
stackoverflow_0000001709_application_integration_web_services.txt
|
Q:
Creating batch jobs in PowerShell
Imagine a DOS style .cmd file which is used to launch interdependent windowed applications in the right order.
Example:
1) Launch a server application by calling an exe with parameters.
2) Wait for the server to become initialized (or a fixed amount of time).
3) Launch client application by calling an exe with parameters.
What is the simplest way of accomplishing this kind of batch job in PowerShell?
A:
Remember that PowerShell can access .Net objects. The Start-Sleep as suggested by Blair Conrad can be replaced by a call to WaitForInputIdle of the server process so you know when the server is ready before starting the client.
$sp = get-process server-application
$sp.WaitForInputIdle()
You could also use Process.Start to start the process and have it return the exact Process. Then you don't need the get-process.
$sp = [diagnostics.process]::start("server-application", "params")
$sp.WaitForInputIdle()
$cp = [diagnostics.process]::start("client-application", "params")
A:
@Lars Truijens suggested
Remember that PowerShell can access .Net objects. The Start-Sleep as suggested by Blair Conrad can be replaced by a call to WaitForInputIdle of the server process so you know when the server is ready before starting the client.
This is more elegant than sleeping for a fixed (or supplied via parameter) amount of time. However, WaitForInputIdle applies only to processes with a user interface and, therefore, a message loop, so this may not work, depending on the characteristics of launch-server-application. However, as Lars pointed out to me, the question referred to a windowed application (which I missed when I read the question), so his solution is probably best.
A:
To wait 10 seconds between launching the applications, try
launch-server-application serverparam1 serverparam2 ...
Start-Sleep -s 10
launch-client-application clientparam1 clientparam2 clientparam3 ...
If you want to create a script and have the arguments passed in, create a file called runlinkedapps.ps1 (or whatever) with these contents:
launch-server-application $args[0] $args[1]
Start-Sleep -s 10
launch-client-application $args[2] $args[3] $args[4]
Or however you choose to distribute the server and client parameters on the line you use to run runlinkedapps.ps1. If you want, you could even pass in the delay here, instead of hardcoding 10.
Remember, your .ps1 file need to be on your Path, or you'll have to specify its location when you run it. (Oh, and I've assumed that launch-server-application and launch-client-application are on your Path - if not, you'll need to specify the full path to them as well.)
|
Creating batch jobs in PowerShell
|
Imagine a DOS style .cmd file which is used to launch interdependent windowed applications in the right order.
Example:
1) Launch a server application by calling an exe with parameters.
2) Wait for the server to become initialized (or a fixed amount of time).
3) Launch client application by calling an exe with parameters.
What is the simplest way of accomplishing this kind of batch job in PowerShell?
|
[
"Remember that PowerShell can access .Net objects. The Start-Sleep as suggested by Blair Conrad can be replaced by a call to WaitForInputIdle of the server process so you know when the server is ready before starting the client.\n$sp = get-process server-application\n$sp.WaitForInputIdle()\n\nYou could also use Process.Start to start the process and have it return the exact Process. Then you don't need the get-process.\n$sp = [diagnostics.process]::start(\"server-application\", \"params\")\n$sp.WaitForInputIdle()\n$cp = [diagnostics.process]::start(\"client-application\", \"params\")\n\n",
"@Lars Truijens suggested\n\nRemember that PowerShell can access\n .Net objects. The Start-Sleep as\n suggested by Blair Conrad can be\n replaced by a call to WaitForInputIdle\n of the server process so you know when\n the server is ready before starting\n the client.\n\nThis is more elegant than sleeping for a fixed (or supplied via parameter) amount of time. However, \nWaitForInputIdle \n\napplies only to processes with a user\n interface and, therefore, a message\n loop.\n\nso this may not work, depending on the characteristics of launch-server-application. However, as Lars pointed out to me, the question referred to a windowed application (which I missed when I read the question), so his solution is probably best.\n",
"To wait 10 seconds between launching the applications, try\nlaunch-server-application serverparam1 serverparam2 ...\nStart-Sleep -s 10\nlaunch-client-application clientparam1 clientparam2 clientparam3 ...\n\nIf you want to create a script and have the arguments passed in, create a file called runlinkedapps.ps1 (or whatever) with these contents:\nlaunch-server-application $args[0] $args[1]\nStart-Sleep -s 10\nlaunch-client-application $args[2] $args[3] $args[4]\n\nOr however you choose to distribute the server and client parameters on the line you use to run runlinkedapps.ps1. If you want, you could even pass in the delay here, instead of hardcoding 10.\nRemember, your .ps1 file need to be on your Path, or you'll have to specify its location when you run it. (Oh, and I've assumed that launch-server-application and launch-client-application are on your Path - if not, you'll need to specify the full path to them as well.)\n"
] |
[
5,
1,
0
] |
[] |
[] |
[
"batch_file",
"powershell"
] |
stackoverflow_0000049402_batch_file_powershell.txt
|
Q:
Inconsistency between MS SQL 2k and 2k5 with columns as function arguments
I'm having trouble getting the following to work in SQL Server 2k, but it works in 2k5:
--works in 2k5, not in 2k
create view foo as
SELECT usertable.legacyCSVVarcharCol as testvar
FROM usertable
WHERE rsrcID in
( select val
from
dbo.fnSplitStringToInt(usertable.legacyCSVVarcharCol, default)
)
--error message:
Msg 170, Level 15, State 1, Procedure foo, Line 4
Line 25: Incorrect syntax near '.'.
So, legacyCSVVarcharCol is a column containing comma-separated lists of INTs. I realize that this is a huge WTF, but this is legacy code, and there's nothing that can be done about the schema right now. Passing "testvar" as the argument to the function doesn't work in 2k either. In fact, it results in a slightly different (and even weirder error):
Msg 155, Level 15, State 1, Line 8
'testvar' is not a recognized OPTIMIZER LOCK HINTS option.
Passing a hard-coded string as the argument to fnSplitStringToInt works in both 2k and 2k5.
Does anyone know why this doesn't work in 2k? Is this perhaps a known bug in the query planner? Any suggestions for how to make it work? Again, I realize that the real answer is "don't store CSV lists in your DB!", but alas, that's beyond my control.
Some sample data, if it helps:
INSERT INTO usertable (legacyCSVVarcharCol) values ('1,2,3');
INSERT INTO usertable (legacyCSVVarcharCol) values ('11,13,42');
Note that the data in the table does not seem to matter since this is a syntax error, and it occurs even if usertable is completely empty.
EDIT: Realizing that perhaps the initial example was unclear, here are two examples, one of which works and one of which does not, which should highlight the problem that's occurring:
--fails in sql2000, works in 2005
SELECT t1.*
FROM usertable t1
WHERE 1 in
(Select val
from
fnSplitStringToInt(t1.legacyCSVVarcharCol, ',')
)
--works everywhere:
SELECT t1.*
FROM usertable t1
WHERE 1 in
( Select val
from
fnSplitStringToInt('1,4,543,56578', ',')
)
Note that the only difference is the first argument to fnSplitStringToInt is a column in the case that fails in 2k and a literal string in the case that succeeds in both.
A:
Passing column-values to a table-valued user-defined function is not supported in SQL Server 2000, you can only use constants, so the following (simpler version) would also fail:
SELECT *, (SELECT TOP 1 val FROM dbo.fnSplitStringToInt(usertable.legacyCSVVarcharCol, ','))
FROM usertable
It will work on SQL Server 2005, though, as you have found out.
A:
I don't think function parameters can have default values in SS2K.
What happens when you run this SQL in SS2K?
select val
from dbo.fnSplitStringToInt('1,2,3', default)
|
Inconsistency between MS SQL 2k and 2k5 with columns as function arguments
|
I'm having trouble getting the following to work in SQL Server 2k, but it works in 2k5:
--works in 2k5, not in 2k
create view foo as
SELECT usertable.legacyCSVVarcharCol as testvar
FROM usertable
WHERE rsrcID in
( select val
from
dbo.fnSplitStringToInt(usertable.legacyCSVVarcharCol, default)
)
--error message:
Msg 170, Level 15, State 1, Procedure foo, Line 4
Line 25: Incorrect syntax near '.'.
So, legacyCSVVarcharCol is a column containing comma-separated lists of INTs. I realize that this is a huge WTF, but this is legacy code, and there's nothing that can be done about the schema right now. Passing "testvar" as the argument to the function doesn't work in 2k either. In fact, it results in a slightly different (and even weirder error):
Msg 155, Level 15, State 1, Line 8
'testvar' is not a recognized OPTIMIZER LOCK HINTS option.
Passing a hard-coded string as the argument to fnSplitStringToInt works in both 2k and 2k5.
Does anyone know why this doesn't work in 2k? Is this perhaps a known bug in the query planner? Any suggestions for how to make it work? Again, I realize that the real answer is "don't store CSV lists in your DB!", but alas, that's beyond my control.
Some sample data, if it helps:
INSERT INTO usertable (legacyCSVVarcharCol) values ('1,2,3');
INSERT INTO usertable (legacyCSVVarcharCol) values ('11,13,42');
Note that the data in the table does not seem to matter since this is a syntax error, and it occurs even if usertable is completely empty.
EDIT: Realizing that perhaps the initial example was unclear, here are two examples, one of which works and one of which does not, which should highlight the problem that's occurring:
--fails in sql2000, works in 2005
SELECT t1.*
FROM usertable t1
WHERE 1 in
(Select val
from
fnSplitStringToInt(t1.legacyCSVVarcharCol, ',')
)
--works everywhere:
SELECT t1.*
FROM usertable t1
WHERE 1 in
( Select val
from
fnSplitStringToInt('1,4,543,56578', ',')
)
Note that the only difference is the first argument to fnSplitStringToInt is a column in the case that fails in 2k and a literal string in the case that succeeds in both.
|
[
"Passing column-values to a table-valued user-defined function is not supported in SQL Server 2000, you can only use constants, so the following (simpler version) would also fail:\nSELECT *, (SELECT TOP 1 val FROM dbo.fnSplitStringToInt(usertable.legacyCSVVarcharCol, ','))\nFROM usertable\n\nIt will work on SQL Server 2005, though, as you have found out.\n",
"I don't think functions can have default values in functions in SS2K.\nWhat happens when you run this SQL in SS2K?\nselect val \nfrom dbo.fnSplitStringToInt('1,2,3', default)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"sql_server"
] |
stackoverflow_0000026478_sql_server.txt
|
Q:
Team System notification of unassociated checkins
How can I be notified when someone checks a file into Team System and doesn't associate it with a work item?
A:
Use the very cool Team Foundation Server event subscription tool. You can find the tool here: http://www.codeplex.com/tfseventsubscription. Once installed, set up a subscription with the following parameters:
XPath: PolicyOverrideComment <> ''
Event: CheckinEvent
|
Team System notification of unassociated checkins
|
How can I be notified when someone checks a file into Team System and doesn't associate it with a work item?
|
[
"Use the very cool team foundation server event subscription tool. You can find the tool here: http://www.codeplex.com/tfseventsubscription. Once installed, setup a subscription with the following parameters: \n\nXPath: PolicyOverrideComment <> ''\nEvent: CheckinEvent\n\n"
] |
[
1
] |
[] |
[] |
[
"tfs"
] |
stackoverflow_0000049616_tfs.txt
|
Q:
Has anyone used NUnitLite with any success?
I've recently started work on the Compact Framework and I was wondering if anyone had some recommendations for unit testing beyond what's in VS 2008. MSTest is ok, but debugging the tests is a nightmare and the test runner is so slow.
I see that NUnitLite on codeplex is an option, but it doesn't look very active; it's also in the roadmap for NUnit 3.0, but who knows when that will come out. Has anyone had any success with it?
A:
What we've done that really improves our efficiency and quality is to multi-target our mobile application. That is to say, with a little bit of creativity, a few conditional compile tags, and custom project configurations, it is possible to build a version of your mobile application that also runs on the desktop.
If you put all your business logic you need tested in a separate project/assembly then this layer can be very effectively tested using any of the desktop tools you are already familiar with.
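A sketch of the conditional-compilation part (DESKTOP is a hypothetical symbol defined only in the desktop project configuration):
public static class DeviceServices
{
#if DESKTOP
    // Stub used when the business-logic assembly is built for desktop test runners.
    public static string GetDeviceId()
    {
        return "DESKTOP-TEST";
    }
#else
    // The real Compact Framework implementation (often a P/Invoke into a
    // mobile-only dll) would go here; omitted in this sketch.
    public static string GetDeviceId()
    {
        throw new System.NotImplementedException("CF-specific code path");
    }
#endif
}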
A:
We use NUnitLite, although I think we did have to add some code to it in order for it to work.
One of the problems we found is that if you are using parts of the platform that only exist in the CF, then you can only run those tests in NUnitLite on an emulator or a Windows Mobile device, which makes it hard to run the tests as part of an integrated build process. We got round this by adding a new test attribute allowing you to disable the tests that would only run on the CF (typically these would be p/invoking out to some Windows Mobile-only dll).
|
Has anyone used NUnitLite with any success?
|
I've recently started work on the Compact Framework and I was wondering if anyone had some recommendations for unit testing beyond what's in VS 2008. MSTest is ok, but debugging the tests is a nightmare and the test runner is so slow.
I see that NUnitLite on codeplex is an option, but it doesn't look very active; it's also in the roadmap for NUnit 3.0, but who knows when that will come out. Has anyone had any success with it?
|
[
"What we've done that really improves our efficiency and quality is to multi target our mobile application. That is to say with a very little bit of creativity and a few conditional compile tags and custom project configurations it is possible to build a version of your mobile application that also runs on the desktop. \nIf you put all your business logic you need tested in a separate project/assembly then this layer can be very effectively tested using any of the desktop tools you are already familiar with.\n",
"We use NUnitLite, although I think we did have had to add some code to it in order for it to work. \nOne of the problems we found is that if you are using parts of the platform that only exist in CF, then you can only run those tests in NUnitLite on an emulator or Windows Mobile device, which makes it hard to run the tests as part of an integrated build process. We got round this by added a new test attribute allowing you to disable the tests what would only run on the CF (typically these would be p/invoking out to some windows mobile only dll).\n"
] |
[
3,
0
] |
[] |
[] |
[
"compact_framework",
"nunit",
"unit_testing",
"windows_mobile"
] |
stackoverflow_0000014497_compact_framework_nunit_unit_testing_windows_mobile.txt
|
Q:
Different sizeof results
Why does n not equal 8 in the following function?
void foo(char cvalue[8])
{
int n = sizeof (cvalue);
}
But n does equal 8 in this version of the function:
void bar()
{
char cvalue[8];
int n = sizeof (cvalue);
}
A:
Because you can't pass entire arrays as function parameters in C. You're actually passing a pointer to it; the brackets are syntactic sugar. There are no guarantees the array you're pointing to has size 8, since you could pass this function any character pointer you want.
// These all do the same thing
void foo(char cvalue[8])
void foo(char cvalue[])
void foo(char *cvalue)
A:
C and C++ arrays are not first class objects; you cannot pass arrays to functions, they always decay to pointers.
You can, however, pass pointers and references to arrays. This prevents the array bounds from decaying. So this is legal:
template<typename T, size_t N>
void foo(const T(&arr)[N])
{
int n = sizeof(arr);
}
A:
In the first example, cvalue as a passed parameter is really just a pointer to a character array, and when you take sizeof() of it, you get the size of the pointer. In the second case, where you've declared it as a local variable, you get the size of the entire array.
A:
The size of the parameter on 32-bit systems will be 4 and on 64-bit systems compiled with -m64 will be 8. This is because arrays are passed as pointers in functions. The pointer is merely a memory address.
|
Different sizeof results
|
Why does n not equal 8 in the following function?
void foo(char cvalue[8])
{
int n = sizeof (cvalue);
}
But n does equal 8 in this version of the function:
void bar()
{
char cvalue[8];
int n = sizeof (cvalue);
}
|
[
"Because you can't pass entire arrays as function parameters in C. You're actually passing a pointer to it; the brackets are syntactic sugar. There are no guarantees the array you're pointing to has size 8, since you could pass this function any character pointer you want.\n// These all do the same thing\nvoid foo(char cvalue[8])\nvoid foo(char cvalue[])\nvoid foo(char *cvalue)\n\n",
"C and C++ arrays are not first class objects; you cannot pass arrays to functions, they always decay to pointers.\nYou can, however, pass pointers and references to arrays. This prevents the array bounds from decaying. So this is legal:\ntemplate<typename T, size_t N>\nvoid foo(const T(&arr)[N])\n{\n int n = sizeof(arr);\n}\n\n",
"In the first example, cvalue as passed parameter is in really just a pointer to a character array and when you take the sizeof() of it, you get the size of the pointer. In the second case, where you've declared it as a local variable, you get the size of the the entire array.\n",
"The size of the parameter on 32-bit systems will be 4 and on 64-bit systems compiled with -m64 will be 8. This is because arrays are passed as pointers in functions. The pointer is merely a memory address.\n"
] |
[
47,
14,
1,
0
] |
[] |
[] |
[
"c",
"c++",
"sizeof"
] |
stackoverflow_0000049046_c_c++_sizeof.txt
|
Q:
Nant and maintain directory structure
How do you use the nant <copy> command and maintain the directory structure? This is what I am doing, but it is copying all the files to a single directory.
<copy todir="..\out">
<fileset>
<includes name="..\src\PrecompiledWeb\**\*" />
</fileset>
</copy>
A:
Try:
<fileset basedir="../src/PrecompiledWeb"><includes name="**/*" /></fileset>
|
Nant and maintain directory structure
|
How do you use the nant <copy> command and maintain the directory structure? This is what I am doing, but it is copying all the files to a single directory.
<copy todir="..\out">
<fileset>
<includes name="..\src\PrecompiledWeb\**\*" />
</fileset>
</copy>
|
[
"Try:\n<fileset baseDir=\"../src/PrecompiledWeb\"><includes name=\"**/*\" />\n\n"
] |
[
15
] |
[] |
[] |
[
"build_automation",
"nant"
] |
stackoverflow_0000049623_build_automation_nant.txt
|
Q:
How to read bound hover callback functions in jQuery
I used jQuery to set hover callbacks for elements on my page. I'm now writing a module which needs to temporarily set new hover behaviour for some elements. The new module has no access to the original code for the hover functions.
I want to store the old hover functions before I set new ones so I can restore them when finished with the temporary hover behaviour.
I think these can be stored using the jQuery.data() function:
//save old hover behavior (somehow)
$('#foo').data('oldhoverin',???)
$('#foo').data('oldhoverout',???);
//set new hover behavior
$('#foo').hover(newhoverin,newhoverout);
Do stuff with new hover behaviour...
//restore old hover behaviour
$('#foo').hover($('#foo').data('oldhoverin'),$('#foo').data('oldhoverout'));
But how do I get the currently registered hover functions from jQuery?
Shadow2531, I am trying to do this without modifying the code which originally registered the callbacks. Your suggestion would work fine otherwise. Thanks for the suggestion, and for helping clarify what I'm searching for. Maybe I have to go into the source of jquery and figure out how these callbacks are stored internally. Maybe I should change the question to "Is it possible to do this without modifying jquery?"
A:
Calling an event bind method (such as hover) does not delete old event handlers, only adds your new events, so your idea of 'restoring' the old event functions wouldn't work, as it wouldn't delete your events.
You can add your own events and later remove them without affecting any other events by using event namespacing: http://docs.jquery.com/Events_(Guide)#Namespacing_events
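In practice that means binding the temporary handlers under their own namespace and unbinding only that namespace when finished (a sketch; '.temp' is a made-up namespace name, and newhoverin/newhoverout are the handlers from the question):
$('#foo').bind('mouseenter.temp', newhoverin)
         .bind('mouseleave.temp', newhoverout);
// ... do stuff with the new hover behaviour ...
$('#foo').unbind('.temp'); // removes only the namespaced handlers
The originally registered hover functions are never removed, so there is nothing to save and restore.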
A:
Not sure if this will work, but you can try this:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Jquery - Get, change and restore hover handlers</title>
<script src="jquery.js"></script>
<script>
function setHover(obj, mouseenter, mouseleave) {
obj.data("_mouseenter", mouseenter);
obj.data("_mouseleave", mouseleave);
obj.hover(obj.data("_mouseenter"), obj.data("_mouseleave"));
}
function removeHover(obj) {
obj.unbind("mouseenter", obj.data("_mouseenter"));
obj.unbind("mouseleave", obj.data("_mouseleave"));
obj.data("_mouseenter", undefined);
obj.data("_mouseleave", undefined);
}
$(document).ready(function() {
var test = $("#test");
setHover(test, function(e) {
alert("original " + e.type);
}, function(e) {
alert("original " + e.type);
});
var saved_mouseenter = test.data("_mouseenter");
var saved_mouseleave = test.data("_mouseleave");
removeHover(test);
setHover(test, function() {
alert("zip");
}, function() {
alert('zam');
});
removeHover(test);
setHover(test, saved_mouseenter, saved_mouseleave);
});
</script>
</head>
<body>
<p><a id="test" href="">test</a></p>
</body>
</html>
If not, maybe it'll give you some ideas.
A:
I'm not sure if this is what you mean, but you can bind custom events and then trigger them.
http://docs.jquery.com/Events/bind
So add your hover event, script the functionality you need for that hover, then trigger your custom event.
A:
Maybe it would be easier to just hide the old element and create a clone with your event handlers attached? Then just swap back in the old element when you're done..
|
How to read bound hover callback functions in jQuery
|
I used jQuery to set hover callbacks for elements on my page. I'm now writing a module which needs to temporarily set new hover behaviour for some elements. The new module has no access to the original code for the hover functions.
I want to store the old hover functions before I set new ones so I can restore them when finished with the temporary hover behaviour.
I think these can be stored using the jQuery.data() function:
//save old hover behavior (somehow)
$('#foo').data('oldhoverin',???)
$('#foo').data('oldhoverout',???);
//set new hover behavior
$('#foo').hover(newhoverin,newhoverout);
Do stuff with new hover behaviour...
//restore old hover behaviour
$('#foo').hover($('#foo').data('oldhoverin'),$('#foo').data('oldhoverout'));
But how do I get the currently registered hover functions from jQuery?
Shadow2531, I am trying to do this without modifying the code which originally registered the callbacks. Your suggestion would work fine otherwise. Thanks for the suggestion, and for helping clarify what I'm searching for. Maybe I have to go into the source of jquery and figure out how these callbacks are stored internally. Maybe I should change the question to "Is it possible to do this without modifying jquery?"
|
[
"Calling an event bind method (such as hover) does not delete old event handlers, only adds your new events, so your idea of 'restoring' the old event functions wouldn't work, as it wouldn't delete your events.\nYou can add your own events, and then remove them without affecting any other events then use Event namespacing: http://docs.jquery.com/Events_(Guide)#Namespacing_events\n",
"Not sure if this will work, but you can try this:\n\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <meta charset=\"utf-8\">\n <title>Jquery - Get, change and restore hover handlers</title>\n <script src=\"jquery.js\"></script>\n <script>\n function setHover(obj, mouseenter, mouseleave) {\n obj.data(\"_mouseenter\", mouseenter);\n obj.data(\"_mouseleave\", mouseleave);\n obj.hover(obj.data(\"_mouseenter\"), obj.data(\"_mouseleave\"));\n }\n function removeHover(obj) {\n obj.unbind(\"mouseenter\", obj.data(\"_mouseenter\"));\n obj.unbind(\"mouseleave\", obj.data(\"_mouseleave\"));\n obj.data(\"_mouseenter\", undefined);\n obj.data(\"_mouseleave\", undefined);\n }\n $(document).ready(function() {\n var test = $(\"#test\");\n setHover(test, function(e) {\n alert(\"original \" + e.type);\n }, function(e) {\n alert(\"original \" + e.type);\n });\n var saved_mouseenter = test.data(\"_mouseenter\");\n var saved_mouseleave = test.data(\"_mouseleave\");\n removeHover(test);\n setHover(test, function() {\n alert(\"zip\");\n }, function() {\n alert('zam');\n });\n removeHover(test);\n setHover(test, saved_mouseenter, saved_mouseleave);\n });\n </script>\n </head>\n <body>\n <p><a id=\"test\" href=\"\">test</a></p>\n </body>\n</html>\n\nIf not, maybe it'll give you some ideas.\n",
"I'm not sure if this is what you mean, but you can bind custom events and then trigger them.\nhttp://docs.jquery.com/Events/bind\nSo add your hover event, script the functionality you need for that hover, then trigger your custom event.\n",
"Maybe it would be easier to just hide the old element and create a clone with your event handlers attached? Then just swap back in the old element when you're done..\n"
] |
[
4,
1,
0,
0
] |
[] |
[] |
[
"callback",
"javascript",
"jquery"
] |
stackoverflow_0000048931_callback_javascript_jquery.txt
|
Q:
How to limit result set size for arbitrary query in Ingres?
In Oracle, the number of rows returned in an arbitrary query can be limited by filtering on the "virtual" rownum column. Consider the following example, which will return, at most, 10 rows.
SELECT * FROM all_tables WHERE rownum <= 10
Is there a simple, generic way to do something similar in Ingres?
A:
Blatantly changing my answer. "Limit 10" works for MySql and others, Ingres uses
Select First 10 * from myTable
Ref
A:
select * from myTable limit 10 does not work.
Have discovered one possible solution:
TIDs are "tuple identifiers" or row addresses. The TID contains the
page number and the index of the offset to the row relative to the
page boundary. TIDs are presently implemented as 4-byte integers.
The TID uniquely identifies each row in a table. Every row has a
TID. The high-order 23 bits of the TID are the page number of the page
in which the row occurs. The TID can be addressed in SQL by the name
`tid.'
So you can limit the number of rows coming back using something like:
select * from SomeTable where tid < 2048
The method is somewhat inexact in the number of rows it returns. It's fine for my requirement though because I just want to limit rows coming back from a very large result set to speed up testing.
A:
Hey Craig. I'm sorry, I made a Ninja Edit.
No, Limit 10 does not work, I was mistaken in thinking it was standard SQL supported by everyone. Ingres uses (according to doc) "First" to solve the issue.
A:
Hey Ninja editor from Stockholm! No worries, have confirmed that "first X" works well and a much nicer solution than I came up with. Thankyou!
|
How to limit result set size for arbitrary query in Ingres?
|
In Oracle, the number of rows returned in an arbitrary query can be limited by filtering on the "virtual" rownum column. Consider the following example, which will return, at most, 10 rows.
SELECT * FROM all_tables WHERE rownum <= 10
Is there a simple, generic way to do something similar in Ingres?
|
[
"Blatantly changing my answer. \"Limit 10\" works for MySql and others, Ingres uses\nSelect First 10 * from myTable\n\nRef\n",
"select * from myTable limit 10 does not work.\nHave discovered one possible solution:\n\n TIDs are \"tuple identifiers\" or row addresses. The TID contains the\n page number and the index of the offset to the row relative to the\n page boundary. TIDs are presently implemented as 4-byte integers.\n The TID uniquely identifies each row in a table. Every row has a\n TID. The high-order 23 bits of the TID are the page number of the page\n in which the row occurs. The TID can be addressed in SQL by the name \n `tid.'\n\nSo you can limit the number of rows coming back using something like:\nselect * from SomeTable where tid < 2048\nThe method is somewhat inexact in the number of rows it returns. It's fine for my requirement though because I just want to limit rows coming back from a very large result set to speed up testing. \n",
"Hey Craig. I'm sorry, I made a Ninja Edit.\nNo, Limit 10 does not work, I was mistaken in thinking it was standard SQL supported by everyone. Ingres uses (according to doc) \"First\" to solve the issue.\n",
"Hey Ninja editor from Stockholm! No worries, have confirmed that \"first X\" works well and a much nicer solution than I came up with. Thankyou!\n"
] |
[
6,
2,
0,
0
] |
[] |
[] |
[
"ingres",
"oracle",
"sql"
] |
stackoverflow_0000049602_ingres_oracle_sql.txt
|
Q:
Templates In VB
I've got some VB code (actually VBA) which is basically the same except for the type on which it operates. Since I think the DRY principle is a good guiding principle for software development, I want to write one routine for all of the different types which need to be operated on. For example if I had two snippets of code like these:
Dim i as Obj1
Set i = RoutineThatReturnsObj1()
i.property = newvalue
Dim i as Obj2
Set i = RoutineThatReturnsObj2()
i.property = newvalue
I'd like to have something like this to handle both instances:
Sub MyRoutine(o as ObjectType, r as RoutineToInitializeObject, newvalue as value)
Dim i as o
Set i = r
i.property = newvalue
End Sub
If I were using C++ I'd generate a template and say no more about it. But I'm using VBA. I'm fairly sure there's no capability like C++ templates in the VBA language definition but is there any other means by which I might achieve the same effect? I'm guessing the answer is no but I ask here because maybe there is some feature of VBA that I've missed.
A:
There's nothing in VB6 that will do that. If you update to Visual Studio Tools for Office with .Net you can use generics:
Function MyRoutine(Of O)(R As Delegate, newvalue As Object) As O
Dim i As O = CType(r.Method.Invoke(Nothing, Nothing), O)
'you need another parameter to tell it which property to use'
' and then use reflection to set the value'
i.property = newvalue
return i
End Function
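A call site might then look like the following (an untested sketch; it assumes RoutineThatReturnsObj1 from the question can be wrapped in a Func(Of Obj1) delegate):
Dim r As Func(Of Obj1) = AddressOf RoutineThatReturnsObj1
Dim i As Obj1 = MyRoutine(Of Obj1)(r, newvalue)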
|
Templates In VB
|
I've got some VB code (actually VBA) which is basically the same except for the type on which it operates. Since I think the DRY principle is a good guiding principle for software development, I want to write one routine for all of the different types which need to be operated on. For example if I had two snippets of code like these:
Dim i as Obj1
Set i = RoutineThatReturnsObj1()
i.property = newvalue
Dim i as Obj2
Set i = RoutineThatReturnsObj2()
i.property = newvalue
I'd like to have something like this to handle both instances:
Sub MyRoutine(o as ObjectType, r as RoutineToInitializeObject, newvalue as value)
Dim i as o
Set i = r
i.property = newvalue
End Sub
If I were using C++ I'd generate a template and say no more about it. But I'm using VBA. I'm fairly sure there's no capability like C++ templates in the VBA language definition but is there any other means by which I might achieve the same effect? I'm guessing the answer is no but I ask here because maybe there is some feature of VBA that I've missed.
|
[
"There's nothing in VB6 that will do that. If you update to Visual Studio Tools for Office with .Net you can use generics:\nFunction MyRoutine(Of O)(R As Delegate, newvalue As Object) As O\n Dim i As O = CType(r.Method.Invoke(Nothing, Nothing), O)\n\n 'you need another parameter to tell it which property to use'\n ' and then use reflection to set the value'\n i.property = newvalue \n return i\nEnd Function\n\n"
] |
[
1
] |
[] |
[] |
[
"templates",
"vba"
] |
stackoverflow_0000049718_templates_vba.txt
|
Q:
Are automatic upgrades a realistic feature to expect from enterprise Web applications?
Most of the work I do is with what could be considered enterprise Web applications. These projects have large budgets, longer timelines (from 3-12 months), and heavy customizations. Because as developers we have been touting the idea of the Web as the next desktop OS, customers are coming to expect the software running on this "new OS" to react the same as on the desktop. That includes easy to manage automatic upgrades. In other words, "An update is available. Do you want to upgrade?" Is this even a realistic expectation? Can anyone speak from experience on trying to implement this feature?
A:
At my company we have enterprise installations ranging into the thousands of seats. If we implemented an auto-upgrade, our customers would mutiny!
Large installations have peculiar issues that don't apply to small ones. For example, with 2000 users (not all of whom are, let us say, the most sophisticated of tool users), tool-training is a big deal: training time, internal demos, internal process documents, etc.. They cannot unleash a new feature or UI change without a chance to understand how it fits in their process and therefore what their internal best practices are and how to communicate that to their users.
Also when applications fail, it's the internal IT team who are responsible. Therefore, they want time to install a new version in a test area, beat it up, and deploy on a Saturday only when they're good and ready.
I can see the value in making minor patches easier to install, particularly when the patch is just for a bug-fix and not for anything that would require retraining, and if the admins still get final say over when it's installed. But even then, I don't believe anyone has ever asked for this! Whether because they don't want it or they are trained to not expect it, it doesn't seem worth it.
A:
Well, it really depends on your business model but for a lot of applications the SaaS model can end up biting you. It's great for a lot of things but for some larger applications the users are not investing as significant amount up front and could possibly move to something else before you've made any money.
See
http://news.zdnet.com/2424-9595_22-218408.html
and here
http://www.25hoursaday.com/weblog/2008/07/21/SoftwareAsAServiceWhenYourBusinessModelBecomesAParadox.aspx
for more information
A:
One of the primary reasons to implement an application as a web application is that you get automatic upgrades for free. Why would users be getting prompted for upgrades on a web app?
For Windows applications, the "update is available, do you want to upgrade?" functionality is provided by Microsoft using ClickOnce, which I have used in an enterprise environment successfully -- there are a few gotchas but for the most part it is a good way to manage automatic deployment and upgrade of Windows apps.
For mobile apps, you can also implement auto-upgrades, although it is a little trickier.
In any case, to answer your question in a broad sense, I don't know if it is expected that all enterprise apps should make upgrading easy, but it certainly is worth the money from an IT support standpoint to architect them to allow for easy upgrading.
A:
If you're providing a hosted solution, I wouldn't bother. Let the upgrade happen silently (perhaps with a notice that you did it). If you're selling an application that's hosted on their servers, let the upgrade decision be made by a single owner, not every user of the app.
|
Are automatic upgrades a realistic feature to expect from enterprise Web applications?
|
Most of the work I do is with what could be considered enterprise Web applications. These projects have large budgets, longer timelines (from 3-12 months), and heavy customizations. Because as developers we have been touting the idea of the Web as the next desktop OS, customers are coming to expect the software running on this "new OS" to react the same as on the desktop. That includes easy to manage automatic upgrades. In other words, "An update is available. Do you want to upgrade?" Is this even a realistic expectation? Can anyone speak from experience on trying to implement this feature?
|
[
"At my company we have enterprise installations ranging into the thousands of seats. If we implemented an auto-upgrade, our customers would mutiny!\nLarge installations have peculiar issues that don't apply to small ones. For example, with 2000 users (not all of whom are, let us say, the most sophisticated of tool users), tool-training is a big deal: training time, internal demos, internal process documents, etc.. They cannot unleash a new feature or UI change without a chance to understand how it fits in their process and therefore what their internal best practices are and how to communicate that to their users.\nAlso when applications fail, it's the internal IT team who are responsible. Therefore, they want time to install a new version in a test area, beat it up, and deploy on a Saturday only when they're good and ready.\nI can see the value in making minor patches more easy to install, particularly when the patch is just for a bug-fix and not for anything that would require retraining, and if the admins still get final say over when it's installed. But even then, I don't believe anyone has ever asked for this! Whether because they don't want it or they are trained to not expect it, it doesn't seem worth it.\n",
"Well, it really depends on your business model but for a lot of applications the SaaS model can end up biting you. It's great for a lot of things but for some larger applications the users are not investing as significant amount up front and could possibly move to something else before you've made any money.\nSee \nhttp://news.zdnet.com/2424-9595_22-218408.html\nand here\nhttp://www.25hoursaday.com/weblog/2008/07/21/SoftwareAsAServiceWhenYourBusinessModelBecomesAParadox.aspx\nfor more information\n",
"One of the primary reasons to implement an application as a web application is that you get automatic upgrades for free. Why would users be getting prompted for upgrades on a web app?\nFor Windows applications, the \"update is available, do you want to upgrade?\" functionality is provided by Microsoft using ClickOnce, which I have used in an enterprise environment successfully -- there are a few gotchas but for the most part it is a good way to manage automatic deployment and upgrade of Windows apps.\nFor mobile apps, you can also implement auto-upgrades, although it is a little trickier. \nIn any case, to answer your question in a broad sense, I don't know if it is expected that all enterprise apps should make upgrading easy, but it certainly is worth the money from an IT support standpoint to architect them to allow for easy upgrading.\n",
"If you're providing a hosted solution, I wouldn't bother. Let the upgrade happen silently (perhaps with a notice that you did it). If you're selling an application that's hosted on their servers, let the upgrade decision be made by a single owner, not every user of the app.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"enterprise",
"upgrade"
] |
stackoverflow_0000049732_enterprise_upgrade.txt
|
Q:
Programmatically extract macro (VBA) code from Word 2007 docs
Is it possible to extract all of the VBA code from a Word 2007 "docm" document using the API?
I have found how to insert VBA code at runtime, and how to delete all VBA code, but not pull the actual code out into a stream or string that I can store (and insert into other documents in the future).
Any tips or resources would be appreciated.
Edit: thanks to everyone, Aardvark's answer was exactly what I was looking for. I have converted his code to C#, and was able to call it from a class library using Visual Studio 2008.
using Microsoft.Office.Interop.Word;
using Microsoft.Vbe.Interop;
...
public List<string> GetMacrosFromDoc()
{
Document doc = GetWordDoc(@"C:\Temp\test.docm");
List<string> macros = new List<string>();
VBProject prj;
CodeModule code;
string composedFile;
prj = doc.VBProject;
foreach (VBComponent comp in prj.VBComponents)
{
code = comp.CodeModule;
// Put the name of the code module at the top
composedFile = comp.Name + Environment.NewLine;
// Loop through the (1-indexed) lines
for (int i = 0; i < code.CountOfLines; i++)
{
composedFile += code.get_Lines(i + 1, 1) + Environment.NewLine;
}
// Add the macro to the list
macros.Add(composedFile);
}
CloseDoc(doc);
return macros;
}
A:
You could export the code to files and then read them back in.
I've been using the code below to help me keep some Excel macros under source control (using Subversion & TortoiseSVN). It basically exports all the code to text files any time I save with the VBA editor open. I put the text files in subversion so that I can do diffs. You should be able to adapt/steal some of this to work in Word.
The registry check in CanAccessVBOM() corresponds to the "Trust access to Visual Basic Project" in the security setting.
Sub ExportCode()
If Not CanAccessVBOM Then Exit Sub ' Exit if access to VB object model is not allowed
If (ThisWorkbook.VBProject.VBE.ActiveWindow Is Nothing) Then
Exit Sub ' Exit if VBA window is not open
End If
Dim comp As VBComponent
Dim codeFolder As String
codeFolder = CombinePaths(GetWorkbookPath, "Code")
On Error Resume Next
MkDir codeFolder
On Error GoTo 0
Dim FileName As String
For Each comp In ThisWorkbook.VBProject.VBComponents
Select Case comp.Type
Case vbext_ct_ClassModule
FileName = CombinePaths(codeFolder, comp.Name & ".cls")
DeleteFile FileName
comp.Export FileName
Case vbext_ct_StdModule
FileName = CombinePaths(codeFolder, comp.Name & ".bas")
DeleteFile FileName
comp.Export FileName
Case vbext_ct_MSForm
FileName = CombinePaths(codeFolder, comp.Name & ".frm")
DeleteFile FileName
comp.Export FileName
Case vbext_ct_Document
FileName = CombinePaths(codeFolder, comp.Name & ".cls")
DeleteFile FileName
comp.Export FileName
End Select
Next
End Sub
Function CanAccessVBOM() As Boolean
' Check registry to see if we can access the VB object model
Dim wsh As Object
Dim str1 As String
Dim AccessVBOM As Long
Set wsh = CreateObject("WScript.Shell")
str1 = "HKEY_CURRENT_USER\Software\Microsoft\Office\" & _
Application.Version & "\Excel\Security\AccessVBOM"
On Error Resume Next
AccessVBOM = wsh.RegRead(str1)
Set wsh = Nothing
CanAccessVBOM = (AccessVBOM = 1)
End Function
Sub DeleteFile(FileName As String)
On Error Resume Next
Kill FileName
End Sub
Function GetWorkbookPath() As String
Dim fullName As String
Dim wrkbookName As String
Dim pos As Long
wrkbookName = ThisWorkbook.Name
fullName = ThisWorkbook.fullName
pos = InStr(1, fullName, wrkbookName, vbTextCompare)
GetWorkbookPath = Left$(fullName, pos - 1)
End Function
Function CombinePaths(ByVal Path1 As String, ByVal Path2 As String) As String
If Not EndsWith(Path1, "\") Then
Path1 = Path1 & "\"
End If
CombinePaths = Path1 & Path2
End Function
Function EndsWith(ByVal InString As String, ByVal TestString As String) As Boolean
EndsWith = (Right$(InString, Len(TestString)) = TestString)
End Function
A:
You'll have to add a reference to Microsoft Visual Basic for Applications Extensibility 5.3 (or whatever version you have). I have the VBA SDK and such on my box - so this may not be exactly what office ships with.
Also you have to enable access to the VBA Object Model specifically - see the "Trust Center" in Word options. This is in addition to all the other Macro security settings Office provides.
This example will extract code from the current document it lives in - it itself is a VBA macro (and will display itself and any other code as well). There is also a Application.vbe.VBProjects collection to access other documents. While I've never done it, I assume an external application could get to open files using this VBProjects collection as well. Security is funny with this stuff so it may be tricky.
I also wonder what the docm file format is now - XML like the docx? Would that be a better approach?
Sub GetCode()
Dim prj As VBProject
Dim comp As VBComponent
Dim code As CodeModule
Dim composedFile As String
Dim i As Integer
Set prj = ThisDocument.VBProject
For Each comp In prj.VBComponents
Set code = comp.CodeModule
composedFile = comp.Name & vbNewLine
For i = 1 To code.CountOfLines
composedFile = composedFile & code.Lines(i, 1) & vbNewLine
Next
MsgBox composedFile
Next
End Sub
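To walk every open document's project rather than just ThisDocument, the VBProjects collection mentioned above can be enumerated (an untested sketch; it needs the same Extensibility reference and trust setting):
Sub ListOpenProjects()
    Dim prj As VBProject
    For Each prj In Application.VBE.VBProjects
        MsgBox prj.Name
    Next
End Sub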
|
Programmatically extract macro (VBA) code from Word 2007 docs
|
Is it possible to extract all of the VBA code from a Word 2007 "docm" document using the API?
I have found how to insert VBA code at runtime, and how to delete all VBA code, but not pull the actual code out into a stream or string that I can store (and insert into other documents in the future).
Any tips or resources would be appreciated.
Edit: thanks to everyone, Aardvark's answer was exactly what I was looking for. I have converted his code to C#, and was able to call it from a class library using Visual Studio 2008.
using Microsoft.Office.Interop.Word;
using Microsoft.Vbe.Interop;
...
public List<string> GetMacrosFromDoc()
{
Document doc = GetWordDoc(@"C:\Temp\test.docm");
List<string> macros = new List<string>();
VBProject prj;
CodeModule code;
string composedFile;
prj = doc.VBProject;
foreach (VBComponent comp in prj.VBComponents)
{
code = comp.CodeModule;
// Put the name of the code module at the top
composedFile = comp.Name + Environment.NewLine;
// Loop through the (1-indexed) lines
for (int i = 0; i < code.CountOfLines; i++)
{
composedFile += code.get_Lines(i + 1, 1) + Environment.NewLine;
}
// Add the macro to the list
macros.Add(composedFile);
}
CloseDoc(doc);
return macros;
}
|
[
"You could export the code to files and then read them back in.\nI've been using the code below to help me keep some Excel macros under source control (using Subversion & TortoiseSVN). It basically exports all the code to text files any time I save with the VBA editor open. I put the text files in subversion so that I can do diffs. You should be able to adapt/steal some of this to work in Word.\nThe registry check in CanAccessVBOM() corresponds to the \"Trust access to Visual Basic Project\" in the security setting.\nSub ExportCode()\n\n If Not CanAccessVBOM Then Exit Sub ' Exit if access to VB object model is not allowed\n If (ThisWorkbook.VBProject.VBE.ActiveWindow Is Nothing) Then\n Exit Sub ' Exit if VBA window is not open\n End If\n Dim comp As VBComponent\n Dim codeFolder As String\n\n codeFolder = CombinePaths(GetWorkbookPath, \"Code\")\n On Error Resume Next\n MkDir codeFolder\n On Error GoTo 0\n Dim FileName As String\n\n For Each comp In ThisWorkbook.VBProject.VBComponents\n Select Case comp.Type\n Case vbext_ct_ClassModule\n FileName = CombinePaths(codeFolder, comp.Name & \".cls\")\n DeleteFile FileName\n comp.Export FileName\n Case vbext_ct_StdModule\n FileName = CombinePaths(codeFolder, comp.Name & \".bas\")\n DeleteFile FileName\n comp.Export FileName\n Case vbext_ct_MSForm\n FileName = CombinePaths(codeFolder, comp.Name & \".frm\")\n DeleteFile FileName\n comp.Export FileName\n Case vbext_ct_Document\n FileName = CombinePaths(codeFolder, comp.Name & \".cls\")\n DeleteFile FileName\n comp.Export FileName\n End Select\n Next\n\nEnd Sub\nFunction CanAccessVBOM() As Boolean\n ' Check resgistry to see if we can access the VB object model\n Dim wsh As Object\n Dim str1 As String\n Dim AccessVBOM As Long\n\n Set wsh = CreateObject(\"WScript.Shell\")\n str1 = \"HKEY_CURRENT_USER\\Software\\Microsoft\\Office\\\" & _\n Application.Version & \"\\Excel\\Security\\AccessVBOM\"\n On Error Resume Next\n AccessVBOM = wsh.RegRead(str1)\n Set wsh = Nothing\n CanAccessVBOM = (AccessVBOM = 1)\nEnd Function\n\n\nSub DeleteFile(FileName As String)\n On Error Resume Next\n Kill FileName\nEnd Sub\n\nFunction GetWorkbookPath() As String\n Dim fullName As String\n Dim wrkbookName As String\n Dim pos As Long\n\n wrkbookName = ThisWorkbook.Name\n fullName = ThisWorkbook.fullName\n\n pos = InStr(1, fullName, wrkbookName, vbTextCompare)\n\n GetWorkbookPath = Left$(fullName, pos - 1)\nEnd Function\n\nFunction CombinePaths(ByVal Path1 As String, ByVal Path2 As String) As String\n If Not EndsWith(Path1, \"\\\") Then\n Path1 = Path1 & \"\\\"\n End If\n CombinePaths = Path1 & Path2\nEnd Function\n\nFunction EndsWith(ByVal InString As String, ByVal TestString As String) As Boolean\n EndsWith = (Right$(InString, Len(TestString)) = TestString)\nEnd Function\n\n",
"You'll have to add a reference to Microsoft Visual Basic for Applications Extensibility 5.3 (or whatever version you have). I have the VBA SDK and such on my box - so this may not be exactly what office ships with.\nAlso you have to enable access to the VBA Object Model specifically - see the \"Trust Center\" in Word options. This is in addition to all the other Macro security settings Office provides.\nThis example will extract code from the current document it lives in - it itself is a VBA macro (and will display itself and any other code as well). There is also a Application.vbe.VBProjects collection to access other documents. While I've never done it, I assume an external application could get to open files using this VBProjects collection as well. Security is funny with this stuff so it may be tricky.\nI also wonder what the docm file format is now - XML like the docx? Would that be a better approach? \nSub GetCode()\n\n Dim prj As VBProject\n Dim comp As VBComponent\n Dim code As CodeModule\n Dim composedFile As String\n Dim i As Integer\n\n Set prj = ThisDocument.VBProject\n For Each comp In prj.VBComponents\n Set code = comp.CodeModule\n\n composedFile = comp.Name & vbNewLine\n\n For i = 1 To code.CountOfLines\n composedFile = composedFile & code.Lines(i, 1) & vbNewLine\n Next\n\n MsgBox composedFile\n Next\n\nEnd Sub\n\n"
] |
[
29,
11
] |
[] |
[] |
[
".net",
"automation",
"ms_office"
] |
stackoverflow_0000049724_.net_automation_ms_office.txt
|
Q:
LINQ and Database Permissions
I'm still trying to get my head around LINQ and accessing a SQL Database.
I was always taught that you should only have execute permissions of stored procedures to your data.
You should never have select / insert / update / delete.
(This is because of performance and security)
To get the data out of LINQ you obviously need select permissions. I know you can use stored procs with LINQ, but since I can't do joins what's the point?
Have I missed something???
A:
1) We're programmers, not DBA fascists. If your requirements state that the database must be locked down 100%, Linq is not for you. I'm not a DBA, so I believe that most of the performance/security hype is just that. Linq is for me.
2) You can do joins with linq.
@Philippe: Linq automatically transforms evaluations into query parameters, so it provides some sql injection protection. However, you still have to closely evaluate your requirements to determine how much security you need and at what levels. Linq makes dealing with the database much easier, but it also makes it easier to put security design on the back burner, which is a bad thing.
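To illustrate the parameterization point, a LINQ to SQL query like the one below is sent to SQL Server with the value as a query parameter rather than as concatenated text (a sketch; db and its Users table are hypothetical):
string name = userInput;
var matches = from u in db.Users
              where u.Name == name
              select u;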
A:
I'm very much in agreement with Jeff Atwood on the "Stored Procedures vs. Inline SQL/LINQ" issue: Who Needs Stored Procedures, Anyways?.
I'm confused as to why you'd even want to perform a JOIN if you're in the SPROCs-for-everything crowd; shouldn't you wrap that JOIN up into another SPROC?
As Will said, LINQ wasn't designed for the kind of DB use you're talking about; it was designed to give us statically-typed inline SQL. You could, however, still control access through user permissions if you use LINQ to SQL.
A:
Well, for security reasons you should not put any user-entered data directly into queries. If you stick to this rule, I don't see the problem with having select permission.
A:
Whether all of your database access is "behind" stored procedures depends on the needs of the application and the company. I have implemented systems that use views to get all data and stored procedures for all updates. This allows for centralized security and database logic while still letting front-end developers use SQL queries where appropriate.
Like so many other things in programming - it depends on the needs for your project.
LinqToSql does support stored procedures. Scott Gu has a post on it:
http://weblogs.asp.net/scottgu/archive/2007/08/16/linq-to-sql-part-6-retrieving-data-using-stored-procedures.aspx
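Once a stored procedure has been dragged onto the LINQ to SQL designer it is exposed as a method on the DataContext, so calling it is a one-liner (a sketch with hypothetical names):
var customers = db.GetCustomersByCity("London");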
|
LINQ and Database Permissions
|
I'm still trying to get my head around LINQ and accessing a SQL Database.
I was always taught that you should only have execute permissions of stored procedures to your data.
You should never have select / insert / update / delete.
(This is because of performance and security)
To get the data out of LINQ you obviously need select permissions. I know you can use stored procs with LINQ, but since I can't do joins what's the point?
Have I missed something???
|
[
"1) We're programmers, not DBA fascists. If your requirements state that the database must be locked down 100%, Linq is not for you. I'm not a DBA, so I believe that most of the performance/security hype is just that. Linq is for me.\n2) You can do joins with linq.\n@Philippe: Linq automatically transforms evaluations into query parameters, so it provides some sql injection protection. However, you still have to closely evaluate your requirements to determine how much security you need and at what levels. Linq makes dealing with the database much easier, but it makes it easier to put secuirty design on the back burner, which is a bad thing. \n",
"I'm very much in agreement with Jeff Atwood on the \"Stored Procedures vs. Inline SQL/LINQ\" issue: Who Needs Stored Procedures, Anyways?.\nI'm confused as to why you'd even want to perform a JOIN if you're in the SPROCs-for-everything crowd; shouldn't you wrap that JOIN up into another SPROC?\nAs Will said, LINQ wasn't designed for the kind of DB use you're talking about; it was designed to give us statically-typed inline SQL. You could, however, still control access through user permissions if you use LINQ to SQL.\n",
"Well, for security reasons you should not input any user entered data into queries. If you stick with this rule, I don't see the problem of having select permission.\n",
"Whether all of your database access is \"behind\" stored procedures depends on the needs of the application and the company. I have implemented systems that use views to get all data and stored procedures for all updates. This allows for centralized security and database logic while still letting front-end developers use SQL queries where appropriate.\nLike so many other things in programming - it depends on the needs for your project.\nLinqToSql does support stored procedures. Scott Gu has a post on it:\nhttp://weblogs.asp.net/scottgu/archive/2007/08/16/linq-to-sql-part-6-retrieving-data-using-stored-procedures.aspx\n"
] |
[
2,
0,
0,
0
] |
[] |
[] |
[
"linq_to_sql",
"permissions"
] |
stackoverflow_0000049551_linq_to_sql_permissions.txt
|
Q:
Apache rewrite based on subdomain
I'm trying to redirect requests for a wildcard domain to a sub-directory.
ie. something.blah.example.com --> blah.example.com/something
I don't know how to get the subdomain name to use in the rewrite rule.
Final Solution:
RewriteCond %{HTTP_HOST} !^blah\.example\.com
RewriteCond %{HTTP_HOST} ^([^.]+)
RewriteRule ^(.*) /%1/$1 [L]
Or as pointed out by pilif
RewriteCond %{HTTP_HOST} ^([^.]+)\.blah\.example\.com$
A:
You should have a look at the URL Rewriting Guide from the apache documentation.
The following is untested, but it should do the trick:
RewriteCond %{HTTP_HOST} ^([^.]+)\.blah\.domain\.com$
RewriteRule ^/(.*)$ http://blah.domain.com/%1/$1 [L,R]
This only works if the subdomain contains no dots. Otherwise, you'd have to alter the Regexp in RewriteCond to match any character, which should still work due to the anchoring, but this version certainly feels safer.
A:
Try this:
RewriteCond %{HTTP_HOST} (.+)\.blah\.domain\.com
RewriteRule ^(.+)$ /%1/$1 [L]
@pilif (see comment): Okay, that's true. I just copied a .htaccess that I use on one of my projects. Guess it has a slightly different approach :)
A:
@Sam
your RewriteCond line is wrong. The expansion of the variable is triggered with %, not $.
RewriteCond %{HTTP_HOST} ^([^\.]+)\.media\.xnet\.tk$
^
that should do the trick
|
Apache rewrite based on subdomain
|
I'm trying to redirect requests for a wildcard domain to a sub-directory.
ie. something.blah.example.com --> blah.example.com/something
I don't know how to get the subdomain name to use in the rewrite rule.
Final Solution:
RewriteCond %{HTTP_HOST} !^blah\.example\.com
RewriteCond %{HTTP_HOST} ^([^.]+)
RewriteRule ^(.*) /%1/$1 [L]
Or as pointed out by pilif
RewriteCond %{HTTP_HOST} ^([^.]+)\.blah\.example\.com$
|
[
"You should have a look at the URL Rewriting Guide from the apache documentation.\nThe following is untested, but it should to the trick:\nRewriteCond %{HTTP_HOST} ^([^.]+)\\.blah\\.domain\\.com$\nRewriteRule ^/(.*)$ http://blah.domain.com/%1/$1 [L,R] \n\nThis only works if the subdomain contains no dots. Otherwise, you'd have to alter the Regexp in RewriteCond to match any character which should still work due to the anchoring, but this certainly feels safer.\n",
"Try this:\nRewriteCond %{HTTP_HOST} (.+)\\.blah\\.domain\\.com\nRewriteRule ^(.+)$ /%1/$1 [L]\n\n@pilif (see comment): Okay, that's true. I just copied a .htaccess that I use on one of my projects. Guess it has a slightly different approach :)\n",
"@Sam\nyour RewriteCond line is wrong. The expansion of the variable is triggered with %, not $.\nRewriteCond %{HTTP_HOST} ^([^\\.]+)\\.media\\.xnet\\.tk$\n ^\n\nthat should do the trick\n"
] |
[
36,
4,
1
] |
[] |
[] |
[
"apache",
"mod_rewrite",
"redirect",
"subdomain",
"wildcard_subdomain"
] |
stackoverflow_0000049500_apache_mod_rewrite_redirect_subdomain_wildcard_subdomain.txt
|
Q:
How can I overwrite the same portion of the console in a Windows native C++ console app, without using a 3rd Party library?
I have a console app that needs to display the state of items, but rather than having text scroll by like mad I'd rather see the current status keep showing up on the same lines. For the sake of example:
Running... nn% complete
Buffer size: bbbb bytes
should be the output, where 'nn' is the current percentage complete, and 'bbbb' is a buffer size, updated periodically on the same lines of the console.
The first approach I took simply printed the correct number of backspaces to the console before printing the new state, but this has an obnoxious flicker that I want to get rid of. I also want to stick to either standard library or MS-provided functionality (VC 8) so as not to introduce another dependency for this one simple need.
A:
You can use SetConsoleCursorPosition. You'll need to call GetStdHandle to get a handle to the output buffer.
A:
Joseph, JP, and CodingTheWheel all provided valuable help.
For my simple case, the most straight-forward approach seemed to be based on CodingTheWheel's answer:
// before entering update loop
HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
CONSOLE_SCREEN_BUFFER_INFO bufferInfo;
GetConsoleScreenBufferInfo(h, &bufferInfo);
// update loop
while (updating)
{
// reset the cursor position to where it was each time
SetConsoleCursorPosition(h, bufferInfo.dwCursorPosition);
//...
// insert combinations of sprintf, printf, etc. here
//...
}
For more complicated problems, the full console API as provided by JP's answer, in coordination with the examples provided via the link from Joseph's answer may prove useful, but I found the work necessary to use CHAR_INFO too tedious for such a simple app.
A:
If you print using \r and don't use a function that will generate a newline or add \n to the end, the cursor will go back to the beginning of the line and just print over the next thing you put up. Generating the complete string before printing might reduce flicker as well.
UPDATE: The question has been changed to 2 lines of output instead of 1 which makes my answer no longer complete. A more complicated approach is likely necessary. JP has the right idea with the Console API. I believe the following site details many of the things you will need to accomplish your goal. The site also mentions that the key to reducing flicker is to render everything offscreen before displaying it. This is true whenever you are displaying anything on the screen whether it is text or graphics (2D or 3D).
http://www.benryves.com/tutorials/?t=winconsole
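For the original single-line case, the \r technique boils down to something like this (a sketch; the flush makes the update appear immediately):
printf("\rRunning... %3d%% complete", percent);
fflush(stdout);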
A:
In case the Joseph's suggestion does not give you enough flexibility, have a look at the Console API: http://msdn.microsoft.com/en-us/library/ms682073(VS.85).aspx.
A:
In Linux, you can accomplish this by printing \b and/or \r to stderr. You might need to experiment to find the right combination of things in Windows.
|
How can I overwrite the same portion of the console in a Windows native C++ console app, without using a 3rd Party library?
|
I have a console app that needs to display the state of items, but rather than having text scroll by like mad I'd rather see the current status keep showing up on the same lines. For the sake of example:
Running... nn% complete
Buffer size: bbbb bytes
should be the output, where 'nn' is the current percentage complete, and 'bbbb' is a buffer size, updated periodically on the same lines of the console.
The first approach I took simply printed the correct number of backspaces to the console before printing the new state, but this has an obnoxious flicker that I want to get rid of. I also want to stick to either standard library or MS-provided functionality (VC 8) so as not to introduce another dependency for this one simple need.
|
[
"You can use SetConsoleCursorPosition. You'll need to call GetStdHandle to get a handle to the output buffer.\n",
"Joseph, JP, and CodingTheWheel all provided valuable help.\nFor my simple case, the most straight-forward approach seemed to be based on CodingTheWheel's answer:\n// before entering update loop\nHANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);\nCONSOLE_SCREEN_BUFFER_INFO bufferInfo;\nGetConsoleScreenBufferInfo(h, &bufferInfo);\n\n// update loop\nwhile (updating)\n{\n // reset the cursor position to where it was each time\n SetConsoleCursorPosition(h, bufferInfo.dwCursorPosition);\n\n //...\n // insert combinations of sprintf, printf, etc. here\n //...\n}\n\nFor more complicated problems, the full console API as provided by JP's answer, in coordination with the examples provided via the link from Joseph's answer may prove useful, but I found the work necessary to use CHAR_INFO too tedious for such a simple app.\n",
"If you print using \\r and don't use a function that will generate a newline or add \\n to the end, the cursor will go back to the beginning of the line and just print over the next thing you put up. Generating the complete string before printing might reduce flicker as well.\nUPDATE: The question has been changed to 2 lines of output instead of 1 which makes my answer no longer complete. A more complicated approach is likely necessary. JP has the right idea with the Console API. I believe the following site details many of the things you will need to accomplish your goal. The site also mentions that the key to reducing flicker is to render everything offscreen before displaying it. This is true whenever you are displaying anything on the screen whether it is text or graphics (2D or 3D).\nhttp://www.benryves.com/tutorials/?t=winconsole\n",
"In case the Joseph's suggestion does not give you enough flexibility, have a look at the Console API: http://msdn.microsoft.com/en-us/library/ms682073(VS.85).aspx.\n",
"In Linux, you can accomplish this by printing \\b and/or \\r to stderr. You might need to experiment to find the right combination of things in Windows.\n"
] |
[
8,
7,
5,
3,
0
] |
[] |
[] |
[
"c++",
"console",
"windows"
] |
stackoverflow_0000045286_c++_console_windows.txt
|
Q:
What is a good design when trying to build objects from a list of key value pairs?
So if I have a method of parsing a text file and returning a list of a list of key value pairs, and want to create objects from the kvps returned (each list of kvps represents a different object), what would be the best method?
The first method that pops into mind is pretty simple, just keep a list of keywords:
private const string NAME = "name";
private const string PREFIX = "prefix";
and check against the keys I get for the constants I want, defined above. This is a fairly core piece of the project I'm working on though, so I want to do it well; does anyone have any more robust suggestions (not saying there's anything inherently un-robust about the above method - I'm just asking around)?
Edit:
More details have been asked for. I'm working on a little game in my spare time, and I am building up the game world with configuration files. There are four - one defines all creatures, another defines all areas (and their locations in a map), another all objects, and a final one defines various configuration options and things that don't fit elsewhere. With the first three configuration files, I will be creating objects based on the content of the files - it will be quite text-heavy, so there will be a lot of strings, things like names, plurals, prefixes - that sort of thing. The configuration values are all like so:
-
key: value
key: value
-
key: value
key: value
-
Where the '-' line denotes a new section/object.
A:
Take a deep look at the XmlSerializer. Even if you are constrained to not use XML on-disk, you might want to copy some of its features. This could then look like this:
public class DataObject {
[Column("name")]
public string Name { get; set; }
[Column("prefix")]
public string Prefix { get; set; }
}
Be careful though to include some kind of format version in your files, or you will be in hell's kitchen come the next format change.
A:
Making a lot of unwarranted assumptions, I think that the best approach would be to create a Factory that will receive the list of key value pairs and return the proper object or throw an exception if it's invalid (or create a dummy object, or whatever is better in the particular case).
private class Factory {
public static IConfigurationObject Factory(List<string> keyValuePair) {
switch (keyValuePair[0]) {
case "x":
return new x(keyValuePair[1]);
break;
/* etc. */
default:
throw new ArgumentException("Wrong parameter in the file");
}
}
}
The strongest assumption here is that all your objects can be treated partly the same (i.e., they implement the same interface (IConfigurationObject in the example) or belong to the same inheritance tree).
If they don't, then it depends on your program flow and what are you doing with them. But nonetheless, they should :)
EDIT: Given your explanation, you could have one Factory per file type; the switch in it would be the authoritative source on the allowed types per file type, and they probably share something in common. Reflection is possible, but it's riskier because it's less obvious and self-documenting than this approach.
A:
What do you need objects for? The way you describe it, you'll use them as some kind of (key-wise) restricted map anyway. If you do not need some kind of inheritance, I'd simply wrap a map-like structure into an object like this:
[java-inspired pseudo-code:]
class RestrictedKVDataStore {
const ALLOWED_KEYS = new Collection('name', 'prefix');
Map data = new Map();
void put(String key, Object value) {
if (ALLOWED_KEYS.contains(key))
data.put(key, value)
}
Object get(String key) {
return data.get(key);
}
}
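A C# rendering of the same idea might look like this (a sketch; the allowed keys are the ones from the question):
class RestrictedKVDataStore
{
    private static readonly HashSet<string> AllowedKeys =
        new HashSet<string> { "name", "prefix" };
    private readonly Dictionary<string, object> data =
        new Dictionary<string, object>();

    public void Put(string key, object value)
    {
        if (AllowedKeys.Contains(key))
            data[key] = value; // silently ignore disallowed keys
    }

    public object Get(string key)
    {
        object value;
        return data.TryGetValue(key, out value) ? value : null;
    }
}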
A:
You could create an interface that matches the column names, and then use the Reflection.Emit API to create a type at runtime that gives access to the data in the fields.
A:
EDIT:
Scratch that, this still applies, but I think what you're doing is reading a configuration file and parsing it into this:
List<List<KeyValuePair<String,String>>> itemConfig =
new List<List<KeyValuePair<String,String>>>();
In this case, we can still use a reflection factory to instantiate the objects, I'd just pass in the nested inner list to it, instead of passing each individual key/value pair.
OLD POST:
Here is a clever little way to do this using reflection:
The basic idea:
Use a common base class for each Object class.
Put all of these classes in their own assembly.
Put this factory in that assembly too.
Pass in the KeyValuePair that you read from your config, and in return it finds the class that matches KV.Key and instantiates it with KV.Value
public class KeyValueToObjectFactory
{
private Dictionary<string, Type> _kvTypes = new Dictionary<string, Type>();
public KeyValueToObjectFactory()
{
// Preload the Types into a dictionary so we can look them up later
// Obviously, you want to reuse the factory to minimize overhead, so don't
// do something stupid like instantiate a new factory in a loop.
foreach (Type type in typeof(KeyValueToObjectFactory).Assembly.GetTypes())
{
if (type.IsSubclassOf(typeof(KVObjectBase)))
{
_kvTypes[type.Name.ToLower()] = type;
}
}
}
public KVObjectBase CreateObjectFromKV(KeyValuePair<string, string> kv)
{
if (kv.Key != null)
{
string kvName = kv.Key;
// If the Type information is in our Dictionary, instantiate a new instance of that class.
Type kvType;
if (_kvTypes.TryGetValue(kvName, out kvType))
{
return (KVObjectBase)Activator.CreateInstance(kvType, kv.Value);
}
else
{
throw new ArgumentException("Unrecognized KV Pair");
}
}
else
{
return null;
}
}
}
A:
@David:
I already have the parser (and most of these will be hand written, so I decided against XML). But that looks like a really nice way of doing it; I'll have to check it out. Excellent point about versioning too.
@Argelbargel:
That looks good too. :')
A:
...This is a fairly core piece of the
project I'm working on though...
Is it really?
It's tempting to just abstract it and provide a basic implementation with the intention of refactoring later on.
Then you can get on with what matters: the game.
Just a thought
<bb />
A:
Is it really?
Yes; I have thought this out. Far be it from me to do more work than necessary. :')
|
What is a good design when trying to build objects from a list of key value pairs?
|
So if I have a method of parsing a text file and returning a list of a list of key value pairs, and want to create objects from the kvps returned (each list of kvps represents a different object), what would be the best method?
The first method that pops into mind is pretty simple, just keep a list of keywords:
private const string NAME = "name";
private const string PREFIX = "prefix";
and check against the keys I get for the constants I want, defined above. This is a fairly core piece of the project I'm working on though, so I want to do it well; does anyone have any more robust suggestions (not saying there's anything inherently un-robust about the above method - I'm just asking around)?
Edit:
More details have been asked for. I'm working on a little game in my spare time, and I am building up the game world with configuration files. There are four - one defines all creatures, another defines all areas (and their locations in a map), another all objects, and a final one defines various configuration options and things that don't fit elsewhere. With the first three configuration files, I will be creating objects based on the content of the files - it will be quite text-heavy, so there will be a lot of strings, things like names, plurals, prefixes - that sort of thing. The configuration values are all like so:
-
key: value
key: value
-
key: value
key: value
-
Where the '-' line denotes a new section/object.
|
[
"Take a deep look at the XmlSerializer. Even if you are constrained to not use XML on-disk, you might want to copy some of its features. This could then look like this:\npublic class DataObject {\n [Column(\"name\")]\n public string Name { get; set; }\n\n [Column(\"prefix\")]\n public string Prefix { get; set; }\n}\n\nBe careful though to include some kind of format version in your files, or you will be in hell's kitchen come the next format change.\n",
"Making a lot of unwarranted assumptions, I think that the best approach would be to create a Factory that will receive the list of key value pairs and return the proper object or throw an exception if it's invalid (or create a dummy object, or whatever is better in the particular case).\nprivate class Factory {\n\n public static IConfigurationObject Factory(List<string> keyValuePair) {\n\n switch (keyValuePair[0]) {\n\n case \"x\":\n return new x(keyValuePair[1]);\n break;\n /* etc. */\n default:\n throw new ArgumentException(\"Wrong parameter in the file\");\n }\n\n }\n\n}\n\nThe strongest assumption here is that all your objects can be treated partly like the same (ie, they implement the same interface (IConfigurationObject in the example) or belong to the same inheritance tree).\nIf they don't, then it depends on your program flow and what are you doing with them. But nonetheless, they should :)\nEDIT: Given your explanation, you could have one Factory per file type, the switch in it would be the authoritative source on the allowed types per file type and they probably share something in common. Reflection is possible, but it's riskier because it's less obvious and self documenting than this one.\n",
"What do you need object for? The way you describe it, you'll use them as some kind (of key-wise) restricted map anyway. If you do not need some kind of inheritance, I'd simply wrap a map-like structure into a object like this:\n[java-inspired pseudo-code:]\nclass RestrictedKVDataStore {\n const ALLOWED_KEYS = new Collection('name', 'prefix');\n Map data = new Map();\n\n void put(String key, Object value) {\n if (ALLOWED_KEYS.contains(key))\n data.put(key, value)\n }\n\n Object get(String key) {\n return data.get(key);\n }\n}\n\n",
"You could create an interface that matched the column names, and then use the Reflection.Emit API to create a type at runtime that gave access to the data in the fields.\n",
"EDIT: \nScratch that, this still applies, but I think what your doing is reading a configuration file and parsing it into this:\nList<List<KeyValuePair<String,String>>> itemConfig = \n new List<List<KeyValuePair<String,String>>>();\n\nIn this case, we can still use a reflection factory to instantiate the objects, I'd just pass in the nested inner list to it, instead of passing each individual key/value pair.\nOLD POST:\nHere is a clever little way to do this using reflection:\nThe basic idea:\n\nUse a common base class for each Object class.\nPut all of these classes in their own assembly.\nPut this factory in that assembly too.\nPass in the KeyValuePair that you read from your config, and in return it finds the class that matches KV.Key and instantiates it with KV.Value\n\n \n public class KeyValueToObjectFactory\n { \n private Dictionary _kvTypes = new Dictionary();\n\n public KeyValueToObjectFactory()\n {\n // Preload the Types into a dictionary so we can look them up later\n // Obviously, you want to reuse the factory to minimize overhead, so don't\n // do something stupid like instantiate a new factory in a loop.\n\n foreach (Type type in typeof(KeyValueToObjectFactory).Assembly.GetTypes())\n {\n if (type.IsSubclassOf(typeof(KVObjectBase)))\n {\n _kvTypes[type.Name.ToLower()] = type;\n }\n }\n }\n\n public KVObjectBase CreateObjectFromKV(KeyValuePair kv)\n {\n if (kv != null)\n {\n string kvName = kv.Key;\n\n // If the Type information is in our Dictionary, instantiate a new instance of that class.\n Type kvType;\n if (_kvTypes.TryGetValue(kvName, out kvType))\n {\n return (KVObjectBase)Activator.CreateInstance(kvType, kv.Value);\n }\n else\n {\n throw new ArgumentException(\"Unrecognized KV Pair\");\n }\n }\n else\n {\n return null;\n }\n }\n }\n\n",
"@David:\nI already have the parser (and most of these will be hand written, so I decided against XML). But that looks like I really nice way of doing it; I'll have to check it out. Excellent point about versioning too. \n@Argelbargel:\nThat looks good too. :')\n",
"\n...This is a fairly core piece of the\nproject I'm working on though...\n\nIs it really? \nIt's tempting to just abstract it and provide a basic implementation with the intention of refactoring later on.\nThen you can get on with what matters: the game.\nJust a thought\n<bb />\n",
"\nIs it really?\n\nYes; I have thought this out. Far be it from me to do more work than neccessary. :') \n"
] |
[
3,
3,
2,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"c#",
"key_value"
] |
stackoverflow_0000049757_c#_key_value.txt
|
Q:
SQL 2000 database copy to SQL 2005 options
We have a production web and database server with SQL Server 2000. (However, a few clients have their own servers with SQL 2005.) So we have local installs of SQL 2005 Express for development on Windows XP SP3 boxes (which don't allow SQL 2000 Enterprise installations).
We often need to copy SQL 2000 databases to SQL 2005 instances. In the past, we have used the SQL Publishing tool (also mentioned here). However, one of our databases is so big that using that tool fails as it creates SQL scripts that get too large for Management Studio to handle properly. Besides, it takes too long... :)
We would use the Copy Database Wizard included with SQL 2005, but our development machines run SQL 2005 Express which don't included SQL Server Agent, which is required for Copy Database Wizard to work. So, I guess our solution will be to upgrade our development installs with the full version of SQL 2005 (we have an MSDN subscription of course).
I was wondering what other solutions, if any, work well for you guys? (Besides complaining to the bosses to upgrade our production servers to 2005 or even 2008--which I've already tried.)
A:
Back it up in SQL Server 2000 and then use the RESTORE WITH MOVE command into 2005 Express.
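A minimal sketch of what that can look like (the database name, logical file names, and paths below are placeholders, not from the question); the same T-SQL can just as well be run straight from a query window:
using System.Data.SqlClient;

class RestoreBackup
{
    static void Main()
    {
        // Connect to the master database of the local Express instance.
        using (SqlConnection conn = new SqlConnection(
            @"Data Source=.\SQLEXPRESS;Initial Catalog=master;Integrated Security=True"))
        {
            conn.Open();
            SqlCommand cmd = conn.CreateCommand();
            // WITH MOVE relocates the logical files to paths that exist on
            // the dev box; run RESTORE FILELISTONLY first to get the
            // logical names inside the .bak.
            cmd.CommandText =
                @"RESTORE DATABASE MyDb
                  FROM DISK = 'C:\Backups\MyDb.bak'
                  WITH MOVE 'MyDb_Data' TO 'C:\Data\MyDb.mdf',
                       MOVE 'MyDb_Log'  TO 'C:\Data\MyDb_log.ldf'";
            cmd.CommandTimeout = 0; // big restores take a while
            cmd.ExecuteNonQuery();
        }
    }
}
Note that once restored on 2005 the database is upgraded in place and cannot be restored back onto SQL 2000, so keep the original .bak around.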
|
SQL 2000 database copy to SQL 2005 options
|
We have a production web and database server with SQL Server 2000. (However, a few clients have their own servers with SQL 2005.) So we have local installs of SQL 2005 Express for development on Windows XP SP3 boxes (which don't allow SQL 2000 Enterprise installations).
We often need to copy SQL 2000 databases to SQL 2005 instances. In the past, we have used the SQL Publishing tool (also mentioned here). However, one of our databases is so big that using that tool fails, as it creates SQL scripts that get too large for Management Studio to handle properly. Besides, it takes too long... :)
We would use the Copy Database Wizard included with SQL 2005, but our development machines run SQL 2005 Express, which doesn't include SQL Server Agent, which is required for the Copy Database Wizard to work. So, I guess our solution will be to upgrade our development installs with the full version of SQL 2005 (we have an MSDN subscription, of course).
I was wondering what other solutions, if any, work well for you guys? (Besides complaining to the bosses to upgrade our production servers to 2005 or even 2008--which I've already tried.)
|
[
"Back it up in SQL Server 2000 and then use the RESTORE WITH MOVE command into 2005 Express.\n"
] |
[
1
] |
[] |
[] |
[
"sql_server"
] |
stackoverflow_0000050003_sql_server.txt
|
Q:
Regex to Match first 28 days of the month
I am looking for a Regular expression to match only if a date is in the first 28 days of the month. This is for my validator control in ASP.NET
A:
Don't do this with Regex. Dates are formatted differently in different countries. Use the DateTime.TryParse routine instead:
DateTime parsedDate;
if ( DateTime.TryParse( dateString, out parsedDate) && parsedDate.Day <= 28 )
{
// logic goes here.
}
Regex is nearly the golden hammer of input validation, but in this instance, it's the wrong choice.
A:
I don't think this is a task very well-suited for a regexp.
I'd try and use the library functions (DateTime.Parse for .NET) to parse the date and then check the day component of it. Everything else is duplicating half the library function anyways.
A:
Why not just convert it to a date data type and check the day? Using a regular expression, while it could be done, just makes it overly complicated.
A:
([1-9]|1\d|2[0-8]) // matches 1 to 28 but wouldn't allow leading zeros for single digits
(0?[1-9]|1\d|2[0-8]) // matches 1 to 28 and would allow 01, 02,... 09
(where \d matches any digit, use [0-9] if your regex engine doesn't support it.)
See also the question What is the regex pattern for datetime (2008-09-01 12:35:45 ) ?
A:
I would use one of the DateTime.TryParse techniques in conjunction with a CustomValidator
|
Regex to Match first 28 days of the month
|
I am looking for a Regular expression to match only if a date is in the first 28 days of the month. This is for my validator control in ASP.NET
|
[
"Don't do this with Regex. Dates are formatted differently in different countries. Use the DateTime.TryParse routine instead:\nDateTime parsedDate;\n\nif ( DateTime.TryParse( dateString, out parsedDate) && parsedDate.Day <= 28 )\n{\n // logic goes here.\n}\n\nRegex is nearly the golden hammer of input validation, but in this instance, it's the wrong choice.\n",
"I don't think this is a task very well-suited for a regexp.\nI'd try and use the library functions (DateTime.Parse for .NET) to parse the date and then check the day component of it. Everything else is duplicating half the library function anyways.\n",
"Why not just covert it to a date data type and check the day? Using a regular expression, while it could be done, just makes it overly complicated.\n",
" ([1-9]|1\\d|2[0-8]) // matches 1 to 28 but woudn't allow leading zeros for single digits\n(0?[1-9]|1\\d|2[0-8]) // matches 1 to 28 and would allow 01, 02,... 09\n\n(where \\d matches any digit, use [0-9] if your regex engine doesn't support it.)\nSee also the question What is the regex pattern for datetime (2008-09-01 12:35:45 ) ?\n",
"I would use one of the DateTime.TryParse techniques in conjunction with a CustomValidator\n"
] |
[
18,
2,
1,
1,
1
] |
[] |
[] |
[
"asp.net",
"regex"
] |
stackoverflow_0000049919_asp.net_regex.txt
|
Q:
In what order are locations searched to load referenced DLLs?
I know that the .NET framework looks for referenced DLLs in several locations
Global assembly cache (GAC)
Any private paths added to the AppDomain
The current directory of the executing assembly
What order are those locations searched? Does the search for a DLL cease once a match is found, or does it continue through all locations (and if so, how are conflicts resolved)?
Also, please confirm or deny those locations and provide any other locations I have failed to mention.
A:
Assembly loading is a rather elaborate process which depends on lots of different factors like configuration files, publisher policies, appdomain settings, CLR hosts, partial or full assembly names, etc.
The simple version is that the GAC is first, then the private paths. %PATH% is never used.
It is best to use Assembly Binding Log Viewer (Fuslogvw.exe) to debug any assembly loading problems.
EDIT
How the Runtime Locates Assemblies explains the process in more detail.
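As a small diagnostic sketch (the class name and output are illustrative): the AssemblyResolve event fires only after the GAC and the private probing paths have been searched without a match, so hooking it shows exactly which references fell through normal probing.
using System;

class ProbeDiagnostics
{
    static void Main()
    {
        AppDomain.CurrentDomain.AssemblyResolve +=
            delegate(object sender, ResolveEventArgs args)
            {
                // Reached only when normal probing (GAC, codebase hints,
                // private paths) failed for the requested assembly.
                Console.WriteLine("Probing failed for: " + args.Name);
                return null; // returning null lets the load fail as usual
            };

        // ... code that triggers assembly loads goes here ...
    }
}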
A:
I found an article referencing the MSDN article on DLL search order that says
For managed code dependencies, the
Global Assembly Cache always prevails;
the local assembly in application
directory will not be picked up if
there is an existing (or newer with
policy) copy in the GAC.
Considering this, I guess the MSDN list is correct with one addition
0. Global assembly cache
A:
No longer is the current directory searched first when loading DLLs! This change was also made in Windows XP SP1. The default behavior now is to look in all the system locations first, then the current directory, and finally any user-defined paths.
(ref. http://weblogs.asp.net/pwilson/archive/2003/06/24/9214.aspx)
The default search order, which can be changed by the application, is also described on MSDN: Dynamic-Link Library Search Order.
|
In what order are locations searched to load referenced DLLs?
|
I know that the .NET framework looks for referenced DLLs in several locations
Global assembly cache (GAC)
Any private paths added to the AppDomain
The current directory of the executing assembly
What order are those locations searched? Does the search for a DLL cease once a match is found, or does it continue through all locations (and if so, how are conflicts resolved)?
Also, please confirm or deny those locations and provide any other locations I have failed to mention.
|
[
"Assembly loading is a rather elaborate process which depends on lots of different factors like configuration files, publisher policies, appdomain settings, CLR hosts, partial or full assembly names, etc.\nThe simple version is that the GAC is first, then the private paths. %PATH% is never used.\nIt is best to use Assembly Binding Log Viewer (Fuslogvw.exe) to debug any assembly loading problems.\nEDIT\nHow the Runtime Locates Assemblies explains the process in more detail.\n",
"I found an article referencing the MSDN article on DLL search order that says\n\nFor managed code dependencies, the\n Global Assembly Cache always prevails;\n the local assembly in application\n directory will not be picked up if\n there is an existing (or newer with\n policy) copy in the GAC.\n\nConsidering this, I guess the MSDN list is correct with one addition\n0. Global assembly cache\n",
"\nNo longer is the current directory searched first when loading DLLs! This change was also made in Windows XP SP1. The default behavior now is to look in all the system locations first, then the current directory, and finally any user-defined paths.\n\n(ref. http://weblogs.asp.net/pwilson/archive/2003/06/24/9214.aspx)\nThe default search order, which can be changed by the application, is also described on MSDN: Dynamic-Link Library Search Order.\n"
] |
[
57,
7,
2
] |
[] |
[] |
[
".net",
"dll"
] |
stackoverflow_0000049972_.net_dll.txt
|
Q:
Is Visual Studio 2003 still available/supported
Pretty much what the title says really.
We have some code that is .NET 1.1 based and no real desire to up-convert it. However, we are looking to add developers to the team and they will need copies of Visual Studio.
My understanding is that they will need VS 2003 - as this is the only IDE that supports .NET 1.1 but I am wondering if we are still able to purchase it!
A:
You can build 1.1 projects in Visual Studio 2005:
http://www.hanselman.com/blog/BuildingNET11ProjectsUsingVisualStudio2005.aspx
The MSBuild Everett Environment (MSBEE) has been announced, and soon this will be a (reasonably) supported scenario and we'll all be able to build both 1.1 and 2.0 versions of .NET code on Visual Studio 2005.
Also read this post about this issue:
http://blogs.msdn.com/clichten/archive/2005/11/08/490541.aspx
And also:
MSBuild Extras – Toolkit for .NET 1.1 “MSBee” is an addition to MSBuild that allows developers to build managed applications using Visual Studio 2005 projects that target .NET 1.1.
A:
Visual Studio 2003 is still available to download for MSDN subscribers.
The EULA for Visual Studio includes a 'downgrade' clause, which appears, IMNAL, to allow you to buy Visual Studio 2008 and then install 2003 under the same license.
DOWNGRADE. You may install and use
this version and an earlier version of
the software at the same time. This
agreement applies to your use of the
earlier version. If the earlier
version includes different components,
any terms for those components in the
agreement that comes with the earlier
version apply to your use of them.
Microsoft is not obligated to supply
earlier versions to you.
A:
Mainstream support for VS2003 ends in October of this year:
http://support.microsoft.com/lifecycle/search/?sort=PN&alpha=Visual+Studio
Extended support (whatever that means) is still available for quite some time.
A:
In addition to Espo's link, look into MSBee, an enhancements kit for MSBuild to better support .NET Framework 1.1.
It seems you can even use .NET 1.1 with Visual Studio 2008, though, so you should have no problem.
That said, I'd be interested in hearing what made you choose against upgrading.
A:
Supported: Yes
Available: Not through normal channels. You might still find a boxed copy on Amazon or somewhere.
A:
.NET 1.1 code can be imported in VS 2005, as .NET 2.0 is backward compatible with .NET 1.1.
You'll probably have to convert the project, but it should still run in VS 2005.
A:
I believe that VS2003 loses support in October
|
Is Visual Studio 2003 still available/supported
|
Pretty much what the title says really.
We have some code that is .NET 1.1 based and no real desire to up-convert it. However, we are looking to add developers to the team and they will need copies of Visual Studio.
My understanding is that they will need VS 2003 - as this is the only IDE that supports .NET 1.1 but I am wondering if we are still able to purchase it!
|
[
"You can build 1.1 projects in Visual Studio 2005:\nhttp://www.hanselman.com/blog/BuildingNET11ProjectsUsingVisualStudio2005.aspx\nThe MSBuild Everett Environment (MSBEE) has been announced, and soon this will be a (reasonably) supported scenario and we'll all be able to build both 1.1 and 2.0 versions of .NET code on Visual Studio 2005. \nAlso read this post about this issue:\nhttp://blogs.msdn.com/clichten/archive/2005/11/08/490541.aspx\nAnd also:\nMSBuild Extras – Toolkit for .NET 1.1 “MSBee” is an addition to MSBuild that allows developers to build managed applications using Visual Studio 2005 projects that target .NET 1.1. \n",
"Visual Studio 2003 is still available to download for MSDN subscribers.\nThe EULA for Visual Studio includes a 'downgrade' clause, which appears, IMNAL, to allow you to buy Visual Studio 2008 and then install 2003 under the same license.\n\nDOWNGRADE. You may install and use\n this version and an earlier version of\n the software at the same time. This\n agreement applies to your use of the\n earlier version. If the earlier\n version includes different components,\n any terms for those components in the\n agreement that comes with the earlier\n version apply to your use of them.\n Microsoft is not obligated to supply\n earlier versions to you.\n\n",
"Mainstream support for VS2003 ends in October of this year:\nhttp://support.microsoft.com/lifecycle/search/?sort=PN&alpha=Visual+Studio\nExtended support (whatever that means) is still available for quite some time.\n",
"In addition to Espo's link, look into MSBee, an enhancements kit for MSBuild to better support .NET Framework 1.1.\nIt seems you can even use .NET 1.1 with Visual Studio 2008, though, so you should have no problem.\nThat said, I'd be interested in hearing what made you choose against upgrading.\n",
"Supported: Yes\nAvailable: Not through normal channels. You might still find a boxed copy on Amazon or somewhere.\n",
".NET 1.1 code can be imported in VS 2005, as .NET 2.0 is backward compatible with .NET 1.1.\nYou'll probably have to convert the project, but it should still run in VS 2005.\n",
"I believe that vs2003 looses support in october\n"
] |
[
8,
8,
2,
1,
0,
0,
0
] |
[] |
[] |
[
".net",
"visual_studio",
"visual_studio_2003"
] |
stackoverflow_0000050046_.net_visual_studio_visual_studio_2003.txt
|
Q:
What are some good SharePoint security resources?
I've got a SharePoint application and I'm sad to say that in my SharePoint-induced excitement, I ignored a lot of the security concerns I should have been paying more attention to. Though we didn't before, now we actually need granular security, so I need to get educated. I'm mostly interested in how to best create groups and add users to those groups. We have a single main site collection and a couple dozen subsites under that collection. How can I best create a granular security world where I can independently assign rights to each of these subsites?
A:
To have permissions vary at the "sub site" level (the SPWeb object in object model terms), you need to enable unique permissions for the site.
A good article outlining the permission hierarchy in SharePoint 2007 can be found on the office web site About controlling access to sites and site content
In my experience if you are able to use permission inheritance over granular security it's much less hassle to manage.
Breaking site permission inheritance
Click "People and groups"
Click "Site permissions"
From the actions menu in the list click "Edit Permissions"
http://blog.richfinn.net/content/binary/WindowsLiveWriter/InstallandConfiguretheCommunityKitforSha_E660/image_3.png
Other references
SharePoint 2007: Permissions, permissions, permissions.
SharePoint 2007 SiteGroups - part 1 - the basics
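If you would rather script that change than click through the UI, here is a rough object-model sketch (WSS 3.0 API; the site and sub site URLs are placeholders):
using Microsoft.SharePoint;

class BreakInheritance
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://server/sites/main"))
        using (SPWeb web = site.OpenWeb("subsite"))
        {
            if (!web.HasUniqueRoleAssignments)
            {
                // true copies the inherited assignments as a starting
                // point; false would start the sub site with none.
                web.BreakRoleInheritance(true);
            }
            web.Update();
        }
    }
}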
|
What are some good SharePoint security resources?
|
I've got a SharePoint application and I'm sad to say that in my SharePoint-induced excitement, I ignored a lot of the security concerns I should have been paying more attention to. Though we didn't before, now we actually need granular security, so I need to get educated. I'm mostly interested in how to best create groups and add users to those groups. We have a single main site collection and a couple dozen subsites under that collection. How can I best create a granular security world where I can independently assign rights to each of these subsites?
|
[
"To have permissions vary at the \"sub site\" level which is the SPWeb object in object model terms you need to enable unique permission for the site.\nA good article outlining the permission hierarchy in SharePoint 2007 can be found on the office web site About controlling access to sites and site content\n\nIn my experience if you are able to use permission inheritance over granular security it's much less hassle to manage.\nBreaking site permission inheritance\n\nClick \"People and groups\"\nClick \"Site permissions\"\nFrom the actions menu in the list click \"Edit Permissions\"\n\nhttp://blog.richfinn.net/content/binary/WindowsLiveWriter/InstallandConfiguretheCommunityKitforSha_E660/image_3.png http://blog.richfinn.net/content/binary/WindowsLiveWriter/InstallandConfiguretheCommunityKitforSha_E660/image_3.png\nOther references\n\nSharePoint 2007: Permissions, permissions, permissions.\nSharePoint 2007 SiteGroups - part 1 - the basics \n\n"
] |
[
5
] |
[] |
[] |
[
"security",
"sharepoint"
] |
stackoverflow_0000050037_security_sharepoint.txt
|
Q:
Are there any negative reasons to use an N-Tier solution?
I'm pretty new to my company (2 weeks) and we're starting a new platform for our system using .NET 3.5 Team Foundation from DotNetNuke. Our "architect" is suggesting we use one class project. Of course, I chime back with a "3-tier" architecture (Business, Data, Web class projects).
Are there any disadvantages to using this architecture? Pros would be separation of code from data, keeping class objects away from your code, etc.
A:
I guess a fairly big downside is that the extra volume of code that you have to write, manage and maintain for a small project may just be overkill.
It's all down to what's appropriate for the size of the project, the expected life of the final project and the budget! Sometimes, whilst doing things 'properly' is appealing, doing something a little more 'lightweight' can be the right commercial decision!
A:
It tends to take an inexperienced team longer to build 3-tier. It's more code, so more bugs. I'm just playing the devil's advocate though.
A:
I would be pushing hard for the N tiered approach even if it's a small project. If you use an ORM tool like codesmith + nettiers you will be able to quickly setup the projects and be developing code that solves your business problems quickly.
It kills me when you start a new project and you spend days sitting around spinning wheels talking about how the "architecture" should be architected. You want to be spending time solving the business problem, not solving problems that other people have solved for you. Using an ORM (it doesn't really matter which one, just pick one and stick to it) to help you get initial traction will help keep you focussed on the goals of the project and not distract you trying to solve "architecture" issues.
If, at the end of the day, the architect wants to go the one project approach, there is no reason you can't create an app_code folder with a BLL and DAL folder to separate the code for now, which will help you move to an N-Tiered solution later.
A:
Because you want the capability of being able to distribute the layers onto different physical tiers (I always use "tier" for physical, and "layer" for logical), you should think twice before just putting everything into one class because you've got major refactorings to do if or when you do need to start distributing.
A:
The only disadvantage is complexity, but really, how hard is it to add some domain objects and bind to a list of them as opposed to using a dataset? You don't even have to create three separate projects; you can just create 3 separate folders within the web app and give each one a namespace like YourCompany.YourApp.Domain, YourCompany.YourApp.Data, etc.
The big advantage is having a more flexible solution. If you start writing your app as a data-centric application, strongly coupling your web forms pages to datasets, you are going to end up doing a lot more work later migrating to a more domain-centric model as your business logic grows in complexity.
Maybe in the short term you focus on a simple solution by creating very simple domain objects and populating them from datasets; then you can add business logic to them as needed and build out a more sophisticated ORM later, or use NHibernate.
A:
As with anything, abstraction creates complexity, and so the complexity of doing N-tiered should be properly justified, e.g., does N-tiered actually benefit the system? There will be small systems that will work best with N-tiered, although a lot of them will not.
Also, even if your system is small at the moment, you might want to add more features to it later -- not going N-tiered might constitute a sort of technical debt on your part, so you have to be careful.
|
Are there any negative reasons to use an N-Tier solution?
|
I'm pretty new to my company (2 weeks) and we're starting a new platform for our system using .NET 3.5 Team Foundation from DotNetNuke. Our "architect" is suggesting we use one class project. Of course, I chime back with a "3-tier" architecture (Business, Data, Web class projects).
Are there any disadvantages to using this architecture? Pros would be separation of code from data, keeping class objects away from your code, etc.
|
[
"I guess a fairly big downside is that the extra volume of code that you have to write, manage and maintain for a small project may just be overkill.\nIt's all down to what's appropriate for the size of the project, the expected life of the final project and the budget! Sometimes, whilst doing things 'properly' is appealing, doing something a little more 'lightweight' can be the right commercial decision!\n",
"it tends to take an inexperienced team longer to build 3-tier.It's more code, so more bugs. I'm just playing the devil's advocate though.\n",
"I would be pushing hard for the N tiered approach even if it's a small project. If you use an ORM tool like codesmith + nettiers you will be able to quickly setup the projects and be developing code that solves your business problems quickly.\nIt kills me when you start a new project and you spend days sitting around spinning wheels talking about how the \"architecture\" should be architected. You want to be spending time solving the business problem, not solving problems that other people have solved for you. Using an ORM (it doesn't really matter which one, just pick one and stick to it) to help you get initial traction will help keep you focussed on the goals of the project and not distract you trying to solve \"architecture\" issues.\nIf, at the end of the day, the architect wants to go the one project approach, there is no reason you can't create an app_code folder with a BLL and DAL folder to seperate the code for now which will help you move to an N-Tiered solution later.\n",
"Because you want the capability of being able to distribute the layers onto different physical tiers (I always use \"tier\" for physical, and \"layer\" for logical), you should think twice before just putting everything into one class because you've got major refactorings to do if or when you do need to start distributing.\n",
"The only disadvantage is complexity but really how hard is it to add some domain objects and bind to a list of them as opposed to using a dataset. You don't even have to create three seperate projects, you can just create 3 seperate folders within the web app and give each one a namespace like, YourCompany.YourApp.Domain, YourCompany.YourApp.Data, etc. \nThe big advantage is having a more flexible solution. If you start writing your app as a data centric application, strongly coupling your web forms pages to datasets, you are going to end up doing a lot more work later migrating to a more domain centeric model as your business logic grows in complexity. \nMaybe in the short term you focus on a simple solution by creating very simple domain objects and populating them from datasets, then you can add business logic to them as needed and build out a more sophisticated ORM as needed, or use nhibernate.\n",
"As with anything abstraction creates complexity, and so the complexity of doing N-tiered should be properly justified, e.g., does N-tiered actually benefit the system? There will be small systems that will work best with N-tiered, although a lot of them will not.\nAlso, even if your system is small at the moment, you might want to add more features to it later -- not going N-tiered might consitute a sort of technical debt on your part, so you have to be careful.\n"
] |
[
9,
3,
2,
2,
1,
0
] |
[] |
[] |
[
"architecture",
"n_tier_architecture"
] |
stackoverflow_0000005880_architecture_n_tier_architecture.txt
|
Q:
A good algorithm similar to Levenshtein but weighted for Qwerty keyboards?
I noticed some posts here on string matching, which reminded me of an old problem I'd like to solve. Does anyone have a good Levenshtein-like algorithm that is weighted toward Qwerty keyboards?
I want to compare two strings, and allow for typos. Levenshtein is okay, but I'd prefer to also accept spelling errors based on the physical distance between keys on a Qwerty keyboard. In other words, the algorithm should prefer "yelephone" to "zelephone" since the "y" key is located nearer to the "t" key than to the "z" key on most keyboards.
Any help would be great... this feature isn't central to my project, so I don't want to veer off into a rat-hole when I should be doing something more productive.
A:
In bioinformatics, when you align two sequences of DNA, you might have a model that has a different cost based on whether the substitution is a transition or a transversion. This is exactly what you want, but instead of a 4x4 matrix, you want a 40x40 matrix or, dare I say, a distance function. So the cost of a replacement comes from the matrix/function, not a constant.
CAVEAT: Be sure that deletions and insertions are weighted properly, though, so they aren't over-accepted as the minimum. You'll end up with a string of insertions/deletions/no-change-substitution characters.
The new function you are trying to minimize would be:
d[i, j] := minimum(
d[i-1, j] + del_cost,
d[i, j-1] + ins_cost,
d[i-1, j-1] + keyboard_distance( s[i], t[j] )
)
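Here is a rough sketch of what keyboard_distance could look like (the layout table and the normalisation constant are my own assumptions, not a standard): map each key to its row/column on a Qwerty layout and take the Euclidean distance, so substituting "y" for "t" costs far less than "y" for "z".
using System;
using System.Collections.Generic;

static class Qwerty
{
    static readonly string[] Rows = { "qwertyuiop", "asdfghjkl", "zxcvbnm" };
    static readonly Dictionary<char, int[]> Positions = BuildPositions();

    static Dictionary<char, int[]> BuildPositions()
    {
        Dictionary<char, int[]> map = new Dictionary<char, int[]>();
        for (int row = 0; row < Rows.Length; row++)
            for (int col = 0; col < Rows[row].Length; col++)
                map[Rows[row][col]] = new int[] { row, col };
        return map;
    }

    // Substitution cost in [0, 1]; adjacent keys come out cheap.
    public static double Distance(char a, char b)
    {
        if (char.ToLower(a) == char.ToLower(b)) return 0.0;
        int[] p, q;
        if (!Positions.TryGetValue(char.ToLower(a), out p) ||
            !Positions.TryGetValue(char.ToLower(b), out q))
            return 1.0; // digits, punctuation: fall back to unit cost
        double dr = p[0] - q[0], dc = p[1] - q[1];
        double dist = Math.Sqrt(dr * dr + dc * dc);
        return Math.Min(1.0, dist / 9.0); // 9 ~ row width, keeps cost <= ins/del
    }
}
Plugged in as keyboard_distance(s[i], t[j]) with del_cost = ins_cost = 1, the recurrence behaves like plain Levenshtein except that near-miss typos are cheaper.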
|
A good algorithm similar to Levenshtein but weighted for Qwerty keyboards?
|
I noticed some posts here on string matching, which reminded me of an old problem I'd like to solve. Does anyone have a good Levenshtein-like algorithm that is weighted toward Qwerty keyboards?
I want to compare two strings, and allow for typos. Levenshtein is okay, but I'd prefer to also accept spelling errors based on the physical distance between keys on a Qwerty keyboard. In other words, the algorithm should prefer "yelephone" to "zelephone" since the "y" key is located nearer to the "t" key than to the "z" key on most keyboards.
Any help would be great... this feature isn't central to my project, so I don't want to veer off into a rat-hole when I should be doing something more productive.
|
[
"In bioinformatics when you align two sequences of DNA you might have a model that has a different cost based on if the substitution is a transition or a transversion. This is exactly what you want but instead of a 4x4 matrix, you want a 40x40 matrix or some, dare I say distance function? So the cost of a replacement is from the matrix/function, not a constant. \nCAVEAT: Be sure that deletions and insertions are weighted properly though, so they aren't over accepted as the minimum. You'll end up with a string of insertions/deletions/no-change-substitution characters. \nThe new function you are trying to minimize would be:\nd[i, j] := minimum(\n d[i-1, j] + del_cost,\n d[i, j-1] + ins_cost,\n d[i-1, j-1] + keyboard_distance( s[i], t[j] )\n)\n\n"
] |
[
20
] |
[] |
[] |
[
"algorithm",
"comparison",
"string",
"text"
] |
stackoverflow_0000050144_algorithm_comparison_string_text.txt
|
Q:
What is the best way to display a status message in WPF?
I have several wpf pages with update/delete/add buttons. I want to display to the user messages like "successful delete", etc. How can I best implement this so the message is defined in a single place (similar to an asp.net master page) and I can update this message from anywhere?
A:
You may want to consider doing a publish/subscribe ("Observer" pattern) -- define a "status changed" event on a base page, and create a custom control that sets up a delegate and event handler to listen for status updates.
Then you could drop the custom control on any page that inherits from the base, and it would automatically listen for and display status messages whenever the event is fired.
Edit: I put together a sample implementation of this pattern and published a blog post walking through the code.
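A bare-bones sketch of that idea (all of the names here are illustrative, not from the linked post): a static broker raises the event, and a small TextBlock subclass subscribes and shows the latest message on whatever page hosts it.
using System;
using System.Windows.Controls;
using System.Windows.Threading;

public static class StatusBroker
{
    public static event Action<string> StatusChanged;

    public static void Publish(string message)
    {
        Action<string> handler = StatusChanged;
        if (handler != null) handler(message);
    }
}

public class StatusMessageBlock : TextBlock
{
    public StatusMessageBlock()
    {
        // Marshal to the UI thread in case a worker thread publishes.
        StatusBroker.StatusChanged += delegate(string message)
        {
            Dispatcher.BeginInvoke(DispatcherPriority.Normal,
                new Action(delegate { Text = message; }));
        };
    }
}

// From any update/delete/add handler:
//   StatusBroker.Publish("Successful delete");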
A:
I don't think you have the ASP.Net master pages translated to the WPF Page world just yet.
As a workaround till MS gets there, I would probably put a Control at the top of the page (or wherever) that just displays a particular User-level "Application Setting". You can update the string property like
MyAppUserSettings.StatusMessage = "You just deleted the administrator!"
Crude but will get the job done I think!
|
What is the best way to display a status message in WPF?
|
I have several wpf pages with update/delete/add buttons. I want to display to the user messages like "successful delete", etc. How can I best implement this so the message is defined in a single place (similar to an asp.net master page) and I can update this message from anywhere?
|
[
"You may want to consider doing a publish/subscribe (\"Observer\" pattern) -- define a \"status changed\" event on a base page, and create a custom control that sets up a delegate and event handler to listen for status updates.\nThen you could drop the custom control on any page that inherits from the base, and it would automatically listen for and display status messages whenever the event is fired.\nEdit: I put together a sample implementation of this pattern and published a blog post walking through the code.\n",
"I don't think you have the ASP.Net master pages translated to the WPF Page world just yet.\nA workaround till MS gets there, I would probably put a Control at the top of the page (or wherever) that just displays a particular User-level \"Application Setting\". You can update the string property like\nMyAppUserSettings.StatusMessage = \"You just deleted the administrator!\" \n\nCrude but will get the job done I think!\n"
] |
[
4,
1
] |
[] |
[] |
[
"wpf"
] |
stackoverflow_0000050151_wpf.txt
|
Q:
How can I lock down my MS-SQL DB from my users and yet still access it through ODBC?
I've got an ms-access application that's accessing an ms-sql db through an ODBC connection. I'm trying to force my users to update the data only through the application portion, but I don't care if they read the data directly or through their own custom ms-access db (they use it for creating ad hoc reports).
What I'm looking for is a way to make the data only editable if they are using the compiled .mde file I distribute to them. I know I can make the data read only for the general population, and editable for select users.
Is there a way I can get ms-sql to make the data editable only if they are accessing it through the my canned mde?
Thought: is there a way to get ms-access to log into the database as a different user (or change the login once connected)?
@Jake,
Yes, it's using forms. What I'm looking to do is just have it switch users once when I have my launchpad/mainmenu form pop up.
@Peter,
That is indeed the direction I'm headed. What I haven't determined was how to go about switching to that second ID. I'm not so worried about the password being sniffed, the users are all internal, and on an internal LAN. If they can sniff that password, they can certainly sniff the one for my privileged ID.
@no one in general,
Right now it's security by obscurity. I've given the users a special .mdb for doing reporting that will let them read data, but not update it. They don't know about relinking to the tables through the ODBC connection. A slightly more ms-access/DB-literate user could bypass what I've done in seconds - and there are a few who imagine themselves to be DBAs, so they will figure it out eventually.
A:
There is a way to do this that is effective with internal users, but can be hacked. You create two IDs for each user. One is a reporting ID that has read-only access. This is the ID that the user knows about: Fred / mypassword
The second is an ID that can do updates. That id is Fred_app / mypassword_mangled. They log on to your app with Fred. When your application accesses data, it uses the application id.
This can be sniffed, but for many applications it is sufficient.
A:
Does your app allow for linked table updates or does it go through forms? Sounds like your idea of using a centralized user with distinct roles is the way to go. Yes, you could change users, but that may introduce more coding, and once you start adding more and more code, other solutions (stored procedures, etc.) may sound more inviting.
|
How can I lock down my MS-SQL DB from my users and yet still access it through ODBC?
|
I've got an ms-access application that's accessing an ms-sql db through an ODBC connection. I'm trying to force my users to update the data only through the application portion, but I don't care if they read the data directly or through their own custom ms-access db (they use it for creating ad hoc reports).
What I'm looking for is a way to make the data only editable if they are using the compiled .mde file I distribute to them. I know I can make the data read only for the general population, and editable for select users.
Is there a way I can get ms-sql to make the data editable only if they are accessing it through the my canned mde?
Thought: is there a way to get ms-access to log into the database as a different user (or change the login once connected)?
@Jake,
Yes, it's using forms. What I'm looking to do is just have it switch users once when I have my launchpad/mainmenu form pop up.
@Peter,
That is indeed the direction I'm headed. What I haven't determined was how to go about switching to that second ID. I'm not so worried about the password being sniffed, the users are all internal, and on an internal LAN. If they can sniff that password, they can certainly sniff the one for my privileged ID.
@no one in general,
Right now it's security by obscurity. I've given the users a special .mdb for doing reporting that will let them read data, but not update it. They don't know about relinking to the tables through the ODBC connection. A slightly more ms-access/DB-literate user could bypass what I've done in seconds - and there are a few who imagine themselves to be DBAs, so they will figure it out eventually.
|
[
"There is a way to do this that is effective with internal users, but can be hacked. You create two IDs for each user. One is a reporting ID that has read-only access. This is they ID that the user knows about: Fred / mypassword\nThe second is an ID that can do updates. That id is Fred_app / mypassword_mangled. They log on to your app with Fred. When your application accesses data, it uses the application id.\nThis can be sniffed, but for many applications it is sufficient.\n",
"Does you app allow for linked table updates or does it go through forms? Sounds like your idea of using a centralized user with distinct roles is the way to go. Yes, you could change users but I that may introduce more coding and once you start adding more and more code other solutions (stored procedures, etc) may sound more inviting. \n"
] |
[
2,
1
] |
[] |
[] |
[
"ms_access",
"odbc",
"sql_server"
] |
stackoverflow_0000050164_ms_access_odbc_sql_server.txt
|
Q:
Does "display: marker" work in any current browsers, and if so, how?
I can't be sure if my code is sucking, or if it's just that the browsers haven't caught up with the spec yet.
My goal is to simulate list markers using generated content, so as to get e.g. continuation of the counters from list to list in pure CSS.
So the code below, which I think is correct according to the spec, is like this:
html {
counter-reset: myCounter;
}
li {
counter-increment: myCounter;
}
li:before {
content: counter(myCounter)". ";
display: marker;
width: 5em;
text-align: right;
marker-offset: 1em;
}
<ol>
<li>The</li>
<li>quick</li>
<li>brown</li>
</ol>
<ol>
<li>fox</li>
<li>jumped</li>
<li>over</li>
</ol>
But this doesn't seem to generate markers, in either FF3, Chrome, or IE8 beta 2, and if I recall correctly not Opera either (although I've since uninstalled Opera).
So, does anyone know if markers are supposed to work? Quirksmode.org isn't being its usual helpful self in this regard :(.
A:
Apparently marker was introduced as a value in CSS 2 but did not make it to CSS 2.1 because of lacking browser support.
I suppose that didn’t help its popularity …
Source: http://de.selfhtml.org/css/eigenschaften/positionierung.htm#display (German)
A:
Oh ouch, did not know that :-|. That probably seals its case, then. Because mostly I was under the assumption that such a basic CSS2 property should definitely be supported in modern browsers, but if it didn't make it into CSS 2.1, then it makes a lot more sense that it isn't.
For future reference, it doesn't show up in the Mozilla Development Center, so presumably Firefox doesn't support it at all.
Also for future reference, I got my original example to work with inline-block instead:
li:before
{
content: counter(myCounter)". ";
display: inline-block;
width: 2em;
padding-right: 0.3em;
text-align: right;
}
|
Does "display: marker" work in any current browsers, and if so, how?
|
I can't be sure if my code is sucking, or if it's just that the browsers haven't caught up with the spec yet.
My goal is to simulate list markers using generated content, so as to get e.g. continuation of the counters from list to list in pure CSS.
So the code below, which I think is correct according to the spec, is like this:
html {
counter-reset: myCounter;
}
li {
counter-increment: myCounter;
}
li:before {
content: counter(myCounter)". ";
display: marker;
width: 5em;
text-align: right;
marker-offset: 1em;
}
<ol>
<li>The</li>
<li>quick</li>
<li>brown</li>
</ol>
<ol>
<li>fox</li>
<li>jumped</li>
<li>over</li>
</ol>
But this doesn't seem to generate markers, in either FF3, Chrome, or IE8 beta 2, and if I recall correctly not Opera either (although I've since uninstalled Opera).
So, does anyone know if markers are supposed to work? Quirksmode.org isn't being its usual helpful self in this regard :(.
|
[
"Apparently marker was introduced as a value in CSS 2 but did not make it to CSS 2.1 because of lacking browser support.\nI suppose that didn’t help its popularity …\nSource: http://de.selfhtml.org/css/eigenschaften/positionierung.htm#display (German)\n",
"Oh ouch, did not know that :-|. That probably seals its case, then. Because mostly I was under the assumption that such a basic CSS2 property should definitely be supported in modern browsers, but if it didn't make it into CSS 2.1, then it makes a lot more sense that it isn't.\nFor future reference, it doesn't show up in the Mozilla Development Center, so presumably Firefox doesn't support it at all.\nAlso for future reference, I got my original example to work with inline-block instead:\nli:before\n{\n content: counter(myCounter)\". \";\n display: inline-block;\n width: 2em;\n padding-right: 0.3em;\n text-align: right;\n}\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"cross_browser",
"css"
] |
stackoverflow_0000050170_cross_browser_css.txt
|
Q:
View of allocated memory
I'm looking for a tool ($, free, open source; I don't care) that will allow me to view not just the memory statistics for a .NET program, but also the object hierarchy. I'd really like to be able to drill down through each object and view its footprint, as well as all the objects it references.
I've looked at things like Ants Profiler from RedGate, but it's not quite what I want: I can't view specific instances.
EDIT:
I've used the .NET Memory Profiler (the one that ships with Visual Studio, and the one that used to be part of the SDK (?)) before, and while it's really good (and shows views most others don't), what I'm really after is being able to drill down through my object hierarchy, viewing each object instance.
A:
I have used JetBrains DotTrace and Redgate Ants, both of which I would recommend. A lesser known profiler I have also used is .Net Memory Profiler (http://memprofiler.com/), which at the time I used it provided a different perspective on memory usage than the former two profilers mentioned. I find DotTrace and Ants to be very similar, though each one is slightly different.
A:
JetBrains dottrace profiler is the best. I wouldn't work without it. It is hard to find a tool that is free and performs well in this arena. Dottrace is hands down the best profiler I have used for .Net.
A:
There's also the Microsoft .net profiler - I've used it a bit, and it's not bad for a free tool. Not sure if you can walk the object hierarchy, but does break down memory use by type, and over time. You can even see the underlying data.
It does slow down the app a lot, though.
|
View of allocated memory
|
I'm looking for a tool ($, free, open source; I don't care) that will allow me to view not just the memory statistics for a .NET program, but also the object hierarchy. I'd really like to be able to drill down through each object and view its footprint, as well as all the objects it references.
I've looked at things like Ants Profiler from RedGate, but it's not quite what I want: I can't view specific instances.
EDIT:
I've used the .NET Memory Profiler (the one that ships with Visual Studio, and the one that used to be part of the SDK (?)) before, and while it's really good (and shows views most others don't), what I'm really after is being able to drill down through my object hierarchy, viewing each object instance.
|
[
"I have used JetBrains DotTrace and Redgate Ants, both of which I would recommend. A lesser known profiler I have also used is .Net Memory Profiler (http://memprofiler.com/), which at the time I used it provided a different perspective on memory usage than the former two profilers mentioned. I find DotTrace and Ants to be very similar, though each one is slightly different.\n",
"JetBrains dottrace profiler is the best. I wouldn't work without it. It is hard to find a tool that is free and performs well in this arena. Dottrace is hands down the best profiler I have used for .Net.\n",
"There's also the Microsoft .net profiler - I've used it a bit, and it's not bad for a free tool. Not sure if you can walk the object hierarchy, but does break down memory use by type, and over time. You can even see the underlying data.\nIt does slow down the app a lot, though.\n"
] |
[
5,
0,
0
] |
[] |
[] |
[
".net",
"memory"
] |
stackoverflow_0000050237_.net_memory.txt
|
Q:
Are there any issues with using log4net in a multi-threaded environment?
I'm wondering if anyone has any experience using log4net in a multi-threaded environment like asp.net. We are currently using log4net and I want to make sure we won't run into any issues.
A:
We run log4net (and log4cxx) in highly multi-threaded environments without issue. You will want to be careful how you configure them though.
The issue with log4net that Jeff describes pertains to the use of a certain appender. We stick with simple log file appenders on the whole to reduce the impact of logging on the operation of the code. Writing a line to a file is pretty minimal, kicking off another database transaction is very heavy.
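For reference, the usage pattern that goes with that (a minimal sketch; the FileAppender configuration itself lives in web.config or a separate log4net config file): one static, shared ILog per class, which is safe to call from any thread.
using log4net;

public class OrderProcessor
{
    // ILog instances are thread-safe, so a single shared logger is fine.
    private static readonly ILog Log =
        LogManager.GetLogger(typeof(OrderProcessor));

    public void Process(int orderId)
    {
        Log.InfoFormat("Processing order {0}", orderId);
    }
}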
|
Are there any issues with using log4net in a multi-threaded environment?
|
I'm wondering if anyone has any experience using log4net in a multi-threaded environment like asp.net. We are currently using log4net and I want to make sure we won't run into any issues.
|
[
"We run log4net (and log4cxx) in highly multi-threaded environments without issue. You will want to be careful how you configure them though.\nThe issue with log4net that Jeff describes pertains to the use of a certain appender. We stick with simple log file appenders on the whole to reduce the impact of logging on the operation of the code. Writing a line to a file is pretty minimal, kicking off another database transaction is very heavy.\n"
] |
[
1
] |
[] |
[] |
[
"log4net"
] |
stackoverflow_0000050213_log4net.txt
|
Q:
.MSI Not Always Uninstalling Previous Versions
In a number of applications we create an MSI Installer with the Visual Studio Setup Project. In most cases, the install works fine, but every now and then the previous version was not uninstalled correctly. The user ends up with two icons on the desktop, and in the Add/Remove program list, the application appears twice. We have yet to find any pattern and in most cases the installer works without any problems.
A:
What happens when the uninstall of the previous version fails depends on the sequencing of the RemoveExistingProducts action. I have written a summary about the various options in the past: http://jpassing.wordpress.com/2007/06/16/where-to-place-removeexistingproducts-in-a-major-msi-upgrade/.
Unfortunately, you do not have control over RemoveExistingProducts sequencing when using VS setup projects (Unless you edit the MSI with Orca after it has been built, which usually is not practical). But if your setup project is not completely trivial, I would strongly suggest you to use a different MSI authoring tool like WiX or one of the commercial tools anyway.
|
.MSI Not Always Uninstalling Previous Versions
|
In a number of applications we create an MSI Installer with the Visual Studio Setup Project. In most cases, the install works fine, but every now and then the previous version was not uninstalled correctly. The user ends up with two icons on the desktop, and in the Add/Remove program list, the application appears twice. We have yet to find any pattern and in most cases the installer works without any problems.
|
[
"What happens when the uninstall of the previous version fails depends on the sequencing of the RemoveExistingProducts action. I have written a summary about the various options in the past: http://jpassing.wordpress.com/2007/06/16/where-to-place-removeexistingproducts-in-a-major-msi-upgrade/.\nUnfortunately, you do not have control over RemoveExistingProducts sequencing when using VS setup projects (Unless you edit the MSI with Orca after it has been built, which usually is not practical). But if your setup project is not completely trivial, I would strongly suggest you to use a different MSI authoring tool like WiX or one of the commercial tools anyway.\n"
] |
[
1
] |
[] |
[] |
[
".net",
"windows_installer"
] |
stackoverflow_0000050268_.net_windows_installer.txt
|
Q:
Floats messing up in Safari browsers
I have a site I made really fast that uses floats to display different sections of content. The floated content and the content that has an additional margin both appear fine in FF/IE, but on safari one of the divs is completely hidden. I've tried switching to padding and position:relative, but nothing has worked for me. If I take out the code to display it to the right it shows up again but under the floated content.
The main section of css that seems to be causing the problem is:
#settings{
float:left;
}
#right_content{
margin-top:20px;
margin-left:440px;
width:400px;
}
This gives me the same result whether I specify a size to the #settings div or not. Any ideas would be appreciated.
The site is available at: http://frickinsweet.com/tools/Theme.mvc.aspx to see the source code.
A:
Have you tried floating the #right_content div to the right?
#right_content{
float: right;
margin-top: 20px;
width: 400px;
}
A:
I believe the error lies in the mark up that the color picker is generating. I saved the page and removed that code for the color picker and it renders fine in IE/FF/SF.
A:
Sorry I should have mentioned that as well. I tried floating that content right and additionally tried floating it left and setting the position with the thinking that both divs would start out at left:0 where setting the margin of the right would move it over.
Thanks
A:
A few things you should fix beforehand:
Your <style> tag is in <body>, when it belongs in <head>
You have a typo "realtive" in one of your inline styles:
<a href="http://feeds.feedburner.com/ryanlanciaux" style="position:realtive; top:-6px;">
Try to get your page to validate; this should make debugging the actual problems far easier.
|
Floats messing up in Safari browsers
|
I have a site I made really fast that uses floats to display different sections of content. The floated content and the content that has an additional margin both appear fine in FF/IE, but on safari one of the divs is completely hidden. I've tried switching to padding and position:relative, but nothing has worked for me. If I take out the code to display it to the right it shows up again but under the floated content.
The main section of css that seems to be causing the problem is:
#settings{
float:left;
}
#right_content{
margin-top:20px;
margin-left:440px;
width:400px;
}
This gives me the same result whether I specify a size to the #settings div or not. Any ideas would be appreciated.
The site is available at: http://frickinsweet.com/tools/Theme.mvc.aspx to see the source code.
|
[
"Have you tried floating the #right_content div to the right?\n#right_content{\n float: right;\n margin-top: 20px;\n width: 400px;\n}\n\n",
"I believe the error lies in the mark up that the color picker is generating. I saved the page and removed that code for the color picker and it renders fine in IE/FF/SF. \n",
"Sorry I should have mentioned that as well. I tried floating that content right and additionally tried floating it left and setting the position with the thinking that both divs would start out at left:0 where setting the margin of the right would move it over.\nThanks\n",
"A few things you should fix beforehand:\n\nYour <style> tag is in <body>, when it belongs in <head>\nYou have a typo \"realtive\" in one of your inline styles:\n<a href=\"http://feeds.feedburner.com/ryanlanciaux\" style=\"position:realtive; top:-6px;\">\n\n\nTry to get your page to validate; this should make debugging the actual problems far easier.\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"css",
"css_float",
"safari"
] |
stackoverflow_0000050280_css_css_float_safari.txt
|
Q:
Center a block of content when you don't know its width in advance
After lots of attempts and search I have never found a satisfactory way to do it with CSS2.
A simple way to accomplish it is to wrap it into a handy <table> as shown in the sample below. Do you know how to do it avoiding table layouts and also avoiding quirky tricks?
table {
margin: 0 auto;
}
<table>
<tr>
<td>test<br/>test</td>
</tr>
</table>
What I want to know is how to do it without a fixed width while still keeping the element a block.
A:
@Jason, yep, <center> works. Good times. I'll propose the following, though:
body {
text-align: center;
}
.my-centered-content {
margin: 0 auto; /* Centering */
display: inline;
}
<div class="my-centered-content">
<p>test</p>
<p>test</p>
</div>
EDIT @Santi, a block-level element will fill the width of the parent container, so it will effectively be width:100% and the text will flow on the left, leaving you with useless markup and an uncentered element. You might want to try display: inline-block;. Firefox might complain, but it's right. Also, try adding a border: solid red 1px; to the CSS of the .my-centered-content DIV to see what's happening as you try these things out.
A:
This is going to be the lamest answer, but it works:
Use the deprecated <center> tag.
:P
I told you it would be lame. But, like I said, it works!
*shudder*
A:
I think that your example would work just as well if you used a <div> instead of a <table>. The only difference is that the text in the <table> is also centered. If you want that too, just add the text-align: center; rule.
Another thing to keep in mind is that the <div> will by default fill up all the available horizontal space. Put a border on it if you aren't sure where it starts and ends.
A:
The following works well enough. Note the position, and the use of auto:
<div style="border: 1px solid black;
width: 300px;
height: 300px;">
<div style="width: 150px;
height: 150px;
background-color: blue;
position: relative;
left: auto;
right: auto;
margin-right: auto;
margin-left: auto;">
</div>
</div>
NOTE: not sure if it works in IE.
A:
#wrapper {
width: 100%;
border: 1px solid #333;
}
#content {
width: 200px;
background: #0f0;
}
<div id="wrapper" align="center">
<div id="content" align="left"> Content Here </div>
</div>
A:
In FF3, you can:
<div style="display: table; margin: 0px auto 0 auto;">test<br>test</div>
This has the advantage of using whatever element makes most semantic sense (replace the div with something better, if appropriate), but the disadvantage that it fails in IE (grr...)
Other than that, without setting the width, your best bet is to use javascript to precisely position the left-hand edge. I'm not sure if you'd class that as a 'quirky trick', though.
It really depends on what you want to do, of course. Given your simple test case, a div with text-align: center would have exactly the same effect.
|
Center a block of content when you don't know its width in advance
|
After lots of attempts and search I have never found a satisfactory way to do it with CSS2.
A simple way to accomplish it is to wrap it into a handy <table> as shown in the sample below. Do you know how to do it avoiding table layouts and also avoiding quirky tricks?
table {
margin: 0 auto;
}
<table>
<tr>
<td>test<br/>test</td>
</tr>
</table>
What I want to know is how to do it without a fixed width while still keeping the element a block.
|
[
"@Jason, yep, <center> works. Good times. I'll propose the following, though:\n\n\nbody {\r\n text-align: center;\r\n}\r\n\r\n.my-centered-content {\r\n margin: 0 auto; /* Centering */\r\n display: inline;\r\n}\n<div class=\"my-centered-content\">\r\n <p>test</p>\r\n <p>test</p>\r\n</div>\n\n\n\nEDIT @Santi, a block-level element will fill the width of the parent container, so it will effectively be width:100% and the text will flow on the left, leaving you with useless markup and an uncentered element. You might want to try display: inline-block;. Firefox might complain, but it's right. Also, try adding a border: solid red 1px; to the CSS of the .my-centered-content DIV to see what's happening as you try these things out.\n",
"This is going to be the lamest answer, but it works:\nUse the deprecated <center> tag.\n:P\nI told you it would be lame. But, like I said, it works!\n*shudder*\n",
"I think that your example would work just as well if you used a <div> instead of a <table>. The only difference is that the text in the <table> is also centered. If you want that too, just add the text-align: center; rule.\nAnother thing to keep in mind is that the <div> will by default fill up all the available horizontal space. Put a border on it if you aren't sure where it starts and ends.\n",
"The following works well enough. note the position, and the use of auto\n<div style=\"border: 1px solid black; \n width: 300px; \n height: 300px;\">\n <div style=\"width: 150px; \n height: 150px; \n background-color: blue;\n position: relative;\n left: auto;\n right: auto;\n margin-right: auto;\n margin-left: auto;\">\n </div>\n</div>\n\nNOTE: not sure if it works in IE.\n",
"\n\n#wrapper {\r\n width: 100%;\r\n border: 1px solid #333;\r\n}\r\n#content {\r\n width: 200px;\r\n background: #0f0;\r\n}\n<div id=\"wrapper\" align=\"center\">\r\n <div id=\"content\" align=\"left\"> Content Here </div>\r\n</div>\n\n\n\n",
"In FF3, you can:\n<div style=\"display: table; margin: 0px auto 0 auto;\">test<br>test</div>\n\nThis has the advantage of using whatever element makes most semantic sense (replace the div with something better, if appropriate), but the disadvantage that it fails in IE (grr...)\nOther than that, without setting the width, your best bet is to use javascript to precisely position the left-hand edge. I'm not sure if you'd class that as a 'quirky trick', though.\nIt really depends on what you want to do, of course. Given your simple test case, a div with text-align: center would have exactly the same effect.\n"
] |
[
9,
4,
1,
0,
0,
0
] |
[] |
[] |
[
"css",
"html"
] |
stackoverflow_0000046350_css_html.txt
|
Q:
CSS - Placement of a div in the lower left-hand corner
I wish I were a CSS smarty ....
How can you place a div container in the lower left-hand corner of the web page, taking into account the user's scroll position?
A:
To position an element relative to the "viewport" (the window or frame it's in), and have it ignore how that viewport is scrolled, you can use the position: fixed; property value (MDN documentation). This has been supported by every browser since Internet Explorer 7.
To position the element at the bottom-left of the window, we need to also specify that it should be positioned at 0 distance from the bottom and left:
position: fixed;
bottom: 0;
left: 0;
Full Example
.bottom-left {
position: fixed;
bottom: 0;
left: 0;
}
.alert {
border: 2px solid red;
background: white;
font-weight: bold;
padding: 1em;
}
<div class="bottom-left alert">
Look at me!
</div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam dignissim diam arcu, a gravida justo malesuada et. Fusce iaculis, dui laoreet ultricies congue, arcu lectus rhoncus neque, ut molestie magna augue ut neque. Duis in feugiat ipsum, et imperdiet nunc. Cras convallis lorem eu diam malesuada malesuada. Nunc dapibus suscipit ligula, vel mattis eros blandit id. In placerat justo vitae pretium fermentum. Proin ac erat commodo nibh ullamcorper feugiat. Nulla ultricies maximus massa, non semper dolor malesuada vel. Nullam sem justo, bibendum vel tempus pharetra, gravida vel sapien. Morbi facilisis tristique mauris vel elementum. Ut porttitor egestas metus eget auctor. Phasellus efficitur rutrum massa nec fringilla. Aliquam et imperdiet leo. Sed tincidunt hendrerit tortor eget tempor.</p>
<p>Sed vel dolor lectus. Nulla sed blandit lacus. Mauris ac magna nec libero vehicula aliquet id a libero. Vivamus sed lobortis velit. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Sed at feugiat sapien, ut commodo mi. Quisque scelerisque maximus efficitur. In ultrices, magna eu semper pellentesque, tellus odio hendrerit augue, ut porta sapien lacus quis odio.</p>
<p>Duis sodales, dui a condimentum imperdiet, tellus est laoreet velit, a viverra risus libero sed urna. Phasellus sollicitudin tincidunt viverra. Proin vulputate leo at justo auctor feugiat. Nam auctor, mauris at commodo tempus, eros diam varius ligula, vitae efficitur massa lectus et enim. Integer tristique nibh in lacus condimentum, et interdum urna mollis. Aenean id risus tristique, volutpat dolor sed, fermentum ex. Interdum et malesuada fames ac ante ipsum primis in faucibus. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nullam velit nibh, elementum at orci quis, tempor fermentum tellus. Nunc facilisis nisi at leo auctor aliquet. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Aliquam tempor ipsum vel scelerisque tincidunt. Etiam vulputate auctor ante, in tristique est congue ut. Vestibulum maximus nibh vestibulum tristique ullamcorper. Phasellus eu eleifend ante, nec efficitur nulla.</p>
<p>Nunc pulvinar purus id arcu egestas, sed iaculis nisl finibus. Sed cursus bibendum tortor, id cursus lacus euismod in. Nam lacinia, sapien faucibus dapibus varius, neque velit fringilla est, in porta quam sem sit amet ligula. Aliquam ornare est ac pellentesque suscipit. Curabitur eleifend convallis sem, volutpat efficitur erat laoreet id. Maecenas interdum ante in lectus varius, lobortis auctor quam rutrum. Nullam tristique felis quis lectus luctus gravida. Cras porttitor pellentesque nibh. Fusce placerat vehicula commodo. Mauris vel lectus viverra sem consectetur sagittis quis vel lectus. Quisque vel dapibus augue. Sed lacinia massa quis dui sodales faucibus.</p>
<p>Donec sagittis, dolor sed fermentum dapibus, justo ipsum porttitor purus, sed fermentum mi nulla non lorem. Praesent aliquet iaculis molestie. Phasellus enim nunc, vestibulum non odio vel, porta imperdiet lorem. Morbi laoreet felis a ipsum elementum sollicitudin. Morbi varius mollis ex, a posuere lorem fringilla ac. Curabitur metus ligula, mollis quis diam eu, pulvinar placerat libero. Aenean vestibulum lacinia diam in facilisis. Praesent egestas sapien a est consequat facilisis. Nulla id mauris a metus venenatis pellentesque. Praesent justo augue, efficitur ac vulputate et, luctus at elit. Proin quis urna quam. Pellentesque iaculis, felis sed hendrerit venenatis, purus augue venenatis tellus, a posuere justo tellus at ex. Donec et arcu non arcu scelerisque efficitur nec sed dolor. Sed eget lacus enim. Donec sodales mollis condimentum.</p>
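One caveat worth noting: a fixed element is taken out of the normal flow, so page content can scroll underneath it. A minimal sketch of one way to compensate, assuming the fixed box is about 4em tall (a hypothetical value; measure your actual element):
body {
  padding-bottom: 4em; /* roughly the height of the fixed box, so the last line of content can scroll clear of it */
}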
|
CSS - Placement of a div in the lower left-hand corner
|
I wish I were a CSS smarty ....
How can you place a div container in the lower left-hand corner of the web page, taking into account the user's scroll position?
|
[
"To position an element relative to the \"viewport\" (the window or frame it's in), and have it ignore how that viewport is scrolled, you can use the position: fixed; property value (MDN documentation). This has been supported by every browser since Internet Explorer 7.\nTo position the element at the bottom-left of the window, we need to also specify that it should be positioned at 0 distance from the bottom and left:\nposition: fixed;\nbottom: 0;\nleft: 0;\n\nFull Example\n\n\n.bottom-left {\r\n position: fixed;\r\n bottom: 0;\r\n left: 0;\r\n}\r\n\r\n.alert {\r\n border: 2px solid red;\r\n background: white;\r\n font-weight: bold;\r\n padding: 1em;\r\n}\n<div class=\"bottom-left alert\">\r\n Look at me!\r\n</div>\r\n\r\n<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam dignissim diam arcu, a gravida justo malesuada et. Fusce iaculis, dui laoreet ultricies congue, arcu lectus rhoncus neque, ut molestie magna augue ut neque. Duis in feugiat ipsum, et imperdiet nunc. Cras convallis lorem eu diam malesuada malesuada. Nunc dapibus suscipit ligula, vel mattis eros blandit id. In placerat justo vitae pretium fermentum. Proin ac erat commodo nibh ullamcorper feugiat. Nulla ultricies maximus massa, non semper dolor malesuada vel. Nullam sem justo, bibendum vel tempus pharetra, gravida vel sapien. Morbi facilisis tristique mauris vel elementum. Ut porttitor egestas metus eget auctor. Phasellus efficitur rutrum massa nec fringilla. Aliquam et imperdiet leo. Sed tincidunt hendrerit tortor eget tempor.</p>\r\n\r\n<p>Sed vel dolor lectus. Nulla sed blandit lacus. Mauris ac magna nec libero vehicula aliquet id a libero. Vivamus sed lobortis velit. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Sed at feugiat sapien, ut commodo mi. Quisque scelerisque maximus efficitur. In ultrices, magna eu semper pellentesque, tellus odio hendrerit augue, ut porta sapien lacus quis odio.</p>\r\n\r\n<p>Duis sodales, dui a condimentum imperdiet, tellus est laoreet velit, a viverra risus libero sed urna. Phasellus sollicitudin tincidunt viverra. Proin vulputate leo at justo auctor feugiat. Nam auctor, mauris at commodo tempus, eros diam varius ligula, vitae efficitur massa lectus et enim. Integer tristique nibh in lacus condimentum, et interdum urna mollis. Aenean id risus tristique, volutpat dolor sed, fermentum ex. Interdum et malesuada fames ac ante ipsum primis in faucibus. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nullam velit nibh, elementum at orci quis, tempor fermentum tellus. Nunc facilisis nisi at leo auctor aliquet. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Aliquam tempor ipsum vel scelerisque tincidunt. Etiam vulputate auctor ante, in tristique est congue ut. Vestibulum maximus nibh vestibulum tristique ullamcorper. Phasellus eu eleifend ante, nec efficitur nulla.</p>\r\n\r\n<p>Nunc pulvinar purus id arcu egestas, sed iaculis nisl finibus. Sed cursus bibendum tortor, id cursus lacus euismod in. Nam lacinia, sapien faucibus dapibus varius, neque velit fringilla est, in porta quam sem sit amet ligula. Aliquam ornare est ac pellentesque suscipit. Curabitur eleifend convallis sem, volutpat efficitur erat laoreet id. Maecenas interdum ante in lectus varius, lobortis auctor quam rutrum. Nullam tristique felis quis lectus luctus gravida. Cras porttitor pellentesque nibh. Fusce placerat vehicula commodo. 
Mauris vel lectus viverra sem consectetur sagittis quis vel lectus. Quisque vel dapibus augue. Sed lacinia massa quis dui sodales faucibus.</p>\r\n\r\n<p>Donec sagittis, dolor sed fermentum dapibus, justo ipsum porttitor purus, sed fermentum mi nulla non lorem. Praesent aliquet iaculis molestie. Phasellus enim nunc, vestibulum non odio vel, porta imperdiet lorem. Morbi laoreet felis a ipsum elementum sollicitudin. Morbi varius mollis ex, a posuere lorem fringilla ac. Curabitur metus ligula, mollis quis diam eu, pulvinar placerat libero. Aenean vestibulum lacinia diam in facilisis. Praesent egestas sapien a est consequat facilisis. Nulla id mauris a metus venenatis pellentesque. Praesent justo augue, efficitur ac vulputate et, luctus at elit. Proin quis urna quam. Pellentesque iaculis, felis sed hendrerit venenatis, purus augue venenatis tellus, a posuere justo tellus at ex. Donec et arcu non arcu scelerisque efficitur nec sed dolor. Sed eget lacus enim. Donec sodales mollis condimentum.</p>\n\n\n\n"
] |
[
42
] |
[] |
[] |
[
"css",
"html",
"positioning"
] |
stackoverflow_0000050430_css_html_positioning.txt
|