Q:
Real-world problems with naive shuffling
I'm writing a number of articles meant to teach beginning programming concepts through the use of poker-related topics. Currently, I'm working on the subject of shuffling.
As Jeff Atwood points out on CodingHorror.com, one simple shuffling method (iterating through an array and swapping each card with a random card elsewhere in the array) creates an uneven distribution of permutations. In an actual application, I would just use the Knuth Fisher-Yates shuffle for more uniform randomness. But, I don't want to bog down an explanation of programming concepts with the much less coder-friendly algorithm.
This leads to the question: Just how much of an advantage would a black-hat have if they knew you were using a naive shuffle of a 52-card deck? It seems like it would be infinitesimally small.
A:
The Knuth shuffle is an insignificant change compared to the naive shuffle: just swap with any card in the remaining (unshuffled) section of the deck instead of anywhere in the entire deck. If you think of it as repeatedly choosing the next card in order from the remaining unchosen cards, it's pretty intuitive, too.
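To make the difference concrete, here is a minimal sketch in Python (illustrative only; the function names are mine):
import random

def naive_shuffle(deck):
    # Biased: every card may swap with any position, giving n**n equally
    # likely swap sequences spread over n! permutations - since n**n is not
    # divisible by n! (for n > 2), some orderings must come up more often.
    for i in range(len(deck)):
        j = random.randrange(len(deck))
        deck[i], deck[j] = deck[j], deck[i]

def knuth_shuffle(deck):
    # Unbiased (Fisher-Yates): position i only swaps with the unshuffled
    # tail, i.e. we repeatedly pick the next card from the remaining ones.
    for i in range(len(deck) - 1):
        j = random.randrange(i, len(deck))
        deck[i], deck[j] = deck[j], deck[i]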
Personally, I think teaching students a poor algorithm when the proper one is no more complicated (and easier to visualise!) is a bad approach.
A:
It turns out the advantage is quite significant. Check out this article
Part of the problem is the flawed algorithm, but another part is the assumption that you can get "random" numbers from a computer.
A:
A simple & fair algorithm for shuffling would be to assign a random floating-point number (e.g., between 0 and 1) to each card in the deck, then sort the deck by the assigned numbers.
This is actually a perfect example for students to realize that just because something is intuitive (the naive shuffle, in our case) doesn't mean it's correct.
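A sketch of that idea in Python; note that sorted() calls the key function exactly once per card, so each card gets one fixed random tag (comparing with fresh random numbers on every comparison would not be a fair shuffle):
import random

def sort_shuffle(deck):
    # Tag each card with a random float drawn once, then order by the tags.
    return sorted(deck, key=lambda card: random.random())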
A:
Just as an aside, there was a blog post over on ITtoolbox about shuffling that may be of interest when it comes to simulating a shuffle.
As to your question, consider that there are 52! deck configurations you could start from, and the starting order plays a role in where things land; in Jeff's example of the 3-card deck, note that in the over-represented permutations the 1 occurs in each slot once. Also note that he says you'd need a few thousand samples before it becomes apparent where the advantage is, but with a real deck you aren't likely to start again from the exact same initial deck, are you? You'd take the dealt cards, put them on the bottom, and shuffle them, which isn't likely to repeat, I'd think.
A:
It's not like you're writing a poker program that will be used for an actual online gambling site. An ability for someone to cheat at the program isn't a big deal when you're teaching people how to program.
Leave a note saying that this is a poor model of the real world (with a reference to it as a possible security flaw), and just keep going with the teaching.
Q:
How do you get the response from the Request object in MooTools?
How do you access the response from the Request object in MooTools? I've been looking at the documentation and the MooTorial, but I can't seem to make any headway. Other Ajax stuff I've done with MooTools I haven't had to manipulate the response at all, so I've just been able to inject it straight into the document, but now I need to make some changes to it first. I don't want to alert the response, I'd like to access it so I can make further changes to it. Any help would be greatly appreciated.
Thanks.
Edit:
I'd like to be able to access the response after the request has already been made, preferably outside of the Request object. It's for an RSS reader, so I need to do some parsing, and Request is just being used to get the feed from a server file. This function is a method in a class, which should return the response as a string, but it isn't returning anything but undefined:
fetch: function(site){
    var feed;
    var req = new Request({
        method: this.options.method,
        url: this.options.rssFetchPath,
        data: { 'url': site },
        onRequest: function() {
            if (this.options.targetId) {
                $(this.options.targetId).setProperty('html', this.options.onRequestMessage);
            }
        }.bind(this),
        onSuccess: function(responseText) {
            feed = responseText;
        }
    });
    req.send();
    return feed;
}
A:
The response content is passed to the anonymous function defined in onComplete, and it can be accessed from there. Note also that the request is asynchronous: in your fetch method, return feed runs before onSuccess has fired, which is why you get undefined back.
var req = new Request({
method: 'get',
url: ...,
data: ...,
onRequest: function() { alert('Request made. Please wait...'); },
// the response is passed to the callback as the first parameter
onComplete: function(response) { alert('Response: ' + response); }
}).send();
A:
I was able to find my answer on the MooTools Group at Google.
Q:
Is there a way to purge some files from the history of git?
I have migrated a couple of projects from Subversion to git. It works really well, but when I clone my repository it takes a long time because all the history of a lot of .jar files is included in the transfer.
Is there a way to keep only the latest version of certain types of files in my main repository? I mainly want to delete old versions of binary files.
A:
You can remove old versions with either git rebase -i or git filter-branch:
http://schacon.github.com/git/git-filter-branch.html
http://schacon.github.com/git/git-rebase.html
Other docs and tutorials:
http://git-scm.com/documentation
Keeping only the current version from now forward is not supported. Your best bet is
to instead keep in revision control a small script that downloads (or builds, or otherwise generates) the large .jar file.
As this modifies history, it will make all previous clones or pulls from this repository invalid.
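For illustration, a common filter-branch recipe for stripping every .jar from all history looks something like this (standard git commands, but they rewrite every ref, so experiment on a scratch clone first):
git filter-branch --index-filter "git rm --cached --ignore-unmatch '*.jar'" -- --all
rm -rf .git/refs/original
git reflog expire --expire=now --all
git gc --prune=now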
A:
In short, this would involve rewriting the entire git commit tree to exclude the files.
Have you tried using git gc and git repack to have git compress your repository?
Q:
MySQL slow query log - how slow is slow?
What do you find is the optimal setting for the MySQL slow query log parameter, and why?
A:
I recommend these three lines
log_slow_queries
set-variable = long_query_time=1
log-queries-not-using-indexes
The first and second will log any query over a second. As others have pointed out, a one second query is pretty far gone if you are shooting for a high transaction rate on your website, but I find that it turns up some real WTFs; queries that should be fast, but for whatever combination of data they were run against were not.
The last will log any query that does not use an index. Unless you're doing data warehousing, any common query should have the best index you can find, so pay attention to its output.
Although it's certainly not for production, this last option
log = /var/log/mysql/mysql.log
will log all queries, which can be useful if you are trying to tune a specific page or action.
A:
Whatever time /you/ feel is unacceptably slow for a query on your systems.
It depends on the kind of queries you run and the kind of system; a query taking several seconds might not matter if it's some back-end reporting system doing complex data-mining etc where a delay doesn't matter, but might be completely unacceptable on a user-facing system which is expected to return results promptly.
A:
Set it to whatever you like. The only problem is that in a stock MySQL, it can only be set in increments of 1 second, which is too slow for some people.
Most heavily used production servers execute far too many queries to log them all. The slow log is a way of filtering the log so that we can see the ones which take a long time (most queries are likely to be executed almost instantly). It's a bit of a blunt instrument.
Set it to 1 sec if you like, you're probably not going to run out of disc space or create a performance problem by doing that.
It's really about the risk of enabling the slow log- don't do it if you feel it's likely to cause further disc or performance problems.
Of course you could enable the slow log on a non-production server and put simulated load through, but that is never quite the same.
A:
Peter Zaitsev posted a nice article about using the slow query log. One thing he notes as important is to also consider how often a certain query is used. Reports run once a day are not important to be fast. But something that is run very often might be a problem even if it takes half a second. And you can't detect that without the microslow patch.
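One version note worth checking against your own server (an assumption for older builds): MySQL 5.1 and later accept fractional values for this setting, which removes the need for the microslow patch there, e.g. in my.cnf:
long_query_time = 0.5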
A:
Not only is it a blunt instrument as far as resolution is concerned, but also it is MySQL-instance wide, so that if you have different databases with differing performance requirements you're kind of out of luck. Obviously there are ways around that, but it's important to keep that in mind when configuring your slow log.
Aside from performance requirements of your application, another factor to consider is what you're trying to log. Are you using the log to catch queries that would threaten the stability of your db instance (ones that cause deadlocks or Cartesian joins, for instance) or queries that affect the performance for specific users and that might require a little tuning? That will influence where you set your threshold.
Q:
Nuking huge file in svn repository
As the local Subversion czar, I explain to everyone to keep only source code and non-huge text files in the repository, not huge binary data files. Smaller binary files that are parts of tests, maybe.
Unfortunately I work with humans! Someone is likely to someday accidentally commit an 800MB binary hulk. This slows down repository operations.
Last time i checked, you can't delete a file from the repository; only make it not part of the latest revision. The repository keeps the monster for all eternity, in case anyone ever wants to recall the state of the repository for that date or revision number.
Is there a way to really delete that monster file and end up with a decent sized repository? I've tried the svnadmin dump/load thing but it was a pain.
A:
To permanently delete monster files from a svn repository, there is no other solution than using svnadmin dump/load. (SVN Book: dump command)
To prevent huge files from being committed, a hook script can be used. You could have, for example, a script that ran "pre-commit" whenever someone tried to commit to the repository. The script might check filesize, or filetype, and reject the commit if it contained a file or files that were too large, or of a "forbidden" type.
More typical uses of hook scripts are to check (pre-commit) that a commit contains a log message, or (post-commit) to email details of the commit or to update a website with the newly committed files.
A hook script is a script that runs in response to repository events (SVN Book: Create hooks).
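As a sketch of such a hook in Python (my own illustration, not from the SVN Book; it assumes a Unix server with svnlook on the PATH, and svnlook filesize, which needs a reasonably recent Subversion), hooks/pre-commit might look like:
#!/usr/bin/env python
# Hypothetical pre-commit hook: reject any transaction containing a file
# larger than MAX_BYTES. Subversion invokes hooks as: pre-commit REPOS TXN.
import subprocess
import sys

MAX_BYTES = 10 * 1024 * 1024

def main(repos, txn):
    changed = subprocess.check_output(
        ['svnlook', 'changed', '-t', txn, repos]).decode('utf-8')
    for line in changed.splitlines():
        action, path = line.split(None, 1)  # e.g. "A   trunk/big.jar"
        if action.startswith('D') or path.endswith('/'):
            continue  # skip deletions and directories
        size = int(subprocess.check_output(
            ['svnlook', 'filesize', '-t', txn, repos, path]))
        if size > MAX_BYTES:
            sys.stderr.write('%s is %d bytes (limit %d)\n'
                             % (path, size, MAX_BYTES))
            return 1
    return 0

if __name__ == '__main__':
    sys.exit(main(sys.argv[1], sys.argv[2]))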
A:
Some extra info about this can be found at the blog post: Subversion Obliterate, the missing feature
Be sure to read through the comments too, where Karl Fogel puts the article into perspective :-)
A:
If you can catch it as soon as it's committed, the svnadmin dump/load technique isn't too painful. Suppose someone just accidentally committed gormundous-raw-image.psd in Revision 3849. You could do this:
svnadmin dump /var/repos -r 1:3848 > ~/repos_dump
That would create a dump file containing everything up to and including Revision 3848. At that point, you could use svnadmin create and svnadmin load to reconstitute the repository without the offending commit, the caveat being that any changes you made within the repository's directory structure--hooks, symlinks, permission changes, auth files, etc.--would need to be copied over from the old directory. Here's an example of the rest of the bash session you might use to complete the operation:
svnadmin create /var/repos-new
svnadmin load /var/repos-new < ~/repos_dump
cp -r /var/repos/conf /var/repos-new
cp -r /var/repos/hooks /var/repos-new
mv /var/repos{,-old} && mv /var/repos-new /var/repos
I'm sure this will be more painful the more history your repository has, but it does work.
A:
Once you've removed the file from your HEAD revision, it doesn't slow down operations, as only deltas between revisions are handled.
(Repository backups must of course handle the load).
Q:
MOSS SSP Issue - Failed database logons from deleted SSP
We've been having some issues with a SharePoint instance in a test environment. Thankfully this is not production ;) The problems started when the disk with the SQL Server databases and search index ran out of space. Following this, the search service would not run and search settings in the SSP were not accessible. Reclaiming the disk space did not resolve the issue. So rather than restoring the VM, we decided to try to fix the issue.
We created a new SSP and changed the association of all services to the new SSP. The old SSP and its databases were then deleted. Search results for PDF files are no longer appearing, but the search works fine otherwise. MySites also works OK.
Following the implementation of this change, these problems occur:
1) An audit failure message started appearing in the application event log, for 'DOMAIN\SPMOSSSvc' which is the MOSS farm account.
Event Type: Failure Audit
Event Source: MSSQLSERVER
Event Category: (4)
Event ID: 18456
Date: 8/5/2008
Time: 3:55:19 PM
User: DOMAIN\SPMOSSSvc
Computer: dastest01
Description:
Login failed for user 'DOMAIN\SPMOSSSvc'. [CLIENT: <local machine>]
2) SQL Server Profiler is showing queries from SharePoint that reference the old (deleted) SSP database.
So...
Where would these references to DOMAIN\SPMOSSSvc and the old SSP database exist?
Is there a way to 'completely' remove the SSP from the server, and re-create it? The option to delete was not available (greyed out) when a single SSP is in place.
A:
As Daniel McPherson said, this is caused when SSPs are deleted but the associated jobs are not, and those jobs attempt to communicate with the deleted database. If the SSP database has been deleted or a problem occurred when deleting an SSP, the job may not be deleted. When the job attempts to run, it will fail since the database no longer exists.
Follow the steps Daniel mentioned:
1. Go to SQL Server Management Studio.
2. Disable the job called SSPNAME_JobDeleteExpiredSessions: right-click it and choose Disable Job.
A:
I suspect these are related to the SQL Server Agent trying to login to a database that no longer exists.
To clear it up you need to:
1. Go to SQL Server Management Studio
2. Disable the job called <database name>_job_deleteExpiredSessions
If that works, then you should be all clear to delete it.
A:
Have you tried removing the SSP using the command line? I found this worked once when we had a broken SSP and just wanted to get rid of it.
The command is:
stsadm.exe -o deletessp -title <sspname> [-deletedatabases]
The deletedatabases switch is optional.
Also, check in Central Administration under Job Definitions and Job Schedules to ensure no SSP related jobs are still running
Q:
Delphi and COM: TLB and maintenance issues
In the company where I work, we develop all the GUI in C#, but the application kernel is mainly developed in Delphi 5 (for historical reasons), with a lot of components made in COM+. Related to this very specific sort of application, I have two questions:
Experienced guys in Delphi and/or COM: do you have any workarounds for the buggy TLB interface?
Some of the bugs are: IDE crashing while editing a large TLB, loss of method IDs, TLB corruption, etc.
Here, we haven't found any good solution. Actually, we tried to upgrade to the new 2007 version, but the new IDE TLB interface has the same bugs that we found before.
How do you control TLB versions? The TLB file is in a binary format and conflict resolutions are very hard to do. We tried exporting the interface descriptions to IDL and committing them into CVS, but we didn't find any good way to generate TLBs from IDL using Delphi. Additionally, the MIDL tool provided by Microsoft didn't correctly parse the IDL files that we exported from Delphi.
A:
I think you should have a good look at Delphi 2009.
Delphi 2009 has changes to the COM support, including a text-based replacement for the binary TLB files.
You can read more on Chris Bensen's blog.
A:
In the distant past (before I started working for CodeGear) I gave up on the odd Delphi-ized IDL language that the IDE presented, and wrote my own IDL and compiled it using MS midl. This largely worked; the only catch, IIRC, was making sure dispids (id attribute) were correct on automation interfaces (dispinterfaces) for property getters & setters - there was some invariant that tlibimp expected but midl didn't guarantee.
However, now that Delphi 2009 uses a safe subset of midl syntax, and includes a compiler for this midl in the box and integrated into the IDE, these problems should be a thing of the past.
A:
We have also just installed Delphi 2009 and it does seem to have improved the support for Typelibraries. However I have worked with COM and type libraries for quite some time and here are my general gotchas that I have found over the years. I would agree its pretty buggy and is all the way up to Delphi 2006 (our version prior to using 2009).
Always have every file writeable before opening. This may sound obvious, but when working with source control sometimes we forget to do this and try to remove the read-only flag after opening a file - Delphi can't deal with this. Ensure the TLB is writable before opening.
If editing a standalone typelibrary you MUST have a project open. For some reason if you open a type library on its own it will not save. Create a blank project and then open your typelibrary. For some reason this allows the type library to be saved.
If your type library is used by an application or COM+ ensure that application is shut down or COM+ disabled before opening the type library. Any open apps will prevent the type library from being saved.
However I think your best solution is probably an upgrade. You get Unicode support too.
A:
Using Delphi 2009 has taken much of the pain out of huge TLB files, and conversion of our existing objects was painless, but our COM objects don't use any third-party libraries.
We will be migrating our gui applications over once the library vendors release supported versions.
A:
Same experience with the TLB interface here: we simply stopped using it.
We work with several separate IDL files (hand-built) for different parts of our framework, making use of the #include construct to include them into the IDL of the actual application, then generate the single tlb using MIDL and tlibimp it. If the application has no IDL of its own, pre-compiled versions of the different framework TLB files are available.
Whenever the framework enters a new version, a script is run to re-generate the GUIDS on all necessary interfaces in the IDL files.
This has served us well for many years, and for us to move over the new Delphi 2009 IDL/TLB toolset will have to be not only integrated into the IDE, but also versatile when it comes to automated builds and whatnot. Can't wait to get my hands dirty with some experiments!
Q:
How do I capture an asterisk on the form's KeyUp event? OR, How do I get a KeyChar on the KeyUp event?
I'm trying to hijack an asterisk with the form's KeyUp event. I can get the SHIFT key and the D8 key on the KeyUp event, but I can't get the * out of it. I can find it easily in the KeyPress event (e.KeyChar = "*"c), but company standards say that we have to use the KeyUp event for all such occasions. Thanks!
A:
Cache the charcode on KeyPress and then respond to KeyUp. There are other key combinations that will generate the asterisk, especially if you're facing international users who may have different keyboard layouts, so you can't rely on the KeyUp to give you the information you need.
Q:
LINQ FormatException
I currently have an existing database and I am using the LINQtoSQL generator tool to create the classes for me. The tool is working fine for this database and there are no errors with that tool.
When I run a LINQ to SQL query against the data, there is a row that has some invalid data somehow within the table, and it throws a System.FormatException when it runs across this row. Does anyone know what that stems from? Does anyone know how I can narrow down the offending column without adding them one by one to the select clause?
A:
Do you have a varchar(1) that stores an empty string?
You need to change the type from char to string in the designer (or somehow prohibit empties). The .net char type cannot hold an empty string.
Q:
Must an SMTP client provide the MTA a globally resolvable hostname in the HELO?
In short: I'm trying to figure out if I should tell a mail administrator of a friend's employer whether their mail configuration should be fixed, or if I should revise my own policy to be more liberal in what I accept, or neither.
A friend was complaining of being unable to reach anything on my mailserver. I dug into it and it seems that the hostname provided by his mail server when it connected to mine was somewhere in the *.local space, meaning it wasn't globally resolvable.
They were rejected with "Helo command rejected: Host not found;" by my postfix mailserver. I'm perhaps strict on my UCE checks in postfix, so I whitelisted their (in my opinion, misconfigured) server but now I'm trying to figure out to what extent they actually are misconfigured, versus whether I'm just being too harsh in what I accept.
So then I checked the RFCs - RFC 821 says "The HELO receiver MAY verify that the HELO parameter really corresponds to the IP address of the sender. However, the receiver MUST NOT refuse to accept a message, even if the sender's HELO command fails verification." which suggests to me that I'm actually the one violating the RFC.
Was this portion of RFC 821 ever replaced by a future RFC, that I can point to? Or must mail servers accept mail with bogus HELOs? Are there any well respected authorities I can point to that state the HELO hostname should be valid, as a reference for contacting their mail admin?
A:
Strictly, you're both violating the RFC.
The sections of note are:
The sender-SMTP MUST ensure that the parameter in a HELO command is a valid principal host domain name for the client host.
and
The HELO receiver MAY verify that the HELO parameter really corresponds to the IP address of the sender. However, the receiver MUST NOT refuse to accept a message, even if the sender's HELO command fails verification.
Due to the prevalence of spam, mailservers these days are considerably stricter than the RFCs say they should be, and it is common to find all sorts of proprietary checks and reasons for rejection.
However, they're not doing themselves any favours at all by having an incorrect hostname in their HELO string. Whereas your mailserver will probably work perfectly well, theirs is likely to have trouble sending and receiving email from many systems.
I would let them know. If only because of their misconfiguration, they're probably not getting all the email they should be.
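For reference, the behaviour described in the question corresponds to Postfix's HELO restrictions. A strict setup looks something like the following main.cf fragment (restriction names as in Postfix 2.3+; treat this as a sketch rather than a drop-in config):
smtpd_helo_required = yes
smtpd_helo_restrictions =
    permit_mynetworks,
    reject_invalid_helo_hostname,
    reject_unknown_helo_hostname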
A:
As you cite, the RFC 821 standard leaves the behavior up to the MTA. These days, rejecting connections at the HELO stage if the name can't be resolved (and checking it against a blacklist such as Spamhaus) is the only way for MTAs to keep up with the flood of spam generated by botnets.
So there's no standard that says you MUST, but if you don't, your email won't get very far.
A:
SMTP RFCs do not require it, but lots of popular systems will reject mail with bogus HELOs. Note that RFC 1033 and RFC 1912 both require all internet-reachable hosts to have a valid name; simply listing that name in the HELO will fix many problems. Some spam filters, unfortunately, also reject mail from hostnames containing strings that imply they are in dynamic address pools (e.g. "dynamic", "dsl", or a dash-separated IP address as is common with many ISPs).
One option if your friend does not have control over their reverse DNS is to use a suitable machine as smarthost for outgoing mail; e.g. their ISP's mailserver.
A:
Yes they should. Lots of other systems, including yahoo, will reject mail from hostnames they can't reverse map to the connecting IP, or that they can't resolve.
A:
Eh, I disagree. It can provide total garbage within the EHLO/HELO if it wants to. As long as it says something, and as long as I can resolve the ip address it's coming from, I'm happy.
Inside the EHLO is often a short hostname, not a FQDN.
Q:
Can I get the calling instance from within a method via reflection/diagnostics?
Is there a way via System.Reflection, System.Diagnostics or other to get a reference to the actual instance that is calling a static method without passing it in to the method itself?
For example, something along these lines
class A
{
    public void DoSomething()
    {
        StaticClass.ExecuteMethod();
    }
}

class B
{
    public void DoSomething()
    {
        SomeOtherClass.ExecuteMethod();
    }
}

public class SomeOtherClass
{
    public static void ExecuteMethod()
    {
        // Returns an instance of A if called from class A
        // or an instance of B if called from class B.
        object caller = getCallingInstance();
    }
}
I can get the type using System.Diagnostics.StackTrace.GetFrames, but is there a way to get a reference to the actual instance?
I am aware of the issues with reflection and performance, as well as static-to-static calls, and that this is generally, perhaps even almost universally, not the right way to approach this. Part of the reason for this question is that I was curious whether it was doable; we are currently passing the instance in.
ExecuteMethod(instance)
And I just wondered if this was possible and still being able to access the instance.
ExecuteMethod()
@Steve Cooper:
I hadn't considered extension methods. Some variation of that might work.
A:
Consider making the method an extension method. Define it as:
public static void StaticExecute(this object instance)
{
// Reference to 'instance'
}
It is called like:
this.StaticExecute();
I can't think of a way to do what you want to do directly, but I can only suggest that if you find something, you watch out for static methods, which won't have one, and anonymous methods, which will have instances of auto-generated classes, which will be a little odd.
I do wonder whether you should just pass the invoking object in as a proper parameter. After all, a static is a hint that this method doesn't depend on anything other than its input parameters. Also note that this method may be a bitch to test, as any test code you write will not have the same invoking object as the running system.
A:
I do not believe you can. Even the StackTrace and StackFrame classes just give you naming information, not access to instances.
I'm not sure exactly why you'd want to do this, but know that even if you could do it it would likely be very slow.
A better solution would be to push the instance to a thread local context before calling ExecuteMethod that you can retrieve within it or just pass the instance.
A:
Just have ExecuteMethod take an object. Then you have the instance no matter what.
A:
In the case of a static method calling your static method, there is no calling instance.
Find a different way to accomplish whatever you are trying to do.
A:
I feel like I'm missing something, here. The static method can be called from literally anywhere. There's no guarantee that a class A or class B instance will appear anywhere in the call stack.
There's got to be a better way to accomplish whatever you're trying to do.
Q:
jQuery create select list options from JSON, not happening as advertised?
How come this doesn't work (operating on an empty select list <select id="requestTypes"></select>)
$(function() {
$.getJSON("/RequestX/GetRequestTypes/", showRequestTypes);
}
);
function showRequestTypes(data, textStatus) {
$.each(data,
function() {
var option = new Option(this.RequestTypeName, this.RequestTypeID);
// Use Jquery to get select list element
var dropdownList = $("#requestTypes");
if ($.browser.msie) {
dropdownList.add(option);
}
else {
dropdownList.add(option, null);
}
}
);
}
But this does:
Replace:
var dropdownList = $("#requestTypes");
With plain old javascript:
var dropdownList = document.getElementById("requestTypes");
A:
$("#requestTypes") returns a jQuery object that contains all the selected elements. You are attempting to call the add() method of an individual element, but instead you are calling the add() method of the jQuery object, which does something very different.
In order to access the DOM element itself, you need to treat the jQuery object as an array and get the first item out of it, by using $("#requestTypes")[0].
A:
By default, jQuery selectors return the jQuery object. Add this to get the DOM element returned:
var dropdownList = $("#requestTypes")[0];
A:
For stuff like this, I use texotela's select box plugin with its simple ajaxAddOption function.
|
jQuery create select list options from JSON, not happening as advertised?
|
How come this doesn't work (operating on an empty select list <select id="requestTypes"></select>)
$(function() {
$.getJSON("/RequestX/GetRequestTypes/", showRequestTypes);
}
);
function showRequestTypes(data, textStatus) {
$.each(data,
function() {
var option = new Option(this.RequestTypeName, this.RequestTypeID);
// Use Jquery to get select list element
var dropdownList = $("#requestTypes");
if ($.browser.msie) {
dropdownList.add(option);
}
else {
dropdownList.add(option, null);
}
}
);
}
But this does:
Replace:
var dropdownList = $("#requestTypes");
With plain old javascript:
var dropdownList = document.getElementById("requestTypes");
|
[
"$(\"#requestTypes\") returns a jQuery object that contains all the selected elements. You are attempting to call the add() method of an individual element, but instead you are calling the add() method of the jQuery object, which does something very different.\nIn order to access the DOM element itself, you need to treat the jQuery object as an array and get the first item out of it, by using $(\"#requestTypes\")[0].\n",
"By default, jQuery selectors return the jQuery object. Add this to get the DOM element returned:\n var dropdownList = $(\"#requestTypes\")[0];\n\n",
"For stuff like this, I use texotela's select box plugin with its simple ajaxAddOption function.\n"
] |
[
15,
9,
4
] |
[] |
[] |
[
"drop_down_menu",
"html_select",
"javascript",
"jquery"
] |
stackoverflow_0000094674_drop_down_menu_html_select_javascript_jquery.txt
|
Q:
ASP.net ACTK DragPanel Extender on PopupControlExtender with UpdatePanel does not drag after partial postback
I have a panel on an aspx page which contains an UpdatePanel.
This panel is wrapped with both a PopUpControl Extender as well as a DragPanel Extender.
Upon initial show everything works fine, the panel pops up and closes as expected and can be dragged around as well.
There is a linkbutton within the UpdatePanel which triggers a partial postback. I originally wanted to use an imagebutton but had a lot of trouble with that, so I ended up using the linkbutton, which works.
Once the partial postback is complete I can no longer drag the panel around.
I would love to hear suggestions on how to fix this.
Has anyone else encountered this problem?
What did you do about it?
Do you know of any other way to accomplish this combination of features without employing other third party libraries?
A:
Take a look at when the drag panel extender and popup control extender actually extend your panel.
Chances are those extenders work on an initialization event of the page. When the update panel fires and updates your page the original DOM element that was extended was replaced by the result of the update panel. Which means that you now have a control that is no longer extended.
I don't really know of an easy solution to this problem. What will probably work is if you can hook into an event after the update panel has updated the page and extend the panel again.
|
ASP.net ACTK DragPanel Extender on PopupControlExtender with UpdatePanel does not drag after partial postback
|
I have a panel on an aspx page which contains an UpdatePanel.
This panel is wrapped with both a PopUpControl Extender as well as a DragPanel Extender.
Upon initial show everything works fine, the panel pops up and closes as expected and can be dragged around as well.
There is a linkbutton within the UpdatePanel which triggers a partial postback. I originally wanted to use an imagebutton but had a lot of trouble with that, so I ended up using the linkbutton, which works.
Once the partial postback is complete I can no longer drag the panel around.
I would love to hear suggestions on how to fix this.
Has anyone else encountered this problem?
What did you do about it?
Do you know of any other way to accomplish this combination of features without employing other third party libraries?
|
[
"Take a look at when the drag panel extender and popup control extender actually extend your panel.\nChances are those extenders work on an initialization event of the page. When the update panel fires and updates your page the original DOM element that was extended was replaced by the result of the update panel. Which means that you now have a control that is no longer extended.\nI don't really know of an easy solution to this problem. What will probably work is if you can hook into an event after the update panel has updated the page and extend the panel again.\n"
] |
[
1
] |
[] |
[] |
[
"asp.net",
"asp.net_ajax",
"webforms"
] |
stackoverflow_0000086479_asp.net_asp.net_ajax_webforms.txt
|
Q:
How do you maintain large t-sql procedures
I'm about to inherit a large and complex set of stored procedures that do monthly processing on very large sets of data.
We are in the process of debugging them so they match the original process, which was written in VB6. The reason they decided to rewrite them in t-sql is because the vb process takes days and this new process takes hours.
All this is fine, but how can I make these now-massive chunks of t-sql code (1.5k+ lines) even remotely readable / maintainable?
Any experience making t-sql less of a headache is very welcome.
A:
First, create a directory full of .sql files and maintain them there. Add this set of .sql files to a revision control system. SVN works well. Have a tool that loads these into your database, overwriting any existing ones.
Have a testing database, and baseline reports showing what the output of the monthly processing should look like. Your tests should also be in the form of .sql files under version control.
You can now refactor your procs as much as you like, and run your tests afterward to confirm correct function.
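A minimal sketch of such a loader (Python with pyodbc here purely for illustration; the connection string, the ./procs directory, and the one-batch-per-file assumption are placeholders, not part of the answer above):
import glob
import pyodbc  # assumes an ODBC driver for your database is installed

# Point this at the testing database; autocommit so each batch applies immediately.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;DATABASE=TestDb;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

# Load every .sql file, overwriting any existing procedure of the same name.
# Assumes each file is a single batch (no GO separators) that drops/recreates its proc.
for path in sorted(glob.glob("./procs/*.sql")):
    with open(path) as f:
        cursor.execute(f.read())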
A:
For formatting/pretty-fying SQL, I've had success with http://www.sqlinform.com/ - free online version you can try out, and a desktop version available too.
SQLinForm is an automatic SQL code formatter for all major databases ( ORACLE, SQL Server, DB2 / UDB, Sybase, Informix, PostgreSQL, MySQL etc) with many formatting options.
A:
Definitely start by reformatting the code, especially indentations.
Then modularise the SQL. Pull out chunks into smaller, descriptively named procedures and functions in their own stand alone files. This alone I find works very well with improving my understanding of large SQL files.
A:
ApexSQLScript is a great tool for scripting out an entire database - you can then check that into source control and manage changes.
I've also found that documenting the sprocs consistently lets you pull out information about them using the data about the source code in sys.sql_modules - you can use tags or whatever to help document subsystems.
Also, use Schemas (or even multiple databases) - this will really help divide up your database into logical units and point out architectural issues.
As far as large code, I've recently found the SQL2005 CTE feature to be very useful in managing code with lots of nested queries (not even recursive). Instead of managing a bunch of nesting and indentation, CTEs can be declared and built up and then used in the final statement. This also helps in refactoring as it seems a lot easier to remove redundant nested queries and columns.
Stored Procs and UDFs are vital for managing a large code base and eliminating dark corners. I have not found views to be terribly helpful because they are not parameterizable (UDFs can be used in these cases if the result sets are small).
A:
Try to modularise the SQL as much as possible and have a set of tests which will enable you to maintain, refactor and add features when needed. I once had the pleasure of inheriting a Stored Proc that totalled 5000 lines and I still have nightmares about it. Once the project was over I printed out the stored proc for a laugh, destroying X trees in the process. During one of our company's weekly stand-up sessions I laid it out end to end and it stretched the entire length of the building. I used this as an example of how not to write and maintain stored procedures.
A:
One thing that you can do is have an automated script to store all changes to source control so that you can review changes to the procedures (using a diff on the previous and current versions)
A:
It's definitely not free, but for keeping your T-SQL formatted in a consistent way, Redgate Software's SQL Prompt is very handy. As long as your proc's syntax is correct, a couple of keystrokes (Ctrl+K,Y) will reformat it all instantly. The options give you a lot of control over how your SQL is formatted.
|
How do you maintain large t-sql procedures
|
I'm about to inherit a large and complex set of stored procedures that do monthly processing on very large sets of data.
We are in the process of debugging them so they match the original process, which was written in VB6. The reason they decided to rewrite them in t-sql is because the vb process takes days and this new process takes hours.
All this is fine, but how can I make these now-massive chunks of t-sql code (1.5k+ lines) even remotely readable / maintainable?
Any experience making t-sql less of a headache is very welcome.
|
[
"First, create a directory full of .sql files and maintain them there. Add this set of .sql files to a revision control system. SVN works well. Have a tool that loads these into your database, overwriting any existing ones.\nHave a testing database, and baseline reports showing what the output of the monthly processing should look like. Your tests should also be in the form of .sql files under version control.\nYou can now refactor your procs as much as you like, and run your tests afterward to confirm correct function.\n",
"For formatting/pretty-fying SQL, I've had success with http://www.sqlinform.com/ - free online version you can try out, and a desktop version available too.\n\nSQLinForm is an automatic SQL code formatter for all major databases ( ORACLE, SQL Server, DB2 / UDB, Sybase, Informix, PostgreSQL, MySQL etc) with many formatting options. \n\n",
"Definately start by reformatting the code, especially indentations.\nThen modularise the SQL. Pull out chunks into smaller, descriptively named procedures and functions in their own stand alone files. This alone I find works very well with improving my understanding of large SQL files.\n",
"ApexSQLScript is a great tool for scripting out an entire database - you can then check that into source control and manage changes.\nI've also found that documenting the sprocs consistently lets you pull out information about them using the data about the source code in sys.sql_modules - you can use tags or whatever to help document subsystems.\nAlso, use Schemas (or even multiple databases) - this will really help divide up your database into logical units and point out architectural issues.\nAs far as large code, I've recently found the SQL2005 CTE feature to be very useful in managing code with lots of nested queries (not even recursive). Instead of managing a bunch of nesting and indentation, CTEs can be declared and built up and then used in the final statement. This also helps in refactoring as it seems a lot easier to remove redundant nested queries and columns.\nStored Procs and UDFs are vital for managing a large code base and eliminating dark corners. I have not found views to be terribly helpful because they are not parameterizable (UDFs can be used in these cases if the result sets are small).\n",
"Try to modularise the SQL as much as possible and have a set of tests which will enable you to maintain, refactor and add features when needed. I once had the pleasure of inheriting a Stored Proc that totalled 5000 lines and I still have nightmares about it. Once the project was over I printed out the stored proc for a laugh destorying X trees in the process. During one of our companies weekly stand up sessions I laid it out end to end and it streched the entire length of the building. Ised this as an example of how not to write and maintain stored procedures.\n",
"One thing that you can do is have an automated script to store all changes to source control so that you can review changes to the procedures (using a diff on the previous and current versions)\n",
"It's definitely not free, but for keeping your T-SQL formatted in a consistent way, Redgate Software's SQL Prompt is very handy. As long as your proc's syntax is correct, a couple of keystrokes (Ctrl+K,Y) will reformat it all instantly. The options give you a lot of control over how your SQL is formatted.\n"
] |
[
4,
2,
2,
2,
1,
0,
0
] |
[] |
[] |
[
"maintainability",
"sql_server",
"tsql"
] |
stackoverflow_0000084880_maintainability_sql_server_tsql.txt
|
Q:
WebDev: What is the best way to do a multi-file upload?
I want (barely computer literate) people to easily submit a large number of files (pictures) through my web application. Is there a simple, robust, free/cheap, widely used, standard tool/component (Flash or .NET - sorry no java runtime on the browser) that allows a web user to select a folder or a bunch of files on their computer and upload them?
A:
swfupload, the best tool I know that lets you do that. Simple, easy to use and even has a fallback mechanism for the 1% web users that don't have flash 8+.
A:
I found that the best way to upload a bunch of files is to zip them and upload a single file (and then decompress it on the server). However that's probably not a good option for the audience you are targeting.
|
WebDev: What is the best way to do a multi-file upload?
|
I want (barely computer literate) people to easily submit a large number of files (pictures) through my web application. Is there a simple, robust, free/cheap, widely used, standard tool/component (Flash or .NET - sorry no java runtime on the browser) that allows a web user to select a folder or a bunch of files on their computer and upload them?
|
[
"swfupload, the best tool I know that lets you do that. Simple, easy to use and even has a fallback mechanism for the 1% web users that don't have flash 8+.\n",
"I found that the best way to upload a bunch of files is to zip them and upload a single file (and then decomress it on server). However that's probably not a good option for the audience you are targeting.\n"
] |
[
5,
0
] |
[
"We had a company come up with a Silverlight upload that could resize the pictures before hand so that the 5MB files didn't have to be uploaded and then resized. The image resizing capability wasn't included with the clr that comes with Silverlight. Occipital came up with their own. You can see it here:\nhttp://www.occipital.com/fjcore.html\nI don't know what they would charge, but we have been extremely happy with how it works. If you don't need the resize capability before uploading then I would go with one of the flash upload options like http://swfupload.org or http://www.codeproject.com/KB/aspnet/FlashUpload.aspx\n"
] |
[
-1
] |
[
"asp.net",
"file_upload",
"web_applications"
] |
stackoverflow_0000097198_asp.net_file_upload_web_applications.txt
|
Q:
How to create a DOM from a User's input in PHP5?
How to create a DOM from a User's input in PHP5?
A:
I would use the DOM API that has been part of the core since 5. For an XML string $xml, you can build a DOM object with
$dom = new DOMDocument();
$dom->loadXML($xml);
Manipulate it with the rest of the DOM API, defined at http://uk.php.net/DOM
A:
And when you need to inject it back into some other DOM (like your HTML page) you can export it again using the $dom->saveXML() method. The problem however is that it also exports an xml header (it's even worse for the saveHTML version). To get rid of that use this:
$xml = $dom->saveXML();
$xml = substr( $xml, strlen( "<?xml version=\"1.0\"?>" ) );
A:
If the input is HTML, use the loadHTML method. Beware that the input has to be valid code, so you might want to pipe it through html tidy first.
|
How to create a DOM from a User's input in PHP5?
|
How to create a DOM from a User's input in PHP5?
|
[
"I would use the DOM API that has been part of the core since 5. For an XML string $xml, you can build a DOM object with\n$dom = new DOMDocument();\n$dom->loadXML($xml);\n\nManipulate it with the rest of the DOM API, defined at http://uk.php.net/DOM\n",
"And when you need to inject it back into some other DOM (like your HTML page) you can export it again using the $dom->saveXML() method. The problem however is that it also exports an xml header (it's even worse for the saveHTML version). To get rid of that use this:\n$xml = $dom->saveXML();\n$xml = substr( $xml, strlen( \"<?xml version=\\\"1.0\\\"?>\" ) );\n\n",
"If the input is HTML, use the loadHTML method. Be ware that the input has to be valid code, so you might want to pipe it through html tidy first.\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"dom",
"php",
"web_applications"
] |
stackoverflow_0000078447_dom_php_web_applications.txt
|
Q:
Web Dev - Where to store state of a shopping-cart-like object?
You're building a web application. You need to store the state for a shopping cart like object during a user's session.
Some notes:
This is not exactly a shopping cart, but more like an itinerary that the user is building... but we'll use the word cart for now b/c ppl relate to it.
You do not care about "abandoned" carts
Once a cart is completed we will persist it to some server-side data store for later retrieval.
Where do you store that stateful object? And how?
server (session, db, etc?)
client (cookie key-vals, cookie JSON object, hidden form-field, etc?)
other...
Update: It was suggested that I list the platform we're targeting - though I'm not sure it's totally necessary... but let's say the front-end is built w/ASP.NET MVC.
A:
It's been my experience with the Commerce Starter Kit and MVC Storefront (and other sites I've built) that no matter what you think now, information about user interactions with your "products" is paramount to the business guys. There's so many metrics to capture - it's nuts.
I'll save you all the stuff I've been through - what's by far been the most successful for me is just creating an Order object with "NotCheckedOut" status and then adding items to it as the user adds items. This lets users have more than one cart and allows you to mine the tar out of the Orders table. It also is quite easy to transact the order - just change the status.
Persisting "as they go" also allows the user to come back and finish the cart off if they can't, for some reason. Forgiveness is massive with eCommerce.
Cookies suck, session sucks, Profile is attached to the notion of a user and it hits the DB so you might as well use the DB.
You might think you don't want to do this - but you need to trust me and know that you WILL indeed need to feed the stats wonks some data later. I promise you.
A:
I have considered what you are suggesting but have not had a client project yet to try it. The closest actually is a shopping list that you can find here...
http://www.scottcommonsense.com/toolbox.aspx
Click on Grocery Checklist to open the window. It does use ASPX, but only to manage the JS references placed on the page. The rest is done via AJAX using web services.
Previously I built an ASP.NET 2.0 site for a commerce site which used anon/auth cookies automatically. Each provides you with a GUID value which you can use to identify a user which is then associated with data in your database. I wanted the auth cookies so a user could move to different computers; work, home, etc. I avoided using the Profile fields to hold onto a complex ShoppingBasket object which was popular during the time in all the ASP.NET 2.0 books. I did not want to deal with "magic" serialization issues as the data structure changed over time. I prefer to manage db schema changes with update/alter scripts synced with software changes.
With the anon/auth cookies identifying the user on the client you can use the ASP.NET AJAX client-side to call the authentication web services using the JS proxies that are provided for you as a part of ASP.NET. You need to implement the Membership API to at least authenticate the user. The rest of the provider implementation can throw a NotImplementedException safely. You can then use your own custom ASMX web services via AJAX (see ScriptReference attribute) and update the pages with server-side data. You can completely do away with ASPX pages and just use static HTML/CSS/JS if you like.
The one big caveat is memory leaks in JS. Staying on the same page a long time increases your potential issue with memory leaks. It is a risk you can minimize by testing for long sessions and using tools like Firebug and others to look for memory leaks. Use the JS Lint tool as well as it will help identify major problems as you go.
A:
I'd be inclined to store it as a session object. This is because you're not concerned with abandoned carts, and can therefore remove the overhead of storing it in the database as it's not necessary (not to mention that you'd also need some kind of cleanup routine to remove abandoned carts from the database).
However, if you'd like users to be able to persist their carts, then the database option is better. This way, a user who is logged in will have their cart saved across sessions (so when they come back to the site and login, their cart will be restored).
You could also use a combination of the two. Users who come to the site use the session-based cart by default. When they log in, all items are moved from the session-based cart to a database-based cart, and any subsequent cart activity is applied directly to the database.
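The promotion step itself is small. A framework-agnostic sketch (Python purely for illustration; session is a dict-like store and db.add_item is a hypothetical data-access helper):
def promote_cart(session, db, user_id):
    """On login, move the anonymous session cart into the user's persistent cart."""
    for item_id, quantity in session.pop("cart", {}).items():
        db.add_item(user_id, item_id, quantity)  # hypothetical data-access call
    # From here on, cart reads and writes go straight to the database.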
A:
In the DB tied to whatever you're using for sessions (db/memcache sessions, signed cookies) or to an authenticated user.
A:
Store it in the database.
A:
Without knowing the platform I can't give a direct answer. However, since you don't care about abandoned carts, then I would differ from my colleagues here and suggest storing it on the client. Why store it in the database if you don't care if it's abandoned?
Then again, it does depend on the size of the object you're storing -- cookies have their limits after all.
Edit: Ahh, asp.net MVC? Why not use the profile system? You can enable an anonymous profile if you don't want to bother making them log in
A:
Do you envision folks needing to be able to start on one machine (e.g. their work PC) but continue/finish from a different machine (e.g. home PC)? If so, the answer is obvious.
A:
If you don't care about abandoned carts and have things in place for someone messing with the data on the client side... I think a cookie would be good -- especially if it's just a cookie of JSON data.
A:
I'd use an (encrypted) cookie on the client which holds the ID of the user's basket. Unless it's a really busy site, abandoned baskets won't fill up the database by too much, and you can run a regular admin task to clear the abandoned orders down if you care that much. Also, doing it this way the user will keep their order if they close their browser and go away; a basket in the session would be cleared at this point.
Finally, this means that you don't have to worry about writing code to deal with de/serialising the data from a client-side cookie, while later worrying about actually putting that data into the database when it gets converted into an order (too many points of failure for my liking).
A:
I'd say store the state somewhere on the server and correlate it to the user's session. While a cookie could ostensibly be an equal place to store things, if you consider security and data size, keeping as much data on the server as possible becomes a good thing.
For example, in a public terminal setting, would it be OK for someone to look at the contents of the cookie and see the list? If so, cookie's fine; if not, you'll just want an ID that links the user to the data. Doing that would also allow you to ensure the user is authenticated to the site in order to get to that data rather than storing everything on the machine - they'd need some form of credentials as well as the session identifier.
From a size perspective, sure, you're not going to be too concerned about a 4K cookie or something for a browser/broadband user, but if one of your targets is to allow a mobile phone or BlackBerry (not on 3G) to connect and have a snappy experience (and not get billed for the data), minimizing the amount of data getting passed to the client will be key.
The server storage also gives you some flexibility mentioned in some of the other answers - the user can save their cart on one machine and resume working with it on another; you can tie the cart to some form of credentials (rather than a transient session) and persist the cart long after the user has cleared their cookies; you get a little more in the way of fault tolerance - if the user's browser crashes, the site still has the data safe and sound.
If fault tolerance is important, you'll need some sort of persistent store like a database. If not, in application memory is probably fine, but you'll lose data if the app restarts. If you're in a farm environment, the store has to be centrally accessible, so you're again looking at a database.
Whether you choose to key by transient session or by credentials is going to depend on whether the users can save their data and come back later to get it. Transient session will eventually get cleaned up as "abandoned," and maybe that's OK. Tying to a user profile will let the user keep their data and explicitly abandon it. Either way, I'd make use of some sort of backing store like a database for fault tolerance and central accessibility. (Or maybe I'm overengineering the solution?)
A:
If you care about supporting users without Javascript enabled, then the server side sessions will let you use URL rewriting.
A:
If a relatively short time-out (around 2 hours, depending on your server config) is OK for the cart, then I'd say the server-side session. It's faster and more efficient than accessing the DB.
If you need a longer persistence (say some users like to leave and come back the next day), then store it in a cookie that is tamper-evident (use encryption or hashes).
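For the tamper-evident variant, a minimal framework-neutral sketch (Python's standard hmac module; the secret and the "|" separator are illustrative choices):
import hmac
import hashlib
from typing import Optional

SECRET = b"keep-this-server-side"  # assumption: loaded from configuration in practice

def sign(value: str) -> str:
    """Append an HMAC so any client-side edit of the value is detectable."""
    mac = hmac.new(SECRET, value.encode("utf-8"), hashlib.sha256).hexdigest()
    return value + "|" + mac

def verify(cookie: str) -> Optional[str]:
    """Return the original value if the HMAC checks out, otherwise None."""
    value, _, mac = cookie.rpartition("|")
    expected = hmac.new(SECRET, value.encode("utf-8"), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None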
|
Web Dev - Where to store state of a shopping-cart-like object?
|
You're building a web application. You need to store the state for a shopping cart like object during a user's session.
Some notes:
This is not exactly a shopping cart, but more like an itinerary that the user is building... but we'll use the word cart for now b/c ppl relate to it.
You do not care about "abandoned" carts
Once a cart is completed we will persist it to some server-side data store for later retrieval.
Where do you store that stateful object? And how?
server (session, db, etc?)
client (cookie key-vals, cookie JSON object, hidden form-field, etc?)
other...
Update: It was suggested that I list the platform we're targeting - though I'm not sure it's totally necessary... but let's say the front-end is built w/ASP.NET MVC.
|
[
"It's been my experience with the Commerce Starter Kit and MVC Storefront (and other sites I've built) that no matter what you think now, information about user interactions with your \"products\" is paramount to the business guys. There's so many metrics to capture - it's nuts.\nI'll save you all the stuff I've been through - what's by far been the most successful for me is just creating an Order object with \"NotCheckedOut\" status and then adding items to it and the user adds items. This lets users have more than one cart and allows you to mine the tar out of the Orders table. It also is quite easy to transact the order - just change the status.\nPersisting \"as they go\" also allows the user to come back and finish the cart off if they can't, for some reason. Forgiveness is massive with eCommerce. \nCookies suck, session sucks, Profile is attached to the notion of a user and it hits the DB so you might as well use the DB.\nYou might think you don't want to do this - but you need to trust me and know that you WILL indeed need to feed the stats wonks some data later. I promise you.\n",
"I have considered what you are suggesting but have not had a client project yet to try it. The closest actually is a shopping list that you can find here...\nhttp://www.scottcommonsense.com/toolbox.aspx\nClick on Grocery Checklist to open the window. It does use ASPX, but only to manage the JS references placed on the page. The rest is done via AJAX using web services.\nPreviously I built an ASP.NET 2.0 site for a commerce site which used anon/auth cookies automatically. Each provides you with a GUID value which you can use to identify a user which is then associated with data in your database. I wanted the auth cookies so a user could move to different computers; work, home, etc. I avoided using the Profile fields to hold onto a complex ShoppingBasket object which was popular during the time in all the ASP.NET 2.0 books. I did not want to deal with \"magic\" serialization issues as the data structure changed over time. I prefer to manage db schema changes with update/alter scripts synced with software changes.\nWith the anon/auth cookies identifying the user on the client you can use the ASP.NET AJAX client-side to call the authentication web services using the JS proxies that are provided for you as a part of ASP.NET. You need to implement the Membership API to at least authenticate the user. The rest of the provider implementation can throw a NotImplementedException safely. You can then use your own custom ASMX web services via AJAX (see ScriptReference attribute) and update the pages with server-side data. You can completely do away with ASPX pages and just use static HTML/CSS/JS if you like.\nThe one big caveat is memory leaks in JS. Staying on the same page a long time increases your potential issue with memory leaks. It is a risk you can minimize by testing for long sessions and using tools like Firebug and others to look for memory leaks. Use the JS Lint tool as well as it will help identify major problems as you go.\n",
"I'd be inclined to store it as a session object. This is because you're not concerned with abandoned carts, and can therefore remove the overhead of storing it in the database as it's not necessary (not to mention that you'd also need some kind of cleanup routine to remove abandoned carts from the database).\nHowever, if you'd like users to be able to persist their carts, then the database option is better. This way, a user who is logged in will have their cart saved across sessions (so when they come back to the site and login, their cart will be restored).\nYou could also use a combination of the two. Users who come to the site use the session-based cart by default. When they log in, all items are moved from the session-based cart to a database-based cart, and any subsequent cart activity is applied directly to the database.\n",
"In the DB tied to whatever you're using for sessions (db/memcache sessions, signed cookies) or to an authenticated user.\n",
"Store it in the database.\n",
"Without knowing the platform I can't give a direct answer. However, since you don't care about abandoned carts, then I would differ from my colleagues here and suggest storing it on the client. Why store it in the database if you don't care if it's abandoned? \nThen again, it does depend on the size of the object you're storing -- cookies have their limits after all. \nEdit: Ahh, asp.net MVC? Why not use the profile system? You can enable an anonymous profile if you don't want to bother making them log in\n",
"Do you envision folks needing to be able to start on one machine (e.g. their work PC) but continue/finsih from a different machine (e.g. home PC)? If so, the answer is obvious.\n",
"If you don't care about abandoned carts and have things in place for someone messing with the data on the client side... I think a cookie would be good -- especially if it's just a cookie of JSON data.\n",
"I'd use an (encrypted) cookie on the client which holds the ID of the users basket. Unless it's a really busy site then abandoned baskets won't fill up the database by too much, and you can run a regular admin task to clear the abandoned orders down if you care that much. Also doing it this way the user will keep their order if they close their browser and go away, a basket in the session would be cleared at this point..\nFinally this means that you don't have to worry about writing code to deal with de/serialising the data from a client-side cookie, while later worrying about actually putting that data into the database when it gets converted into an order (too many points of failure for my liking)..\n",
"I'd say store the state somewhere on the server and correlate it to the user's session. While a cookie could ostensibly be an equal place to store things, if you consider security and data size, keeping as much data on the server as possible becomes a good thing.\nFor example, in a public terminal setting, would it be OK for someone to look at the contents of the cookie and see the list? If so, cookie's fine; if not, you'll just want an ID that links the user to the data. Doing that would also allow you to ensure the user is authenticated to the site in order to get to that data rather than storing everything on the machine - they'd need some form of credentials as well as the session identifier.\nFrom a size perspective, sure, you're not going to be too concerned about a 4K cookie or something for a browser/broadband user, but if one of your targets is to allow a mobile phone or BlackBerry (not on 3G) to connect and have a snappy experience (and not get billed for the data), minimizing the amount of data getting passed to the client will be key.\nThe server storage also gives you some flexibility mentioned in some of the other answers - the user can save their cart on one machine and resume working with it on another; you can tie the cart to some form of credentials (rather than a transient session) and persist the cart long after the user has cleared their cookies; you get a little more in the way of fault tolerance - if the user's browser crashes, the site still has the data safe and sound.\nIf fault tolerance is important, you'll need some sort of persistent store like a database. If not, in application memory is probably fine, but you'll lose data if the app restarts. If you're in a farm environment, the store has to be centrally accessible, so you're again looking at a database.\nWhether you choose to key by transient session or by credentials is going to depend on whether the users can save their data and come back later to get it. Transient session will eventually get cleaned up as \"abandoned,\" and maybe that's OK. Tying to a user profile will let the user keep their data and explicitly abandon it. Either way, I'd make use of some sort of backing store like a database for fault tolerance and central accessibility. (Or maybe I'm overengineering the solution?)\n",
"If you care about supporting users without Javascript enabled, then the server side sessions will let you use URL rewriting.\n",
"If a relatively short time-out (around 2 hours, depending on your server config) is OK for the cart, then I'd say the server-side session. It's faster and more efficient than accessing the DB.\nIf you need a longer persistence (say some users like to leave and come back the next day), then store it in a cookie that is tamper-evident (use encryption or hashes).\n"
] |
[
26,
3,
2,
1,
1,
1,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"cookies",
"state"
] |
stackoverflow_0000096160_cookies_state.txt
|
Q:
I'm looking to use Visual Studio to write and compile using the open source version of Qt4
Does anyone have details on setting up Qt4 in Visual Studio 2008? Links to other resources would be appreciated as well.
I already know that the commercial version of Qt has applications to this end. I also realize that I'll probably need to compile from source as the installer for the open source does not support Visual Studio and installs Cygwin.
A:
Looks like it's been done with Visual Studio 2005:
http://wiki.qgis.org/qgiswiki/Building_QT_4_with_Visual_C%2B%2B_2005
I'd imagine it would work with Visual Studio 2008, even if it requires some changes.
A:
It's even simpler now with Qt4; you no longer need the patch. Basically you just need to supply "-spec win32-msvc2008" to configure.
There are detailed instructions here http://tom.paschenda.org/blog/?p=28
The visual studio add-in is also open-source and available.
|
I'm looking to use Visual Studio to write and compile using the open source version of Qt4
|
Does anyone have details on setting up Qt4 in Visual Studio 2008? Links to other resources would be appreciated as well.
I already know that the commercial version of Qt has applications to this end. I also realize that I'll probably need to compile from source as the installer for the open source does not support Visual Studio and installs Cygwin.
|
[
"Looks like it's been done with Visual Studio 2005:\nhttp://wiki.qgis.org/qgiswiki/Building_QT_4_with_Visual_C%2B%2B_2005\nI'd imagine it would work with Visual Studio 2008, even if it requires some changes.\n",
"It's even simpler now with QT4, you no longer need the patch. Basically you just need to supply \"-spec win32-msvc2008\" to configure.\nThere are detailed instructions here http://tom.paschenda.org/blog/?p=28\nThe visual studio add-in is also open-source and available.\n"
] |
[
1,
1
] |
[] |
[] |
[
"qt",
"visual_studio_2008"
] |
stackoverflow_0000074521_qt_visual_studio_2008.txt
|
Q:
Unix gettimeofday() - compatible algorithm for determining week within month?
If I've got a time_t value from gettimeofday() or compatible in a Unix environment (e.g., Linux, BSD), is there a compact algorithm available that would be able to tell me the corresponding week number within the month?
Ideally the return value would work similarly to the way %W behaves in strftime(), except giving the week within the month rather than the week within the year.
I think Java has a W formatting token that does something more or less like what I'm asking.
[Everything below written after answers were posted by David Nehme, Branan, and Sparr.]
I realized that to return this result in a similar way to %W, we want to count the number of Mondays that have occurred in the month so far. If that number is zero, then 0 should be returned.
Thanks to David Nehme and Branan in particular for their solutions which started things on the right track. The bit of code returning [using Branan's variable names] ((ts->mday - 1) / 7) tells the number of complete weeks that have occurred before the current day.
However, if we're counting the number of Mondays that have occurred so far, then we want to count the number of integral weeks, including today, then consider if the fractional week left over also contains any Mondays.
To figure out whether the fractional week left after taking out the whole weeks contains a Monday, we need to consider ts->mday % 7 and compare it to the day of the week, ts->wday. This is easy to see if you write out the combinations, but if we ensure the day is not Sunday (wday > 0), then anytime ts->wday <= (ts->mday % 7) we need to increment the count of Mondays by 1. This comes from considering the number of days since the start of the month, and whether, based on the current day of the week within the first fractional week, the fractional week contains a Monday.
So I would rewrite Branan's return statement as follows:
return (ts->tm_mday / 7) + ((ts->tm_wday > 0) && (ts->tm_wday <= (ts->tm_mday % 7)));
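A brute-force cross-check of that expression (a Python sketch; tm_wday's Sunday-is-0 convention is emulated from datetime.weekday(), and September 2008 is just a sample month):
import datetime

def mondays_so_far(d: datetime.date) -> int:
    """The rewritten return statement above, transcribed from C."""
    wday = (d.weekday() + 1) % 7  # tm_wday convention: Sunday == 0
    return d.day // 7 + (0 < wday <= d.day % 7)

# Compare against counting Mondays directly, for every day of the month.
for day in range(1, 31):
    d = datetime.date(2008, 9, day)
    direct = sum(datetime.date(2008, 9, i).weekday() == 0 for i in range(1, day + 1))
    assert mondays_so_far(d) == direct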
A:
Assuming your first week is week 1:
int getWeekOfMonth()
{
time_t my_time;
struct tm *ts;
my_time = time(NULL);
ts = localtime(&my_time);
return ((ts->tm_mday -1) / 7) + 1;
}
For 0-index, drop the +1 in the return statement.
A:
If you define the first week to be days 1-7 of the month, the second week days 8-14, ... then the following code will work.
int week_of_month( const time_t *my_time)
{
struct tm *timeinfo;
timeinfo =localtime(my_time);
return 1 + (timeinfo->tm_mday-1) / 7;
}
A:
Consider this pseudo-code, since I am writing it in mostly C syntax but pretending I can borrow functionality from other languages (string->int assignment, string->time conversion). Adapt or expand for your language of choice.
int week_num_in_month(time_t timestamp) {
int first_weekday_of_month, day_of_month;
day_of_month = strftime(timestamp,"%d");
first_weekday_of_month = strftime(timefstr(strftime(timestamp,"%d/%m/01")),"%w");
return (day_of_month + first_weekday_of_month - 1 ) / 7 + 1;
}
Obviously I am assuming that you want to handle weeks of the month the way the standard time functions handle weeks of the year, as opposed to just days 1-7, 8-13, etc.
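For reference, a directly runnable Python translation of that pseudo-code (datetime standing in for the borrowed string/time helpers):
import datetime

def week_num_in_month(d: datetime.date) -> int:
    """Week of month with Sunday-started weeks, as in the pseudo-code above."""
    # %w-style weekday of the 1st of the month: Sunday == 0.
    first_weekday_of_month = (d.replace(day=1).weekday() + 1) % 7
    return (d.day + first_weekday_of_month - 1) // 7 + 1

print(week_num_in_month(datetime.date(2008, 9, 17)))  # prints 3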
|
Unix gettimeofday() - compatible algorithm for determining week within month?
|
If I've got a time_t value from gettimeofday() or compatible in a Unix environment (e.g., Linux, BSD), is there a compact algorithm available that would be able to tell me the corresponding week number within the month?
Ideally the return value would work similarly to the way %W behaves in strftime(), except giving the week within the month rather than the week within the year.
I think Java has a W formatting token that does something more or less like what I'm asking.
[Everything below written after answers were posted by David Nehme, Branan, and Sparr.]
I realized that to return this result in a similar way to %W, we want to count the number of Mondays that have occurred in the month so far. If that number is zero, then 0 should be returned.
Thanks to David Nehme and Branan in particular for their solutions which started things on the right track. The bit of code returning [using Branan's variable names] ((ts->mday - 1) / 7) tells the number of complete weeks that have occurred before the current day.
However, if we're counting the number of Mondays that have occurred so far, then we want to count the number of integral weeks, including today, then consider if the fractional week left over also contains any Mondays.
To figure out whether the fractional week left after taking out the whole weeks contains a Monday, we need to consider ts->mday % 7 and compare it to the day of the week, ts->wday. This is easy to see if you write out the combinations, but if we ensure the day is not Sunday (wday > 0), then anytime ts->wday <= (ts->mday % 7) we need to increment the count of Mondays by 1. This comes from considering the number of days since the start of the month, and whether, based on the current day of the week within the first fractional week, the fractional week contains a Monday.
So I would rewrite Branan's return statement as follows:
return (ts->tm_mday / 7) + ((ts->tm_wday > 0) && (ts->tm_wday <= (ts->tm_mday % 7)));
|
[
"Assuming your first week is week 1:\nint getWeekOfMonth()\n{\n time_t my_time;\n struct tm *ts;\n\n my_time = time(NULL);\n ts = localtime(&my_time);\n\n return ((ts->tm_mday -1) / 7) + 1;\n}\n\nFor 0-index, drop the +1 in the return statement.\n",
"If you define the first week to be days 1-7 of the month, the second week days 8-14, ... then the following code will work.\nint week_of_month( const time_t *my_time)\n{\n struct tm *timeinfo;\n\n timeinfo =localtime(my_time);\n return 1 + (timeinfo->tm_mday-1) / 7;\n\n}\n\n",
"Consider this pseudo-code, since I am writing it in mostly C syntax but pretending I can borrow functionality from other languages (string->int assignment, string->time conversion). Adapt or expand for your language of choice.\nint week_num_in_month(time_t timestamp) {\n int first_weekday_of_month, day_of_month;\n day_of_month = strftime(timestamp,\"%d\");\n first_weekday_of_month = strftime(timefstr(strftime(timestamp,\"%d/%m/01\")),\"%w\");\n return (day_of_month + first_weekday_of_month - 1 ) / 7 + 1;\n}\n\nObviously I am assuming that you want to handle weeks of the month the way the standard time functions handle weeks of the year, as opposed to just days 1-7, 8-13, etc.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"calendar",
"date",
"time"
] |
stackoverflow_0000097276_calendar_date_time.txt
|
Q:
lining up function parameter lists with vim
When defining or calling functions with enough arguments to span multiple lines, I want vim to line them up. For example,
def myfunction(arg1, arg2, arg3, ...
               argN-1, argN)
The idea is for argN-1 to have its 'a' lined up with arg1.
Does anyone have a way to have this happen automatically in vim? I've seen the align plugin for lining up equal signs (in assignment statements) and such, but I'm not sure if it can be made to solve this problem?
A:
The previous poster had it, but forgot the set
:set cino=(0<Enter>
From :help cinoptions-values
The 'cinoptions' option sets how Vim performs indentation. In the list below,
"N" represents a number of your choice (the number can be negative). When
there is an 's' after the number, Vim multiplies the number by 'shiftwidth':
"1s" is 'shiftwidth', "2s" is two times 'shiftwidth', etc. You can use a
decimal point, too: "-0.5s" is minus half a 'shiftwidth'. The examples below
assume a 'shiftwidth' of 4.
...
(N When in unclosed parentheses, indent N characters from the line
with the unclosed parentheses. Add a 'shiftwidth' for every
unclosed parentheses. When N is 0 or the unclosed parentheses
is the first non-white character in its line, line up with the
next non-white character after the unclosed parentheses.
(default 'shiftwidth' * 2).
  cino=                     cino=(0
    if (c1 && (c2 ||          if (c1 && (c2 ||
            c3))                         c3))
        foo;                      foo;
    if (c1 &&                 if (c1 &&
            (c2 || c3))           (c2 || c3))
    {                         {
A:
I believe you have to issue the command:
:set cino=(0
This is when using cindent of course.
edit: I missed "set"
A:
Try the Align http://www.vim.org/scripts/script.php?script_id=294 and AutoAlign http://www.vim.org/scripts/script.php?script_id=884 scripts.
A:
you might get some good mileage out of using a language-specific external tool as a Vim filter. for example, if you can write a Perltidy config file to generate the formatting you want (it looks like you would want the -lp -vtc=2 flags), you can then pipe your existing Vim buffer through it with
:!/path/to/tidy -config /path/to/configfile
if you're going to be running this sort of command frequently, you can define a command by putting something like the following in your .vimrc:
command -range=% Tidy <line1>,<line2>!/path/to/tidy -config /path/to/configfile
|
lining up function parameter lists with vim
|
When defining or calling functions with enough arguments to span multiple lines, I want vim to line them up. For example,
def myfunction(arg1, arg2, arg3, ...
               argN-1, argN)
The idea is for argN-1 to have its 'a' lined up with arg1.
Does anyone have a way to have this happen automatically in vim? I've seen the align plugin for lining up equal signs (in assignment statements) and such, but I'm not sure if it can be made to solve this problem?
|
[
"The previous poster had it, but forgot the set\n:set cino=(0<Enter>\n\nFrom :help cinoptions-values\nThe 'cinoptions' option sets how Vim performs indentation. In the list below,\n\"N\" represents a number of your choice (the number can be negative). When\nthere is an 's' after the number, Vim multiplies the number by 'shiftwidth':\n\"1s\" is 'shiftwidth', \"2s\" is two times 'shiftwidth', etc. You can use a\ndecimal point, too: \"-0.5s\" is minus half a 'shiftwidth'. The examples below\nassume a 'shiftwidth' of 4.\n\n...\n\n (N When in unclosed parentheses, indent N characters from the line\n with the unclosed parentheses. Add a 'shiftwidth' for every\n unclosed parentheses. When N is 0 or the unclosed parentheses\n is the first non-white character in its line, line up with the\n next non-white character after the unclosed parentheses.\n (default 'shiftwidth' * 2).\n\n cino= cino=(0 >\n if (c1 && (c2 || if (c1 && (c2 ||\n c3)) c3))\n foo; foo;\n if (c1 && if (c1 &&\n (c2 || c3)) (c2 || c3))\n { {\n\n",
"I believe you have to issue the command:\n:set cino=(0\n\nThis is when using cindent of course.\nedit: I missed \"set\"\n",
"Try the Align http://www.vim.org/scripts/script.php?script_id=294 and AutoAlign http://www.vim.org/scripts/script.php?script_id=884 scripts.\n",
"you might get some good mileage out of using a language-specific external tool as a Vim filter. for example, if you can write a Perltidy config file to generate the formatting you want (it looks like you would want the -lp -vtc=2 flags), you can then pipe your existing Vim buffer through it with\n:!/path/to/tidy -config /path/to/configfile\n\nif you're going to be running this sort of command frequently, you can define an command by putting something like the following in your .vimrc:\ncommand -range=% Tidy <line1>,<line2>!/path/to/tidy -config /path/to/configfile\n\n"
] |
[
11,
7,
2,
1
] |
[] |
[] |
[
"editor",
"vim",
"whitespace"
] |
stackoverflow_0000088931_editor_vim_whitespace.txt
|
Q:
Image library that will auto-crop
I'm looking for a .Net library that will accept an image or filename and an aspect ratio, and crop the image to that aspect ratio. That's the easy part: I could do it myself. But I also want it to show a little intelligence in choosing exactly what content gets cropped, even if it's just picking which edge to slice.
This is for a personal project, and the pain isn't high enough to justify spending any money on it, but if you can recommend a for-pay tool go ahead. Maybe someone else will find the suggestion useful.
A:
Disclaimer: I work for a .NET Imaging vendor (Atalasoft)
It depends on what kind of image you are talking about. If you are talking about 1-bit document images (like faxes or scans) we can do this.
If you are talking about photographs, our product doesn't do this, but you might be looking for Seam carving. I wrote this application
http://www.atalasoft.com/cs/blogs/31appsin31days/archive/2008/05/26/simple-seam-carver.aspx
with our library that could be ported to just using the built-in .NET images with some work.
The idea of seam carving is to find connected paths in the image with the least interesting variation from the surrounding pixels. In the normal implementation, you would pick a continuous (but not necessarily vertical) path and remove it. If you wanted a crop, you could find the area with the least energy and remove it. My code shows how to calculate the energy of a pixel and path (how different it is from its surrounding pixels)
If you look up seam carving, you will find some free implementations out there.
|
Image library that will auto-crop
|
I'm looking for a .Net library that will accept an image or filename and an aspect ratio, and crop the image to that aspect ratio. That's the easy part: I could do it myself. But I also want it to show a little intelligence in choosing exactly what content gets cropped, even if it's just picking which edge to slice.
This is for a personal project, and the pain isn't high enough to justify spending any money on it, but if you can recommend a for-pay tool go ahead. Maybe someone else will find the suggestion useful.
|
[
"Disclaimer: I work for a .NET Imaging vendor (Atalasoft)\nIt depends on what kind of image you are talking about. If you are talking about 1-bit document images (like faxes or scans) we can do this.\nIf you are talking about photographs, our product doesn't do this, but you might be looking for Seam carving. I wrote this application \nhttp://www.atalasoft.com/cs/blogs/31appsin31days/archive/2008/05/26/simple-seam-carver.aspx\nwith our library that could be ported to just using the built-in .NET images with some work. \nThe idea of seam carving is to find connected paths in the image with the least interesting variation from the surrounding pixels. In the normal implementation, you would pick a continuous (but not necessarily vertical) path and remove it. If you wanted a crop, you could find the area with the least energy and remove it. My code shows how to calculate the energy of a pixel and path (how different it is from it's surrounding pixels)\nIf you look up seam carving, you will find some free implementations out there.\n"
] |
[
2
] |
[] |
[] |
[
".net",
"image_manipulation"
] |
stackoverflow_0000050364_.net_image_manipulation.txt
|
Q:
Where is the best place to store user related data in asp.net?
When a customer logs in to my site, I need to know their account id and their menu id. This lets me know what data they can see on a page and what menu they get. I don't want to have to read this data over and over. Should I store this in a session variable or customize the membership user and membership provider to contain this information?
A:
As already suggested, the profile system is super easy.
http://msdn.microsoft.com/en-us/library/2y3fs9xs.aspx
A:
If you're going to use the profile provider, make sure to check out the "Optimize ASP.NET 2.0 Profile Provider" section of this article if you're running a high-traffic site:
http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx
A:
The profile system would probably suit your needs.
A:
I've used MS Table Profile Provider which allows you to specify your own database table structure to store the data rather than the XML schema used in the default profile system.
This has the added bonus of allowing you to write your own data access procedures for accessing common profile data.
|
Where is the best place to store user related data in asp.net?
|
When a customer logs in to my site, I need to know their account id and their menu id. This lets me know what data they can see on a page and what menu they get. I don't want to have to read this data over and over. Should I store this in a session variable or customize the membership user and membership provider to contain this information?
|
[
"As already suggested, the profile system is super easy.\nhttp://msdn.microsoft.com/en-us/library/2y3fs9xs.aspx\n",
"If you're going to use the profile provider, make sure to check out the \"Optimize ASP.NET 2.0 Profile Provider\" section of this article if you're running a high-traffic site:\nhttp://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx\n",
"The profile system would probably suit your needs.\n",
"I've used MS Table Profile Provider which allows you to specify your own database table structre to store the data rather than the XML schema used in the default profile system. \nThis has the added bonus of allowing you to write your own data access procedures for accessing common profile data.\n"
] |
[
4,
1,
0,
0
] |
[] |
[] |
[
"asp.net",
"c#"
] |
stackoverflow_0000095120_asp.net_c#.txt
|
Q:
Efficiently get sorted sums of a sorted list
You have an ascending list of numbers; what is the most efficient algorithm you can think of to get the ascending list of sums of every two numbers in that list? Duplicates in the resulting list are irrelevant; you can remove them or avoid them if you like.
To be clear, I'm interested in the algorithm. Feel free to post code in any language and paradigm that you like.
A:
Edit as of 2018: You should probably stop reading this. (But I can't delete it as it is accepted.)
If you write out the sums like this:
  1   4   5   6   8   9
-----------------------
  2   5   6   7   9  10
      8   9  10  12  13
         10  11  13  14
             12  14  15
                 16  17
                     18
You'll notice that since M[i,j] <= M[i,j+1] and M[i,j] <= M[i+1,j], you only need to examine the top left "corners" and choose the lowest one.
e.g.
only 1 top left corner, pick 2
only 1, pick 5
6 or 8, pick 6
7 or 8, pick 7
9 or 8, pick 8
9 or 9, pick both :)
10 or 10 or 10, pick all
12 or 11, pick 11
12 or 12, pick both
13 or 13, pick both
14 or 14, pick both
15 or 16, pick 15
only 1, pick 16
only 1, pick 17
only 1, pick 18
Of course, when you have lots of top left corners then this solution devolves.
I'm pretty sure this problem is Ω(n²), because you have to calculate the sums for each M[i,j] -- unless someone has a better algorithm for the summation :)
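For illustration, a minimal C# sketch of this idea (not from the original answer): treat each row i of the sum matrix as an already-sorted stream a[i]+a[j] (j >= i) and always pull the smallest head with a min-heap, i.e. a k-way merge. PriorityQueue assumes .NET 6 or later.

using System;
using System.Collections.Generic;

class SortedSums
{
    // 'a' must be sorted ascending; yields the distinct sums in ascending order
    static IEnumerable<long> Sums(int[] a)
    {
        var heap = new PriorityQueue<(int i, int j), long>();
        for (int i = 0; i < a.Length; i++)
            heap.Enqueue((i, i), (long)a[i] + a[i]);      // head of each row
        long? last = null;
        while (heap.TryDequeue(out var cell, out long sum))
        {
            if (sum != last) { yield return sum; last = sum; }   // drop duplicates
            if (cell.j + 1 < a.Length)                           // advance within row i
                heap.Enqueue((cell.i, cell.j + 1), (long)a[cell.i] + a[cell.j + 1]);
        }
    }

    static void Main()
    {
        // prints: 2 5 6 7 8 9 10 11 12 13 14 15 16 17 18
        foreach (long s in Sums(new[] { 1, 4, 5, 6, 8, 9 }))
            Console.Write(s + " ");
    }
}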
A:
Rather than coding this out, I figure I'll pseudo-code it in steps and explain my logic, so that better programmers can poke holes in my logic if necessary.
On the first step we start out with a list of numbers of length n. For each number we need to create a list of length n-1, because we aren't adding a number to itself. By the end we have a list of about n sorted lists that was generated in O(n^2) time.
step 1 (startinglist)
for each number num1 in startinglist
for each number num2 in startinglist
add num1 plus num2 into templist
add templist to sumlist
return sumlist
In step 2 because the lists were sorted by design (add a number to each element in a sorted list and the list will still be sorted) we can simply do a mergesort by merging each list together rather than mergesorting the whole lot. In the end this should take O(n^2) time.
step 2 (sumlist)
create an empty list mergedlist
for each list templist in sumlist
set mergelist equal to: merge(mergedlist,templist)
return mergedlist
The merge method would be then the normal merge step with a check to make sure that there are no duplicate sums. I won't write this out because anyone can look up mergesort.
So there's my solution. The entire algorithm is O(n^2) time. Feel free to point out any mistakes or improvements.
A:
You can do this in two lines in python with
allSums = set(a+b for a in X for b in X)
allSums = sorted(allSums)
The cost of this is n^2 (maybe an extra log factor for the set?) for the iteration and s * log(s) for the sorting where s is the size of the set.
The size of the set could be as big as n*(n-1)/2 for example if X = [1,2,4,...,2^n]. So if you want to generate this list it will take at least n^2/2 in the worst case since this is the size of the output.
However if you want to select the first k elements of the result you can do this in O(kn) using a selection algorithm for sorted X+Y matrices by Frederickson and Johnson (see here for gory details). Although this can probably be modified to generate them online by reusing computation and get an efficient generator for this set.
@deuseldorf, Peter
There is some confusion about (n!). I seriously doubt deuseldorf meant "n factorial" but simply "n, (very excited)!"
A:
The best I could come up with is to produce a matrix of sums of each pair, and then merge the rows together, a-la merge sort. I feel like I'm missing some simple insight that will reveal a much more efficient solution.
My algorithm, in Haskell:
matrixOfSums list = [[a+b | b <- list, b >= a] | a <- list]
sortedSums list = foldl merge [] (matrixOfSums list)
--A normal merge, save that we remove duplicates
merge xs [] = xs
merge [] ys = ys
merge (x:xs) (y:ys) = case compare x y of
LT -> x:(merge xs (y:ys))
EQ -> x:(merge xs (dropWhile (==x) ys))
GT -> y:(merge (x:xs) ys)
I found a minor improvement, one that's more amenable to lazy stream-based coding. Instead of merging the columns pair-wise, merge all of them at once. The advantage being that you start getting elements of the list immediately.
-- wide-merge does a standard merge (ala merge-sort) across an arbitrary number of lists
-- wideNubMerge does this while eliminating duplicates
wideNubMerge :: Ord a => [[a]] -> [a]
wideNubMerge ls = wideNubMerge1 $ filter (/= []) ls
wideNubMerge1 [] = []
wideNubMerge1 ls = mini:(wideNubMerge rest)
where mini = minimum $ map head ls
rest = map (dropWhile (== mini)) ls
betterSortedSums list = wideNubMerge (matrixOfSums list)
However, if you know you're going to use all of the sums, and there's no advantage to getting some of them earlier, go with 'foldl merge []', as it's faster.
A:
In SQL:
create table numbers(n int not null)
insert into numbers(n) values(1),(1), (2), (2), (3), (4)
select distinct num1.n+num2.n sum2n
from numbers num1
inner join numbers num2
on num1.n<>num2.n
order by sum2n
C# LINQ:
List<int> num = new List<int>{ 1, 1, 2, 2, 3, 4};
var uNum = num.Distinct().ToList();
var sums=(from num1 in uNum
from num2 in uNum
where num1!=num2
select num1+num2).Distinct();
foreach (var s in sums)
{
Console.WriteLine(s);
}
A:
This question has been wracking my brain for about a day now. Awesome.
Anyways, you can't get away from the n^2 nature of it easily, but you can do slightly better with the merge since you can bound the range to insert each element in.
If you look at all the lists you generate, they have the following form:
(a[i], a[j]) | j>=i
If you flip it 90 degrees, you get:
(a[i], a[j]) | i<=j
Now, the merge process takes two lists i and i+1 (which correspond to lists whose first members are always a[i] and a[i+1]); you can bound the range in which to insert element (a[i + 1], a[j]) into list i by the location of (a[i], a[j]) and the location of (a[i + 1], a[j + 1]).
This means that you should merge in reverse in terms of j. I don't know (yet) if you can leverage this across j as well, but it seems possible.
A:
No matter what you do, without additional constraints on the input values, you cannot do better than O(n^2), simply because you have to iterate through all pairs of numbers. The iteration will dominate sorting (which you can do in O(n log n) or faster).
|
Efficiently get sorted sums of a sorted list
|
You have an ascending list of numbers, what is the most efficient algorithm you can think of to get the ascending list of sums of every two numbers in that list. Duplicates in the resulting list are irrelevant, you can remove them or avoid them if you like.
To be clear, I'm interested in the algorithm. Feel free to post code in any language and paradigm that you like.
|
[
"Edit as of 2018: You should probably stop reading this. (But I can't delete it as it is accepted.)\nIf you write out the sums like this:\n1 4 5 6 8 9\n---------------\n2 5 6 7 9 10\n 8 9 10 12 13\n 10 11 13 14\n 12 14 15\n 16 17\n 18\n\nYou'll notice that since M[i,j] <= M[i,j+1] and M[i,j] <= M[i+1,j], then you only need to examine the top left \"corners\" and choose the lowest one.\ne.g.\n\nonly 1 top left corner, pick 2\nonly 1, pick 5\n6 or 8, pick 6\n7 or 8, pick 7\n9 or 8, pick 8\n9 or 9, pick both :)\n10 or 10 or 10, pick all\n12 or 11, pick 11\n12 or 12, pick both\n13 or 13, pick both\n14 or 14, pick both\n15 or 16, pick 15\nonly 1, pick 16\nonly 1, pick 17\nonly 1, pick 18\n\nOf course, when you have lots of top left corners then this solution devolves.\nI'm pretty sure this problem is Ω(n²), because you have to calculate the sums for each M[i,j] -- unless someone has a better algorithm for the summation :)\n",
"Rather than coding this out, I figure I'll pseudo-code it in steps and explain my logic, so that better programmers can poke holes in my logic if necessary. \nOn the first step we start out with a list of numbers length n. For each number we need to create a list of length n-1 becuase we aren't adding a number to itself. By the end we have a list of about n sorted lists that was generated in O(n^2) time.\nstep 1 (startinglist) \nfor each number num1 in startinglist\n for each number num2 in startinglist\n add num1 plus num2 into templist\n add templist to sumlist\nreturn sumlist \n\nIn step 2 because the lists were sorted by design (add a number to each element in a sorted list and the list will still be sorted) we can simply do a mergesort by merging each list together rather than mergesorting the whole lot. In the end this should take O(n^2) time.\nstep 2 (sumlist) \ncreate an empty list mergedlist\nfor each list templist in sumlist\n set mergelist equal to: merge(mergedlist,templist)\nreturn mergedlist\n\nThe merge method would be then the normal merge step with a check to make sure that there are no duplicate sums. I won't write this out because anyone can look up mergesort.\nSo there's my solution. The entire algorithm is O(n^2) time. Feel free to point out any mistakes or improvements.\n",
"You can do this in two lines in python with\nallSums = set(a+b for a in X for b in X)\nallSums = sorted(allSums)\n\nThe cost of this is n^2 (maybe an extra log factor for the set?) for the iteration and s * log(s) for the sorting where s is the size of the set.\nThe size of the set could be as big as n*(n-1)/2 for example if X = [1,2,4,...,2^n]. So if you want to generate this list it will take at least n^2/2 in the worst case since this is the size of the output.\nHowever if you want to select the first k elements of the result you can do this in O(kn) using a selection algorithm for sorted X+Y matrices by Frederickson and Johnson (see here for gory details). Although this can probably be modified to generate them online by reusing computation and get an efficient generator for this set.\n@deuseldorf, Peter\nThere is some confusion about (n!) I seriously doubt deuseldorf meant \"n factorial\" but simply \"n, (very excited)!\"\n",
"The best I could come up with is to produce a matrix of sums of each pair, and then merge the rows together, a-la merge sort. I feel like I'm missing some simple insight that will reveal a much more efficient solution.\nMy algorithm, in Haskell:\nmatrixOfSums list = [[a+b | b <- list, b >= a] | a <- list]\n\nsortedSums = foldl merge [] matrixOfSums\n\n--A normal merge, save that we remove duplicates\nmerge xs [] = xs\nmerge [] ys = ys\nmerge (x:xs) (y:ys) = case compare x y of\n LT -> x:(merge xs (y:ys))\n EQ -> x:(merge xs (dropWhile (==x) ys))\n GT -> y:(merge (x:xs) ys)\n\nI found a minor improvement, one that's more amenable to lazy stream-based coding. Instead of merging the columns pair-wise, merge all of them at once. The advantage being that you start getting elements of the list immediately.\n-- wide-merge does a standard merge (ala merge-sort) across an arbitrary number of lists\n-- wideNubMerge does this while eliminating duplicates\nwideNubMerge :: Ord a => [[a]] -> [a]\nwideNubMerge ls = wideNubMerge1 $ filter (/= []) ls\nwideNubMerge1 [] = []\nwideNubMerge1 ls = mini:(wideNubMerge rest)\n where mini = minimum $ map head ls\n rest = map (dropWhile (== mini)) ls\n\nbetterSortedSums = wideNubMerge matrixOfSums\n\nHowever, if you know you're going to use all of the sums, and there's no advantage to getting some of them earlier, go with 'foldl merge []', as it's faster.\n",
"In SQL:\ncreate table numbers(n int not null)\ninsert into numbers(n) values(1),(1), (2), (2), (3), (4)\n\n\nselect distinct num1.n+num2.n sum2n\nfrom numbers num1\ninner join numbers num2 \n on num1.n<>num2.n\norder by sum2n\n\nC# LINQ:\nList<int> num = new List<int>{ 1, 1, 2, 2, 3, 4};\nvar uNum = num.Distinct().ToList();\nvar sums=(from num1 in uNum\n from num2 in uNum \n where num1!=num2\n select num1+num2).Distinct();\nforeach (var s in sums)\n{\n Console.WriteLine(s);\n}\n\n",
"This question has been wracking my brain for about a day now. Awesome.\nAnyways, you can't get away from the n^2 nature of it easily, but you can do slightly better with the merge since you can bound the range to insert each element in.\nIf you look at all the lists you generate, they have the following form:\n(a[i], a[j]) | j>=i\nIf you flip it 90 degrees, you get:\n(a[i], a[j]) | i<=j\nNow, the merge process should be taking two lists i and i+1 (which correspond to lists where the first member is always a[i] and a[i+1]), you can bound the range to insert element (a[i + 1], a[j]) into list i by the location of (a[i], a[j]) and the location of (a[i + 1], a[j + 1]).\nThis means that you should merge in reverse in terms of j. I don't know (yet) if you can leverage this across j as well, but it seems possible.\n",
"No matter what you do, without additional constraints on the input values, you cannot do better than O(n^2), simply because you have to iterate through all pairs of numbers. The iteration will dominate sorting (which you can do in O(n log n) or faster).\n"
] |
[
12,
4,
2,
1,
1,
1,
1
] |
[
"If you are looking for a truly language agnostic solution then you will be sorely disappointed in my opinion because you'll be stuck with a for loop and some conditionals. However if you opened it up to functional languages or functional language features (I'm looking at you LINQ) then my colleagues here can fill this page with elegant examples in Ruby, Lisp, Erlang, and others.\n"
] |
[
-4
] |
[
"algorithm",
"language_agnostic"
] |
stackoverflow_0000000826_algorithm_language_agnostic.txt
|
Q:
What's the easiest non-memory intensive way to output XML from Python?
Basically, something similar to System.Xml.XmlWriter - A streaming XML Writer that doesn't incur much of a memory overhead. So that rules out xml.dom and xml.dom.minidom. Suggestions?
A:
I think you'll find XMLGenerator from xml.sax.saxutils is the closest thing to what you want.
import time
from xml.sax.saxutils import XMLGenerator
from xml.sax.xmlreader import AttributesNSImpl
LOG_LEVELS = ['DEBUG', 'WARNING', 'ERROR']
class xml_logger:
def __init__(self, output, encoding):
"""
Set up a logger object, which takes SAX events and outputs
an XML log file
"""
logger = XMLGenerator(output, encoding)
logger.startDocument()
attrs = AttributesNSImpl({}, {})
logger.startElementNS((None, u'log'), u'log', attrs)
self._logger = logger
self._output = output
self._encoding = encoding
return
def write_entry(self, level, msg):
"""
Write a log entry to the logger
level - the level of the entry
msg - the text of the entry. Must be a Unicode object
"""
#Note: in a real application, I would use ISO 8601 for the date
#asctime used here for simplicity
now = time.asctime(time.localtime())
attr_vals = {
(None, u'date'): now,
(None, u'level'): LOG_LEVELS[level],
}
attr_qnames = {
(None, u'date'): u'date',
(None, u'level'): u'level',
}
attrs = AttributesNSImpl(attr_vals, attr_qnames)
self._logger.startElementNS((None, u'entry'), u'entry', attrs)
self._logger.characters(msg)
self._logger.endElementNS((None, u'entry'), u'entry')
return
def close(self):
"""
Clean up the logger object
"""
self._logger.endElementNS((None, u'log'), u'log')
self._logger.endDocument()
return
if __name__ == "__main__":
#Test it out
import sys
xl = xml_logger(sys.stdout, 'utf-8')
xl.write_entry(2, u"Vanilla log entry")
xl.close()
You'll probably want to look at the rest of the article I got that from at http://www.xml.com/pub/a/2003/03/12/py-xml.html.
A:
I think I have your poison:
http://sourceforge.net/projects/xmlite
Cheers
A:
Some years ago I used MarkupWriter from 4suite
General-purpose utility class for generating XML (may eventually be
expanded to produce more output types)
Sample usage:
from Ft.Xml import MarkupWriter
writer = MarkupWriter(indent=u"yes")
writer.startDocument()
writer.startElement(u'xsa')
writer.startElement(u'vendor')
#Element with simple text (#PCDATA) content
writer.simpleElement(u'name', content=u'Centigrade systems')
#Note writer.text(content) still works
writer.simpleElement(u'email', content=u"[email protected]")
writer.endElement(u'vendor')
#Element with an attribute
writer.startElement(u'product', attributes={u'id': u"100\u00B0"})
#Note writer.attribute(name, value, namespace=None) still works
writer.simpleElement(u'name', content=u"100\u00B0 Server")
#XML fragment
writer.xmlFragment('<version>1.0</version><last-release>20030401</last-release>')
#Empty element
writer.simpleElement(u'changes')
writer.endElement(u'product')
writer.endElement(u'xsa')
writer.endDocument()
Note on the difference between 4Suite writers and printers
Writer - module that exposes a broad public API for building output
bit by bit
Printer - module that simply takes a DOM and creates output from it
as a whole, within one API invocation
Recently I've heard a lot about how great lxml is, but I don't have first-hand experience with it, and I had some fun working with gnosis.
A:
xml.etree.cElementTree, included in the default distribution of CPython since 2.5. Lightning fast for both reading and writing XML.
|
What's the easiest non-memory intensive way to output XML from Python?
|
Basically, something similar to System.Xml.XmlWriter - A streaming XML Writer that doesn't incur much of a memory overhead. So that rules out xml.dom and xml.dom.minidom. Suggestions?
|
[
"I think you'll find XMLGenerator from xml.sax.saxutils is the closest thing to what you want.\n\nimport time\nfrom xml.sax.saxutils import XMLGenerator\nfrom xml.sax.xmlreader import AttributesNSImpl\n\nLOG_LEVELS = ['DEBUG', 'WARNING', 'ERROR']\n\n\nclass xml_logger:\n def __init__(self, output, encoding):\n \"\"\"\n Set up a logger object, which takes SAX events and outputs\n an XML log file\n \"\"\"\n logger = XMLGenerator(output, encoding)\n logger.startDocument()\n attrs = AttributesNSImpl({}, {})\n logger.startElementNS((None, u'log'), u'log', attrs)\n self._logger = logger\n self._output = output\n self._encoding = encoding\n return\n\n def write_entry(self, level, msg):\n \"\"\"\n Write a log entry to the logger\n level - the level of the entry\n msg - the text of the entry. Must be a Unicode object\n \"\"\"\n #Note: in a real application, I would use ISO 8601 for the date\n #asctime used here for simplicity\n now = time.asctime(time.localtime())\n attr_vals = {\n (None, u'date'): now,\n (None, u'level'): LOG_LEVELS[level],\n }\n attr_qnames = {\n (None, u'date'): u'date',\n (None, u'level'): u'level',\n }\n attrs = AttributesNSImpl(attr_vals, attr_qnames)\n self._logger.startElementNS((None, u'entry'), u'entry', attrs)\n self._logger.characters(msg)\n self._logger.endElementNS((None, u'entry'), u'entry')\n return\n\n def close(self):\n \"\"\"\n Clean up the logger object\n \"\"\"\n self._logger.endElementNS((None, u'log'), u'log')\n self._logger.endDocument()\n return\n\nif __name__ == \"__main__\":\n #Test it out\n import sys\n xl = xml_logger(sys.stdout, 'utf-8')\n xl.write_entry(2, u\"Vanilla log entry\")\n xl.close() \n\n\nYou'll probably want to look at the rest of the article I got that from at http://www.xml.com/pub/a/2003/03/12/py-xml.html.\n",
"I think I have your poison :\nhttp://sourceforge.net/projects/xmlite\nCheers\n",
"Some years ago I used MarkupWriter from 4suite\n\nGeneral-purpose utility class for generating XML (may eventually be\nexpanded to produce more output types)\n\nSample usage:\n\nfrom Ft.Xml import MarkupWriter\nwriter = MarkupWriter(indent=u\"yes\")\nwriter.startDocument()\nwriter.startElement(u'xsa')\nwriter.startElement(u'vendor')\n#Element with simple text (#PCDATA) content\nwriter.simpleElement(u'name', content=u'Centigrade systems')\n#Note writer.text(content) still works\nwriter.simpleElement(u'email', content=u\"[email protected]\")\nwriter.endElement(u'vendor')\n#Element with an attribute\nwriter.startElement(u'product', attributes={u'id': u\"100\\u00B0\"})\n#Note writer.attribute(name, value, namespace=None) still works\nwriter.simpleElement(u'name', content=u\"100\\u00B0 Server\")\n#XML fragment\nwriter.xmlFragment('<version>1.0</version><last-release>20030401</last-release>')\n#Empty element\nwriter.simpleElement(u'changes')\nwriter.endElement(u'product')\nwriter.endElement(u'xsa')\nwriter.endDocument()\n\nNote on the difference between 4Suite writers and printers\nWriter - module that exposes a broad public API for building output\n bit by bit\nPrinter - module that simply takes a DOM and creates output from it\n as a whole, within one API invokation\n\n\nRecently i hear a lot about how lxml is great, but I don't have first-hand experience, and I had some fun working with gnosis.\n",
"xml.etree.cElementTree, included in the default distribution of CPython since 2.5. Lightning fast for both reading and writing XML.\n"
] |
[
15,
2,
0,
-4
] |
[
"I've always had good results with lxml. It's a pain to install, as it's mostly a wrapper around libxml2, but lxml.etree tree objects have a .write() method that takes a file-like object to stream to.\nfrom lxml.etree import XML\n\ntree = XML('<root><a><b/></a></root>')\ntree.write(your_file_object)\n\n",
"Second vote for ElementTree (cElementTree is a C implementation that is a little faster, like cPickle vs pickle). There's some short example code here that you can look at to give you an idea of how it works: http://effbot.org/zone/element-index.htm\n(this is Fredrik Lundh, who wrote the module in the first place. It's so good it got drafted into the standard library with 2.5 :-) )\n"
] |
[
-1,
-2
] |
[
"python",
"streaming",
"xml"
] |
stackoverflow_0000093710_python_streaming_xml.txt
|
Q:
Binding a null value to a property of a web user control
Working on a somewhat complex page for configuring customers at work. The setup is that there's a main page, which contains various "panels" for various groups of settings.
In one case, there's an email address field on the main table and an "export" configuration that controls how emails are sent out. I created a main panel that selects the company, and binds to a FormView. The FormView contains a Web User Control that handles the display/configuration of the export details.
The Web User Control Contains a property to define which Config it should be handling, and it gets the value from the FormView using Bind().
Basically the control is used like this:
<syn:ExportInfo ID="eiConfigDetails" ExportInfoID='<%# Bind("ExportInfoID" ) %>' runat="server" />
The property being bound is declared like this in CodeBehind:
public int ExportInfoID
{
get
{
return Convert.ToInt32(hfID.Value);
}
set
{
try
{
hfID.Value = value.ToString();
}
catch(Exception)
{
hfID.Value="-1";
}
}
}
Whenever the ExportInfoID is null I get a null reference exception, but the kicker is that it happens BEFORE it actually tries to set the property (or it would be caught in this version.)
Anyone know what's going on or, more importantly, how to fix it...?
A:
It seems like it's because hfID.Value isn't initialized to a value yet so it can't be converted. You may wanna add a null check in your getter or some validation to make sure hfID.Value isn't null and is numeric.
A:
The Bind can't convert the null value to an int value, to set the ExportInfoID property. That's why it's not getting caught in your code. You can make the property a nullable type (int?) or you can handle the null in the bind logic.
so it would be something like this
bind receives field to get value from
bind uses reflection to get the value
bind attempts to set the ExportInfoID property // boom, error
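For illustration, a minimal sketch of the nullable-type fix (the property and field names are taken from the question; keeping -1 as the sentinel is just carried over from there):

public int? ExportInfoID
{
    get
    {
        // hfID.Value may be null or empty before the first bind
        int parsed;
        return int.TryParse(hfID.Value, out parsed) ? parsed : (int?)null;
    }
    set
    {
        hfID.Value = value.HasValue ? value.Value.ToString() : "-1";
    }
}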
A:
use the Null Object design pattern on hfID
http://www.cs.oberlin.edu/~jwalker/nullObjPattern/
|
Binding a null value to a property of a web user control
|
Working on a somewhat complex page for configuring customers at work. The setup is that there's a main page, which contains various "panels" for various groups of settings.
In one case, there's an email address field on the main table and an "export" configuration that controls how emails are sent out. I created a main panel that selects the company, and binds to a FormView. The FormView contains a Web User Control that handles the display/configuration of the export details.
The Web User Control Contains a property to define which Config it should be handling, and it gets the value from the FormView using Bind().
Basically the control is used like this:
<syn:ExportInfo ID="eiConfigDetails" ExportInfoID='<%# Bind("ExportInfoID" ) %>' runat="server" />
The property being bound is declared like this in CodeBehind:
public int ExportInfoID
{
get
{
return Convert.ToInt32(hfID.Value);
}
set
{
try
{
hfID.Value = value.ToString();
}
catch(Exception)
{
hfID.Value="-1";
}
}
}
Whenever the ExportInfoID is null I get a null reference exception, but the kicker is that it happens BEFORE it actually tries to set the property (or it would be caught in this version.)
Anyone know what's going on or, more importantly, how to fix it...?
|
[
"It seems like it's because hfID.Value isn't initialized to a value yet so it can't be converted. You may wanna add a null check in your getter or some validation to make sure hfID.Value isn't null and is numeric.\n",
"The Bind can't convert the null value to an int value, to set the ExportInfoID property. That's why it's not getting caught in your code. You can make the property a nullable type (int?) or you can handle the null in the bind logic.\nso it would be something like this\nbind receives field to get value from\nbind uses reflection to get the value\nbind attempts to set the ExportInfoID property // boom, error\n\n",
"use the Null Object design pattern on hfID\nhttp://www.cs.oberlin.edu/~jwalker/nullObjPattern/ \n"
] |
[
0,
0,
0
] |
[] |
[] |
[
".net",
"asp.net",
"c#"
] |
stackoverflow_0000097505_.net_asp.net_c#.txt
|
Q:
Is there a control for a .Net WinForm app that will display HTML
I have a .net (3.5) WinForms application and want to display some html on one of the forms. Is there a control that I can use for this?
A:
Yep sure is, the WebBrowser control.
A:
I was looking at the WebBrowser control but couldn't work out how to assign (set) the HTML to it...?
EDIT: Found it - the DocumentText property.
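For anyone else looking, a minimal sketch of setting HTML on the control:

using System;
using System.Windows.Forms;

class Demo
{
    [STAThread]
    static void Main()
    {
        var form = new Form();
        var browser = new WebBrowser { Dock = DockStyle.Fill };
        form.Controls.Add(browser);
        // DocumentText takes an HTML string and renders it directly
        browser.DocumentText = "<html><body><h1>Hello</h1></body></html>";
        Application.Run(form);
    }
}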
A:
what about the browser control? a bit heavy, but at least you'll get an accurate rendering.
|
Is there a control for a .Net WinForm app that will display HTML
|
I have a .net (3.5) WinForms application and want to display some html on one of the forms. Is there a control that I can use for this?
|
[
"Yep sure is, the WebBrowser control.\n",
"I was looking at the WebBrowser control but couldn't work out how to assign (set) the HTML to it...?\nEDIT: Found it - Document Text\n",
"what about the browser control? a bit heavy, but at least you'll get an accurate rendering.\n"
] |
[
8,
3,
1
] |
[] |
[] |
[
".net",
"c#",
"winforms"
] |
stackoverflow_0000097598_.net_c#_winforms.txt
|
Q:
Login failed for user 'username' - System.Data.SqlClient.SqlException with LINQ in external project / class library
This might seem obvious but I've had this error when trying to use LINQ to SQL with my business logic in a separate class library project.
I've created the DBML in a class library, with all my business logic and custom controls in this project. I'd referenced the class library from my web project and attempted to use it directly from the web project.
The error indicated the login failed for my user name. My user name and password were correct, but the fix was to copy my connection string to the correct location. I've learned the issue from another site and thought I would make a note here.
Error:
Login failed for user 'username'
System.Data.SqlClient.SqlException
A:
The LINQ designer adds the connection string to the app.config of the class library, but the web site needed to see it in the web.config of the web project. Once it was copied across, all was well.
A:
you can pass in a connection or connection string to the data context as well.
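For illustration, a sketch of that approach (MyDataContext stands in for the designer-generated context, and "MyDb" is a placeholder connection-string name):

using System.Configuration;   // add a reference to System.Configuration.dll

// Read the connection string from the *web* project's web.config and
// hand it to the DataContext explicitly, bypassing the library's app.config:
string cs = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
using (var db = new MyDataContext(cs))
{
    // query as usual...
}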
|
Login failed for user 'username' - System.Data.SqlClient.SqlException with LINQ in external project / class library
|
This might seem obvious but I've had this error when trying to use LINQ to SQL with my business logic in a separate class library project.
I've created the DBML in a class library, with all my business logic and custom controls in this project. I'd referenced the class library from my web project and attempted to use it directly from the web project.
The error indicated the login failed for my user name. My user name and password were correct, but the fix was to copy my connection string to the correct location. I've learned the issue from another site and thought I would make a note here.
Error:
Login failed for user 'username'
System.Data.SqlClient.SqlException
|
[
"The LINQ designer ads the connection string to the app.config of the class library, but the web site needed to see it in the web.config of the web project. Once copied across all was well.\n",
"you can pass in a connection or connection string to the data context as well.\n"
] |
[
3,
2
] |
[] |
[] |
[
"linq_to_sql"
] |
stackoverflow_0000097594_linq_to_sql.txt
|
Q:
What exactly is SQL Server 2005 User Mapping?
In the new login dialog of the SQL Server 2005 Management Studio Express, what is the User Mapping actually doing? Am I restricting access to those databases that are checked? What if I check none?
A:
It's mapping user rights to specific databases. If you don't check any, that user won't have rights to any database unless it is in a server role that allows rights to individual databases.
|
What exactly is SQL Server 2005 User Mapping?
|
In the new login dialog of the SQL Server 2005 Management Studio Express, what is the User Mapping actually doing? Am I restricting access to those databases that are checked? What if I check none?
|
[
"It's mapping user rights to specific databases. If you don't check any, that user won't have rights to any database unless it is in a server role that allows rights to individual databases.\n"
] |
[
9
] |
[] |
[] |
[
"security",
"sql_server"
] |
stackoverflow_0000097614_security_sql_server.txt
|
Q:
How to load a python module into a fresh interactive shell in Komodo?
When using PyWin I can easily load a python file into a fresh interactive shell and I find this quite handy for prototyping and other exploratory tasks.
I would like to use Komodo as my python editor, but I haven't found a replacement for PyWin's ability to restart the shell and reload the current module. How can I do this in Komodo?
It is also very important to me that when I reload I get a fresh shell. I would prefer it if my previous interactions are in the shell history, but it is more important to me that the memory be isolated from the previous versions and attempts.
A:
I use Komodo Edit, which might be a little less sophisticated than full Komodo.
I create a "New Command" with %(python) -i %f as the text of the command. I have this run in a "New Console". I usually have the starting directory as %p, the top of the project directory.
The -i option runs the file and drops into interactive Python.
|
How to load a python module into a fresh interactive shell in Komodo?
|
When using PyWin I can easily load a python file into a fresh interactive shell and I find this quite handy for prototyping and other exploratory tasks.
I would like to use Komodo as my python editor, but I haven't found a replacement for PyWin's ability to restart the shell and reload the current module. How can I do this in Komodo?
It is also very important to me that when I reload I get a fresh shell. I would prefer it if my previous interactions are in the shell history, but it is more important to me that the memory be isolated from the previous versions and attempts.
|
[
"I use Komodo Edit, which might be a little less sophisticated than full Komodo.\nI create a \"New Command\" with %(python) -i %f as the text of the command. I have this run in a \"New Console\". I usually have the starting directory as %p, the top of the project directory.\nThe -i option runs the file and drops into interactive Python.\n"
] |
[
5
] |
[] |
[] |
[
"interpreter",
"komodo",
"python",
"shell"
] |
stackoverflow_0000097513_interpreter_komodo_python_shell.txt
|
Q:
Find style references that don't exist
Is there a tool that will find for me all the css classes that I am referencing in my HTML that don't actually exist?
ie. if I have <ul class="topnav" /> in my HTML and the topnav class doesn't exist in any of the referenced CSS files.
This is similar to SO#33242, which asks how to find unused CSS styles. This isn't a duplicate, as that question asks which CSS classes are not used. This is the opposite problem.
A:
You can put this JavaScript in the page that can perform this task for you:
function forItems(a, f) {
for (var i = 0; i < a.length; i++) f(a.item(i))
}
function classExists(className) {
var pattern = new RegExp('\\.' + className + '\\b'), found = false
try {
forItems(document.styleSheets, function(ss) {
// decompose only screen stylesheets
if (!ss.media.length || /\b(all|screen)\b/.test(ss.media.mediaText))
forItems(ss.cssRules, function(r) {
// ignore rules other than style rules
if (r.type == CSSRule.STYLE_RULE && r.selectorText.match(pattern)) {
found = true
throw "found"
}
})
})
} catch(e) {}
return found
}
A:
The Error Console in Firefox. It gives all CSS errors, though, so you have to read through them.
A:
IntelliJ IDEA does that as well.
A:
This Firefox extension does exactly what you want.
It locates all unused selectors.
|
Find style references that don't exist
|
Is there a tool that will find for me all the css classes that I am referencing in my HTML that don't actually exist?
ie. if I have <ul class="topnav" /> in my HTML and the topnav class doesn't exist in any of the referenced CSS files.
This is similar to SO#33242, which asks how to find unused CSS styles. This isn't a duplicate, as that question asks which CSS classes are not used. This is the opposite problem.
|
[
"You can put this JavaScript in the page that can perform this task for you:\nfunction forItems(a, f) {\n for (var i = 0; i < a.length; i++) f(a.item(i))\n}\n\nfunction classExists(className) {\n var pattern = new RegExp('\\\\.' + className + '\\\\b'), found = false\n\n try {\n forItems(document.styleSheets, function(ss) {\n // decompose only screen stylesheets\n if (!ss.media.length || /\\b(all|screen)\\b/.test(ss.media.mediaText))\n forItems(ss.cssRules, function(r) {\n // ignore rules other than style rules\n if (r.type == CSSRule.STYLE_RULE && r.selectorText.match(pattern)) {\n found = true\n throw \"found\"\n }\n })\n })\n } catch(e) {}\n\n\n return found\n}\n\n",
"Error Console in Firefox. Although, it gives all CSS errors, so you have to read through it.\n",
"IntelliJ Idea tool does that as well. \n",
"This Firefox extension is does exactly what you want.\nIt locates all unused selectors. \n"
] |
[
4,
1,
0,
0
] |
[] |
[] |
[
"css"
] |
stackoverflow_0000079258_css.txt
|
Q:
What are the advantages and disadvantages of DTOs from a website performance perspective?
What are the advantages and disadvantages of DTOs from a website performance perspective? (I'm talking in the case where the database is accessed on a different app server to the web server - and the web server could access the database directly.)
A:
DTOs aren't a performance concern. I think what you are asking about is the performance implications of tiering. In particular, using an application tier between your web tier (web server) and data tier (database server).
Generally, the implications are that latency is increased (you have extra network roundtrips), but you gain some additional capacity by splitting the load across machines.
Another common reason (again, non-performance) that people would do that is to allow them to place the web server in the DMZ while keeping the application and database servers inside the firewall.
Another potential reason (non-performance) is the ability to plug multiple UIs on top of a single application. I've done this on past projects with great results (where the business required it).
Also, don't underestimate the work required to maintain an architecture of that nature. It's more work than a non-tiered solution, so only use it if you anticipate needing it.
That being said, the use of DTOs does not necessitate the use of Tiering.
The best description I've found of tiering comes from Martin Fowler's book, Analysis Patterns. There's a small section in the back on application facades and tiering.
Just to reiterate the previous answer, DTOs aren't a performance concern. It's just a class without methods used to provide isolation between various parts of your application.
I'd also suggest picking up Martin's other book, Patterns of Enterprise Application Architecture. The DTO "pattern" is documented there.
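To make the "class without methods" point concrete, a minimal sketch (the property names are invented for illustration):

using System;

// A typical DTO: no behavior, just data that crosses the tier boundary.
[Serializable]
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}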
|
What are the advantages and disadvantages of DTOs from a website performance perspective?
|
What are the advantages and disadvantages of DTOs from a website performance perspective? (I'm talking in the case where the database is accessed on a different app server to the web server - and the web server could access the database directly.)
|
[
"DTO's aren't a performance concern. I think what you are asking about is the performance implications of tiering. In particular, using an application tier between your web tier (web server) and data tier (database server).\nGenerally, the implications are that latency is increased (you have extra network roundtrips), but you gain some additional capacity by splitting the load across machines.\nAnother common reason (again, non-performance) that people would do that is to allow them to place the web server in the DMZ while keeping the application and database servers inside the firewall.\nAnother potential reason (non-performance) is the ability to plug multiple UIs on top of a single application. I've done this on past projects with great results (where the business required it).\nAlso, don't underestimate the work required to maintain an architecture of that nature. It's more work than a non-tiered solution, so only use it if you anticipate needing it.\nThat being said, the use of DTOs does not necessitate the use of Tiering.\nThe best description I've found of tiering comes from Martin Fowler's book, Analysis Patterns. There's a small section in the back on application facades and tiering.\nJust to reiterate the previous answer, DTOs aren't a performance concern. It's just a class without methods used to provide isolation between various parts of your application.\nI'd also suggest picking up Martin's other book, Patterns of Enterprise Application Architecture. The DTO \"pattern\" is documented there.\n"
] |
[
3
] |
[] |
[] |
[
"dto_mapping",
"ejb",
"jakarta_ee",
"performance",
"rpc"
] |
stackoverflow_0000097532_dto_mapping_ejb_jakarta_ee_performance_rpc.txt
|
Q:
Image cropping C# without .net library
Can anyone advise on how to crop an image, let's say jpeg, without using any .NET framework constructs, just raw bytes? Since this is the only* way in Silverlight...
Or point to a library?
I'm not concerned with rendering; I want to manipulate a jpg before uploading.
*There are no GDI+(System.Drawing) or WPF(System.Windows.Media.Imaging) libraries available in Silverlight.
Lockbits requires GDI+, clarified question
Using fjcore: http://code.google.com/p/fjcore/ to resize but no way to crop :(
A:
You could easily write crop yourself in fjcore. Start with the code for Resizer
http://web.archive.org/web/20140304090029/http://code.google.com:80/p/fjcore/source/browse/trunk/FJCore/Resize/ImageResizer.cs?
and FilterNNResize -- you can see how the image data is stored -- it's just simple arrays of pixels.
The important part is:
for (int y = 0; y < _newHeight; y++)
{
i_sY = (int)sY; sX = 0;
UpdateProgress((double)y / _newHeight);
for (int x = 0; x < _newWidth; x++)
{
i_sX = (int)sX;
_destinationData[0][x, y] = _sourceData[0][i_sX, i_sY];
if (_color) {
_destinationData[1][x, y] = _sourceData[1][i_sX, i_sY];
_destinationData[2][x, y] = _sourceData[2][i_sX, i_sY];
}
sX += xStep;
}
sY += yStep;
}
shows you that the data is stored in an array of color planes (1 element for 8bpp gray, 3 elements for color) and each element has a 2-D array of bytes (x, y) for the image.
You just need to loop through the destination pixels, copying them from the appropriate place in the source.
edit: don't forget to provide the patch to the author of fjcore
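For illustration, a sketch of what that crop loop could look like in fjcore's plane/pixel-array representation (the method name and bounds handling are my own; check them against the real data structures):

// source[plane][x, y] holds one byte per pixel per color plane,
// mirroring the layout used by the resizer code above.
static byte[][,] Crop(byte[][,] source, int left, int top, int width, int height)
{
    var dest = new byte[source.Length][,];
    for (int p = 0; p < source.Length; p++)        // 1 plane for gray, 3 for color
    {
        dest[p] = new byte[width, height];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                dest[p][x, y] = source[p][left + x, top + y];
    }
    return dest;
}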
A:
ImageMagick does a pretty good job. If you're ok with handing off editing tasks to your server...
(Seriously? The recommended way of manipulating images in Silverlight is to work with raw bytes? That's... incredibly lame.)
A:
I'm taking a look at: http://code.google.com/p/fjcore/source/checkout
It's a dependency-free image processing library.
A:
Where is Silverlight executed?
Is there any reason at all to send a complete picture to the client just to have the client crop it?
Do it on the server... (if you are not creating an image editor, that is.)
|
Image cropping C# without .net library
|
Can anyone advise on how to crop an image, let's say jpeg, without using any .NET framework constructs, just raw bytes? Since this is the only* way in Silverlight...
Or point to a library?
I'm not concerned with rendering i'm wanting to manipulate a jpg before uploading.
*There are no GDI+(System.Drawing) or WPF(System.Windows.Media.Imaging) libraries available in Silverlight.
Lockbits requires GDI+, clarified question
Using fjcore: http://code.google.com/p/fjcore/ to resize but no way to crop :(
|
[
"You could easily write crop yourself in fjcore. Start with the code for Resizer\nhttp://web.archive.org/web/20140304090029/http://code.google.com:80/p/fjcore/source/browse/trunk/FJCore/Resize/ImageResizer.cs?\nand FilterNNResize -- you can see how the image data is stored -- it's just simple arrays of pixels.\nThe important part is:\nfor (int y = 0; y < _newHeight; y++)\n{\n i_sY = (int)sY; sX = 0;\n\n UpdateProgress((double)y / _newHeight);\n\n for (int x = 0; x < _newWidth; x++)\n {\n i_sX = (int)sX;\n\n _destinationData[0][x, y] = _sourceData[0][i_sX, i_sY];\n\n if (_color) {\n\n _destinationData[1][x, y] = _sourceData[1][i_sX, i_sY];\n _destinationData[2][x, y] = _sourceData[2][i_sX, i_sY];\n }\n\n sX += xStep;\n }\n sY += yStep;\n}\n\nshows you that the data is stored in an array of color planes (1 element for 8bpp gray, 3 elements for color) and each element has a 2-D array of bytes (x, y) for the image.\nYou just need to loop through the destination pixels, copying then from the appropriate place in the source.\nedit: don't forget to provide the patch to the author of fjcore\n",
"ImageMagick does a pretty good job. If you're ok with handing off editing tasks to your server...\n(Seriously? The recommended way of manipulating images in Silverlight is to work with raw bytes? That's... incredibly lame.)\n",
"I'm taking a look at : http://code.google.com/p/fjcore/source/checkout\nA dependency free image processing library.\n",
"where is silverlight executed?\nIs there any reason at all to send an complete picture to the client to make the client crop it?\nDo it on the server... (if you are not creating an image editor that is..)\n"
] |
[
3,
2,
2,
0
] |
[] |
[] |
[
"c#",
"image_manipulation",
"silverlight"
] |
stackoverflow_0000037048_c#_image_manipulation_silverlight.txt
|
Q:
How to prevent others from using my .Net assembly?
I have an assembly which should not be used by any application other than the designated executable. Please give me some instructions to do so.
A:
You can sign the assembly and the executable with the same key and then put a check in the constructor of the classes you want to protect:
public class NotForAnyoneElse {
    public NotForAnyoneElse() {
        // GetPublicKeyToken() returns a byte[], so compare the contents rather than
        // the references (SequenceEqual needs System.Linq; Assembly needs System.Reflection)
        byte[] expected = typeof(NotForAnyoneElse).Assembly.GetName().GetPublicKeyToken();
        byte[] actual = Assembly.GetEntryAssembly().GetName().GetPublicKeyToken();
        if (!expected.SequenceEqual(actual)) {
            throw new SomeException(...);
        }
    }
}
A:
In .Net 2.0 or better, make everything internal, and then use Friend Assemblies
http://msdn.microsoft.com/en-us/library/0tke9fxk.aspx
This will not stop reflection. I want to incorporate some of the information from below. If you absolutely need to stop anyone else from calling your code, probably the best solution is:
ILMerge the .exe and .dll
obfuscate the final .exe
You could also check up the call stack and get the assembly for each caller and make sure that they are all signed with the same key as the assembly.
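For reference, the attribute goes in the class library's AssemblyInfo.cs; "MyApp" and the public key below are placeholders:

using System.Runtime.CompilerServices;

// Unsigned assemblies only need the friend's simple name:
[assembly: InternalsVisibleTo("MyApp")]

// Strong-named assemblies must specify the friend's full public key:
// [assembly: InternalsVisibleTo("MyApp, PublicKey=00240000048000009400...")]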
A:
100% completely impossible without jumping through some hoops.
One of the perks of using .NET is the ability to use reflection, that is, to load up an assembly and inspect it, dynamically call methods, etc. This is what makes interop between VB.NET and F# possible.
However, since your code is in a managed assembly, anybody can add a reference to your code and invoke its public methods, or load it using reflection and call private methods. Even if you 'obfuscate' your code, people will still be able to use reflection and invoke your code. However, since all the names will be masked, doing anything is prohibitively difficult.
If you must ship your .NET code in a fashion that prevents other people from executing it, you might be able to NGEN your binary (compile it to x86) and ship those binaries.
I don't know the specifics of your situation, but obfuscation should be good enough.
A:
You could also look at using the Netz executable packer and compressor.
This takes your assemblies and your .exe file and packs them into a single executable so they're not visible to the outside world without a bit of digging around.
My guess is that this is sufficient to prevent access for most .net programmers.
A big benefit of the .netz approach is that it does not require you to change your code. Another benefit is that it really simplifies your installation process.
A:
You should be able to make everything internally scoped, and then use the InternalsVisibleTo Attribute to grant only that one assembly access to the internal methods.
A:
The Code Access Security attribute that @Charles Graham mentions is StrongNameIdentityPermissionAttribute
A:
As some people have mentioned, use the InternalsVisibleTo attribute and mark everything as internal. This of course won't guard against reflection.
One thing that hasn't been mentioned is to ilmerge your assemblies into your main .exe/.dll/whatever. This will raise the barrier to entry a bit (people won't be able to see your assembly sitting on its own asking to be referenced), but it won't stop the reflection route.
UPDATE: Also, IIRC, ilmerge has a feature where it can automatically internalise the merged assemblies, which would mean you don't need to use InternalsVisibleTo at all.
A:
I'm not sure if this is an available avenue for you, but perhaps you can host the assembly using WCF or ASP.NET web services and use some sort of authentication scheme (LDAP, public/private key pairs, etc.) to ensure only allowed clients connect. This would keep your assembly physically out of anyone else's hands and you can control who connects to it. Just a thought.
A:
You might be able to set this in the Code Access Security policies on the assembly.
A:
You can use obfuscation.
That will turn:
int MySecretPrimeDetectionAlgorithm(int lastPrimeNumber);
Into something unreadable like:
int Asdfasdfasdfasdfasdfasdfasdf(int qwerqwerqwerqwerqwerqwer);
Others will still be able to use your assembly, but it will be difficult to make any sense of it.
A:
It sounds like you are looking for a protection or obfuscation tool. While there isn't a silver bullet, the protection tool I recommend is smartassembly. Some alternatives are Salamander Obfuscator, dotfuscator, and Xenocode.
Unfortunately, if you give your bytes to someone to be read... if they have enough time and effort, they can find a way to load and call your code. To preemptively answer a comment I see you ask frequently: Salamander will prevent your code from being loaded directly into the Reflector tool, but I've had better (ie: more reliable) experiences with smartassembly.
Hope this helps. :)
A:
If the assembly was a web service for example, you could ensure the designated executable passes a secret value in the SOAP message.
|
How to prevent others from using my .Net assembly?
|
I have an assembly which should not be used by any application other than the designated executable. Please give me some instructions to do so.
|
[
"You can sign the assembly and the executable with the same key and then put a check in the constructor of the classes you want to protect:\npublic class NotForAnyoneElse {\n public NotForAnyoneElse() {\n if (typeof(NotForAnyoneElse).Assembly.GetName().GetPublicKeyToken() != Assembly.GetEntryAssembly().GetName().GetPublicKeyToken()) {\n throw new SomeException(...);\n }\n }\n}\n\n",
"In .Net 2.0 or better, make everything internal, and then use Friend Assemblies\nhttp://msdn.microsoft.com/en-us/library/0tke9fxk.aspx\nThis will not stop reflection. I want to incorporate some of the information from below. If you absolutely need to stop anyone from calling, probably the best solution is:\n\nILMerge the .exe and .dll\nobfuscate the final .exe\n\nYou could also check up the call stack and get the assembly for each caller and make sure that they are all signed with the same key as the assembly.\n",
"100% completely impossible without jumping through some hoops.\nOne of the perks of using the .NET is the ability to use reflection, that is load up an assembly and inspect it, dynamically call methods, etc. This is what makes interop between VB.NET and F# possible.\nHowever, since your code is in a managed assembly that means that anybody can add a reference to your code and invoke its public methods or load it using reflection and call private methods. Even if you 'obfuscate' your code, people will still be able to use reflection and invoke your code. However, since all the names will be masked doing anything is prohibitavely difficult.\nIf you must ship your .NET code in a fashion that prevents other people from executing it, you might be able to NGEN your binary (compile it to x86) and ship those binaries. \nI don't know the specifics of your situation, but obfuscation should be good enough.\n",
"You could also look at using the Netz executable packer and compressor. \nThis takes your assemblies and your .exe file and packs them into a single executable so they're not visible to the outside world without a bit of digging around. \nMy guess is that this is sufficient to prevent access for most .net programmers. \nA big benefit of the .netz approach is that it does not require you to change your code. Another benefit is that it really simplifies your installation process.\n",
"You should be able to make everything internally scoped, and then use the InternalsVisibleTo Attribute to grant only that one assembly access to the internal methods.\n",
"The Code Access Security attribute that @Charles Graham mentions is StrongNameIdentityPermissionAttribute\n",
"As some people have mentioned, use the InternalsVisibleTo attribute and mark everything as internal. This of course won't guard against reflection.\nOne thing that hasnt been mentioned is to ilmerge your assemblies into your main .exe/.dll/whatever, this will up the barrier for entry a bit (people won't be able to see your assemby sitting on its own asking to be referenced), but wont stop the reflection route..\nUPDATE: Also, IIRC, ilmerge has a feature where it can automaticaly internalise the merged assemblies, which would mean you don't need to use InternalsVisibleTo at all\n",
"I'm not sure if this is an available avenue for you, but perhaps you can host the assembly using WCF or ASP.NET web services and use some sort of authentication scheme (LDAP, public/rpivate key pairs, etc.) to ensure only allowed clients connect. This would keep your assembly physically out of anyone else's hands and you can control who connects to it. Just a thought.\n",
"You might be able to set this in the Code Access Security policies on the assembly.\n",
"You can use obfuscation. \nThat will turn:\nint MySecretPrimeDetectionAlgorithm(int lastPrimeNumber);\n\nInto something unreadable like:\nint Asdfasdfasdfasdfasdfasdfasdf(int qwerqwerqwerqwerqwerqwer);\n\nOthers will still be able to use your assembly, but it will be difficult to make any sensible.\n",
"It sounds like you are looking for a protection or obfuscation tool. While there isn't a silver bullet, the protection tool I recommend is smartassembly. Some alternatives are Salamander Obfuscator, dotfuscator, and Xenocode.\nUnfortunately, if you give your bytes to someone to be read... if they have enough time and effort, they can find way to load and call your code. To preemptively answer a comment I see you ask frequently: Salamander will prevent your code from being loaded directly into the Reflector tool, but I've had better (ie: more reliable) experiences with smartassembly.\nHope this helps. :)\n",
"If the assembly was a web service for example, you could ensure the designated executable passes a secret value in the SOAP message.\n"
] |
[
14,
11,
9,
3,
2,
2,
2,
2,
1,
1,
1,
0
] |
[
"Just require a pass code to be sent in using a function call and if it hasn't been authorized then nothing works, like .setAuthorizeCode('123456') then in every single place that can be used have it check if authorizeCode != 123456 then throw error or just exit out... It doesn't sound like a good answer for re-usability but that is exactly the point.\nThe only time it could be used is by you and when you hard code the authorize code into the program.\nJust a thought, could be what you are looking for or could inspire you to something better.\n"
] |
[
-2
] |
[
".net",
"assemblies",
"security"
] |
stackoverflow_0000096624_.net_assemblies_security.txt
|
Q:
Custom URL Extensions/Routing Without IIS Access
I have a need to use extensionless URLs. I do not have access to IIS (6.0) so I cannot map requests to ASP.NET and handle with a HttpHandler/HttpModule. However, I can set a custom 404 page via web host control panel.
My current plan is to perform necessary logic in the custom 404 page, but it "feels wrong". Are there any recommendations that I am missing?
Edited: Added "Without IIS Access" to the title since someone thought this was a repeat question.
A:
Without access to IIS, that would be your only option.
A:
The 404 page really is your only option if you can't map the requests. I've seen several blog packages that do this to enable magic URLs like .../archive/YYYY/MM/DD and such - there's no such page, so it hits the 404 page and the 404 page does the redirection.
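For what it's worth, a sketch of how such a 404 page can recover the requested URL. IIS 6 typically appends the original URL to the custom error page's query string as "404;<original url>"; treat that exact format as an assumption and verify it on your host.

// Code-behind of the custom 404 page (e.g. 404.aspx).
protected void Page_Load(object sender, EventArgs e)
{
    // IIS 6 usually calls the page as /404.aspx?404;http://host/archive/2008/09
    string query = Request.Url.Query;
    int marker = query.IndexOf("404;");
    if (marker >= 0)
    {
        string originalUrl = query.Substring(marker + 4);
        // ...route originalUrl to the right content here...
        Response.StatusCode = 200;   // otherwise the client still sees a 404
    }
}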
|
Custom URL Extensions/Routing Without IIS Access
|
I have a need to use extensionless URLs. I do not have access to IIS (6.0) so I cannot map requests to ASP.NET and handle with a HttpHandler/HttpModule. However, I can set a custom 404 page via web host control panel.
My current plan is to perform necessary logic in the custom 404 page, but it "feels wrong". Are there any recommendations that I am missing?
Edited: Added "Without IIS Access" to the title since someone thought this was a repeat question.
|
[
"Without access to IIS, that would be your only option.\n",
"The 404 page really is your only option if you can't map the requests. I've seen several blog packages that do this to enable magic URLs like .../archive/YYYY/MM/DD and such - there's no such page, so it hits the 404 page and the 404 page does the redirection.\n"
] |
[
1,
0
] |
[] |
[] |
[
"asp.net",
"iis_6"
] |
stackoverflow_0000097528_asp.net_iis_6.txt
|
Q:
Given this XML, is there an xpath that will give me the 'test' and 'name' values?
I need to get the value of the 'test' attribute in the xsl:when tag, and the 'name' attribute in the xsl:call-template tag. This xpath gets me pretty close:
..../xsl:template/xsl:choose/xsl:when
But that just returns the 'when' elements, not the exact attribute values I need.
Here is a snippet of my XML:
<xsl:template match="field">
<xsl:choose>
<xsl:when test="@name='First Name'">
<xsl:call-template name="handleColumn_1" />
</xsl:when>
</xsl:choose>
A:
do you want .../xsl:template/xsl:choose/xsl:when/@test
If you want to actually get the value 'First Name' out of the test attribute, you're out of luck -- the content inside the attribute is just a string, not a piece of XML, so you can't XPath it. If you need that, you must use string manipulation (e.g., substring) to get the right content.
A:
Steve Cooper answered the first part. For the second part, you can use:
.../xsl:template/xsl:choose/xsl:when[@test="@name='First Name'"]/xsl:call-template/@name
Which will match specifically the xsl:when in your above snippet. If you want it to match generally, then you can use:
.../xsl:template/xsl:choose/xsl:when/xsl:call-template/@name
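If it helps, a small sketch of running those XPaths from C# (the file name is a placeholder; the xsl prefix has to be bound to the XSLT namespace):

using System;
using System.Xml;

class XPathDemo
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.Load("sheet.xsl");   // placeholder file name

        var ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("xsl", "http://www.w3.org/1999/XSL/Transform");

        // the 'test' attribute of every xsl:when
        foreach (XmlNode a in doc.SelectNodes("//xsl:template/xsl:choose/xsl:when/@test", ns))
            Console.WriteLine(a.Value);

        // the 'name' attribute of the xsl:call-template elements inside them
        foreach (XmlNode a in doc.SelectNodes("//xsl:template/xsl:choose/xsl:when/xsl:call-template/@name", ns))
            Console.WriteLine(a.Value);
    }
}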
|
Given this XML, is there an xpath that will give me the 'test' and 'name' values?
|
I need to get the value of the 'test' attribute in the xsl:when tag, and the 'name' attribute in the xsl:call-template tag. This xpath gets me pretty close:
..../xsl:template/xsl:choose/xsl:when
But that just returns the 'when' elements, not the exact attribute values I need.
Here is a snippet of my XML:
<xsl:template match="field">
<xsl:choose>
<xsl:when test="@name='First Name'">
<xsl:call-template name="handleColumn_1" />
</xsl:when>
</xsl:choose>
|
[
"do you want .../xsl:template/xsl:choose/xsl:when/@test\nIf you want to actually get the value 'First Name' out of the test attribute, you're out of luck -- the content inside the attribute is just a string, and not a piece of xml, so you can't xpath it. If you need to get that, you must use string manipulation (Eg, substring) to get the right content\n",
"Steve Cooper answered the first part. For the second part, you can use:\n.../xsl:template/xsl:choose/xsl:when[@test=\"@name='First Name'\"]/xsl:call-template/@name\n\nWhich will match specifically the xsl:when in your above snippet. If you want it to match generally, then you can use:\n.../xsl:template/xsl:choose/xsl:when/xsl:call-template/@name\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"xml",
"xslt"
] |
stackoverflow_0000097474_xml_xslt.txt
|
Q:
How to start in Windows development?
I've been a Unix-based web programmer for years (Perl and PHP). I'm also competent with C and C++ (and bash and that sort of sysadmin sort of stuff) in terms of the language itself. I've never had a problem learning a new language (I mucked around with Java a few years ago and whilst I could write it I just didn't like it as a language).
What I don't have any experience with is the vast array of frameworks that exist for writing graphical Windows applications.
I have a few ideas for Windows-based applications that I want to work through. I could do this in Perl/TCL/TK but I want something more "native" for a variety of reasons.
Through my current company I have access to Microsoft tools (and the licences to use them for "development") so I've decided to teach myself something new.
So, I've got Visual Studio 2008 installed. I fired it up, clicked "New Project" and then got absolutely confused by the variety of types of new project I could start.
Can someone please help me understand the fundamental differences, and offer any advice on what sort of things each type lends itself to?
Assuming I'm going down the C++ route (I know the language hence not choosing C# - unless this is actually more advisable...) I could use:
Windows Forms
MFC Application
Win32
I also know that away from Microsoft I could use wxWidgets. wxWidgets does appeal to me (cross platform, etc) but how does this compare to the various Microsoft options above? I also know Qt exists.
A:
It depends on how 'close to the metal' you want to be. Choose .Net/C#/Windows Forms/WPF if you want to quickly write Windows-only applications. Choose C++/MFC if you are determined to learn a platform that is not easy to use and has warts from 15 years of legacy code, but gives you infinite control over every little detail (to be clear: MFC is Windows-only, too).
MFC is a wrapper around the C win32 api, plus some extra goodies that package standard functionality. It helps a lot to know how the win32 api works. To learn this, I recommend 'Programming Windows' by Charles Petzold (called 'the Petzold' by oldtimers). You can also choose to start with MFC. Have a look at the many samples and tutorials that are included with Visual Studio and on sites like codeproject.com.
.Net / C# is a lot easier to use. It abstracts away a lot of the Win32 api, but it's still a wrapper - so for some things you'll need to 'drop down a level', like you used to have with Visual Basic. IMHO (and I'll probably get modded down for this), C# is the new Visual Basic except that it's not so ugly as a language and that it's statically typed. To be fair, it has some advantages too, like not requiring the strange VB runtime (but it does require .Net, so...)
A:
C# is the language of choice for Windows development, for me. I came from the same kind of background as you, and I found C# incredibly refreshing. I really love this language, and .NET is now my platform of choice. Plus, it's easy to keep in touch with your Unix roots via Mono development. Really, .NET is a great platform and you should explore it.
Also, when it comes to Visual Studio, you have to remember that the different projects basically only specify what kind of libraries are included, by default, and the build process. If you want to stay with a Unix style Makefile, you could do windows development with Mono.
Alex
A:
Windows Forms is by far the nicest of those. However, using windows forms from C++ will just confuse you more if you don't already know what you're doing, because then you're really using C++/CLI, which might just as well be a completely different language. Better off going C# if you want to go that route.
MFC is probably closest to what you're familiar with. But, again, Windows Forms is so much nicer.
A:
I really would choose C# instead of C++. For Windows client apps, it can't be beat. For a C/C++ dude like you, the syntax learning curve will be short. The difficulty will be learning the .NET framework, but that's the cost you'll have to incur one way or the other.
Once you select C#, just pick either Windows Forms or WPF Application. Both are client-side application types. If you pick WPF Application, you'll also have to learn XAML, which is a fairly new, but massively powerful, concept.
A:
I've tried doing some C++ programming in .Net (Windows Forms). And while it was possible it was certainly not a pleasurable experience, mostly because you have some extra keywords and such which differ from standard C++. But if you're willing to learn some more C++ it's an option.
Myself I have started working on a project using C# which works really well. It's easy to learn if you have a background in C++.
I wouldn't for the world touch the Win32 API ever again. It's really terrible!
A:
If you're just interested in writing graphical windows applications, just stick with "Windows Form Application". It will start you out with a blank windows form and a class that contains your main() method.
The "Console Application" project is probably the simplest, it just creates one class file for you with a main() and that's it.
The "Class Library" project has scaffolding and default build settings for creating a DLL.
There generally aren't any fundamental differences between the different kinds of projects. All they do is set up some default includes for you and generate some scaffolding code (e.g., a blank windows form) to get you started.
I do recommend learning C#. If you know Java it won't be too much of a leap for you. The initial version of C# was actually designed to be exactly like Java, but they have diverged a bit over the years.
A:
I think what I would suggest has a lot more to do with your objective. If you are looking to build your own application and want to get it to market quickly, and it has to be Windows, then I would go with C# WF as others have suggested.
If you are looking to make yourself more employable then I would go with C#/ASP.Net. This way you are learning C# but are also learning more about web development in general, and ASP.Net in particular. I think you will find that Windows Forms is a lot easier, comparatively, and not really worth spending a lot of time on.
So if I were you I would build my application so that the majority of it is separated from the interface. I would first learn how to make that code interact in ASP.Net, and then I would try it in Windows Forms. If you can do that you will learn a lot of really important skills for .Net framework development.
A:
IMHO, wxWidgets is better than any of those. For example, I know of many people that converted their projects from MFC to wx. wxWidgets has all that MFC has (in early wx versions, a lot of classes were clones of MFC classes), and a lot more. It is not just a GUI library: you get wrappers for all kinds of common tasks, like reading/writing XML files, the Windows Registry, manipulation of various graphic types and image data, classes for conversion between character sets, etc. There are also a lot of add-on classes at the wxCode website that can enhance your applications easily.
wxWidgets is also cross-platform, has full Unicode support, and only advances further. If you decide to give it a try, make sure you try wxFormBuilder for easy, WYSIWYG builder of user interface (dialogs, windows, ...).
|
How to start in Windows development?
|
I've been a Unix-based web programmer for years (Perl and PHP). I'm also competent with C and C++ (and bash and that sort of sysadmin sort of stuff) in terms of the language itself. I've never had a problem learning a new language (I mucked around with Java a few years ago and whilst I could write it I just didn't like it as a language).
What I don't have any experience with is the vast array of frameworks that exist for writing graphical Windows applications.
I have a few ideas for Windows-based applications that I want to work through. I could do this in Perl/TCL/TK but I want something more "native" for a variety of reasons.
Through my current company I have access to Microsoft tools (and the licences to use them for "development") so I've decided to teach myself something new.
So, I've got Visual Studio 2008 installed. I fired it up, clicked "New Project" and then got absolutely confused by the variety of types of new project I could start.
Can someone please help me understand the fundamental differences, and offer any advice on what sort of things each type lends itself to?
Assuming I'm going down the C++ route (I know the language hence not choosing C# - unless this is actually more advisable...) I could use:
Windows Forms
MFC Application
Win32
I also know that away from Microsoft I could use wxWidgets. wxWidgets does appeal to me (cross platform, etc) but how does this compare to the various Microsoft options above? I also know Qt exists.
|
[
"It depends on how 'close to the metal' you want to be. Choose .Net/C#/Windows Forms/WPF if you want to quickly write Windows-only applications. Choose C++/MFC if you are determined to learn a platform that is not easy to use and has wards from 15 years of legacy code, but gives you infinite control over every little detail (to be clear: MFC is Windows-only, too).\nMFC is a wrapper around the C win32 api, plus some extra goodies that package standard functionality. It helps a lot to know how the win32 api works. To learn this, I recommend 'Programming Windows' by Charles Petzold (called 'the Petzold' by oldtimers). You can also choose to start with MFC. Have a look at the many samples and tutorials that are included with Visual Studio and on sites like codeproject.com.\n.Net / C# is a lot easier to use. It abstracts away a lot of the Win32 api, but it's still a wrapper - so for some things you'll need to 'drop down a level', like you used to have with Visual Basic. IMHO (and I'll probably get modded down for this), C# is the new Visual Basic except that it's not so ugly as a language and that it's statically typed. To be fair, it has some advantages too, like not requiring the strange VB runtime (but it does require .Net, so...)\n",
"C# is the language of choice for Windows development, for me. I came from the same kind of background as you, and I found C# incredibly refreshing. I really love this language, and .NET is now my platform of choice. Plus, it's easy to keep in touch with your Unix roots via Mono development. Really, .NET is a great platform and you should explore it.\nAlso, when it comes to Visual Studio, you have to remember that the different projects basically only specify what kind of libraries are included, by default, and the build process. If you want to stay with a Unix style Makefile, you could do windows development with Mono.\nAlex\n",
"Windows Forms is by far the nicest of those. However, using windows forms from C++ will just confuse you more if you don't already know what you're doing, because then you're really using C++/CLI, which might just as well be a completely different language. Better off going C# if you want to go that route.\nMFC is probably closest to what you're familiar with. But, again, Windows Forms is so much nicer.\n",
"I really would choose C# instead of C++. For Windows client apps, it can't be beat. For C/C++ dude like you, the syntax learning curve will be short. The difficulty will be learning the .NET framework, but that's the cost you'll have to incur one way or the other. \nOnce you select C#, just pick either Windows Forms or WPF Application. Both are client-side application types. If you pick WPF Application, you'll also have to learn XAML, which is a fairly new, but massively powerful, concept.\n",
"I've tried doing some C++ programming in .Net (Windows Forms). And while it was possible it was certainly not a pleasurable experience, mostly because you have some extra keywords and such which differ from standarad C++. But if you're willing to learn some more C++ it's an option.\nMyself I have started working on a project using C# which works really well. It's easy to learn to if you have a background in C++.\nI wouldn't for the world touch the Win32 API ever again. It's really terrible!\n",
"If you're just interested in writing graphical windows applications, just stick with \"Windows Form Application\". It will start you out with a blank windows form and a class that contains your main() method.\nThe \"Console Application\" project is probably the simplest, it just creates one class file for you with a main() and that's it.\nThe \"Class Library\" project has scaffolding and default build settings for creating a DLL.\nThere generally aren't any fundamental differences between the different kinds of projects. All they do is set up some default includes for you and generate some scaffolding code (e.g., a blank windows form) to get you started.\nI do recommend learning C#. If you know Java it won't be too much of a leap for you. The initial version of C# was actually designed to be exactly like Java, but they have diverged a bit over the years.\n",
"I think what I would suggest has a lot more to do with your objective. If you are looking to build your own application and want to get it to market quickly, and it has to be Windows, then I would go with C# WF as others have suggested.\nIf you are looking to make yourself more employable then I would go with C#/ASP.Net. This way you are learning C# but are also learning more about web developent, in general, and ASP.Net in particular. I think you will find that Windows Forms is a lot easier, comparatively, and not really be worth spending a lot of time on.\nSo if I were you I would build my application so that the majority of it is seperated from the interface. I would first learn how to make that code interact in ASP.Net, and then I would try it in Windows Forms. If you can do that you will learn a lot of really important skills for .Net framework development.\n",
"IMHO, wxWidgets is better than any of those. For example, I know of many people that converted their projects from MFC to wx. wxWidgets has all the MFC has (in early wx versions, a lot of classes were clones of MFC classes), and a lot more. It is not just a GUI library, but you have wrappers for all kinds of common tasks, like reading/writing XML files, Windows Registry, manipulation of various graphic types and image data, classes for conversion between character sets, etc. There are also a lot of add-on classes at wxCode website that can enhance your applications easily.\nwxWidgets is also cross-platform, has full Unicode support, and only advances further. If you decide to give it a try, make sure you try wxFormBuilder for easy, WYSIWYG builder of user interface (dialogs, windows, ...).\n"
] |
[
4,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"windows"
] |
stackoverflow_0000092556_windows.txt
|
Q:
How is a web service request handled in ASP.Net
When a client makes a web service request, how does asp.net assign an instance of the service class to handle that request?
Is a new instance of the service class created per request or is there pooling happening or is there a singleton instance used to handle all requests?
A:
For classic ASMX services you definitely get a new instance with each request, just like an ASPX request. For a WCF service (.SVC) you do have more options, such as running as a singleton.
If you are interested in doing work with a singleton and pooling you can use the ASMX service simply as the lightweight proxy to pass the parameters back and forth. Your implementation of the service could be a singleton that lives with the App Pool for your web site. You will need to account for the App Pool being reset occasionally as that is how IIS manages ASP.NET sites.
What you can also do is run a Windows Service with a WCF service that is always running. This service would listen to localhost on an endpoint only accessible from the same machine. You can then have the ASMX service call to the WCF service locally. This will allow you to always ensure your state is alive as long as you like even when IIS restarts the App Pool. Naturally, you can also change the security for the WCF Windows Service to allow access remotely with a password if you want to allow multiple web services to use the same service host for the purpose of improved resource usage.
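As a small illustration of the WCF singleton option (a sketch of mine, not the answerer's code), instancing is controlled with the ServiceBehavior attribute:
using System.ServiceModel;

[ServiceContract]
public interface ICounter
{
    [OperationContract]
    int Next();
}

// InstanceContextMode.Single: one instance serves every request. With the
// default ConcurrencyMode.Single, WCF also serializes calls into it.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class CounterService : ICounter
{
    private int _count; // state survives across requests in a singleton
    public int Next() { return ++_count; }
}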
|
How is a web service request handled in ASP.Net
|
When a client makes a web service request, how does asp.net assign an instance of the service class to handle that request?
Is a new instance of the service class created per request or is there pooling happening or is there a singleton instance used to handle all requests?
|
[
"For classic ASMX services you definitely get a new instance with each request, just like an ASPX request. For a WCF service (.SVC) you do have more options, such as running as a singleton.\nIf you are interested in doing work with a singleton and pooling you can use the ASMX service simply as the lightweight proxy to pass the parameters back and forth. Your implementation of the service could be a singleton that lives with the App Pool for your web site. You will need to account for the App Pool being reset occasionally as that is how IIS manages ASP.NET sites.\nWhat you can also do is run a Windows Service with a WCF service that is always running. This service would listen to localhost on an endpoint only accessible from the same machine. You can then have the ASMX service call to the WCF service locally. This will allow you to always ensure your state is alive as long as you like even when IIS restarts the App Pool. Naturally, you can also change the security for the WCF Windows Service to allow access remotely with a password if you want to allow multiple web services to use the same service host for the purpose of improved resource usage.\n"
] |
[
1
] |
[] |
[] |
[
"asp.net",
"web_services"
] |
stackoverflow_0000097452_asp.net_web_services.txt
|
Q:
Unicode debug visualizer in Visual Studio 2008
Is there a Unicode debug visualizer in Visual Studio 2008? I have an XML file that I'm pretty sure is in Unicode. When I open it in WordPad, it shows the Japanese characters correctly. When I read the file into a string using File.ReadAllText (UTF8), all the Japanese characters show up as blocks in the string visualizer. If I use the XML visualizer, the characters show up correctly.
A:
If you're getting square blocks, rather than complete garbage, you probably just need to specify a more suitable font in Visual Studio (in Tools | Options | Fonts and Colors). Try MS Gothic or MS Mincho (both Japanese fonts); I am guessing your issue can be resolved by tweaking the settings for [Watch, Locals and Autos Tool Windows], but it could be somewhere else.
Not all applications magically font-link to a font that contains the characters you want to display.
A:
You say it's Unicode, so why not use File.ReadAllText(Encoding.Unicode) then?
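A sketch of that, plus a variant that avoids guessing altogether (my addition; the file name is a placeholder): StreamReader can sniff the byte-order mark and pick the right encoding itself.
using System;
using System.IO;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        // Third argument = detectEncodingFromByteOrderMarks: if the file starts
        // with a BOM, the reader switches from the UTF-8 fallback automatically.
        using (StreamReader reader = new StreamReader("data.xml", Encoding.UTF8, true))
        {
            string text = reader.ReadToEnd();
            Console.WriteLine(reader.CurrentEncoding.EncodingName); // e.g. "Unicode" for UTF-16
        }
    }
}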
|
Unicode debug visualizer in Visual Studio 2008
|
Is there a Unicode debug visualizer in Visual Studio 2008? I have an XML file that I'm pretty sure is in Unicode. When I open it in WordPad, it shows the Japanese characters correctly. When I read the file into a string using File.ReadAllText (UTF8), all the Japanese characters show up as blocks in the string visualizer. If I use the XML visualizer, the characters show up correctly.
|
[
"If you're getting square blocks, rather than complete garbage, you probably just need to specify a more suitable font in Visual Studio (in Tools | Options | Fonts and Colors). Try MS Gothic or MS Mincho (both Japanese fonts); I am guessing your issue can be resolved by tweaking the settings for [Watch, Locals and Autos Tool Windows], but it could be somewhere else.\nNot all applications magically font-link to a font that contains the characters you want to display.\n",
"You say it's Unicode, so why not use File.ReadAllText(Encoding.Unicode) then?\n"
] |
[
3,
1
] |
[] |
[] |
[
"debugging",
"unicode",
"visual_studio_2008"
] |
stackoverflow_0000081949_debugging_unicode_visual_studio_2008.txt
|
Q:
C# 'generic' type problem
C# question (.net 3.5). I have a class, ImageData, that has a field ushort[,] pixels. I am dealing with proprietary image formats. The ImageData class takes a file location in the constructor, then switches on file extension to determine how to decode. In several of the image files, there is a "bit depth" field in the header. After I decode the header I read the pixel values into the "pixels" array. So far I have not had more than 16bpp, so I'm okay. But what if I have 32bpp?
What I want to do is have the type of pixels be determined at runtime. I want to do this after I read the bit depth out of the header and before I copy the pixel data into memory. Any ideas?
A:
I would say not to do that work in the constructor - a constructor should not do so much work, in my opinion. Use a factory method that reads the file to determine the bit depth, then have it construct the correct generic variant of the class and return it.
A:
To boil down your problem, you want to be able to have a class that has a ushort[,] pixels field (16-bits per pixel) sometimes and a uint32[,] pixels field (32-bits per pixel) some other times. There are a couple different ways to achieve this.
You could create replacements for ushort / uint32 by making a Pixel class with 32-bit and 16-bit sub-classes, overriding various operators up the wazoo, but this incurs a lot of overhead, is tricky to get right and even trickier to determine if it's right. Alternately you could create proxy classes for your pixel data (which would contain the ushort[,] or uint32[,] arrays and would have all the necessary accessors to be useful). The downside there is that you would likely end up with a lot of special case code in the ImageData class which executed one way or the other depending on some 16-bit/32-bit mode flag.
The better solution, I think, would be to sub-class ImageData into 16-bit and 32-bit classes, and use a factory method to create instances. E.g. ImageData is the base class, ImageData16bpp and ImageData32bpp are sub-classes, static method ImageData.Create(string imageFilename) is the factory method which creates either ImageData16bpp or ImageData32bpp depending on the header data. For example:
public static ImageData Create(string imageFilename)
{
// ...
ImageDataHeader imageHeader = ParseHeader(imageFilename);
ImageData newImageData;
if (imageHeader.bpp == 32)
{
newImageData = new ImageData32(imageFilename, imageHeader);
}
else
{
newImageData = new ImageData16(imageFilename, imageHeader);
}
// ...
return newImageData;
}
A:
Have your decode function return an object of type Array, which is the base class of all arrays. Then people who care about the type can do "if (a is ushort[,])" and so on if they want to go through the pixels. If you do it this way, you need to allocate the array in ImageData, not the other way around.
Alternatively, the caller probably knows what kind of pixel array they want you to use. Even if it's an 8bpp or 16bpp image, if you're decoding it to a 32bpp screen, you need to use uint instead of ushort. So you could write an ImageData function that will decode into integers of whatever type T is.
The root of your problem is that you don't know how to decide what kind of output format you want. You need to figure that out first, and the program syntax comes second.
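To make that last suggestion concrete, here is a minimal generic sketch (mine, not from the answers above); the caller supplies the per-pixel read, so 16bpp and 32bpp share one decode loop:
// Assumes: using System; using System.IO;
public static T[,] DecodePixels<T>(BinaryReader reader, int width, int height,
                                   Func<BinaryReader, T> readPixel) where T : struct
{
    T[,] pixels = new T[height, width];
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            pixels[y, x] = readPixel(reader); // reads exactly one pixel's bytes
    return pixels;
}

// Chosen at runtime after parsing the header:
// ushort[,] p16 = DecodePixels(reader, w, h, br => br.ReadUInt16());
// uint[,]   p32 = DecodePixels(reader, w, h, br => br.ReadUInt32());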
|
C# 'generic' type problem
|
C# question (.net 3.5). I have a class, ImageData, that has a field ushort[,] pixels. I am dealing with proprietary image formats. The ImageData class takes a file location in the constructor, then switches on file extension to determine how to decode. In several of the image files, there is a "bit depth" field in the header. After I decode the header I read the pixel values into the "pixels" array. So far I have not had more than 16bpp, so I'm okay. But what if I have 32bpp?
What I want to do is have the type of pixels be determined at runtime. I want to do this after I read the bit depth out of the header and before I copy the pixel data into memory. Any ideas?
|
[
"I would say not to do that work in the construtor - A constructor should not do so much work, in my opinion. Use a factory method that reads the file to determine the bit depth, then have it construct the correct generic variant of the class and return it.\n",
"To boil down your problem, you want to be able to have a class that has a ushort[,] pixels field (16-bits per pixel) sometimes and a uint32[,] pixels field (32-bits per pixel) some other times. There are a couple different ways to achieve this.\nYou could create replacements for ushort / uint32 by making a Pixel class with 32-bit and 16-bit sub-classes, overriding various operators up the wazoo, but this incurs a lot of overhead, is tricky to get right and even trickier to determine if its right. Alternately you could create proxy classes for your pixel data (which would contain the ushort[,] or uint32[,] arrays and would have all the necessary accessors to be useful). The downside there is that you would likely end up with a lot of special case code in the ImageData class which executed one way or the other depending on some 16-bit/32-bit mode flag.\nThe better solution, I think, would be to sub-class ImageData into 16-bit and 32-bit classes, and use a factory method to create instances. E.g. ImageData is the base class, ImageData16bpp and ImageData32bpp are sub-classes, static method ImageData.Create(string imageFilename) is the factory method which creates either ImageData16bpp or ImageData32bpp depending on the header data. For example:\npublic static ImageData Create(string imageFilename)\n{\n // ...\n ImageDataHeader imageHeader = ParseHeader(imageFilename);\n ImageData newImageData;\n if (imageHeader.bpp == 32)\n {\n newImageData = new ImageData32(imageFilename, imageHeader);\n }\n else\n {\n newImageData = new ImageData16(imageFilename, imageHeader);\n }\n // ...\n return newImageData;\n}\n\n",
"Have your decode function return an object of type Array, which is the base class of all arrays. Then people who care about the type can do \"if (a is ushort[,])\" and so on if they want to go through the pixels. If you do it this way, you need to allocate the array in ImageData, not the other way around.\nAlternatively, the caller probably knows what kind of pixel array they want you to use. Even if it's an 8bpp or 16bpp image, if you're decoding it to a 32bpp screen, you need to use uint instead of ushort. So you could write an ImageData function that will decode into integers of whatever type T is.\nThe root of your problem is that you don't know how to decide what kind of output format you want. You need to figure that out first, and the program syntax comes second.\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
"c#",
"generics",
"image"
] |
stackoverflow_0000097565_c#_generics_image.txt
|
Q:
How can I build C# ImageList Images from smaller component images?
I'd like to make status icons for a C# WinForms TreeList control. The statuses are combinations of other statuses (eg. a user node might be inactive or banned or inactive and banned), and the status icon is comprised of non-overlapping, smaller glyphs.
I'd really like to avoid having to hand-generate all the possible permutations of status icons if I can avoid it.
Is it possible to create an image list (or just a bunch of bitmap resources or something) that I can use to generate the ImageList programmatically?
I'm poking around the System.Drawing classes and nothing's jumping out at me. Also, I'm stuck with .Net 2.0.
A:
Bitmap image1 = ...
Bitmap image2 = ...

// Compose the glyphs onto one bitmap; later images draw over earlier ones.
Bitmap combined = new Bitmap(image1.Width, image1.Height);
using (Graphics g = Graphics.FromImage(combined)) {
    g.DrawImage(image1, new Point(0, 0));
    g.DrawImage(image2, new Point(0, 0));
}

imageList.Images.Add(combined);
A:
Just use Images.Add from the ImageList to add in the individual images. So, something like:
Image img = Image.FromStream( /*get stream from resources*/ );
ImageList1.Images.Add( img );
|
How can I build C# ImageList Images from smaller component images?
|
I'd like to make status icons for a C# WinForms TreeList control. The statuses are combinations of other statuses (eg. a user node might be inactive or banned or inactive and banned), and the status icon is comprised of non-overlapping, smaller glyphs.
I'd really like to avoid having to hand-generate all the possible permutations of status icons if I can avoid it.
Is it possible to create an image list (or just a bunch of bitmap resources or something) that I can use to generate the ImageList programmatically?
I'm poking around the System.Drawing classes and nothing's jumping out at me. Also, I'm stuck with .Net 2.0.
|
[
"Bitmap image1 = ...\nBitmap image2 = ...\n\nBitmap combined = new Bitmap(image1.Width, image1.Height);\nusing (Graphics g = Graphics.FromImage(combined)) {\n g.DrawImage(image1, new Point(0, 0));\n g.DrawImage(image2, new Point(0, 0);\n}\n\nimageList.Add(combined);\n\n",
"Just use Images.Add from the ImageList to add in the individual images. So, something like:\n\nImage img = Image.FromStream( /*get stream from resources*/ );\nImageList1.Images.Add( img );\n\n"
] |
[
1,
0
] |
[] |
[] |
[
".net_2.0",
"c#",
"image",
"system.drawing",
"winforms"
] |
stackoverflow_0000096871_.net_2.0_c#_image_system.drawing_winforms.txt
|
Q:
Algorithm for hit test in non-overlapping rectangles
I have a collection of non-overlapping rectangles that cover an enclosing rectangle. What is the best way to find the containing rectangle for a mouse click?
The obvious answer is to have an array of rectangles and to search them in sequence, making the search O(n). Is there some way to order them by position so that the algorithm is less than O(n), say, O(log n) or O(sqrt(n))?
A:
You can organize your rectangles in a quad or kd-tree. That gives you O(log n). That's the mainstream method.
Another interesting data structure for this problem is the R-tree. R-trees can be very efficient if you have to deal with lots of rectangles.
http://en.wikipedia.org/wiki/R-tree
And then there is the O(1) method of simply generating a bitmap the same size as your screen, filling it with a placeholder for "no rectangle", and drawing the hit-rectangle indices into that bitmap. A lookup becomes as simple as:
int id = bitmap_getpixel (mouse.x, mouse.y)
if (id != -1)
{
hit_rectangle (id);
}
else
{
no_hit();
}
Obviously that method only works well if your rectangles don't change that often and if you can spare the memory for the bitmap.
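For concreteness, a minimal C# rendering of that lookup-table idea (my sketch, not the answerer's code; rects, screenWidth, screenHeight, and mouse are assumed to exist):
int[,] lookup = new int[screenWidth, screenHeight];
for (int x = 0; x < screenWidth; x++)
    for (int y = 0; y < screenHeight; y++)
        lookup[x, y] = -1; // -1 = "no rectangle"

// Rasterize each rectangle's index once, up front.
for (int i = 0; i < rects.Length; i++)
    for (int x = rects[i].Left; x < rects[i].Right; x++)
        for (int y = rects[i].Top; y < rects[i].Bottom; y++)
            lookup[x, y] = i;

// Hit testing is then a single array read per click:
int id = lookup[mouse.X, mouse.Y];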
A:
Create an Interval Tree. Query the Interval Tree. Consult 'Algorithms' from MIT Press.
An Interval Tree is best implemented as a Red-Black Tree.
Keep in mind that it is only advisable to sort your rectangles if you are going to be clicking at them more than you are changing their positions, usually.
You'll have to keep in mind that you have to build your indices for different axes separately. E.g., you have to see if you overlap an interval on X and on Y. One obvious optimization is to only check for overlap on the X interval, then immediately check for overlap on Y.
Also, most stock or 'classbook' Interval Trees only check for a single interval, and only return a single Interval (but you said "non-overlapping" didn't you?)
A:
Shove them in a quadtree.
A:
Use a BSP tree to store the rectangles.
|
Algorithm for hit test in non-overlapping rectangles
|
I have a collection of non-overlapping rectangles that cover an enclosing rectangle. What is the best way to find the containing rectangle for a mouse click?
The obvious answer is to have an array of rectangles and to search them in sequence, making the search O(n). Is there some way to order them by position so that the algorithm is less than O(n), say, O(log n) or O(sqrt(n))?
|
[
"You can organize your rectangles in a quad or kd-tree. That gives you O(log n). That's the mainstream method.\nAnother interesting data-structure for this problem are R-trees. These can be very efficient if you have to deal with lots of rectangles.\nhttp://en.wikipedia.org/wiki/R-tree\nAnd then there is the O(1) method of simply generating a bitmap at the same size as your screen, fill it with a place-holder for \"no rectangle\" and draw the hit-rectangle indices into that bitmap. A lookup becomes as simple as:\n int id = bitmap_getpixel (mouse.x, mouse.y)\n if (id != -1)\n {\n hit_rectange (id);\n }\n else\n {\n no_hit();\n }\n\nObviously that method only works well if your rectangles don't change that often and if you can spare the memory for the bitmap.\n",
"Create an Interval Tree. Query the Interval Tree. Consult 'Algorithms' from MIT press.\nAn Interval Tree is best implemented as a Red-Black Tree.\nKeep in mind that it is only advisable to sort your rectangles if you are going to be clicking at them more then you are changing their positions, usually.\nYou'll have to keep in mind, that you have build build your indices for different axis separately. E.g., you have to see if you overlap an interval on X and on Y. One obvious optimization is to only check for overlap on either X interval, then immediately check for overlap on Y.\nAlso, most stock or 'classbook' Interval Trees only check for a single interval, and only return a single Interval (but you said \"non-overlapping\" didn't you?)\n",
"Shove them in a quadtree.\n",
"Use a BSP tree to store the rectangles.\n"
] |
[
6,
1,
0,
0
] |
[] |
[] |
[
".net",
"geometry"
] |
stackoverflow_0000097762_.net_geometry.txt
|
Q:
Regexes and multiple multi-character delimiters
Suppose you have the following string:
white sand, tall waves, warm sun
It's easy to write a regular expression that will match the delimiters, which the Java String.split() method can use to give you an array containing the tokens "white sand", "tall waves" and "warm sun":
\s*,\s*
Now say you have this string:
white sand and tall waves and warm sun
Again, the regex to split the tokens is easy (ensuring you don't get the "and" inside the word "sand"):
\s+and\s+
Now, consider this string:
white sand, tall waves and warm sun
Can a regex be written that will match the delimiters correctly, allowing you to split the string into the same tokens as in the previous two cases? Alternatively, can a regex be written that will match the tokens themselves and omit the delimiters? (Any amount of white space on either side of a comma or the word "and" should be considered part of the delimiter.)
Edit: As has been pointed out in the comments, the correct answer should robustly handle delimiters at the beginning or end of the input string. The ideal answer should be able to take a string like ",white sand, tall waves and warm sun and " and provide these exact three tokens:
[ "white sand", "tall waves", "warm sun" ]
...without extra empty tokens or extra white space at the start or end of any token.
Edit: It's been pointed out that extra empty tokens are unavoidable with String.split(), so that's been removed as a criterion for the "perfect" regex.
Thanks everyone for your responses! I've tried to make sure I upvoted everyone who contributed a workable regex that wasn't essentially a duplicate. Dan's answer was the most robust (it even handles ",white sand, tall waves,and warm sun and " reasonably, with that odd comma placement after the word "waves"), so I've marked his as the accepted answer. The regex provided by nsayer was a close second.
A:
This should be pretty resilient, and handle stuff like delimiters at the end of the string ("foo and bar and ", for example)
\s*(?:\band\b|,)\s*
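To see it in action (my demo, shown in C# for brevity; Java's String.split behaves the same way), note the empty tokens that split-style APIs emit around leading and trailing delimiters:
using System;
using System.Text.RegularExpressions;

class SplitDemo
{
    static void Main()
    {
        string input = ",white sand, tall waves and warm sun and ";
        string[] tokens = Regex.Split(input.Trim(), @"\s*(?:\band\b|,)\s*");
        foreach (string t in tokens)
            if (t.Length > 0) // discard the unavoidable empty tokens
                Console.WriteLine("[" + t + "]");
        // Prints: [white sand] [tall waves] [warm sun]
    }
}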
A:
This should catch either 'and' or ','
(?:\sand|,)\s
A:
The problem with
\s*(,|(and))\s*
is that it would split up "sand" inappropriately.
The problem with
\s+(,|(and))\s+
is that it requires spaces around commas.
The right answer probably has to be
(\s*,\s*)|(\s+and\s+)
I'll cheat a little on the concept of returning the strings surrounded by delimiters by suggesting that lots of languages have a "split" operator that does exactly what you want when the regex specifies the form of the delimiter itself. See the Java String.split() function.
A:
Would this work?
\s*(,|\s+and)\s+
A:
Yes, that's what regexps are for:
\s*(?:and|,)\s*
The | defines alternatives, the () groups the selectors, and the ?: ensures the regexp engine won't try to retain the value between the ().
EDIT: to avoid the sand pitfall (thanks for notifying):
\s*(?:[^s]and|,)\s*
A:
(?:(?<!s)and\s+|\,\s+)
Might work
I don't have a way to test it, but I took out the just-space matcher.
A:
Maybe:
((\s*,\s*)|(\s+and\s+))
I'm not a java programmer, so I'm not sure if java regex allows '?'
|
Regexes and multiple multi-character delimiters
|
Suppose you have the following string:
white sand, tall waves, warm sun
It's easy to write a regular expression that will match the delimiters, which the Java String.split() method can use to give you an array containing the tokens "white sand", "tall waves" and "warm sun":
\s*,\s*
Now say you have this string:
white sand and tall waves and warm sun
Again, the regex to split the tokens is easy (ensuring you don't get the "and" inside the word "sand"):
\s+and\s+
Now, consider this string:
white sand, tall waves and warm sun
Can a regex be written that will match the delimiters correctly, allowing you to split the string into the same tokens as in the previous two cases? Alternatively, can a regex be written that will match the tokens themselves and omit the delimiters? (Any amount of white space on either side of a comma or the word "and" should be considered part of the delimiter.)
Edit: As has been pointed out in the comments, the correct answer should robustly handle delimiters at the beginning or end of the input string. The ideal answer should be able to take a string like ",white sand, tall waves and warm sun and " and provide these exact three tokens:
[ "white sand", "tall waves", "warm sun" ]
...without extra empty tokens or extra white space at the start or end of any token.
Edit: It's been pointed out that extra empty tokens are unavoidable with String.split(), so that's been removed as a criterion for the "perfect" regex.
Thanks everyone for your responses! I've tried to make sure I upvoted everyone who contributed a workable regex that wasn't essentially a duplicate. Dan's answer was the most robust (it even handles ",white sand, tall waves,and warm sun and " reasonably, with that odd comma placement after the word "waves"), so I've marked his as the accepted answer. The regex provided by nsayer was a close second.
|
[
"This should be pretty resilient, and handle stuff like delimiters at the end of the string (\"foo and bar and \", for example)\n\\s*(?:\\band\\b|,)\\s*\n\n",
"This should catch both 'and' or ','\n(?:\\sand|,)\\s\n\n",
"The problem with\n\\s*(,|(and))\\s*\n\nis that it would split up \"sand\" inappropriately.\nThe problem with\n\\s+(,|(and))\\s+\n\nis that it requires spaces around commas.\nThe right answer probably has to be\n(\\s*,\\s*)|(\\s+and\\s+)\n\nI'll cheat a little on the concept of returning the strings surrounded by delimiters by suggesting that lots of languages have a \"split\" operator that does exactly what you want when the regex specifies the form of the delimiter itself. See the Java String.split() function.\n",
"Would this work?\n\\s*(,|\\s+and)\\s+\n\n",
"Yes, that's what regexp are for :\n\\s*(?:and|,)\\s*\n\nThe | defines alternatives, the () groups the selectors and the :? ensure the regexp engine won't try to retain the value between the ().\nEDIT : to avoid the sand pitfall (thanks for notifying) :\n\\s*(?:[^s]and|,)\\s*\n\n",
"(?:(?<!s)and\\s+|\\,\\s+)\n\nMight work\nDon't have a way to test it, but took out the just space matcher.\n",
"Maybe:\n((\\s*,\\s*)|(\\s+and\\s+))\nI'm not a java programmer, so I'm not sure if java regex allows '?'\n"
] |
[
5,
2,
2,
2,
1,
0,
0
] |
[] |
[] |
[
"regex"
] |
stackoverflow_0000097435_regex.txt
|
Q:
What is the best tool to convert common video formats to FLV on a Linux CLI
Part of a new product I have been assigned to work on involves server-side conversion of the 'common' video formats to something that Flash can play.
As far as I know, my only option is to convert to FLV. I have been giving ffmpeg a go around, but I'm finding a few WMV files that come out with garbled sound (I've tried playing with the audio rates).
Are there any other 'good' CLI converters for Linux? Or are there other video formats that Flash can play?
A:
Flash can play the following formats:
FLV with AAC or MP3 audio, and FLV1 (Sorenson Spark H.263), VP6, or H.264 video.
MP4 with AAC or MP3 audio, and H.264 video (mp4s must be hinted with qt-faststart or mp4box).
ffmpeg is an overall good conversion utility; mencoder works better with obscure and proprietary formats (due to the w32codecs binary decoder package) but its muxing is rather suboptimal (read: often totally broken). One solution might be to encode H.264 with x264 through mencoder, and then mux separately with mp4box.
As a developer of x264 (and avid user of flash for online video playback), I've had quite a bit of experience in this kind of stuff, so if you want more assistance I'm also available on Freenode IRC on #x264, #ffmpeg, and #mplayer.
A:
Most encoders, by default (ffmpeg included) put the header atom of the mp4 (the "moov atom") at the end of the video, since they can't place the header until they're done encoding. However, in order for the file to start playback before it's done downloading, the moov atom has to be moved to the front.
To do this, you have to (re)mux using mp4box (which does it by default) or use qt-faststart, a script for ffmpeg that simply moves the atom to the front. It's quite simple.
Note that for FLV, by default, ffmpeg will use the FLV1 video format, which is pretty terrible; it's over a decade old by this point and its efficiency is rather awful given modern standards. You're much better off using a more modern format like H.264.
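For reference, a typical invocation along those lines (my example, not from the answers; note that FLV audio must be sampled at 11025, 22050, or 44100 Hz, which is a common culprit for garbled sound from WMV sources):
ffmpeg -i input.wmv -ar 44100 -ab 64k -b 500k -f flv output.flv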
|
What is the best tool to convert common video formats to FLV on a Linux CLI
|
Part of a new product I have been assigned to work on involves server-side conversion of the 'common' video formats to something that Flash can play.
As far as I know, my only option is to convert to FLV. I have been giving ffmpeg a go around, but I'm finding a few WMV files that come out with garbled sound (I've tried playing with the audio rates).
Are there any other 'good' CLI converters for Linux? Or are there other video formats that Flash can play?
|
[
"Flash can play the following formats:\nFLV with AAC or MP3 audio, and FLV1 (Sorenson Spark H.263), VP6, or H.264 video.\nMP4 with AAC or MP3 audio, and H.264 video (mp4s must be hinted with qt-faststart or mp4box).\n\nffmpeg is an overall good conversion utility; mencoder works better with obscure and proprietary formats (due to the w32codecs binary decoder package) but its muxing is rather suboptimal (read: often totally broken). One solution might be to encode H.264 with x264 through mencoder, and then mux separately with mp4box.\nAs a developer of x264 (and avid user of flash for online video playback), I've had quite a bit of experience in this kind of stuff, so if you want more assistance I'm also available on Freenode IRC on #x264, #ffmpeg, and #mplayer.\n",
"Most encoders, by default (ffmpeg included) put the header atom of the mp4 (the \"moov atom\") at the end of the video, since they can't place the header until they're done encoding. However, in order for the file to start playback before its done downloading, the moov atom has to be moved to the front.\nTo do this, you have to (re)mux using mp4box (which does it by default) or use qt-faststart, a script for ffmpeg that simply moves the atom to the front. Its quite simple.\nNote that for FLV, by default, ffmpeg will use the FLV1 video format, which is pretty terrible; its over a decade old by this point and its efficiency is rather awful given modern standards. You're much better off using a more modern format like H.264.\n"
] |
[
15,
2
] |
[] |
[] |
[
"flash",
"flv",
"linux",
"video"
] |
stackoverflow_0000097781_flash_flv_linux_video.txt
|
Q:
How do I indicate the SQL default library in an IBM iSeries 2 connection string to an AS/400?
I'm connecting to an AS/400 stored procedure layer using the IBM iSeries Access for Windows package. This provides a .NET DLL with classes similar to those in the System.Data namespace. As such we use their implementation of the connection class and provide it with a connection string.
Does anyone know how I can amend the connection string to indicate the default library it should use?
A:
If you are connecting through .NET:
Provider=IBMDA400;Data Source=as400.com;User Id=user;Password=password;Default Collection=yourLibrary;
Default Collection is the parameter that sets the library where your programs should start executing.
And if you are connecting through ODBC from Windows (like setting up a driver in the control panel):
DRIVER=Client Access ODBC Driver(32-bit);SYSTEM=as400.com;EXTCOLINFO=1;UID=user;PWD=password;LibraryList=yourLibrary
In this case LibraryList is the parameter to set, remember this is for ODBC connection.
There are two drivers from IBM to connect to the AS400, the older one uses the above connection string, if you have the newest version of the client software from IBM called "System i Access for Windows" then you should use this connection string:
DRIVER=iSeries Access ODBC Driver;SYSTEM=as400.com;EXTCOLINFO=1;UID=user;PWD=password;LibraryList=yourLibrary
The last is pretty much the same, only the DRIVER parameter value changes.
If you are using this in a .NET application don't forget to add the providerName parameter to your XML tag and define the API used for connecting which would be OleDb in this case:
providerName="System.Data.OleDb"
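For illustration, a minimal C# sketch using that first connection string (mine, not the answerer's; the server, credentials, and library are placeholders):
using System;
using System.Data.OleDb;

class As400Demo
{
    static void Main()
    {
        string cs = "Provider=IBMDA400;Data Source=as400.example.com;" +
                    "User Id=user;Password=password;Default Collection=MYLIB;";
        using (OleDbConnection conn = new OleDbConnection(cs))
        {
            conn.Open();
            // Unqualified names now resolve against MYLIB first.
            OleDbCommand cmd = new OleDbCommand("SELECT * FROM MYTABLE", conn);
            using (OleDbDataReader reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine(reader[0]);
        }
    }
}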
A:
Snippet from some Delphi source code using the Client Access Express Driver. Probably not exactly what you are looking for, but it may help others that stumble upon this post. The DBQ part is the default library, and the System part is the AS400/DB2 host name.
ConnectionString :=
'Driver={Client Access ODBC Driver (32-bit)};' +
'System=' + System + ';' +
'DBQ=' + Lib + ';' +
'TRANSLATE=1;' +
'CMT=0;' +
//'DESC=Client Access Express ODBC data source;' +
'QAQQINILIB=;' +
'PKG=QGPL/DEFAULT(IBM),2,0,1,0,512;' +
'SORTTABLE=;' +
'LANGUAGEID=ENU;' +
'XLATEDLL=;' +
'DFTPKGLIB=QGPL;';
A:
Are you using the Catalog Library List parameter for OLE DB? This is what my connection string typically looks like:
<add name="AS400ConnectionString" connectionString="Data Source=DEVL820;Initial Catalog=Q1A_DATABASE_SRVR;Persist Security Info=False;User ID=BLAH;Password=BLAHBLAH;Provider=IBMDASQL.DataSource.1;Catalog Library List=&quot;HTSUTST, HTEUSRJ, HTEDTA&quot;" providerName="System.Data.OleDb" />
|
How do I indicate the SQL default library in an IBM iSeries 2 connection string to an AS/400?
|
I'm connecting to an AS/400 stored procedure layer using the IBM iSeries Access for Windows package. This provides a .NET DLL with classes similar to those in the System.Data namespace. As such we use their implementation of the connection class and provide it with a connection string.
Does anyone know how I can amend the connection string to indicate the default library it should use?
|
[
"If you are connecting through .NET:\nProvider=IBMDA400;Data Source=as400.com;User Id=user;Password=password;Default Collection=yourLibrary;\n\nDefault Collection is the parameter that sets the library where your programs should start executing.\nAnd if you are connecting through ODBC from Windows (like setting up a driver in the control panel):\nDRIVER=Client Access ODBC Driver(32-bit);SYSTEM=as400.com;EXTCOLINFO=1;UID=user;PWD=password;LibraryList=yourLibrary\n\nIn this case LibraryList is the parameter to set, remember this is for ODBC connection.\nThere are two drivers from IBM to connect to the AS400, the older one uses the above connection string, if you have the newest version of the client software from IBM called \"System i Access for Windows\" then you should use this connection string:\nDRIVER=iSeries Access ODBC Driver;SYSTEM=as400.com;EXTCOLINFO=1;UID=user;PWD=password;LibraryList=yourLibrary\n\nThe last is pretty much the same, only the DRIVER parameter value changes.\nIf you are using this in a .NET application don't forget to add the providerName parameter to your XML tag and define the API used for connecting which would be OleDb in this case:\nproviderName=\"System.Data.OleDb\"\n\n",
"Snippet from some Delphi source code using the Client Access Express Driver. Probably not exactly what you are looking for, but it may help others that stumble upon this post. The DBQ part is the default library, and the System part is the AS400/DB2 host name.\nConnectionString :=\n 'Driver={Client Access ODBC Driver (32-bit)};' +\n 'System=' + System + ';' +\n 'DBQ=' + Lib + ';' +\n 'TRANSLATE=1;' +\n 'CMT=0;' +\n //'DESC=Client Access Express ODBC data source;' +\n 'QAQQINILIB=;' +\n 'PKG=QGPL/DEFAULT(IBM),2,0,1,0,512;' + \n 'SORTTABLE=;' +\n 'LANGUAGEID=ENU;' +\n 'XLATEDLL=;' +\n 'DFTPKGLIB=QGPL;';\n\n",
"Are you using the Catalog Library List parameter for OLE DB? This is what my connection string typically looks like:\n<add name=\"AS400ConnectionString\" connectionString=\"Data Source=DEVL820;Initial Catalog=Q1A_DATABASE_SRVR;Persist Security Info=False;User ID=BLAH;Password=BLAHBLAH;Provider=IBMDASQL.DataSource.1;**Catalog Library List="HTSUTST, HTEUSRJ, HTEDTA"**\" providerName=\"System.Data.OleDb\" />\n\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
".net",
"connection_string",
"database",
"ibm_midrange"
] |
stackoverflow_0000084310_.net_connection_string_database_ibm_midrange.txt
|
Q:
How do you limit the height of a System.Windows.Form to an exact value?
What I am trying to achieve is a form that has a button on it that causes the Form to 'drop-down' and become larger, displaying more information. My current attempt is this:
private void btnExpand_Click(object sender, EventArgs e)
{
if (btnExpand.Text == ">")
{
btnExpand.Text = "<";
_expanded = true;
this.MinimumSize = new Size(1, 300);
this.MaximumSize = new Size(int.MaxValue, 300);
}
else
{
btnExpand.Text = ">";
_expanded = false;
this.MinimumSize = new Size(1, 104);
this.MaximumSize = new Size(int.MaxValue, 104);
}
}
Which works great! Except for one small detail... Note that the width values are supposed to be able to go from 1 to int.MaxValue? Well, in practice, they go from this.Width to int.MaxValue, i.e. you can make the form larger, but never smaller again. I'm at a loss for why this would occur. Anyone have any ideas?
For the record: I've also tried a Form.Resize handler that set the Height of the form to the same value depending on whatever the boolean _expanded was set to, but I ended up with the same side effect.
PS: I'm using .NET 3.5 in Visual Studio 2008. Other solutions are welcome, but this was my thoughts on how it "should" be done and how I attempted to do it.
Edit: Seems the code works, as per the accepted answer's response. If anyone else has trouble with this particular problem, check the AutoSize property of your form; it should be FALSE, not TRUE. (This is the default, but I'd switched it on earlier, as I was using the form, and a label with AutoSize also on, for displaying debugging info.)
A:
As per the docs, use 0 to denote no maximum or minimum size. Though, I just tried it and it didn't like 0 at all. So I used int.MaxValue like you did and it worked. What version of the framework are you using?
A:
Actually, having a look at MinimumSize and MaximumSize (.NET 3.5) in Reflector, it's pretty clear that the designed behaviour is not quite the same as the docs suggest. There are some minimum width constraints determined from a helper class, and 0 has no special meaning (i.e., no limit).
Another note: I see in your code above that you are expanding or contracting based upon the text value of your Button. This is a bad idea: if someone comes along later and changes the text in the designer to, say, "Expand" instead of < without looking at your code, it will have an unexpected side effect (presumably you have some code somewhere that changes the button text). It would be better to have a state variable somewhere and switch on that.
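A minimal sketch of that suggestion (mine, reusing the question's own sizes): drive everything from a bool and derive the button text from the state, never the reverse.
private bool _expanded;

private void btnExpand_Click(object sender, EventArgs e)
{
    _expanded = !_expanded;
    int height = _expanded ? 300 : 104;
    this.MinimumSize = new Size(1, height);
    this.MaximumSize = new Size(int.MaxValue, height);
    btnExpand.Text = _expanded ? "<" : ">"; // text follows state
}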
|
How do you limit the height of a System.Windows.Form to an exact value?
|
What I am trying to achieve is a form that has a button on it that causes the Form to 'drop-down' and become larger, displaying more information. My current attempt is this:
private void btnExpand_Click(object sender, EventArgs e)
{
if (btnExpand.Text == ">")
{
btnExpand.Text = "<";
_expanded = true;
this.MinimumSize = new Size(1, 300);
this.MaximumSize = new Size(int.MaxValue, 300);
}
else
{
btnExpand.Text = ">";
_expanded = false;
this.MinimumSize = new Size(1, 104);
this.MaximumSize = new Size(int.MaxValue, 104);
}
}
Which works great! Except for one small detail... Note that the width values are supposed to be able to go from 1 to int.MaxValue? Well, in practice, they go from this.Width to int.MaxValue, i.e. you can make the form larger, but never smaller again. I'm at a loss for why this would occur. Anyone have any ideas?
For the record: I've also tried a Form.Resize handler that set the Height of the form to the same value depending on whatever the boolean _expanded was set to, but I ended up with the same side effect.
PS: I'm using .NET 3.5 in Visual Studio 2008. Other solutions are welcome, but this was my thoughts on how it "should" be done and how I attempted to do it.
Edit: Seems the code works, as per the accepted answer's response. If anyone else has trouble with this particular problem, check the AutoSize property of your form; it should be FALSE, not TRUE. (This is the default, but I'd switched it on earlier, as I was using the form, and a label with AutoSize also on, for displaying debugging info.)
|
[
"As per the docs, use 0 to denote no maximum or minimum size. Tho, I just tried it and it didn't like 0 at all. So I used int.MaxValue like you did and it worked. What version of the the framework you using?\n",
"Actually, having a look at the MinimumSize and MaximumSize (.NET 3.5) in reflector its pretty clear that the designed behaviour is not quite the same as the docs suggest. There is some minimum width constraints determined from a helper class and 0 has no special meaning (i.e. no limit.\nAnother note, I see in your code above that you are Expanding or Contracting based upon the text value of your Button, this is a bad idea, if someone comes along later and changes the text in the designer to say, \"Expand\" instead of < without looking at your code it will then have an unexpected side effect, presumably you have some code somewhere that changes the button text, it would be better to have a state variable somewhere and switch on that.\n"
] |
[
1,
0
] |
[] |
[] |
[
"c#",
"winforms"
] |
stackoverflow_0000097092_c#_winforms.txt
|
Q:
How to send mail from ASP.NET with IIS6 SMTP in a dedicated server?
I'm trying to configure a dedicated server that runs ASP.NET to send mail through the local IIS SMTP server but mail is getting stuck in the Queue folder and doesn't get delivered.
I'm using this code in an .aspx page to test:
<%@ Page Language="C#" AutoEventWireup="true" %>
<% new System.Net.Mail.SmtpClient("localhost").Send("[email protected]",
"[email protected]", "testing...", "Hello, world.com"); %>
Then, I added the following to the Web.config file:
<system.net>
<mailSettings>
<smtp>
<network host="localhost"/>
</smtp>
</mailSettings>
</system.net>
In the IIS Manager I've changed the following in the properties of the "Default SMTP Virtual Server".
General: [X] Enable Logging
Access / Authentication: [X] Windows Integrated Authentication
Access / Relay Restrictions: (o) Only the list below, Granted 127.0.0.1
Delivery / Advanced: Fully qualified domain name = thedomain.com
Finally, I run the SMTPDiag.exe tool like this:
C:\>smtpdiag.exe [email protected] [email protected]
Searching for Exchange external DNS settings.
Computer name is THEDOMAIN.
Failed to connect to the domain controller. Error: 8007054b
Checking SOA for gmail.com.
Checking external DNS servers.
Checking internal DNS servers.
SOA serial number match: Passed.
Checking local domain records.
Checking MX records using TCP: thedomain.com.
Checking MX records using UDP: thedomain.com.
Both TCP and UDP queries succeeded. Local DNS test passed.
Checking remote domain records.
Checking MX records using TCP: gmail.com.
Checking MX records using UDP: gmail.com.
Both TCP and UDP queries succeeded. Remote DNS test passed.
Checking MX servers listed for [email protected].
Connecting to gmail-smtp-in.l.google.com [209.85.199.27] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to gmail-smtp-in.l.google.com.
Connecting to gmail-smtp-in.l.google.com [209.85.199.114] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to gmail-smtp-in.l.google.com.
Connecting to alt2.gmail-smtp-in.l.google.com [209.85.135.27] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
Connecting to alt2.gmail-smtp-in.l.google.com [209.85.135.114] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
Connecting to alt1.gmail-smtp-in.l.google.com [209.85.133.27] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt1.gmail-smtp-in.l.google.com.
Connecting to alt2.gmail-smtp-in.l.google.com [74.125.79.27] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
Connecting to alt2.gmail-smtp-in.l.google.com [74.125.79.114] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
Connecting to alt1.gmail-smtp-in.l.google.com [209.85.133.114] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt1.gmail-smtp-in.l.google.com.
Connecting to gsmtp183.google.com [64.233.183.27] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to gsmtp183.google.com.
Connecting to gsmtp147.google.com [209.85.147.27] on port 25.
Connecting to the server failed. Error: 10051
Failed to submit mail to gsmtp147.google.com.
I'm using ASP.NET 2.0, Windows 2003 Server and the IIS that comes with it.
Can you tell me what else to change to fix the problem?
Thanks
@mattlant
This is a dedicated server that's why I'm installing the SMTP manually.
EDIT: I use exchange so its a little
different, but its called a smart host
in exchange, but in plain SMTP service
config i think its called something
else. Cant remember exactly the
setting name.
Thank you for pointing me at the Smart host field. Mail is getting delivered now.
In the Default SMTP Virtual Server properties, the Delivery tab, click Advanced and fill the "Smart host" field with the address that your provider gives you. In my case (GoDaddy) it was k2smtpout.secureserver.net.
More info here: http://help.godaddy.com/article/1283
A:
I find the best thing usually depending on how much email there is, is to just forward the mail through your ISP's SMTP server. Less headaches. Looks like that's where you are having issues, from your SMTP to external servers, not asp.net to your SMTP.
Just have your SMTP server set to send it to your ISP, or you can configure asp.net to send to it.
EDIT: I use exchange so it's a little different, but it's called a smart host in exchange, but in plain SMTP service config I think it's called something else.
I can't remember exactly the setting name.
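For reference, bypassing the local SMTP service and submitting straight to the provider's relay is just a config change; a minimal sketch (the host name is a placeholder for whatever your ISP/provider gives you):
<system.net>
  <mailSettings>
    <smtp>
      <network host="smtpout.yourisp.example" port="25"/>
    </smtp>
  </mailSettings>
</system.net>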
A:
By the looks of things your firewall isn't letting SMTP (TCP port 25) out of your network.
A:
two really obvious questions (just in case they haven't been covered)
1. has windows firewall been disabled?
2. do you have a personal/company firewall that is preventing your mail from being sent?
|
How to send mail from ASP.NET with IIS6 SMTP in a dedicated server?
|
I'm trying to configure a dedicated server that runs ASP.NET to send mail through the local IIS SMTP server but mail is getting stuck in the Queue folder and doesn't get delivered.
I'm using this code in an .aspx page to test:
<%@ Page Language="C#" AutoEventWireup="true" %>
<% new System.Net.Mail.SmtpClient("localhost").Send("[email protected]",
"[email protected]", "testing...", "Hello, world.com"); %>
Then, I added the following to the Web.config file:
<system.net>
<mailSettings>
<smtp>
<network host="localhost"/>
</smtp>
</mailSettings>
</system.net>
In the IIS Manager I've changed the following in the properties of the "Default SMTP Virtual Server".
General: [X] Enable Logging
Access / Authentication: [X] Windows Integrated Authentication
Access / Relay Restrictions: (o) Only the list below, Granted 127.0.0.1
Delivery / Advanced: Fully qualified domain name = thedomain.com
Finally, I run the SMTPDiag.exe tool like this:
C:\>smtpdiag.exe [email protected] [email protected]
Searching for Exchange external DNS settings.
Computer name is THEDOMAIN.
Failed to connect to the domain controller. Error: 8007054b
Checking SOA for gmail.com.
Checking external DNS servers.
Checking internal DNS servers.
SOA serial number match: Passed.
Checking local domain records.
Checking MX records using TCP: thedomain.com.
Checking MX records using UDP: thedomain.com.
Both TCP and UDP queries succeeded. Local DNS test passed.
Checking remote domain records.
Checking MX records using TCP: gmail.com.
Checking MX records using UDP: gmail.com.
Both TCP and UDP queries succeeded. Remote DNS test passed.
Checking MX servers listed for [email protected].
Connecting to gmail-smtp-in.l.google.com [209.85.199.27] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to gmail-smtp-in.l.google.com.
Connecting to gmail-smtp-in.l.google.com [209.85.199.114] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to gmail-smtp-in.l.google.com.
Connecting to alt2.gmail-smtp-in.l.google.com [209.85.135.27] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
Connecting to alt2.gmail-smtp-in.l.google.com [209.85.135.114] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
Connecting to alt1.gmail-smtp-in.l.google.com [209.85.133.27] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt1.gmail-smtp-in.l.google.com.
Connecting to alt2.gmail-smtp-in.l.google.com [74.125.79.27] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
Connecting to alt2.gmail-smtp-in.l.google.com [74.125.79.114] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt2.gmail-smtp-in.l.google.com.
Connecting to alt1.gmail-smtp-in.l.google.com [209.85.133.114] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to alt1.gmail-smtp-in.l.google.com.
Connecting to gsmtp183.google.com [64.233.183.27] on port 25.
Connecting to the server failed. Error: 10060
Failed to submit mail to gsmtp183.google.com.
Connecting to gsmtp147.google.com [209.85.147.27] on port 25.
Connecting to the server failed. Error: 10051
Failed to submit mail to gsmtp147.google.com.
I'm using ASP.NET 2.0, Windows 2003 Server and the IIS that comes with it.
Can you tell me what else to change to fix the problem?
Thanks
@mattlant
This is a dedicated server that's why I'm installing the SMTP manually.
EDIT: I use exchange so its a little
different, but its called a smart host
in exchange, but in plain SMTP service
config i think its called something
else. Cant remember exactly the
setting name.
Thank you for pointing me at the Smart host field. Mail is getting delivered now.
In the Default SMTP Virtual Server properties, the Delivery tab, click Advanced and fill the "Smart host" field with the address that your provider gives you. In my case (GoDaddy) it was k2smtpout.secureserver.net.
More info here: http://help.godaddy.com/article/1283
|
[
"I find the best thing usually depending on how much email there is, is to just forward the mail through your ISP's SMTP server. Less headaches. Looks like that's where you are having issues, from your SMTP to external servers, not asp.net to your SMTP.\nJust have your SMTP server set to send it to your ISP, or you can configure asp.net to send to it.\nEDIT: I use exchange so it's a little different, but it's called a smart host in exchange, but in plain SMTP service config I think it's called something else. \nI can't remember exactly the setting name.\n",
"By the looks of things your firewall isn't letting SMTP (TCP port 25) out of your network.\n",
"two really obvious questions (just in case they haven't been covered)\n1. has windows firewall been disabled?\n2. do you have a personal/company firewall that is preventing your mail from being sent?\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"asp.net",
"iis",
"smtp"
] |
stackoverflow_0000097840_asp.net_iis_smtp.txt
|
Q:
How to import homepath into c program using gcc
I am using gcc for Windows. The OS is Windows XP.
How do I import the homepath variable into my C program so I can write to c:\%homepath%\desktop? I would like to use something similar to:
fd = fopen("C:\\%%homepath%%\\desktop\\helloworld.txt","w");
A:
Use getenv() to get the value of an environment variable, then use sprintf or strcat to compose the path.
A:
Use getenv("homepath") to get the value of environment variable. You should handle the case in which the variable has not been defined (getenv returns NULL in that case).
To compose the path use sprintf
#include <stdio.h>   /* sprintf */
#include <stdlib.h>  /* getenv */
char path[260];      /* big enough for the HOMEPATH value plus the suffix */
char *homepath = getenv("homepath");
if (homepath == NULL) {
    /* variable HOMEPATH has not been defined */
}
sprintf(path, "%s\\desktop\\helloworld.txt", homepath);
You should make path big enough to accommodate the value of homepath plus \\desktop\\helloworld.txt.
Also note the use of \\ in the string. You can't use single \.
A:
Note: you actually need to get the value of HOMEDRIVE as well, and prepend that to HOMEPATH. In many corporate environments, the home directories are kept on large network appliances or servers.
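A minimal sketch combining the two (assuming Windows, where both variables are normally set; the buffer size is arbitrary but generous):
#include <stdio.h>
#include <stdlib.h>

char path[512];
char *homedrive = getenv("HOMEDRIVE");  /* e.g. "C:" */
char *homepath  = getenv("HOMEPATH");   /* e.g. "\Documents and Settings\user" */

if (homedrive != NULL && homepath != NULL) {
    /* build an absolute path on the user's actual home drive */
    sprintf(path, "%s%s\\desktop\\helloworld.txt", homedrive, homepath);
}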
|
How to import homepath into c program using gcc
|
I am using gcc for Windows. The OS is Windows XP.
How do I import the homepath variable into my C program so I can write to c:\%homepath%\desktop? I would like to use something similar to:
fd = fopen("C:\\%%homepath%%\\desktop\\helloworld.txt","w");
|
[
"Use getenv() to get the value of an environment variable, then use sprintf or strcat to compose the path.\n",
"Use getenv(\"homepath\") to get the value of environment variable. You should handle the case in which the variable has not been defined (getenv returns NULL in that case).\nTo compose the path use sprintf\nchar * homepath = getenv(\"homepath\");\n\nif(homepath == null) {\n /* variable HOMEPATH has not been defined */ \n}\n\nsprintf(path,\"%s\\\\desktop\\\\helloworld.txt\",homepath);\n\nYou should make path big enough to accomodate the value homepath and \\\\desktop\\\\helloworld.txt.\nAlso note the use of \\\\ in the string. You can't use single \\.\n",
"Note: you actually need to get the value of HOMEDRIVE as well, and prepend that to HOMEPATH. In many corporate environments, the home directories are kept on large network appliances or servers.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"c",
"gcc"
] |
stackoverflow_0000075937_c_gcc.txt
|
Q:
Properly scoped transactions in Stored Procs
Suppose I have a stored procedure that manages its own transaction
CREATE PROCEDURE theProc
AS
BEGIN
BEGIN TRANSACTION
-- do some stuff
IF @ThereIsAProblem
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
END
If I call this proc from an existing transaction, the proc can ROLLBACK the external transaction.
BEGIN TRANSACTION
EXEC theProc
COMMIT TRANSACTION
How do I properly scope the transaction within the stored procedure, so that the stored procedure does not rollback external transactions?
A:
The syntax to do this probably varies by database. But in Transact-SQL what you do is check @@TRANCOUNT to see if you are in a transaction. If you are, then you want to create a savepoint; at the end you can either pass through the end of the procedure (trusting that a commit or rollback will happen later) or else roll back to your savepoint.
See Microsoft's documentation on savepoints for more.
Support for savepoints is fairly widespread, but I think the mechanism (assuming there is one) for finding out that you're currently in a transaction will vary quite a bit.
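A minimal sketch of that savepoint pattern against the proc from the question (theProcSave is an arbitrary savepoint name; @ThereIsAProblem stands in for whatever error check you actually have):
CREATE PROCEDURE theProc
AS
BEGIN
    DECLARE @startedTran bit, @ThereIsAProblem bit
    SET @startedTran = 0
    SET @ThereIsAProblem = 0         -- set by your own logic

    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRANSACTION            -- no caller transaction: this proc owns it
        SET @startedTran = 1
    END
    ELSE
        SAVE TRANSACTION theProcSave -- caller transaction: mark only our own work

    -- do some stuff

    IF @ThereIsAProblem = 1
    BEGIN
        IF @startedTran = 1
            ROLLBACK TRANSACTION             -- undo everything this proc owns
        ELSE
            ROLLBACK TRANSACTION theProcSave -- undo back to the savepoint only
    END
    ELSE IF @startedTran = 1
        COMMIT TRANSACTION                   -- leave the caller's transaction alone
END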
A:
use @@trancount to see if you're already in a transaction when entering
|
Properly scoped transactions in Stored Procs
|
Suppose I have a stored procedure that manages its own transaction
CREATE PROCEDURE theProc
AS
BEGIN
BEGIN TRANSACTION
-- do some stuff
IF @ThereIsAProblem
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
END
If I call this proc from an existing transaction, the proc can ROLLBACK the external transaction.
BEGIN TRANSACTION
EXEC theProc
COMMIT TRANSACTION
How do I properly scope the transaction within the stored procedure, so that the stored procedure does not rollback external transactions?
|
[
"The syntax to do this probably varies by database. But in Transact-SQL what you do is check @@TRANCOUNT to see if you are in a transaction. If you are then you want to create a savepoint, and at the end you can just pass through the end of the function (believing a commit or rollback will happen later) or else rollback to your savepoint.\nSee Microsoft's documentation on savepoints for more.\nSupport for savepoints is fairly widespread, but I think the mechanism (assuming there is one) for finding out that you're currently in a transaction will vary quite a bit.\n",
"use @@trancount to see if you're already in a transaction when entering\n"
] |
[
2,
1
] |
[] |
[] |
[
"scope",
"sql_server",
"stored_procedures",
"transactions"
] |
stackoverflow_0000097857_scope_sql_server_stored_procedures_transactions.txt
|
Q:
Experience with SQLExpress for a multi-user commercial application?
I have inherited a VB6/Access application that we have developed and sold for many years. We're going to SQL Server 2005 Express Edition and .Net.
The application can be multi-user. Currently the setup is simple for the customer -- Navigate to the folder to create the database on first launch; second user browses to the same file.
If we go with SQLExpress I believe our application will require more involvement to configure SQLExpress on the server. But I think we will get better security, and (with no code changes) a SQL version for larger customers.
How can I create the best customer experience from an installation and tech support point of view? What issues have come up for you? What install procedures have worked?
Do you set up a separate install for server/client, or just provide good instructions? What kinds of things do customers get wrong on the first try?
A:
A deployment project from Visual Studio allows you to install a SQL Server Express instance with ease.
We have the same kind of scenario for our applications and it means you do need separate installations for the client and server.
Our server installation deals with either installing a new SQL Server or upgrading the schema of an existing installation if necessary. The client installation simply packages up the files required by the client. You have to consider the scenario of upgrading the database schema and ensuring the clients have the updated client version which works against the new schema. We achieve this in a simple way by:
Storing a version id in the database e.g. 1.0.1
Updating the AssemblyInfo.cs of the client application and ensuring the assembly version matches the version stored in the database. If it doesn't it prompts the user to install the new version.
For the best possible user experience you would like to be able to install a new server version and for all the clients to auto update. We have a method for doing this and I can give you more details if required.
A:
How many users and how much data?
I don't know if there are general guidelines on how many users is "too much" for SQL Express 2005 but there is a hard data limit of 4GB. My guess would be you wouldn't hit that what with the Access heritage but it would be a good thing to know.
You can have SQL Express' installation automated. I've seen it done because something my wife installed did it and she's the last person I would suspect to install it.
There is also SQL Server Compact Edition which I believe targets .NET 3.5 as well as Windows Mobile. I believe it's more analogous to the "single file database" bits like you got with Access.
A:
I would recommend going the SQL Express route and including it in the install package. The installer has a ton of command-line options, and you can use SQL scripts to do any post-install configuration to the database (i.e., enabling/disabling CLR integration, OpenRowset, other features).
In addition, it's much more stable than the old MSDE 2000 installs; I had nightmares supporting that. I've also found that 99 times out of 100, putting default DB install parameters makes people happy.
SQL Express Weblog
How to install SQL 2008 From the command prompt
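As a rough example, an unattended SQL Server 2005 Express install from the command line looks something like this (parameter names are from the 2005 setup; treat this as a starting point, and the password as a placeholder):
setup.exe /qb INSTANCENAME=SQLEXPRESS ADDLOCAL=ALL SECURITYMODE=SQL SAPWD=YourStrongPassword
/qb shows a basic progress UI; /qn runs fully silent. Post-install feature toggles can then be applied with the SQL scripts mentioned above.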
A:
SQL Express will be a huge step up from Access in capability and reliability. It shouldn't need any more configuration, just a different approach, provided you know what you're doing.
A:
At my company we use it a lot. Had some problems with slow startup times on slower machines.
Our install experience is not perfect - there is a utility that restores a database that the application uses as a database template.
A:
Since you're still considering going to SQLExpress, has your group considered SQLite? You can still have the database functionality you require without having to install an engine on the client's system.
A:
SQL Server 2005 Express Edition is easily installed with any decent installer. A couple hours of work is all it will take! If your heart is set on it then don't let the fear of installation hold you back.
A:
Another option: I believe you can also use SQL Server Express in a "file mode" where you just point to the MDF file and don't have to have an instance of SQL Server running. That would be very similar to how it sounds like your current app uses Access. I'm not sure how this works with multiuser situations, but it might be something to investigate.
|
Experience with SQLExpress for a multi-user commercial application?
|
I have inherited a VB6/Access application that we have developed and sold for many years. We're going to SQL Server 2005 Express Edition and .Net.
The application can be multi-user. Currently the setup is simple for the customer -- Navigate to the folder to create the database on first launch; second user browses to the same file.
If we go with SQLExpress I believe our application will require more involvement to configure SQLExpress on the server. But I think we will get better security, and (with no code changes) a SQL version for larger customers.
How can I create the best customer experience from an installation and tech support point of view? What issues have come up for you? What install procedures have worked?
Do you set up a separate install for server/client, or just provide good instructions? What kinds of things do customers get wrong on the first try?
|
[
"A deployment project from Visual Studio allows you to install a SQL Server Express instance with ease.\nWe have the same kind of scenario for our applications and it means you do need separate installations for the client and server.\nOur server installation deals with either installing a new SQL Server or upgrading the schema of an existing installation if necessary. The client installation simply packages up the files required by the client. You have to consider the scenario of upgrading the database schema and ensuring the clients have the updated client version which works against the new schema. We achieve this in a simple way by:\nStoring a version id in the database e.g. 1.0.1\nUpdating the AssemblyInfo.cs of the client application and ensuring the assembly version matches the version stored in the database. If it doesn't it prompts the user to install the new version.\nFor the best possible user experience you would like to be able to install a new server version and for all the clients to auto update. We have a method for doing this and I can give you more details if required.\n",
"How many users and how much data?\nI don't know if there are general guidelines on how many users is \"too much\" for SQL Express 2005 but there is a hard data limit of 4GB. My guess would be you wouldn't hit that what with the Access heritage but it would be a good thing to know.\nYou can have SQL Express' installation automated. I've seen it done because something my wife installed did it and she's the last person I would suspect to install it.\nThere is also SQL Server Compact Edition which I believe targets .NET 3.5 as well as Windows Mobile. I believe it's more analogous to the \"single file database\" bits like you got with Access.\n",
"I would recommend going the SQL Express route and including it in the install package. The installer has a ton of command-line options, and you can use SQL scripts to do any post-install configuration to the database (i.e., enabling/disabling CLR integration, OpenRowset, other features).\nIn addition, it's much more stable than the old MSDE 2000 installs; I had nightmares supporting that. I've also found that 99 times out of 100, putting default DB install parameters makes people happy.\nSQL Express Weblog\nHow to install SQL 2008 From the command prompt\n",
"SQL Express will be a huge step up from Access in capability and reliability. It shouldn't need any more configuration, just a different approach provided you know what your doing.\n",
"At my company we use it a lot. Had some problems with slow startup times on slower machines.\nOut install experience is not perfect - there is a utility that will restore database that the application uses as a database template.\n",
"Since you're still considering going to SQLExpress, has your group considered SQLite? You can still have the database functionality you require without having to install an engine on the client's system.\n",
"SQL Server 2005 Express Edition is easily installed with any decent installer. A couple hours of work is all it will take! If your heart is set on it then don't let the fear of installation hold you back.\n",
"Another option: I believe you can also use SQL Server Express in a \"file mode\" where you just point to the MDF file and don't have to have an instance of SQL Server running. That would be very similar to how it sounds like your current app uses Access. I'm not sure how this works with multiuser situations, but it might be something to investigate.\n"
] |
[
3,
2,
2,
1,
1,
0,
0,
0
] |
[] |
[] |
[
".net",
"installation",
"sql_server",
"user_experience"
] |
stackoverflow_0000096997_.net_installation_sql_server_user_experience.txt
|
Q:
MessageBox loses focus in maximized MDI form
I have an MDI application (written in .NET 2.0) which lets users open multiple child forms. The child forms are always maximized inside the MDI parent. When the MDI parent is maximized and I attempt to do a MessageBox.Show, the MessageBox doesn't show. If I do an alt-tab (or even just press alt) the MessageBox pops to the front.
Any ideas how to make that sucker show up to begin with?
This is only a problem when the MDI parent is maximized...
A:
Try using
MessageBox.Show(IWin32Window owner, string text, string caption)
setting the MDI parent form as the owner so the MessageBox is shown in front.
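A minimal usage sketch from code running in the MDI parent form (message text is illustrative):
MessageBox.Show(this, "Something happened.", "My App");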
Ah, you should also add some tags to your post.
|
MessageBox loses focus in maximized MDI form
|
I have an MDI application (written in .NET 2.0) which lets users open multiple child forms. The child forms are always maximized inside the MDI parent. When the MDI parent is maximized and I attempt to do a MessageBox.Show, the MessageBox doesn't show. If I do an alt-tab (or even just press alt) the MessageBox pops to the front.
Any ideas how to make that sucker show up to begin with?
This is only a problem when the MDI parent is maximized...
|
[
"Try using\nMessageBox.Show(Window owner, string message, string caption)\nSetting the MDI application as owner so the MB is shown in the front\nAh, you should also add some tags to your post.\n"
] |
[
2
] |
[] |
[] |
[
".net_2.0",
"mdi",
"messagebox"
] |
stackoverflow_0000097344_.net_2.0_mdi_messagebox.txt
|
Q:
How to implement paging for asp:DataList in .NET 2.0?
I spent hours researching the problem, and just want to share a solution in case you ever need to implement paging for asp:DataList in .NET 2.0.
My specific requirement was to have "Previous" and "Next" links and page number links.
A:
I moved this from the question, so it doesn't appear as "Not Answered"...
The PagedDataSource solution in this article was the most elegant and simple solution for this problem.
If you have a better solution - post it here please.
p.s. I'm not affiliated with that website in any way.
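For reference, the core of the PagedDataSource approach looks roughly like this (control and variable names are illustrative):
PagedDataSource pds = new PagedDataSource();
pds.DataSource = myItems;            // the underlying data, e.g. a DataView
pds.AllowPaging = true;
pds.PageSize = 10;
pds.CurrentPageIndex = currentPage;  // keep this between postbacks, e.g. in ViewState
myDataList.DataSource = pds;
myDataList.DataBind();
// pds.PageCount, pds.IsFirstPage and pds.IsLastPage drive the Previous/Next and page number links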
|
How to implement paging for asp:DataList in .NET 2.0?
|
I spent hours researching the problem, and just want to share a solution in case you ever need to implement paging for asp:DataList in .NET 2.0.
My specific requirement was to have "Previous" and "Next" links and page number links.
|
[
"I moved this from the question, so it doesn't appear as \"Not Answered\"...\nPagedDataSource solution in this article was the most elegant and simple solution for this problem.\nIf you have a better solution - post it here please.\np.s. I'm not affiliated with that website in any way.\n"
] |
[
3
] |
[] |
[] |
[
".net_2.0",
"asp.net",
"c#",
"datalist",
"paging"
] |
stackoverflow_0000097124_.net_2.0_asp.net_c#_datalist_paging.txt
|
Q:
Suggestions wanted with Lists or Enumerators of T when inheriting from generic classes
I know the answer is not going to be simple, and I already use a couple of (I think ugly) cludges. I am simply looking for some elegant answers.
Abstract class:
public interface IOtherObjects;
public abstract class MyObjects<T> where T : IOtherObjects
{
...
public List<T> ToList()
{
...
}
}
Children:
public class MyObjectsA : MyObjects<OtherObjectA> //(where OtherObjectA implements IOtherObjects)
{
}
public class MyObjectsB : MyObjects<OtherObjectB> //(where OtherObjectB implements IOtherObjects)
{
}
Is it possible, looping through a collection of MyObjects (or other similar grouping, generic or otherwise), to then utilise the ToList method of the MyObjects base class, as we do not specifically know the type of T at this point?
EDIT
As for specific examples, whenever this has come up, I've thought about it for a while, and done something different instead, so there is no current requirement. But as it has come up quite frequently, I thought I would float it.
EDIT
@Sara, it's not the specific type of the collection I care about, it could be a List, but still the ToList method of each instance is relatively unusable without an anonymous type.
@aku, true, and this question may be relatively hypothetical, however being able to retrieve, and work with a list of T of objects, knowing only their base type would be very useful. Having the ToList returning a List Of BaseType has been one of my workarounds
EDIT @ all: So far, this has been the sort of discussion I was hoping for, though it largely confirms all I suspected. Thanks all so far, but anyone else, feel free to input.
EDIT@Rob, Yes it works for a defined type, but not when the type is only known as a List of IOtherObjects.
@Rob Again Thanks. That has usually been my cludgy workaround (no disrespect :) ). Either that or using the ConvertAll function to Downcast through a delegate. Thanks for taking the time to understand the problem.
QUALIFYING EDIT in case I have been a little confusing
To be more precise, (I may have let my latest implementation of this get it too complex):
Let's say I have 2 object types, B and C, inheriting from object A.
Many scenarios have presented themselves where, from a List of B or a List of C - or in other cases a List of either, though I don't know which if I am at a base class - I have needed a less specific List of A.
The above example was a watered-down example of the List Of Less Specific problem's latest incarnation.
Usually it has presented itself as I think through possible scenarios that would limit the amount of code that needs writing and seem a little more elegant than the other options. I really wanted a discussion of possibilities and other points of view, which I have more or less got. I am surprised no one has mentioned ConvertAll() so far, as that is another workaround I have used, but a little too verbose for the scenarios at hand.
@Rob Yet Again and Sara
Thanks, however I do feel I understand generics in all their static contexted glory, and did understand the issues at play here.
The actual design of our system and its usage of generics (and I can say this with only a touch of bias, as I was only one of the players in the design) has been done well. It is when I have been working with the core API that I have found situations where I was in the wrong scope for doing something simply, and instead had to deal with them a little less elegantly than I like (trying either to be clever or perhaps lazy - I'll accept either of those labels).
My distaste for what I termed a cludge is largely that we have to loop through our record set simply to convert the objects to their base type, which may be a performance hit.
I guess I was wondering if anyone else had come across this in their coding before, and if anyone had been cleverer, or at least more elegant, than me in dealing with it.
A:
In your case MyObjectsA and MyObjectsB don't have a common predecessor. A generic class is a template for different classes, not a common base class. If you want to have common properties in different classes, use interfaces. You can't call ToList in a loop because it has a different signature in different classes. You can create a ToList that returns objects rather than a specific type.
A:
why do you have a collection of MyObjects? Is there a specific reason you don't have a List?
A:
You can still probably access the ToList() method, but since you are unsure of the type, won't this work?
foreach(var myObject in myObjectsList)
foreach(var obj in myObject.ToList())
//do something
Of course this will only work on C# 3.0.
Note that the use of var is merely to remove the requirement of knowing what type the lists contain; as opposed to Frank's comments that I have delusions that var will make typing dynamic.
A:
OK, I am confused, the following code works fine for me (curiosity got the better of me!):
// Original Code Snipped for Brevity - See Edit History if Req'd
Or have I missed something?
Update Following Response from OP
OK now I am really confused..
What you are saying is that you want to get a List of Typed values from a generic/abstract List? (the child classes therefore become irrelevant).
You cannot return a Typed List if the Types are children/interface implementors - they do not match! You can of course get a List of items that are of a specific type from the abstract List like so:
public List<OfType> TypedList<OfType>() where OfType : IOtherObjects
{
List<OfType> rtn = new List<OfType>();
foreach (IOtherObjects o in _objects)
{
Type objType = o.GetType();
Type reqType = typeof(OfType);
if (objType == reqType)
rtn.Add((OfType)o);
}
return rtn;
}
If I am still off-base here can you please reword your question?! (It doesn't seem like I am the only one unsure of what you are driving at). I am trying to establish if there is a misunderstanding of generics on your part.
Another Update :D
Right, so it looks like you want/need the option to get the typed List, or the base list yes?
This would make your abstract class look like this - you can use ToList to get the concrete type, or ToBaseList() to get a List of the interface type. This should work in any scenarios you have. Does that help?
public abstract class MyObjects<T> where T : IOtherObjects
{
List<T> _objects = new List<T>();
public List<T> ToList()
{
return _objects;
}
public List<IOtherObjects> ToBaseList()
{
List<IOtherObjects> rtn = new List<IOtherObjects>();
foreach (IOtherObjects o in _objects)
{
rtn.Add(o);
}
return rtn;
}
}
Update #3
It's not really a "cludgy" workaround (no disrespect taken) - that's the only way to do it. I think the bigger issue here is a design/grok problem. You said you had a problem, this code solves it. But if you were expecting to do something like:
public abstract class MyObjects<T> where T : IOtherObjects
{
List<T> _objects = new List<T>();
public List<IOtherObjects> Objects
{ get { return _objects; } }
}
#warning This won't compile, its for demo's sake.
And be able to pick-and-choose the types that come out of it, how else could you do it?! I get the feeling you do not really understand what the point of generics is, and you are trying to get them to do something they are not designed for!?
A:
If you have
class B : A
class C : A
And you have
List<B> listB;
List<C> listC;
that you wish to treat as a List of the parent type
Then you should use
List<A> listA = listB.Cast<A>().Concat(listC.Cast<A>()).ToList()
A:
I have recently found the
List<A>.Cast<B>().ToList<B>()
pattern.
It does exactly what I was looking for.
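For reference, a minimal usage sketch of that pattern (requires .NET 3.5 for the LINQ extension methods; GetBs is a hypothetical source of the derived type):
using System.Collections.Generic;
using System.Linq;

List<B> listB = GetBs();
List<A> asBase = listB.Cast<A>().ToList();  // same items, viewed as the base type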
|
Suggestions wanted with Lists or Enumerators of T when inheriting from generic classes
|
I know the answer is not going to be simple, and I already use a couple of (I think ugly) cludges. I am simply looking for some elegant answers.
Abstract class:
public interface IOtherObjects;
public abstract class MyObjects<T> where T : IOtherObjects
{
...
public List<T> ToList()
{
...
}
}
Children:
public class MyObjectsA : MyObjects<OtherObjectA> //(where OtherObjectA implements IOtherObjects)
{
}
public class MyObjectsB : MyObjects<OtherObjectB> //(where OtherObjectB implements IOtherObjects)
{
}
Is it possible, looping through a collection of MyObjects (or other similar grouping, generic or otherwise), to then utilise the ToList method of the MyObjects base class, as we do not specifically know the type of T at this point?
EDIT
As for specific examples, whenever this has come up, I've thought about it for a while, and done something different instead, so there is no current requirement. but as it has come up quite frequently, I thought I would float it.
EDIT
@Sara, it's not the specific type of the collection I care about, it could be a List, but still the ToList method of each instance is relatively unusable, without an anonymous type)
@aku, true, and this question may be relatively hypothetical, however being able to retrieve, and work with a list of T of objects, knowing only their base type would be very useful. Having the ToList returning a List Of BaseType has been one of my workarounds
EDIT @ all: So far, this has been the sort of discussion I was hoping for, though it largely confirms all I suspected. Thanks all so far, but anyone else, feel free to input.
EDIT@Rob, Yes it works for a defined type, but not when the type is only known as a List of IOtherObjects.
@Rob Again Thanks. That has usually been my cludgy workaround (no disrespect :) ). Either that or using the ConvertAll function to Downcast through a delegate. Thanks for taking the time to understand the problem.
QUALIFYING EDIT in case I have been a little confusing
To be more precise, (I may have let my latest implementation of this get it too complex):
lets say I have 2 object types, B and C inheriting from object A.
Many scenarios have presented themselves where, from a List of B or a List of C, or in other cases a List of either - but I don't know which if I am at a base class, I have needed a less specific List of A.
The above example was a watered-down example of the List Of Less Specific problem's latest incarnation.
Usually it has presented itself, as I think through possible scenarios that limit the amount of code that needs writing and seems a little more elegant than other options. I really wanted a discussion of possibilities and other points of view, which I have more or less got. I am surprised no one has mentioned ConvertAll() so far, as that is another workaround I have used, but a little too verbose for the scenarios at hand
@Rob Yet Again and Sara
Thanks, however I do feel I understand generics in all their static contexted glory, and did understand the issues at play here.
The actual design of our system and usage of generics it (and I can say this without only a touch of bias, as I was only one of the players in the design), has been done well. It is when I have been working with the core API, I have found situations when I have been in the wrong scope for doing something simply, instead I had to deal with them with a little less elegant than I like (trying either to be clever or perhaps lazy - I'll accept either of those labels).
My distaste for what I termed a cludge is largely that we require to do a loop through our record set simply to convert the objects to their base value which may be a performance hit.
I guess I was wondering if anyone else had come across this in their coding before, and if anyone had been cleverer, or at least more elegant, than me in dealing with it.
|
[
"In your case MyObjectsA and MyObjectsB don't have common predecessor. Generic class is template for different classes not a common base class. If you want to have common properties in different classes use interfaces. You can't call ToList in a loop cause it has different signature in different classes. You can create ToList that returns objects rather than specific type.\n",
"why do you have a collection of MyObjects? Is there a specific reason you don't have a List?\n",
"You can still probably access the ToList() method, but since you are unsure of the type, won't this work?\nforeach(var myObject in myObjectsList)\n foreach(var obj in myObject.ToList())\n //do something\n\nOf course this will only work on C# 3.0.\nNote that the use of var is merely to remove the requirement of knowing what type the lists contain; as opposed to Frank's comments that I have delusions that var will make typing dynamic.\n",
"OK, I am confused, the following code works fine for me (curiosity got the better of me!):\n// Original Code Snipped for Brevity - See Edit History if Req'd\n\nOr have I missed something?\nUpdate Following Response from OP\nOK now I am really confused..\nWhat you are saying is that you want to get a List of Typed values from a generic/abstract List? (the child classes therefore become irrelevant).\nYou cannot return a Typed List if the Types are children/interface implementors - they do not match! You can of course get a List of items that are of a specific type from the abstract List like so:\n public List<OfType> TypedList<OfType>() where OfType : IOtherObjects\n {\n List<OfType> rtn = new List<OfType>();\n\n foreach (IOtherObjects o in _objects)\n {\n Type objType = o.GetType();\n Type reqType = typeof(OfType);\n\n if (objType == reqType)\n rtn.Add((OfType)o);\n }\n\n return rtn;\n }\n\nIf I am still off-base here can you please reword your question?! (It doesn't seem like I am the only one unsure of what you are driving at). I am trying to establish if there is a misunderstanding of generics on your part.\nAnother Update :D\nRight, so it looks like you want/need the option to get the typed List, or the base list yes?\nThis would make your abstract class look like this - you can use ToList to get the concrete type, or ToBaseList() to get a List of the interface type. This should work in any scenarios you have. Does that help?\npublic abstract class MyObjects<T> where T : IOtherObjects\n{\n List<T> _objects = new List<T>();\n\n public List<T> ToList()\n {\n return _objects;\n }\n\n public List<IOtherObjects> ToBaseList()\n {\n List<IOtherObjects> rtn = new List<IOtherObjects>();\n foreach (IOtherObjects o in _objects)\n {\n rtn.Add(o);\n }\n return rtn;\n }\n}\n\nUpdate #3\nIt's not really a \"cludgy\" workaround (no disrespect taken) - thats the only way to do it.. I think the bigger issue here is a design/grok problem. You said you had a problem, this code solves it. But if you were expecting to do something like:\npublic abstract class MyObjects<T> where T : IOtherObjects\n{\n List<T> _objects = new List<T>();\n\n public List<IOtherObjects> Objects\n { get { return _objects; } }\n}\n#warning This won't compile, its for demo's sake.\n\nAnd be able to pick-and-choose the types that come out of it, how else could you do it?! I get the feeling you do not really understand what the point of generics are, and you are trying to get them to do something they are not designed for!?\n",
"If you have\nclass B : A\nclass C : A\n\nAnd you have\nList<B> listB;\nList<C> listC;\n\nthat you wish to treat as a List of the parent type\nThen you should use\nList<A> listA = listB.Cast<A>().Concat(listC.Cast<A>()).ToList()\n\n",
"I have recently found the \nList<A>.Cast<B>().ToList<B>()\n\npattern. \nIt does exactly what I was looking for,\n"
] |
[
2,
2,
1,
1,
1,
0
] |
[
"Generics are used for static time type checks not runtime dispatch. Use inheritance/interfaces for runtime dispatch, use generics for compile-time type guarantees.\ninterface IMyObjects : IEnumerable<IOtherObjects> {}\nabstract class MyObjects<T> : IMyObjects where T : IOtherObjects {}\n\nIEnumerable<IMyObjects> objs = ...;\nforeach (IMyObjects mo in objs) {\n foreach (IOtherObjects oo in mo) {\n Console.WriteLine(oo);\n }\n}\n\n(Obviously, I prefer Enumerables over Lists.)\nOR Just use a proper dynamic language like VB. :-)\n"
] |
[
-1
] |
[
"c#",
"generics"
] |
stackoverflow_0000053395_c#_generics.txt
|
Q:
Physical Address in JAVA
How do I get the physical addresses of my machine in Java?
A:
As of Java 6, java.net.NetworkInterface class now has the method getHardwareAddress()
http://java.sun.com/javase/6/docs/api/java/net/NetworkInterface.html#getHardwareAddress()
If that's too new, there are UUID packages which try various methods per OS to ask for it. Try e.g. http://johannburkard.de/blog/programming/java/MAC-address-lookup-using-Java.html
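A minimal Java 6 sketch of that call (exception handling elided; getHardwareAddress can return null, e.g. for the loopback interface):
import java.net.InetAddress;
import java.net.NetworkInterface;

NetworkInterface nif = NetworkInterface.getByInetAddress(InetAddress.getLocalHost());
byte[] mac = nif.getHardwareAddress();  // six bytes for Ethernet, or null if unavailable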
A:
try {
InetAddress addr = InetAddress.getLocalHost();
// Get IP Address
byte[] ipAddr = addr.getAddress();
// Get hostname
String hostname = addr.getHostName();
} catch (UnknownHostException e) {
}
A:
I think this might be what you're looking for, in the Java API for the InetAddress class: http://java.sun.com/javase/6/docs/api/java/net/InetAddress.html
getLocalHost()
A:
If you need the MAC address you are going to require JNI. I use a library called JUG to generate UUIDs based on the real MAC address of the machine. You can consult their source code to see how this is accomplished on Linux, Solaris, Windows and Mac platforms.
|
Physical Address in JAVA
|
How do I get the physical addresses of my machine in Java?
|
[
"As of Java 6, java.net.NetworkInterface class now has the method getHardwareAddress()\nhttp://java.sun.com/javase/6/docs/api/java/net/NetworkInterface.html#getHardwareAddress()\nIf that's too new, there are UUID packages which try various methods per OS to ask for it. Try e.g. http://johannburkard.de/blog/programming/java/MAC-address-lookup-using-Java.html\n",
"try {\n InetAddress addr = InetAddress.getLocalHost();\n\n // Get IP Address\n byte[] ipAddr = addr.getAddress();\n\n // Get hostname\n String hostname = addr.getHostName();\n} catch (UnknownHostException e) {\n}\n\n",
"I think this might be what you're looking for, in the Java API for the InetAddress class: http://java.sun.com/javase/6/docs/api/java/net/InetAddress.html\ngetLocalHost() \n\n",
"If you need you need the MAC address you are going to require JNI. I use a library called JUG to generate UUIDs based using the real MAC address of the machine. You can consult their source code to see how this is accomplished on Linux, Solaris, Windows and Mac platforms.\n"
] |
[
2,
0,
0,
0
] |
[] |
[] |
[
"java",
"macos"
] |
stackoverflow_0000097392_java_macos.txt
|
Q:
How can I get that huge security icon on my secure site?
If I go to www.paypal.com, Firefox displays a huge icon in the location bar. Is it possible to get my web site to do this without paying $2700 to Verisign? Where is the best place to buy SSL certificates and not break the bank?
A:
You're talking about EV (extended validation) SSL. Digicert are very competitive for this ($488 per year) and also standard SSL certificates. Whoever you go for though, make sure you check what browser compatibility they have as some of the cheaper ones do not have as wide support as the more expensive ones meaning you're kinda getting what you pay for.
Edit: also, EV is only supported on the more recent browsers (not IE6 for example).
A:
I have had great luck with GeoTrust. No options that I know of are what I would call "cheap", but you can do better than Verisign pricing and GeoTrust is one place where that is true.
A:
What you are talking about is an EV Cert. The EV stands for Extended Validation. Basically the larger price pays for someone to really look into your business and verify that you are who you say you are. I have used Verisign for my sites.
Here is a list of Certificates that are included in Firefox.
These are typically very pricey and for good reason.
A:
The icon you see is from an Extended Validation Certificate (EV Certificate). They are notoriously higher-priced, though Verisign is not the only certificate authority that sells them. You can find them for around the $500 mark as well. Microsoft maintains a list of CAs that work with IE7. I selected two or three at random and found one that would sell me an EV Cert for just under $500.
A:
To get the green bar, your CA needs to pass an audit. Be sure that you're buying an Extended Validation cert, and using https.
|
How can I get that huge security icon on my secure site?
|
If I go to www.paypal.com, Firefox displays a huge icon in the location bar. Is it possible to get my web site to do this without paying $2700 to Verisign? Where is the best place to buy SSL certificates and not break the bank?
|
[
"You're talking about EV (extended validation) SSL. Digicert are very competitive for this ($488 per year) and also standard SSL certificates. Whoever you go for though, make sure you check what browser compatibility they have as some of the cheaper ones do not have as wide support as the more expensive ones meaning you're kinda getting what you pay for.\nEdit: also, EV is only supported on the more recent browsers (not IE6 for example).\n",
"I have had great luck with GeoTrust. No options that I know of are what I would call \"cheap\", but you can do better than Verisign pricing and GeoTrust is one place where that is true.\n",
"What you are talking about is a EV Cert. The EV stands for Extended Validation. Basically the larger price pays for someone to really look into you business and verify that you are who you say you are. I have used Verisign for my sites.\nHere is a list of Certificates that are included in Firefox.\nThese are typically very pricey and for good reason. \n",
"The icon you see is from an Extended Validation Certificate (EV Certificate). They are notoriously higher-priced, though Verisign is not the only certificate authority that sells them. You can find them for around the $500 mark as well. Microsoft maintains a list of CAs that work with IE7. I selected two or three at random and found one that would sell me an EV Cert for just under $500.\n",
"To get the green bar, your CA needs to pass an audit. Be sure that you're buying an Extended Validation cert, and using https.\n"
] |
[
3,
2,
1,
1,
0
] |
[] |
[] |
[
"security"
] |
stackoverflow_0000098026_security.txt
|
Q:
Gathering OS and tool version numbers for build archive purposes
Our automated build machine needs to archive the version numbers of the OS plus various tools used during each build. (In case we ever need to replicate exactly the same build later on, perhaps when the machine is long dead.)
I see the command "msinfo32.exe" can be used to dump a whole load of system version information, which we might as well archive.
Does anyone know of a way to easily archive the version numbers of the Visual Studio tools?
What mechanisms do other developers use to gather this kind of information for archive purposes?
Extra information for Fabio Gomes.
I agree with you that in 5 years time it'll probably be impossible to recreate the exact OS and tool configuration (down to the nearest security update). Unfortunately this really comes from a contractual requirement. As part of our deliverable to a customer we must provide a copy of all source code and clear instructions on exactly how to replicate the build. It's probably impossible for us to meet this requirement perfectly.
So - I'll just mark your answer as correct (I agree with you that it's practically impossible), and get on with playing with the rest of stack overflow. :)
PS. It would be really great if stack overflow supported replies to answers instead of having to edit the original question.. But I see it has already been denied.
A:
If you are building on the command line you could tell it to be verbose and capture all of the output to a text file for archiving with each build.
eg, msbuild <build_file> > myfile.txt
A:
Sorry, but what could lead to a need to replicate the exact same build in the future?
In my experience, either you keep your product installers safe or start a new build from scratch.
Also IMO the only way to replicate the exact same build in the future is to run your build machine on a Virtual Machine and keep the VM backup around.
I think that much of the software you would need to replicate the exact same build in the future will not be available anymore, so you will need to keep a copy of every software version you install on this machine.
Could you be more specific about the problem you are trying to solve?
A:
An alternative suggestion: put the relevant tools (compilers, system headers and libraries, etc.) in your repository itself, rather than expecting them to be locally installed. See also a blog post I wrote on this topic.
I have Visual Studio, gcc, etc. all checked into my Subversion repository.
|
Gathering OS and tool version numbers for build archive purposes
|
Our automated build machine needs to archive the version numbers of the OS plus various tools used during each build. (In case we ever need to replicate exactly the same build later on, perhaps when the machine is long dead.)
I see the command "msinfo32.exe" can be used to dump a whole load of system version information, which we might as well archive.
Does anyone know of a way to easily archive the version numbers of the Visual Studio tools?
What mechanisms do other developers use to gather this kind of information for archive purposes?
Extra information for Fabio Gomes.
I agree with you that in 5 years time it'll probably be impossible to recreate the exact OS and tool configuration (down to the nearest security update). Unfortunately this really comes from a contractual requirement. As part of our deliverable to a customer we must provide a copy of all source code and clear instructions on exactly how to replicate the build. It's probably impossible for us to meet this requirement perfectly.
So - I'll just mark your answer as correct (I agree with you that it's practically impossible), and get on with playing with the rest of stack overflow. :)
PS. It would be really great if stack overflow supported replies to answers instead of having to edit the original question.. But I see it has already been denied.
|
[
"If you are building on the command line you could tell it to be verbose and capture all of the output to a text file to archiving with each build. \neg, msbuild <build_file> > myfile.txt\n",
"Sorry, but what could lead for a need to replicated the exact same build in the future?\nIn my experience, either you keep your product installers safe or start a new build from scratch.\nAlso IMO the only way to replicated the exact same build in the future is to run your build machine on a Virtual Machine and keep the VM backup around.\nI think that most softwares you will need to replicate the exact same build in the future will not be available anymore, so you will need to keep a copy of every software version you install in this machine.\nCould you be more specific about the problem you are trying to solve?\n",
"An alternative suggestion: put the relevant tools (compilers, system headers and libraries, etc.) in your repository itself, rather than expecting them to be locally installed. See also a blog post I wrote on this topic.\nI have Visual Studio, gcc, etc. all checked into my Subversion repository.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"archive",
"build_automation",
"versions"
] |
stackoverflow_0000006085_archive_build_automation_versions.txt
|
Q:
I'm using Wincrypt for Diffie-Hellman-- can I export the shared secret in plain text?
OK-- thanks to Mike, I was able to get Wincrypt to generate a Diffie-Hellman keypair. I figured out how to export the public key, and how to import the other party's public key. According to the docs, upon import of the other party's public key, the shared secret has been computed. Great.
I now need to get ahold of that shared secret, but I don't think it's possible. Simply calling CryptExportKey with a type of PLAINTEXTKEYBLOB fails unless I call CryptSetKeyParam to change the algorithm id from CALG_AGREEDKEY_ANY to something... else. But I don't want something else, I want the shared secret. The API, however, seems designed to discourage this.
Any ideas out there? I should note that the problem here is that I'm only writing one side of an implementation of WiFi Protected Setup. So the protocol is defined for me, and the other party is not giving me HCRYPTKEYs.
A:
This looks like what you need...
from: http://msdn.microsoft.com/en-us/library/aa381969(VS.85).aspx
To import a Diffie-Hellman public key and calculate the secret session key
Call the CryptAcquireContext function to get a handle to the Microsoft Diffie-Hellman Cryptographic Provider.
Create a Diffie-Hellman key by calling the CryptGenKey function to create a new key, or by calling the CryptGetUserKey function to retrieve an existing key.
To import the Diffie-Hellman public key into the CSP, call the CryptImportKey function, passing a pointer to the public key BLOB in the pbData parameter, the length of the BLOB in the dwDataLen parameter, and the handle to the Diffie-Hellman key in the hPubKey parameter. This causes the calculation, (Y^X) mod P, to be performed, thus creating the shared, secret key and completing the key exchange. This function call returns a handle to the new, secret, session key in the hKey parameter.
At this point, the imported Diffie-Hellman is of type CALG_AGREEDKEY_ANY. Before the key can be used, it must be converted into a session key type. This is accomplished by calling the CryptSetKeyParam function with dwParam set to KP_ALGID and with pbData set to a pointer to a ALG_ID value that represents a session key, such as CALG_RC4. The key must be converted before using the shared key in the CryptEncrypt or CryptDecrypt function. Calls made to either of these functions prior to converting the key type will fail.
The secret session key is now ready to be used for encryption or decryption.
When the key is no longer needed, destroy the key handle by calling the CryptDestroyKey function.
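For reference, the key-conversion step (step 4) in C looks roughly like this (error handling reduced to a comment; hAgreedKey is the handle returned by CryptImportKey):
ALG_ID alg = CALG_RC4;
if (!CryptSetKeyParam(hAgreedKey, KP_ALGID, (BYTE*)&alg, 0)) {
    /* inspect GetLastError() */
}
/* hAgreedKey is now an RC4 session key usable with CryptEncrypt/CryptDecrypt */
Note this converts the agreed key for use; it still doesn't hand you the raw shared secret, which is exactly the restriction described in the question.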
|
I'm using Wincrypt for Diffie-Hellman-- can I export the shared secret in plain text?
|
OK-- thanks to Mike, I was able to get Wincrypt to generate a Diffie-Hellman keypair. I figured out how to export the public key, and how to import the other party's public key. According to the docs, upon import of the other party's public key, the shared secret has been computed. Great.
I now need to get ahold of that shared secret, but I don't think it's possible. Simply calling CryptExportKey with a type of PLAINTEXTKEYBLOB fails unless I call CryptSetKeyParam to change the algorithm id from CALG_AGREEDKEY_ANY to something... else. But I don't want something else, I want the shared secret. The API, however, seems designed to discourage this.
Any ideas out there? I should note that the problem here is that I'm only writing one side of an implementation of WiFi Protected Setup. So the protocol is defined for me, and the other party is not giving me HCRYPTKEYs.
|
[
"This looks like what you need...\nfrom: http://msdn.microsoft.com/en-us/library/aa381969(VS.85).aspx\n\nTo import a Diffie-Hellman public key and calculate the secret session key\n\nCall the CryptAcquireContext function to get a handle to the Microsoft Diffie-Hellman Cryptographic Provider.\nCreate a Diffie-Hellman key by calling the CryptGenKey function to create a new key, or by calling the CryptGetUserKey function to retrieve an existing key.\nTo import the Diffie-Hellman public key into the CSP, call the CryptImportKey function, passing a pointer to the public key BLOB in the pbData parameter, the length of the BLOB in the dwDataLen parameter, and the handle to the Diffie-Hellman key in the hPubKey parameter. This causes the calculation, (Y^X) mod P, to be performed, thus creating the shared, secret key and completing the key exchange. This function call returns a handle to the new, secret, session key in the hKey parameter.\nAt this point, the imported Diffie-Hellman is of type CALG_AGREEDKEY_ANY. Before the key can be used, it must be converted into a session key type. This is accomplished by calling the CryptSetKeyParam function with dwParam set to KP_ALGID and with pbData set to a pointer to a ALG_ID value that represents a session key, such as CALG_RC4. The key must be converted before using the shared key in the CryptEncrypt or CryptDecrypt function. Calls made to either of these functions prior to converting the key type will fail.\nThe secret session key is now ready to be used for encryption or decryption.\nWhen the key is no longer needed, destroy the key handle by calling the CryptDestroyKey function.\n\n"
] |
[
2
] |
[] |
[] |
[
"cryptoapi",
"cryptography",
"diffie_hellman"
] |
stackoverflow_0000087694_cryptoapi_cryptography_diffie_hellman.txt
|
Q:
Well developed web site architecture using linq to sql?
Anybody found yet a good web site architecture using linq to sql? Any help will be very helpful!
A:
We just finished up an internal IT project banking heavily on Linq2Sql and it paid off. I was a bit skeptical at first, but I think it worked out great in the end. Just remember, the fundamentals don't change.
try to stay as stateless as possible
keep clean lines between your services and data access
don't fight linq, use it. If it isn't helping you, you are probably doing something wrong
Our implementation ended up being a hybrid of the Andrew Siemer and Beth Massi approach (a bit heavier on the Andrew side) and in C#
A:
What, apart from StackOverflow? ;-)
A:
Remember Linq is a technology that sits atop the typical data access structures. Therefore all rules that have applied thus far still hold. Just because you can get to data a little easier in the client app doesn't mean you throw out the architecture best practices for data access.
A:
Rob Conery's MVC Storefront
As others have said, linq-to-sql is no different to any other ORM so the architecture is the same as you would use for NHibernate and others.
|
Well developed web site architecture using linq to sql?
|
Anybody found yet a good web site architecture using linq to sql? Any help will be very helpful!
|
[
"We just finished up an internal IT project banking heavily on Linq2Sql and it paid off. I was a bit skeptical at first, but I think it worked out great in the end. Just remember, the fundamentals don't change.\n\ntry to stay as stateless as possible\nkeep clean lines between your services and data access\ndon't fight linq, use it. If it isn't helping you, you are probably doing something wrong\n\nOur implementation ended up being a hybrid of the Andrew Siemer and Beth Massi approach (a bit heavier on the Andrew side) and in C#\n",
"What, apart from StackOverflow? ;-)\n",
"Remember Linq is a technology that sits atop the typical data access structures. Therefore all rules that have applied thus far still hold. Just because you can get to data a little easier in the client app doesnt mean you throw out the architecture best practices for data access. \n",
"Rob Conery's MVC Storefront\nAs others have said, linq-to-sql is no different to any other ORM so the architecture is the same as you would use for NHibernate and others. \n"
] |
[
4,
2,
1,
1
] |
[] |
[] |
[
"architecture",
"linq_to_sql"
] |
stackoverflow_0000096226_architecture_linq_to_sql.txt
|
Q:
How to put up an off-the-shelf https to http gateway?
I have an HTTP server which is in our internal network and accessible only from inside it. I would like to put another server that would listen to an HTTPS port accessible from outside, and forward the requests to that HTTP server (and send back the responses via HTTPS). I know that there are several ways to do this with some programming involved (and I myself made a temporary solution with Tomcat and a very simple servlet I wrote), but is there a way to do the same just plugging parts already made (like Apache + modules)?
A:
This is the sort of use-case that stunnel is designed for. There is a specific example of using stunnel to wrap an HTTP server.
You should consider whether this is really a good idea, though. Web applications designed for use inside a corporate firewall are often fairly lax about security. Merely encrypting the connections prevents casual eavesdropping, but does not secure the site. If an attacker finds your outward facing server and starts connecting to it, they can still try to find exploitable flaws in the web service (SQL injection, cross-site scripting, etc).
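A minimal stunnel configuration for that setup might look something like this (the certificate path and ports are illustrative only):
cert = /etc/stunnel/stunnel.pem

[https]
accept  = 443
connect = 127.0.0.1:80
stunnel then terminates the TLS connection on port 443 and forwards the plain HTTP traffic to the internal server.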
A:
With Apache look into mod_proxy.
Apache 2.2 mod_proxy docs
Apache 2.0 mod_proxy docs
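For illustration, a reverse-proxy virtual host along these lines would do it (requires mod_ssl, mod_proxy and mod_proxy_http; all host names, ports and certificate paths here are placeholders):
Listen 443
<VirtualHost *:443>
    ServerName gateway.example.com
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key

    ProxyRequests Off
    ProxyPass / http://internal.example.local/
    ProxyPassReverse / http://internal.example.local/
</VirtualHost>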
|
How to put up an off-the-shelf https to http gateway?
|
I have an HTTP server which is in our internal network and accessible only from inside it. I would like to put another server that would listen to an HTTPS port accessible from outside, and forward the requests to that HTTP server (and send back the responses via HTTPS). I know that there are several ways to do this with some programming involved (and I myself made a temporary solution with Tomcat and a very simple servlet I wrote), but is there a way to do the same just plugging parts already made (like Apache + modules)?
|
[
"This is the sort of use-case that stunnel is designed for. There is a specific example of using stunnel to wrap an HTTP server.\nYou should consider whether this is really a good idea, though. Web applications designed for use inside a corporate firewall are often fairly lax about security. Merely encrypting the connections prevents casual eavesdropping, but does not secure the site. If an attacker finds your outward facing server and starts connecting to it, they can still try to find exploitable flaws in the web service (SQL injection, cross-site scripting, etc).\n",
"With Apache look into mod_proxy.\nApache 2.2 mod_proxy docs\nApache 2.0 mod_proxy docs\n"
] |
[
2,
1
] |
[] |
[] |
[
"apache",
"gateway",
"http",
"https",
"webserver"
] |
stackoverflow_0000097983_apache_gateway_http_https_webserver.txt
|
Q:
How can I get the current exception in a WinForms TraceListener
I am modifying an existing WinForms app which is set up with a custom TraceListener which logs any unhandled errors that occur in the app. It seems to me like the TraceListener gets the message part of the exception (which is what gets logged), but not the other exception information. I would like to be able to get at the exception object (to get the stacktrace and other info).
In ASP.NET, which I am more familiar with, I would call Server.GetLastError to get the most recent exception, but of course that won't work in WinForms.
How can I get the most recent exception?
A:
I assume that you have set an event handler that catches unhandled domain exceptions and thread exceptions. In that delegate you probably call the trace listener to log the exception. Simply issue an extra call to set the exception context.
[STAThread]
private static void Main()
{
// Add the event handler for handling UI thread exceptions
Application.ThreadException += new ThreadExceptionEventHandler(Application_ThreadException);
// Add the event handler for handling non-UI thread exceptions
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
...
Application.Run(new Form1());
}
private static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
MyTraceListener.Instance.ExceptionContext = e;
Trace.WriteLine(e.ToString());
}
private static void Application_ThreadException(object sender, ThreadExceptionEventArgs e)
{
// similar to above CurrentDomain_UnhandledException
}
...
Trace.Listeners.Add(MyTraceListener.Instance);
...
class MyTraceListener : System.Diagnostics.TraceListener
{
...
public Object ExceptionContext { get; set; }
public static MyTraceListener Instance { get { ... } }
}
On the Write methods in MyTraceListener you can get the exception context and work with that. Remember to sync exception context.
|
How can I get the current exception in a WinForms TraceListener
|
I am modifying an existing WinForms app which is set up with a custom TraceListener which logs any unhandled errors that occur in the app. It seems to me like the TraceListener gets the message part of the exception (which is what gets logged), but not the other exception information. I would like to be able to get at the exception object (to get the stacktrace and other info).
In ASP.NET, which I am more familiar with, I would call Server.GetLastError to get the most recent exception, but of course that won't work in WinForms.
How can I get the most recent exception?
|
[
"I assume that you have set an event handler that catches unhandled domain exceptions and thread exceptions. In that delegate you probably call the trace listener to log the exception. Simply issue an extra call to set the exception context.\n[STAThread]\nprivate static void Main()\n{\n // Add the event handler for handling UI thread exceptions\n Application.ThreadException += new ThreadExceptionEventHandler(Application_ThreadException);\n // Add the event handler for handling non-UI thread exceptions\n AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);\n ...\n Application.Run(new Form1());\n}\n\nprivate static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)\n{\n MyTraceListener.Instance.ExceptionContext = e;\n Trace.WriteLine(e.ToString());\n}\n\nprivate static void Application_ThreadException(object sender, ThreadExceptionEventArgs e)\n{\n // similar to above CurrentDomain_UnhandledException\n}\n\n...\n\nTrace.Listeners.Add(MyTraceListener.Instance);\n\n...\n\nclass MyTraceListener : System.Diagnostics.TraceListener\n{\n ...\n public Object ExceptionContext { get; set; }\n public static MyTraceListener Instance { get { ... } }\n}\n\nOn the Write methods in MyTraceListener you can get the exception context and work with that. Remember to sync exception context.\n"
] |
[
3
] |
[] |
[] |
[
"exception",
"winforms"
] |
stackoverflow_0000097104_exception_winforms.txt
|
Q:
Moving from VSS to SVN
I need to write a script to make a source safe project ready to be moved to subversion, what do I need to do so far I can think of:
remove .scc files
remove .vspcc files
Do I need to remove the "read-only" attribute of all the files as well, or will that not matter?
What language would you write this script in? I was planning to do it in Python (os.walk is great), but maybe PowerShell would be more appropriate to change file attributes on Windows (if I need to)?
Is there anything else you can think of that needs to be done before I move the project to SVN?
A:
For a manual migration (or to understand what happens) :
remove "read-only" attribute for all files
remove *.scc
remove *.vssscc
remove *.vspscc
delete *.suo
remove "SourceCodeControl..." section from your solution.sln file
remove "SccProjectName..." section from your projects.csproj files
A:
I also have a hard drive with a bunch of VSS projects that I want to move to SVN projects.
Couldn't you just use Vss2Svn? It sounds like this will convert VSS projects to SVN, but I have not tried it and have not heard anything about it. Has anyone tried this to move from VSS to SVN? Or is it better to do it manually?
A:
http://www.poweradmin.com/sourcecode/vssmigrate.aspx
A:
The best thing you could do is a clean export of all the files in the repository. Blow away anything that is VSS.
Once you've done that then just do a subversion import and you'll be ready to go. If you write a script you'll just have one more maintenance & failure point. Thus my preference for just doing a clean import.
|
Moving from VSS to SVN
|
I need to write a script to make a source safe project ready to be moved to subversion, what do I need to do so far I can think of:
remove .scc files
remove .vspcc files
Do I need to remove the "read-only" attribute of all the files as well, or will that not matter?
What language would you write this script in? I was planning to do it in Python (os.walk is great), but maybe PowerShell would be more appropriate to change file attributes on Windows (if I need to)?
Is there anything else you can think of that needs to be done before I move the project to SVN?
|
[
"For a manual migration (or to understand what happens) :\n\nremove \"read-only\" attribute for all files\nremove *.scc\nremove *.vssscc\nremove *.vspscc\ndelete *.suo\nremove \"SourceCodeControl...\" section from your solution.sln file\nremove \"SccProjectName...\" section from your projects.csproj files\n\n",
"I also have a hard drive with a bunch of VSS projects that I want to move to SVN projects.\nCouldn't you just use Vss2Svn? It sounds like this will convert VSS projects to SVN, but I have not tried it and have not heard anything about it. Has anyone tried this to move from VSS to SVN? Or is it better to do it manually?\n",
"http://www.poweradmin.com/sourcecode/vssmigrate.aspx\n",
"The best thing you could do is a clean export of all the files in the repository. Blow away anything that is VSS.\nOnce you've done that then just do a subversion import and you'll be ready to go. If you write a script you'll just have one more maintenance & failure point. Thus my preference for just doing a clean import.\n"
] |
[
8,
3,
2,
2
] |
[] |
[] |
[
"svn",
"version_control",
"visual_sourcesafe"
] |
stackoverflow_0000094058_svn_version_control_visual_sourcesafe.txt
|
Q:
WAS hosting vs. Windows Service hosting
I'm working on a project using Windows 2008, .NET 3.5 and WCF for some internal services and the question of how to host the services has arisen.
Since we're using Windows 2008 I was thinking it'd be good to take advantage of Windows Process Activation Service (WAS) although the feeling on the project seems to be that using Windows Services would be better.
So what's the low down on using WAS to host WCF services in comparison to a Windows Service? Are there any real advantages to using Windows Services or is WAS the way to go?
A:
Recently I had to answer very similar question and these are the reasons why I decided to use IIS 7.0 and WAS instead of Windows Service infrastructure.
IIS 7.0 is a much more robust host and it comes with numerous features that make debugging easy: failed-request tracing, worker process recycling, and process orphaning, to name a few.
IIS 7.0 gives you more options to specify what should happen with the worker process in certain circumstances.
If you host your service under IIS it doesn't have a worker process assigned to it until the very first request. This is something that was a desired behaviour from my perspective but it might be different in your case. Windows Service gives you the ability to start your service in a more deterministic way.
From my experience WAS itself doesn't provide increased reliability. Its biggest advantage is that it exposes the richness of IIS to applications that use protocols different than HTTP. By different I mean: TCP, named pipes and MSMQ.
The only disadvantage of using WAS that I'm aware of is that the address your service is exposed at needs to be compliant with some sort of pattern. What this looks like in the case of MSMQ is described here
|
WAS hosting vs. Windows Service hosting
|
I'm working on a project using Windows 2008, .NET 3.5 and WCF for some internal services and the question of how to host the services has arisen.
Since we're using Windows 2008 I was thinking it'd be good to take advantage of Windows Process Activation Service (WAS) although the feeling on the project seems to be that using Windows Services would be better.
So what's the low down on using WAS to host WCF services in comparison to a Windows Service? Are there any real advantages to using Windows Services or is WAS the way to go?
|
[
"Recently I had to answer very similar question and these are the reasons why I decided to use IIS 7.0 and WAS instead of Windows Service infrastructure.\n\nIIS 7.0 is much more robust host and it comes with numerous features that make debugging easy. Failed requests tracing, worker process recycling, process orphaning to name a few.\nIIS 7.0 gives you more option to specify what should happen with the worker process in certain circumstances. \nIf you host your service under IIS it doesn't have a worker process assigned to it until the very first request. This is something that was a desired behaviour from my perspective but it might be different in your case. Windows Service gives you the ability to start your service in a more deterministic way.\nFrom my experience WAS itself doesn't provide increased reliability. It's biggest advantage is that it exposes the richness of IIS to applications that use protocols different than HTTP. By different I mean: TCP, named pipes and MSMQ.\nThe only disadvantage of using WAS that I'm aware of is that the address your service is exposed at needs to be compliant with some sort of pattern. How it looks like in case of MSMQ is described here\n\n"
] |
[
11
] |
[] |
[] |
[
".net",
"iis_7",
"wcf"
] |
stackoverflow_0000097830_.net_iis_7_wcf.txt
|
Q:
How to get runonce to run, without having to have an administrator login
Is there any way to force an update of software using RunOnce, without having an administrator log in, if there is a service running as Administrator in the background?
EDIT: The main thing I want to be able to do is Run when the RunOnce does, I.E. before Explorer starts. I need to be able to install things, without booting into the Administrator account.
A:
I'm not sure I understand the question. Let me try:
The service you mention, is it yours? If so, you can add code to it to imitate Windows: from your service, examine the RunOnce value and launch the executable it specifies. You can use the CreateProcessAsUser() API to launch it in the context of an arbitrary user. After launching the process, delete the RunOnce entry.
Or have I misunderstood your question?
EDIT: A service does not depend on any user being logged in. You can start your update process from the service as soon as the service itself starts, it will happen before any real user logs in to the computer.
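A bare-bones sketch of that approach (error handling trimmed, the credentials would of course come from somewhere safer than string literals, and the service account needs the privileges CreateProcessAsUser requires, such as SE_ASSIGNPRIMARYTOKEN_NAME):
#include <windows.h>

/* Illustrative only: launches commandLine in the given user's context. */
BOOL LaunchAsUser(LPCWSTR user, LPCWSTR domain, LPCWSTR password,
                  LPWSTR commandLine)
{
    HANDLE hToken = NULL;
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    BOOL ok;

    /* Get a primary token for the target user. */
    if (!LogonUserW(user, domain, password,
                    LOGON32_LOGON_INTERACTIVE,
                    LOGON32_PROVIDER_DEFAULT, &hToken))
        return FALSE;

    /* Run the RunOnce executable in that user's security context. */
    ok = CreateProcessAsUserW(hToken, NULL, commandLine,
                              NULL, NULL, FALSE, 0,
                              NULL, NULL, &si, &pi);
    if (ok)
    {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    CloseHandle(hToken);
    return ok;
}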
|
How to get runonce to run, without having to have an administrator login
|
Is there any way to force an update of software using RunOnce, without having an administrator log in, if there is a service running as Administrator in the background?
EDIT: The main thing I want to be able to do is Run when the RunOnce does, I.E. before Explorer starts. I need to be able to install things, without booting into the Administrator account.
|
[
"I'm not sure I understand the question. Let me try:\nThe service you mention, is it yours? If so, you can add code to it to imitate Windows: from your service, examine the RunOnce value and launch the executable it specifies. You can use the CreateProcessAsUser() API to launch it in the context of an arbitrary user. After launching the process, delete the RunOnce entry.\nOr have I misunderstood your question?\nEDIT: A service does not depend on any user being logged in. You can start your update process from the service as soon as the service itself starts, it will happen before any real user logs in to the computer.\n"
] |
[
2
] |
[] |
[] |
[
"installation",
"registry",
"runonce",
"windows_services"
] |
stackoverflow_0000098163_installation_registry_runonce_windows_services.txt
|
Q:
At which point in the lifecycle does GetConnectionInterface get called?
I have this method on a webpart:
private IFilterData _filterData = null;
[ConnectionConsumer("Filter Data Consumer")]
public void GetConnectionInterface(IFilterData filterData)
{
_filterData = filterData;
}
Now, before I can call upon _filterData, I need to know when I can expect it to not be null. When is this?!
Without knowing this, the best I can do is stuff all of my _filterWebpart dependent code into the last lines of OnPreRender and hope for the best.
A:
According to this document, it looks like Load.
http://msdn.microsoft.com/en-us/library/ms366536.aspx
|
At which point in the lifecycle does GetConnectionInterface get called?
|
I have this method on a webpart:
private IFilterData _filterData = null;
[ConnectionConsumer("Filter Data Consumer")]
public void GetConnectionInterface(IFilterData filterData)
{
_filterData = filterData;
}
Now, before I can call upon _filterData, I need to know when I can expect it to not be null. When is this?!
Without knowing this, the best I can do is stuff all of my _filterWebpart dependent code into the last lines of OnPreRender and hope for the best.
|
[
"According to this document, it looks like Load.\nhttp://msdn.microsoft.com/en-us/library/ms366536.aspx\n"
] |
[
0
] |
[] |
[] |
[
"asp.net",
"page_lifecycle",
"sharepoint",
"web_parts",
"webpart_connection"
] |
stackoverflow_0000097054_asp.net_page_lifecycle_sharepoint_web_parts_webpart_connection.txt
|
Q:
ASP.Net application cannot Login to SQL Server Database when deployed to Web Server
I am having a problem with deploying an ASP.NET V2 web application to our deployment environment and am having trouble with the SQL Server setup.
When I run the website I get a Login failed for user 'MOETP\MOERSVPWLG$'. error when it tries to connect to the database.
This seems to be the network service user which is the behaviour I want from the application but I don't seem to be able to allow the network service user to access the database.
Some details about the setup. IIS 6 and SQL Server 2005 are both setup on the same server in the deployment environment. The only change from the test setup I made is to point the database connection string to the new live database and of course copy everything over.
My assumption at this point is that there is something that needs to be done to the SQL server setup to allow connections from asp.net. But I can't see what it could be.
Any Ideas?
A:
It sounds like you're able to connect to the database alright and you're using integrated windows authentication.
With integrated Windows authentication your connection to your database is going to use whatever your application pool user identity is using. You have to make sure that the user identity that ASP.NET is using has a login on the database server.
A:
If it is a fresh install, not everything may be set up. Check SQL Server Configuration Manager, http://msdn.microsoft.com/en-us/library/ms174212.aspx. Step by step instructions http://download.pro.parallels.com/10.3.1/docs/windows/Guides/pcpw_upgrade_guide/7351.htm.
A:
The user name you've indicated in your post is what the Network Service account on one machine looks like to other machines, ie "DOMAIN\MACHINENAME$".
If you are connecting from IIS6 on one machine to SQL Server on another machine and you are using Network Service for the application pool's process identity then you need to explicitly add 'MOETP\MOERSVPWLG$' as a login to the SQL Server, and map it to an appropriate database user and role. Type that name in exactly as the login name (minus quotes, of course).
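In T-SQL that grant might look like this (the database name and roles are placeholders -- pick whatever your application actually needs):
CREATE LOGIN [MOETP\MOERSVPWLG$] FROM WINDOWS;
GO
USE MyAppDatabase;   -- placeholder
GO
CREATE USER [MOETP\MOERSVPWLG$] FOR LOGIN [MOETP\MOERSVPWLG$];
EXEC sp_addrolemember 'db_datareader', 'MOETP\MOERSVPWLG$';
EXEC sp_addrolemember 'db_datawriter', 'MOETP\MOERSVPWLG$';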
A:
Make sure there is a login created for the user you are trying to log in as on the sql server.
A:
There's a few different things it could be.
Are you using integrated windows authentication? If so, you need to make sure the user ASP.net is running as can talk to the database (or impersonate one that can).
Does the web server have permission to talk to the database? Sometimes a web server is deployed in a DMZ.
If you are using a SQL Server login, does that same login exist on the production server with the same permissions?
|
ASP.Net application cannot Login to SQL Server Database when deployed to Web Server
|
I am having a problem with deploying an ASP.NET V2 web application to our deployment environment and am having trouble with the SQL Server setup.
When I run the website I get a Login failed for user 'MOETP\MOERSVPWLG$'. error when it tries to connect to the database.
This seems to be the network service user which is the behaviour I want from the application but I don't seem to be able to allow the network service user to access the database.
Some details about the setup. IIS 6 and SQL Server 2005 are both setup on the same server in the deployment environment. The only change from the test setup I made is to point the database connection string to the new live database and of course copy everything over.
My assumption at this point is that there is something that needs to be done to the SQL server setup to allow connections from asp.net. But I can't see what it could be.
Any Ideas?
|
[
"It sounds like you're able to connect to the database alright and you're using integrated windows authentication.\nWith integrated windows authentication your connection to your database is going to use whatever your application pool user identity is using. You have to make sure that the user identity that asp.net is using is on the database server.\n",
"If it is a fresh install not everything may be setup. Check SQL Server Configuration Manager, http://msdn.microsoft.com/en-us/library/ms174212.aspx. Step by step instructions http://download.pro.parallels.com/10.3.1/docs/windows/Guides/pcpw_upgrade_guide/7351.htm.\n",
"The user name you've indicated in your post is what the Network Service account on one machine looks like to other machines, ie \"DOMAIN\\MACHINENAME$\". \nIf you are connecting from IIS6 on one machine to SQL Server on another machine and you are using Network Service for the application pool's process identity then you need to explicitly add 'MOETP\\MOERSVPWLG$' as a login to the SQL Server, and map it to an appropriate database user and role. Type that name in exactly as the login name (minus quotes, of course).\n",
"Make sure there is a login created for the user you are trying to log in as on the sql server.\n",
"There's a few different things it could be.\nAre you using integrated windows authentication? If so, you need to make sure the user ASP.net is running as can talk to the database (or impersonate one that can).\nDoes the web server have permission to talk to the database? Sometimes a web server is deployed in a DMZ.\nIf you are using a SQL Server login, does that same login exist on the production server with the same permissions?\n"
] |
[
1,
1,
1,
0,
0
] |
[] |
[] |
[
"asp.net",
"sql_server_2005",
"web_deployment_project"
] |
stackoverflow_0000097899_asp.net_sql_server_2005_web_deployment_project.txt
|
Q:
How can I tell if I have an open relay?
I'm trying to work out if I have an open relay on my server. How do I do that?
I've tried http://www.abuse.net/relay.html
and it reports:
Hmmn, at first glance, host appeared to accept a message for relay.
THIS MAY OR MAY NOT MEAN THAT IT'S AN OPEN RELAY.
Some systems appear to accept relay mail, but then reject messages internally rather than delivering them, but you cannot tell at this point whether the message will be relayed or not.
What further tests can I do to determine if the server has an open relay?
A:
Eh? As your link tells you, register for the site and it will give you an address @abuse.net, valid for 24 hours. Enter that address into the testing form. If your abuse.net account receives the test email, you have an open relay.
A:
You could try setting up an email client to send email through your server, from an email address that isn't hosted on the same server. If you can successfully send mail, from an email address at a different domain, without entering a login and password for your SMTP server, then it's probably an open relay.
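The same check can be done by hand with telnet; an illustrative session (host and addresses are made up) might go:
telnet mail.example.com 25
HELO test.example.org
MAIL FROM:<someone@example.org>
RCPT TO:<victim@third-party.example>
QUIT
If the RCPT TO for a domain you don't host gets a 250 response without any authentication, the server is probably willing to relay; a 550-style "relaying denied" response is what you want to see.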
A:
This depends on your MTA and how you've configured it. Ultimately there is only one thing you must do to prevent relaying. Restrict relaying to authenticated users and/or restrict relaying to specific IPs. I prefer to restrict all IPs except localhost on my mail server and require authentication from everyone else.
The common mistake is to allow more IPs than necessary. Imagine a user on a cable modem who decides to allow the roommate's laptop to relay with the statement 192.168.1.0/24 rather than the more specific 192.168.1.0/29. Now anyone else on the /24 can relay off the server.
|
How can I tell if I have an open relay?
|
I'm trying to work out if I have an open relay on my server. How do I do that?
I've tried http://www.abuse.net/relay.html
and it reports:
Hmmn, at first glance, host appeared to accept a message for relay.
THIS MAY OR MAY NOT MEAN THAT IT'S AN OPEN RELAY.
Some systems appear to accept relay mail, but then reject messages internally rather than delivering them, but you cannot tell at this point whether the message will be relayed or not.
What further tests can I do to determine if the server has an open relay?
|
[
"Eh? As your link tells you, register for the site and it will give you an address @abuse.net, valid for 24 hours. Enter that address into the testing form. If your abuse.net account receives the test email, you have an open relay.\n",
"You could try setting up a email client to sent email through your server, from an email address that isn't hosted on the same server. If you can successfuly send mail, from an email address at a different domain, without entering a login and password for your SMTP server, then it's probably an open relay.\n",
"This depends on your MTA and how you've configured it. Ultimately there is only one thing you must do to prevent relaying. Restrict relaying to authenticated users and/or restrict relaying to specific IPs. I prefer to restrict all IPs except localhost on my mail server and require authentication from everyone else.\nThe common mistake is to allow more IPs than necessary. Imagine a user on a cable modem who decides to allow the roommate's laptop to relay with the statement 192.168.1.0/24 rather than the more specific 192.168.1.0/29. Now anyone else on the /24 can relay off the server.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"smtp"
] |
stackoverflow_0000060648_smtp.txt
|
Q:
MIPS Assembly Pointer-to-a Pointer?
I think I know how to handle this case, but I just want to make sure I have it right. Say you have the following C code:
int myInt = 3;
int* myPointer = &myInt;
int** mySecondPointer = &myPointer;
P contains an address that points to a place in memory which has another address. I'd like to modify the second address. So the MIPS code:
la $t0, my_new_address
lw $t1, ($a0) # address that points to the address we want to modify
sw $t0, ($t1) # load address into memory pointed to by $t1
Is that the way you would do it?
A:
Yes, that's correct as far as I can tell. It would have been easier if you used the same variable names (e.g. symbols instead of hard register names).
Why haven't you simply compiled the C code and taken a look at the list file or assembly output? I always do that when in doubt.
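For example, with a native or MIPS cross GCC (the file names are placeholders):
gcc -S -O2 pointers.c                            # writes the generated assembly to pointers.s
gcc -Wa,-adhln -g -c pointers.c > pointers.lst   # assembly listing interleaved with source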
|
MIPS Assembly Pointer-to-a Pointer?
|
I think I know how to handle this case, but I just want to make sure I have it right. Say you have the following C code:
int myInt = 3;
int* myPointer = &myInt;
int** mySecondPointer = &myPointer;
P contains an address that points to a place in memory which has another address. I'd like to modify the second address. So the MIPS code:
la $t0, my_new_address
lw $t1, ($a0) # address that points to the address we want to modify
sw $t0, ($t1) # load address into memory pointed to by $t1
Is that the way you would do it?
|
[
"Yes, that's correct as far as I can tell. It would have been easier if you used the same variable names (e.g. symbols instead of hard register names).\nWhy haven't you simply compiled the c-code and took a look at the list-file or assembly-output? I always do that when in doubt.\n"
] |
[
4
] |
[] |
[] |
[
"assembly",
"mips",
"pointers"
] |
stackoverflow_0000098236_assembly_mips_pointers.txt
|
Q:
How can I make Windows software run as a different user within a script?
I'm using a build script that calls Wise to create some install files. The problem is that the Wise license only allows it to be run under one particular user account, which is not the same account that my build script will run under. I know Windows has the runas command but this won't work for an automated script as there is no way to enter the password via the command line.
A:
This might help: Why doesn't the RunAs program accept a password on the command line?
A:
I recommend taking a look at CPAU.
Command line tool for starting process
in alternate security context.
Basically this is a runas replacement.
Also allows you to create job files
and encode the id, password, and
command line in a file so it can be
used by normal users.
You can use it like this (examples):
CPAU -u user [-p password] -ex "WhatToRun" [switches]
Or you can create a ".job" file which will have the user and password encoded inside of it. This way you can avoid having to put the password for the user inside your build script.
A:
It's a bit of a workaround solution, but you can create a scheduled task that runs as your user account, and have it run regularly, maybe once every minute. Yes, you'll have to wait for it to run then.
This task can then look for some data files to process, and do the real work only if they are there.
A:
This might help, it's a class I've used in another project to let people make their own accounts; everyone had to have access to the program, but the same account couldn't be allowed to have access to the LDAP stuff, so the program uses this class to run it as a different user.
http://www.codeproject.com/KB/dotnet/UserImpersonationInNET.aspx
|
How can I make Windows software run as a different user within a script?
|
I'm using a build script that calls Wise to create some install files. The problem is that the Wise license only allows it to be run under one particular user account, which is not the same account that my build script will run under. I know Windows has the runas command but this won't work for an automated script as there is no way to enter the password via the command line.
|
[
"This might help: Why doesn't the RunAs program accept a password on the command line?\n",
"I recommend taking a look at CPAU. \n\nCommand line tool for starting process\n in alternate security context.\n Basically this is a runas replacement.\n Also allows you to create job files\n and encode the id, password, and\n command line in a file so it can be\n used by normal users.\n\nYou can use it like this (examples):\nCPAU -u user [-p password] -ex \"WhatToRun\" [switches]\n\nOr you can create a \".job\" file which will have the user and password encoded inside of it. This way you can avoid having to put the password for the user inside your build script.\n",
"It's a bit of a workaround solution, but you can create a scheduled task that runs as your user account, and have it run regularly, maybe once every minute. Yes, you'll have to wait for it to run then.\nThis task can then look for some data files to process, and do the real work only if they are there.\n",
"This might help, it's a class I've used in another project to let people make their own accounts; everyone had to have access to the program, but the same account couldn't be allowed to have access to the LDAP stuff, so the program uses this class to run it as a different user.\nhttp://www.codeproject.com/KB/dotnet/UserImpersonationInNET.aspx\n"
] |
[
2,
2,
1,
0
] |
[] |
[] |
[
"build_process",
"scripting",
"windows",
"wise"
] |
stackoverflow_0000098134_build_process_scripting_windows_wise.txt
|
Q:
Error 0x8007F303 occurs during printing of reports from MOSS using SRS viewer web part
When attempting to print using the SSRS Viewer Web Part in SharePoint I get the following error.
An error occured during printing. (0x8007F303)
The settings we are using in this box (production) are exactly the same as the settings in testing where this works perfectly fine.
Anyone have any good ideas or faced this before?
A:
I found some ideas by Googling.
Someone had an issue with "SSRS server configured for Sharepoint Integrated mode with Cumulative update package 3 for SQL Server 2005 Service Pack 2" but "the problem vanished after installing the .NET framework 3.0 SP1"
You can get this error if you have "old instances of the old
ReportViewer control in your web sites bin directories or anywhere else it
could be accessed by your web application."
It's another error 0x800C0005, but there is an incident where the error only occurred in production environment. bradsy@Microsoft says
You can enable client print logging by
setting the following reg key. Once
enabled, you can look in your print
user's temporary (cd %temp%) directory
and find a print log file.
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Microsoft SQL Server\80\Reporting Services]
"LogRSClientPrintInfo"=dword:00000001
You can send the log file to me and I
can take a look to see if there is any
extra information.
Maybe you should collect the log and send it to the forum.
A:
You may have a custom authentication in use for Reporting Services defined in your
web.config. Check if that is the case, remove the custom authentication and try again.
A:
This may seem obvious, but have you checked that you have at least one valid printer installed and available?
|
Error 0x8007F303 occurs during printing of reports from MOSS using SRS viewer web part
|
When attempting to print using the SSRS Viewer Web Part in SharePoint I get the following error.
An error occured during printing. (0x8007F303)
The settings we are using in this box (production) are exactly the same as the settings in testing where this works perfectly fine.
Anyone have any good ideas or faced this before?
|
[
"I found some ideas by Googling.\n\nSomeone had issue with \"SSRS server configured for Sharepoint Integrated mode with Cumulative update package 3 for SQL Server 2005 Service Pack 2\" but \"the problem vanished after installing the .NET framework 3.0 SP1\"\nYou can get this error if you have \"old instances of the old \nReportViewer control in your web sites bin directories or anywhere else it \ncould be accessed by your web application.\"\nIt's another error 0x800C0005, but there is an incident where the error only occurred in production environment. bradsy@Microsoft says\n\n\nYou can enable Client print logging by\n setting the follow reg key. Once\n enabled, you can look in your print\n users temporary (cd %temp%) directory\n and find a print log file. Windows\nRegistry Editor Version 5.00\n[HKEY_CURRENT_USER\\Software\\Microsoft\\Microsoft SQL Server\\80\\Reporting Services]\n\"LogRSClientPrintInfo\"=dword:00000001\nYou can send the log file to me and I\n can take a look to see if there is any\n extra information.\n\nMaybe you should collect the log and send it to the forum.\n",
"You may have a custom authentication in use for Reporting Services defined in your\nweb.config. Check if that is the case, remove the custom authentication and try again.\n",
"This may seem obvious, but have you checked that you have at least one valid printer installed and available?\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
".net",
"moss",
"reporting_services",
"sql",
"sql_server"
] |
stackoverflow_0000069073_.net_moss_reporting_services_sql_sql_server.txt
|
Q:
Codegear RAD Studio help system is corrupted
I've been using Codegear RAD Studio for over a year now but since the "May08 Help Update" the help system no longer works. If I open the help the contents pane is entirely blank. If I hit F1 I get the following error: "Unable to interpret the specified HxC file."
I've searched for the answer using search engines and the Codegear forums but so far nothing seems to fix the problem. I'd rather not do a full reinstall if possible. Has anyone else experienced this issue and know how to fix it?
A:
It sounds like you need to do a complete uninstall/reinstall. Alas.
Be sure to check http://docs.codegear.com for the latest in Delphi help. On that site you can also download the Delphi 2007 help in various forms, including PDF and CHM.
A:
You probably got a corrupted file from the download. I would try download again and reinstall the help.
|
Codegear RAD Studio help system is corrupted
|
I've been using Codegear RAD Studio for over a year now but since the "May08 Help Update" the help system no longer works. If I open the help the contents pane is entirely blank. If I hit F1 I get the following error: "Unable to interpret the specified HxC file."
I've searched for the answer using search engines and the Codegear forums but so far nothing seems to fix the problem. I'd rather not do a full reinstall if possible. Has anyone else experienced this issue and know how to fix it?
|
[
"It sounds like you need to do a complete uninstall/reinstall. Alas.\nBe sure to check http://docs.codegear.com for the latest in Delphi help. On that site you can also download the Delphi 2007 help in various forms, including PDF and CHM. \n",
"You probably got a corrupted file from the download. I would try download again and reinstall the help.\n"
] |
[
3,
0
] |
[] |
[] |
[
"delphi",
"delphi_2007"
] |
stackoverflow_0000097927_delphi_delphi_2007.txt
|
Q:
How to know about memory consumption in mysql?
How can one know how much memory each process or thread is consuming in MySQL?
A:
Assuming you just want the memory usage of the MySQL server program.
On windows you can use Process Explorer
On linux you can use the top command.
Use "ps -e" to find the pid of the mysql process
Then use "top -p {pid}" where {pid} is the pid of the mysql process.
A:
on linux you can also use top|grep mysql to get a running report of the stats of the mysql process, 1 row per top refresh period.
|
How to know about memory consumption in mysql?
|
How can one know how much memory each process or thread is consuming in MySQL?
|
[
"Assuming you just want just the memory usage of the mysql server program.\nOn windows you can use Process Explorer \nOn linux you can use the top command.\n\nUse \"ps -e\" to find the pid of the mysql process\nThen use \"top -p {pid}\" where {pid} is the pid of the mysql process. \n\n",
"on linux you can also use top|grep mysql to get a running report of the stats of the mysql process, 1 row per top refresh period.\n"
] |
[
10,
2
] |
[] |
[] |
[
"mysql_management"
] |
stackoverflow_0000098223_mysql_management.txt
|
Q:
Why can't I convert 'char**' to a 'const char* const*' in C?
The following code snippet (correctly) gives a warning in C and an error in C++ (using gcc & g++ respectively, tested with versions 3.4.5 and 4.2.1; MSVC does not seem to care):
char **a;
const char** b = a;
I can understand and accept this.
The C++ solution to this problem is to change b to be a const char * const *, which disallows reassignment of the pointers and prevents you from circumventing const-correctness (C++ FAQ).
char **a;
const char* const* b = a;
However, in pure C, the corrected version (using const char * const *) still gives a warning, and I don't understand why.
Is there a way to get around this without using a cast?
To clarify:
1) Why does this generate a warning in C? It should be entirely const-safe, and the C++ compiler seems to recognize it as such.
2) What is the correct way to go about accepting this char** as a parameter while saying (and having the compiler enforce) that I will not be modifying the characters it points to?
For example, if I wanted to write a function:
void f(const char* const* in) {
// Only reads the data from in, does not write to it
}
And I wanted to invoke it on a char**, what would be the correct type for the parameter?
A:
I had this same problem a few years ago and it irked me to no end.
The rules in C are more simply stated (i.e. they don't list exceptions like converting char** to const char*const*). Consequently, it's just not allowed. With the C++ standard, they included more rules to allow cases like this.
In the end, it's just a problem in the C standard. I hope the next standard (or technical report) will address this.
A:
To be considered compatible, the source pointer should be const in the immediately anterior indirection level. So, this will give you the warning in GCC:
char **a;
const char* const* b = a;
But this won't:
const char **a;
const char* const* b = a;
Alternatively, you can cast it:
char **a;
const char* const* b = (const char **)a;
You would need the same cast to invoke the function f() as you mentioned. As far as I know, there's no way to make an implicit conversion in this case (except in C++).
A:
However, in pure C, this still gives a warning, and I don't understand why
You've already identified the problem -- this code is not const-correct. "Const correct" means that, except for const_cast and C-style casts removing const, you can never modify a const object through those const pointers or references.
The value of const-correctness -- const is there, in large part, to detect programmer errors. If you declare something as const, you're stating that you don't think it should be modified -- or at least, those with access to the const version only should not be able to modifying it. Consider:
void foo(const int*);
As declared, foo doesn't have permission to modify the integer pointed to by its argument.
If you're not sure why the code you posted isn't const-correct, consider the following code, only slightly different from HappyDude's code:
char *y;
char **a = &y; // a points to y
const char **b = a; // now b also points to y
// const protection has been violated, because:
const char x = 42; // x must never be modified
*b = &x; // the type of *b is const char *, so set it
// with &x which is const char* ..
// .. so y is set to &x... oops;
*y = 43; // y == &x... so attempting to modify const
// variable. oops! undefined behavior!
cout << x << endl;
Non-const types can only convert to const types in particular ways to prevent any circumvention of const on a data-type without an explicit cast.
Objects initially declared const are particularly special -- the compiler can assume they never change. However, if b can be assigned the value of a without a cast, then you could inadvertently attempt to modify a const variable. This would not only break the check you asked the compiler to make, to disallow you from changing that variable's value -- it would also allow you to break the compiler optimizations!
On some compilers, this will print 42, on some 43, and others, the program will crash.
Edit-add:
HappyDude: Your comment is spot on. Either the C language, or the C compiler you're using, treats const char * const * fundamentally differently than the C++ language treats it. Perhaps consider silencing the compiler warning for this source line only.
A:
This is annoying, but if you're willing to add another level of redirection, you can often do the following to push down into the pointer-to-pointer:
char c = 'c';
char *p = &c;
char **a = &p;
const char *bi = *a;
const char * const * b = &bi;
It has a slightly different meaning, but it's usually workable, and it doesn't use a cast.
A:
I'm not able to get an error when implicitly casting char** to const char * const *, at least on MSVC 14 (VS2k5) and g++ 3.3.3. GCC 3.3.3 issues a warning, and I'm not exactly sure whether it is correct in doing so.
test.c:
#include <stdlib.h>
#include <stdio.h>
void foo(const char * const * bar)
{
printf("bar %s null\n", bar ? "is not" : "is");
}
int main(int argc, char **argv)
{
char **x = NULL;
const char* const*y = x;
foo(x);
foo(y);
return 0;
}
Output with compile as C code: cl /TC /W4 /Wp64 test.c
test.c(8) : warning C4100: 'argv' : unreferenced formal parameter
test.c(8) : warning C4100: 'argc' : unreferenced formal parameter
Output with compile as C++ code: cl /TP /W4 /Wp64 test.c
test.c(8) : warning C4100: 'argv' : unreferenced formal parameter
test.c(8) : warning C4100: 'argc' : unreferenced formal parameter
Output with gcc: gcc -Wall test.c
test2.c: In function `main':
test2.c:11: warning: initialization from incompatible pointer type
test2.c:12: warning: passing arg 1 of `foo' from incompatible pointer type
Output with g++: g++ -Wall test.C
no output
A:
I'm pretty sure that the const keyword does not imply the data can't be changed/is constant, only that the data will be treated as read-only. Consider this:
const volatile int *const serial_port = SERIAL_PORT;
which is valid code. How can volatile and const co-exist? Simple. volatile tells the compiler to always read the memory when using the data and const tells the compiler to create an error when an attempt is made to write to the memory using the serial_port pointer.
Does const help the compiler's optimiser? No. Not at all. Because constness can be added to and removed from data through casting, the compiler cannot figure out if const data really is constant (since the cast could be done in a different translation unit). In C++ you also have the mutable keyword to complicate matters further.
char *const p = (char *) 0xb000;
//error: p = (char *) 0xc000;
char **q = (char **)&p;
*q = (char *)0xc000; // p is now 0xc000
What happens when an attempt is made to write to memory that really is read only (ROM, for example) probably isn't defined in the standard at all.
|
Why can't I convert 'char**' to a 'const char* const*' in C?
|
The following code snippet (correctly) gives a warning in C and an error in C++ (using gcc & g++ respectively, tested with versions 3.4.5 and 4.2.1; MSVC does not seem to care):
char **a;
const char** b = a;
I can understand and accept this.
The C++ solution to this problem is to change b to be a const char * const *, which disallows reassignment of the pointers and prevents you from circumventing const-correctness (C++ FAQ).
char **a;
const char* const* b = a;
However, in pure C, the corrected version (using const char * const *) still gives a warning, and I don't understand why.
Is there a way to get around this without using a cast?
To clarify:
1) Why does this generate a warning in C? It should be entirely const-safe, and the C++ compiler seems to recognize it as such.
2) What is the correct way to go about accepting this char** as a parameter while saying (and having the compiler enforce) that I will not be modifying the characters it points to?
For example, if I wanted to write a function:
void f(const char* const* in) {
// Only reads the data from in, does not write to it
}
And I wanted to invoke it on a char**, what would be the correct type for the parameter?
|
[
"I had this same problem a few years ago and it irked me to no end.\nThe rules in C are more simply stated (i.e. they don't list exceptions like converting char** to const char*const*). Consequenlty, it's just not allowed. With the C++ standard, they included more rules to allow cases like this.\nIn the end, it's just a problem in the C standard. I hope the next standard (or technical report) will address this.\n",
"To be considered compatible, the source pointer should be const in the immediately anterior indirection level. So, this will give you the warning in GCC:\nchar **a;\nconst char* const* b = a;\n\nBut this won't:\nconst char **a;\nconst char* const* b = a;\n\nAlternatively, you can cast it:\nchar **a;\nconst char* const* b = (const char **)a;\n\nYou would need the same cast to invoke the function f() as you mentioned. As far as I know, there's no way to make an implicit conversion in this case (except in C++). \n",
"\nHowever, in pure C, this still gives a warning, and I don't understand why\n\nYou've already identified the problem -- this code is not const-correct. \"Const correct\" means that, except for const_cast and C-style casts removing const, you can never modify a const object through those const pointers or references.\nThe value of const-correctness -- const is there, in large part, to detect programmer errors. If you declare something as const, you're stating that you don't think it should be modified -- or at least, those with access to the const version only should not be able to modifying it. Consider:\nvoid foo(const int*);\n\nAs declared, foo doesn't have permission to modify the integer pointed to by its argument. \nIf you're not sure why the code you posted isn't const-correct, consider the following code, only slightly different from HappyDude's code:\nchar *y;\n\nchar **a = &y; // a points to y\nconst char **b = a; // now b also points to y\n\n// const protection has been violated, because:\n\nconst char x = 42; // x must never be modified\n*b = &x; // the type of *b is const char *, so set it \n // with &x which is const char* ..\n // .. so y is set to &x... oops;\n*y = 43; // y == &x... so attempting to modify const \n // variable. oops! undefined behavior!\ncout << x << endl;\n\nNon-const types can only convert to const types in particular ways to prevent any circumvention of const on a data-type without an explicit cast. \nObjects initially declared const are particularly special -- the compiler can assume they never change. However, if b can be assigned the value of a without a cast, then you could inadvertently attempt to modify a const variable. This would not only break the check you asked the compiler to make, to disallow you from changing that variables value -- it would also allow you break the compiler optimizations!\nOn some compilers, this will print 42, on some 43, and others, the program will crash. \nEdit-add:\nHappyDude: Your comment is spot on. Either the C langauge, or the C compiler you're using, treats const char * const * fundamentally differently than the C++ language treats it. Perhaps consider silencing the compiler warning for this source line only.\n",
"This is annoying, but if you're willing to add another level of redirection, you can often do the following to push down into the pointer-to-pointer:\nchar c = 'c';\nchar *p = &c;\nchar **a = &p;\n\nconst char *bi = *a;\nconst char * const * b = &bi;\n\nIt has a slightly different meaning, but it's usually workable, and it doesn't use a cast.\n",
"I'm not able to get an error when implicitly casting char** to const char * const *, at least on MSVC 14 (VS2k5) and g++ 3.3.3. GCC 3.3.3 issues a warning, which I'm not exactly sure if it is correct in doing.\ntest.c:\n#include <stdlib.h> \n#include <stdio.h>\nvoid foo(const char * const * bar)\n{\n printf(\"bar %s null\\n\", bar ? \"is not\" : \"is\");\n}\n\nint main(int argc, char **argv) \n{\n char **x = NULL; \n const char* const*y = x;\n foo(x);\n foo(y);\n return 0; \n}\n\nOutput with compile as C code: cl /TC /W4 /Wp64 test.c\ntest.c(8) : warning C4100: 'argv' : unreferenced formal parameter\ntest.c(8) : warning C4100: 'argc' : unreferenced formal parameter\n\nOutput with compile as C++ code: cl /TP /W4 /Wp64 test.c\ntest.c(8) : warning C4100: 'argv' : unreferenced formal parameter\ntest.c(8) : warning C4100: 'argc' : unreferenced formal parameter\n\nOutput with gcc: gcc -Wall test.c\ntest2.c: In function `main':\ntest2.c:11: warning: initialization from incompatible pointer type\ntest2.c:12: warning: passing arg 1 of `foo' from incompatible pointer type\n\nOutput with g++: g++ -Wall test.C\nno output\n",
"I'm pretty sure that the const keyword does not imply the data can't be changed/is constant, only that the data will be treated as read-only. Consider this:\nconst volatile int *const serial_port = SERIAL_PORT;\n\nwhich is valid code. How can volatile and const co-exist? Simple. volatile tells the compiler to always read the memory when using the data and const tells the compiler to create an error when an attempt is made to write to the memory using the serial_port pointer.\nDoes const help the compiler's optimiser? No. Not at all. Because constness can be added to and removed from data through casting, the compiler cannot figure out if const data really is constant (since the cast could be done in a different translation unit). In C++ you also have the mutable keyword to complicate matters further.\nchar *const p = (char *) 0xb000;\n//error: p = (char *) 0xc000;\nchar **q = (char **)&p;\n*q = (char *)0xc000; // p is now 0xc000\n\nWhat happens when an attempt is made to write to memory that really is read only (ROM, for example) probably isn't defined in the standard at all.\n"
] |
[
60,
11,
10,
1,
0,
0
] |
[] |
[] |
[
"c",
"const_correctness",
"constants",
"pointers"
] |
stackoverflow_0000078125_c_const_correctness_constants_pointers.txt
|
Q:
Firing UI control events from a Unit Test
As a beginner to TDD I am trying to write a test that assumes a property has had its value changed on a PropertyGrid (C#, WinForms, .NET 3.5).
Changing a property on an object in a property grid does not fire the event (fair enough, as it's a UI raised event, so I can see why changing the owned object may be invisible to it).
I also had the same issue with getting an AfterSelect on a TreeView to fire when changing the SelectedNode property.
I could have a function that my unit test can call that simulates the code a UI event would fire, but that would be cluttering up my code, and unless I make it public, I would have to write all my tests in the same project, or even class, of the objects I am testing (again, I see this as clutter). This seems ugly to me, and would suffer from maintainability problems.
Is there a convention to do this sort of UI based unit-testing
A:
To unit test your code you will need to mock up an object of the UI interface element. There are many tools you can use to do this, and I can't recommend one over another. There's a good comparison between MoQ and Rhino Mocks here at Phil Haack's blog that I've found useful and might be useful to you.
Another thing to consider if you're using TDD is that creating an interface to your views will assist in the TDD process. There is a design model for this (probably more than one, but this is one I use) called Model View Presenter (now split into Passive View and Supervisor Controller). Following one of these will make your code behind far more testable in the future.
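As a minimal, purely illustrative Passive View sketch for the PropertyGrid case (every name here is invented):
// Hypothetical names throughout -- a minimal Passive View sketch.
public interface IFilterView
{
    event EventHandler FilterChanged;
    object SelectedFilter { get; }
}

public class FilterPresenter
{
    private readonly IFilterView _view;

    public object CurrentFilter { get; private set; }

    public FilterPresenter(IFilterView view)
    {
        _view = view;
        // React to the view's event; a test can raise this directly
        // from a fake IFilterView, with no PropertyGrid involved.
        _view.FilterChanged += delegate { CurrentFilter = _view.SelectedFilter; };
    }
}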
Also, bear in mind that testing the UI itself cannot be done through unit testing. A test automation tool as already suggested in another answer will be appropriate for this, but not for unit testing your code.
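For illustration, here is a minimal sketch of the Passive View flavour of that pattern; every type name here is hypothetical, not something from the original code. The presenter owns the logic, the form only implements the view interface, and a test can drive the presenter with a hand-rolled fake view instead of a real form:
public interface ISelectionView
{
    string SelectedNodeName { get; set; }
}

public class SelectionPresenter
{
    private readonly ISelectionView _view;

    public SelectionPresenter(ISelectionView view)
    {
        _view = view;
    }

    // The logic that would otherwise sit in an AfterSelect handler.
    public void NodeSelected(string nodeName)
    {
        _view.SelectedNodeName = nodeName.Trim();
    }
}

// In a unit test, a trivial fake stands in for the WinForms view:
public class FakeSelectionView : ISelectionView
{
    public string SelectedNodeName { get; set; }
}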
A:
Microsoft has UI Automation built into the .Net Framework. You may be able to use this to simulate a user utilising your software in the normal way.
There is an MSDN article, "Using UI Automation for Automated Testing", which is a good starting point.
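As a rough sketch of what that looks like (the window and button names are placeholders, and you need references to the UIAutomationClient and UIAutomationTypes assemblies):
using System.Windows.Automation;

// Find a top-level window by its title, then a button inside it by name.
AutomationElement mainWindow = AutomationElement.RootElement.FindFirst(
    TreeScope.Children,
    new PropertyCondition(AutomationElement.NameProperty, "My App"));

AutomationElement okButton = mainWindow.FindFirst(
    TreeScope.Descendants,
    new PropertyCondition(AutomationElement.NameProperty, "OK"));

// InvokePattern raises the button's Click as if a user had pressed it.
InvokePattern invoke =
    (InvokePattern)okButton.GetCurrentPattern(InvokePattern.Pattern);
invoke.Invoke();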
A:
One option I would recommend for its simplicity is to have your UI just call a helper class or method on the firing of the event and unit test that (see the sketch at the end of this answer). Make sure it (your event handler in the UI) has as little logic as possible, and from there I'm sure you'll know what to do.
It can be pretty difficult to reach 100% coverage in your unit tests. By difficult I mean, of course, inefficient. Even once you get good at something like that it will, in my opinion, probably add more complexity to your code base than your unit test would merit. If you're not sure how to get your logic segmented into a separate class or method, that's another question I would love to help with.
I'll be interested to see what other techniques people have to work with this kind of issue.
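For illustration, a sketch of that separation using the TreeView example from the question (the member names are made up):
// The UI handler only forwards to a plain method; no logic lives here.
private void treeView1_AfterSelect(object sender, TreeViewEventArgs e)
{
    HandleNodeSelection(e.Node.Text);
}

// All the real work is in an ordinary method the unit test can call directly.
public string LastSelection { get; private set; }

public void HandleNodeSelection(string nodeText)
{
    LastSelection = nodeText;
}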
|
Firing UI control events from a Unit Test
|
As a beginner to TDD I am trying to write a test that assumes a property has had its value changed on a PropertyGrid (C#, WinForms, .NET 3.5).
Changing a property on an object in a property grid does not fire the event (fair enough, as it's a UI raised event, so I can see why changing the owned object may be invisible to it).
I also had the same issue with getting an AfterSelect on a TreeView to fire when changing the SelectedNode property.
I could have a function that my unit test can call that simulates the code a UI event would fire, but that would be cluttering up my code, and unless I make it public, I would have to write all my tests in the same project, or even class, of the objects I am testing (again, I see this as clutter). This seems ugly to me, and would suffer from maintainability problems.
Is there a convention for doing this sort of UI-based unit testing?
|
[
"To unit test your code you will need to mock up an object of the UI interface element. There are many tools you can use to do this, and I can't recommend one over another. There's a good comparison between MoQ and Rhino Mocks here at Phil Haack's blog that I've found useful and might be useful to you.\nAnothing thing to consider if you're using TDD is creating an interface to your views will assist in the TDD process. There is a design model for this (probably more than one, but this is one I use) called Model View Presenter (now split into Passive View and Supervisor Controller). Following one of these will make your code behind far more testable in the future.\nAlso, bear in mind that testing the UI itself cannot be done through unit testing. A test automation tool as already suggested in another answer will be appropriate for this, but not for unit testing your code.\n",
"Microsoft has UI Automation built into the .Net Framework. You may be able to use this to simulate a user utilising your software in the normal way.\nThere is an MSDN article \"Using UI Automation for Automated Testing which is a good starting point.\n",
"One option I would recommend for its simplicty is to have your UI just call a helper class or method on the firing of the event and unit test that. Make sure it (your event handler in the UI) has as little logic as possible and then from there I'm sure you'll know what to do.\nIt can be pretty difficult to reach 100% coverage in your unit tests. By difficult I mean of course inefficient. Even once you get good at something like that it will, in my opinion, probably add more complexity to your code base than your unit test would merit. If you're not sure how to get your logic segmented into a separate class or method, that's another question I would love to help with.\nI'll be interested to see what other techniques people have to work with this kind of issue.\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"c#",
"unit_testing",
"winforms"
] |
stackoverflow_0000098196_c#_unit_testing_winforms.txt
|
Q:
Is it possible to integrate SSRS reports with webforms
Is it possible to integrate SSRS reports into webforms? An example will be enough to keep me moving.
A:
Absolutely it is.
What you are looking for is the ReportViewer control, located in the Microsoft.Reporting.WebForms assembly. It will allow you to place a control right on your web form that will give people an interface for setting report parameters and getting the report.
Alternatively you can set all the parameters yourself and output the report in whatever format you need. We use it in our application to output PDF.
For instance - this is how we setup a reportviewer object for one of our reports and get the PDF, and then send it back to the user. The particular code block is a web handler.
public void ProcessRequest(HttpContext context)
{
string report = null;
int managerId = -1;
int planId = -1;
GetParametersFromSession(context.Session, out report, out managerId, out planId);
if (report == null || managerId == -1 || planId == -1)
{
return;
}
CultureInfo currentCulture = Thread.CurrentThread.CurrentCulture;
List<ReportParameter> parameters = new List<ReportParameter>();
parameters.Add(new ReportParameter("Prefix", report));
parameters.Add(new ReportParameter("ManagerId", managerId.ToString()));
parameters.Add(new ReportParameter("ActionPlanId", planId.ToString()));
string language = Thread.CurrentThread.CurrentCulture.Name;
language = String.Format("{0}_{1}", language.Substring(0, 2), language.Substring(3, 2).ToLower());
parameters.Add(new ReportParameter("Lang", language));
ReportViewer rv = new ReportViewer();
rv.ProcessingMode = ProcessingMode.Remote;
rv.ServerReport.ReportServerUrl = new Uri(ConfigurationManager.AppSettings["ReportServer"]);
if (ConfigurationManager.AppSettings["DbYear"] == "2007")
{
rv.ServerReport.ReportPath = "/ActionPlanning/Plan";
}
else
{
rv.ServerReport.ReportPath = String.Format("/ActionPlanning{0}/Plan", ConfigurationManager.AppSettings["DbYear"]);
}
rv.ServerReport.SetParameters(parameters);
string mimeType = null;
string encoding = null;
string extension = null;
string[] streamIds = null;
Warning[] warnings = null;
byte[] output = rv.ServerReport.Render("pdf", null, out mimeType, out encoding, out extension, out streamIds, out warnings);
context.Response.ContentType = mimeType;
context.Response.BinaryWrite(output);
}
A:
This is a knowledge base article which describes how to render report output to an aspx page in a particular file format.
http://support.microsoft.com/kb/875447/en-us
A:
Be warned that you will lose some functionality such as the parameter selection stuff when you do not use the URL Access method.
Report server URL access supports HTML Viewer and the extended functionality of the report toolbar. The SOAP API does not support this type of rendered report. You need to design and develop your own report toolbar, if you render reports using SOAP.
http://msdn.microsoft.com/en-us/library/ms155089.aspx
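For comparison, URL Access is just a specially formed request to the report server, so keeping the toolbar can be as simple as building a URL and redirecting to it (or pointing an IFRAME at it). A rough sketch; the server, report path, and parameter name are placeholders:
// rs:Command=Render asks the server to render the report;
// rs:Format picks the output format (omit it to get the HTML Viewer).
int managerId = 42; // placeholder value
string reportServer = "http://myserver/reportserver";
string reportPath = "/ActionPlanning/Plan";
string url = string.Format(
    "{0}?{1}&rs:Command=Render&rs:Format=PDF&ManagerId={2}",
    reportServer, reportPath, managerId);

// e.g. context.Response.Redirect(url);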
|
Is it possible to integrate SSRS reports with webforms
|
Is it possible to integrate SSRS reports into webforms? An example will be enough to keep me moving.
|
[
"Absolutely it is.\nWhat you are looking for is the ReportViewer control, located in the Microsoft.Reporting.WebForms assembly. It will allow you to place a control right on your web form that will give people an interface for setting report parameters and getting the report.\nAlternatively you can set all the parameters yourself and output the report in whatever format you need. We use it in our application to output PDF.\nFor instance - this is how we setup a reportviewer object for one of our reports and get the PDF, and then send it back to the user. The particular code block is a web handler.\npublic void ProcessRequest(HttpContext context)\n{\n string report = null;\n int managerId = -1;\n int planId = -1;\n GetParametersFromSession(context.Session, out report, out managerId, out planId);\n if (report == null || managerId == -1 || planId == -1)\n {\n return;\n }\n\n CultureInfo currentCulture = Thread.CurrentThread.CurrentCulture;\n\n List<ReportParameter> parameters = new List<ReportParameter>();\n parameters.Add(new ReportParameter(\"Prefix\", report));\n parameters.Add(new ReportParameter(\"ManagerId\", managerId.ToString()));\n parameters.Add(new ReportParameter(\"ActionPlanId\", planId.ToString()));\n string language = Thread.CurrentThread.CurrentCulture.Name;\n language = String.Format(\"{0}_{1}\", language.Substring(0, 2), language.Substring(3, 2).ToLower());\n parameters.Add(new ReportParameter(\"Lang\", language));\n\n ReportViewer rv = new ReportViewer();\n rv.ProcessingMode = ProcessingMode.Remote;\n rv.ServerReport.ReportServerUrl = new Uri(ConfigurationManager.AppSettings[\"ReportServer\"]);\n if (ConfigurationManager.AppSettings[\"DbYear\"] == \"2007\")\n {\n rv.ServerReport.ReportPath = \"/ActionPlanning/Plan\";\n }\n else\n {\n rv.ServerReport.ReportPath = String.Format(\"/ActionPlanning{0}/Plan\", ConfigurationManager.AppSettings[\"DbYear\"]);\n }\n rv.ServerReport.SetParameters(parameters);\n\n string mimeType = null;\n string encoding = null;\n string extension = null;\n string[] streamIds = null;\n Warning[] warnings = null;\n byte[] output = rv.ServerReport.Render(\"pdf\", null, out mimeType, out encoding, out extension, out streamIds, out warnings);\n\n context.Response.ContentType = mimeType;\n context.Response.BinaryWrite(output);\n}\n\n",
"this is a knowledge base article which describes how to render report output to an aspx page in a particular file format.\nhttp://support.microsoft.com/kb/875447/en-us\n",
"Be warned that you will lose some functionality such as the parameter selection stuff when you do not use the URL Access method.\n\nReport server URL access supports HTML Viewer and the extended functionality of the report toolbar. The SOAP API does not support this type of rendered report. You need to design and develop your own report toolbar, if you render reports using SOAP.\n\nhttp://msdn.microsoft.com/en-us/library/ms155089.aspx\n"
] |
[
7,
0,
0
] |
[] |
[] |
[
"reporting"
] |
stackoverflow_0000098274_reporting.txt
|
Q:
Why is this X.509 certificate considered invalid?
I have a given certificate installed on my server. That certificate has valid dates, and seems perfectly valid in the Windows certificates MMC snap-in.
However, when I try to read the certificate, in order to use it in an HttpRequest, I can't find it. Here is the code used:
X509Store store = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
X509Certificate2Collection col = store.Certificates.Find(X509FindType.FindBySerialNumber, "xxx", true);
xxx is the serial number; the argument true means "only valid certificates". The returned collection is empty.
The strange thing is that if I pass false, indicating invalid certificates are acceptable, the collection contains one element—the certificate with the specified serial number.
In conclusion: the certificate appears valid, but the Find method treats it as invalid! Why?
A:
Try verifying the certificate chain using the X509Chain class. This can tell you exactly why the certificate isn't considered valid.
As erickson suggested, your X509Store may not have the trusted certificate from the CA in the chain. If you used OpenSSL or another tool to generate your own self-signed CA, you need to add the public certificate for that CA to the X509Store.
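A minimal sketch of that check (the serial number is a placeholder, and it uses System.Security.Cryptography.X509Certificates): Build returns false for an untrusted certificate, and ChainStatus says exactly why.
X509Store store = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
X509Certificate2 cert = store.Certificates
    .Find(X509FindType.FindBySerialNumber, "xxx", false)[0];

X509Chain chain = new X509Chain();
if (!chain.Build(cert))
{
    foreach (X509ChainStatus status in chain.ChainStatus)
    {
        // e.g. UntrustedRoot, NotTimeValid, RevocationStatusUnknown, ...
        Console.WriteLine("{0}: {1}", status.Status, status.StatusInformation);
    }
}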
A:
Is the issuer's certificate present in the X509Store? A certificate is only valid if it's signed by someone you trust.
Is this a certificate from a real CA, or one that you signed yourself? Certificate signing tools often used by developers, like OpenSSL, don't add some important extensions by default.
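If the chain turns out to be missing your CA, importing the CA's public certificate into the trusted root store is one way to fix it. A sketch, assuming the CA certificate has been exported to a file (the path is illustrative):
X509Certificate2 caCert = new X509Certificate2(@"C:\certs\my-ca.cer");
X509Store rootStore = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
rootStore.Open(OpenFlags.ReadWrite);
rootStore.Add(caCert); // requires sufficient rights on the machine store
rootStore.Close();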
A:
I believe x509 certs are tied to a particular user. Could it be invalid because in the code you are accessing it as a different user than the one for which it was created?
|
Why is this X.509 certificate considered invalid?
|
I have a given certificate installed on my server. That certificate has valid dates, and seems perfectly valid in the Windows certificates MMC snap-in.
However, when I try to read the certificate, in order to use it in an HttpRequest, I can't find it. Here is the code used:
X509Store store = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
X509Certificate2Collection col = store.Certificates.Find(X509FindType.FindBySerialNumber, "xxx", true);
xxx is the serial number; the argument true means "only valid certificates". The returned collection is empty.
The strange thing is that if I pass false, indicating invalid certificates are acceptable, the collection contains one element—the certificate with the specified serial number.
In conclusion: the certificate appears valid, but the Find method treats it as invalid! Why?
|
[
"Try verifying the certificate chain using the X509Chain class. This can tell you exactly why the certificate isn't considered valid.\nAs erickson suggested, your X509Store may not have the trusted certificate from the CA in the chain. If you used OpenSSL or another tool to generate your own self-signed CA, you need to add the public certificate for that CA to the X509Store.\n",
"Is the issuer's certificate present in the X509Store? A certificate is only valid if it's signed by someone you trust.\nIs this a certificate from a real CA, or one that you signed yourself? Certificate signing tools often used by developers, like OpenSSL, don't add some important extensions by default.\n",
"I believe x509 certs are tied to a particular user. Could it be invalid because in the code you are accessing it as a different user than the one for which it was created?\n"
] |
[
8,
6,
3
] |
[] |
[] |
[
"c#",
"certificate",
"ssl",
"x509"
] |
stackoverflow_0000098074_c#_certificate_ssl_x509.txt
|
Q:
Same source code on two machines yield different executable behavior
Here's the scenario:
A C# Windows Application project stored in SVN is used to create an executable. Normally, a build server handles the build process and creates builds at regular intervals which are used by testing. In this particular instance I was asked to modify a specific build and create the executable.
I'm not entirely sure if the build server modifies the project files, but I know it creates a tag in SVN of the source code it used to compile the executables. Using that tag I've checked out the code on a second machine, which is a development machine. I then compiled the source on the development machine.
When executed, the application that was compiled on the development machine does not function exactly like the one compiled by the build server. For example, on the testing machines a DateTime Parse exception is detected by the application. However, the build machine's executable does not throw any exceptions. If I run the executable on the development machine no exceptions are thrown.
So in summary, both machines are theoretically using the same source code and projects.
The development machine's executable only works on the dev machine. The Build machine's executable works on every machine, including the dev machine.
Are the machine's Regional Settings or Time Zone stored in the compiled executable? Any idea what might cause this behaviour or how to check the executables to find the possible differences and correct them?
Unfortunately, I cannot take a testing machine and attach a debugger to it. As soon as I can I will.
A:
The app uses the Regional Settings of the machine it's running on, and it looks like it is your problem. You can force a thread to use a specific culture by setting System.Threading.Thread.CurrentThread.CurrentCulture and System.Threading.Thread.CurrentThread.CurrentUICulture to a specific value.
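A small sketch of both the failure mode and the fixes (uses System.Globalization and System.Threading; the date string and culture names are only examples):
string text = "13/09/2008";

// Throws on an en-US machine (there is no 13th month), but parses fine
// wherever dd/MM/yyyy is the convention:
// DateTime d = DateTime.Parse(text);

// Safer: parse against an explicit culture...
DateTime d1 = DateTime.Parse(text, CultureInfo.GetCultureInfo("en-GB"));

// ...or pin the whole thread to one culture before any parsing happens.
Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo("en-GB");
Thread.CurrentThread.CurrentUICulture = CultureInfo.GetCultureInfo("en-GB");
DateTime d2 = DateTime.Parse(text);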
A:
It's possible that the two machines have different versions of an underlying dll that isn't part of your build process. I've seen this happen when distributing services across our internal server farm.
A:
Can you run the program on the build machine under a debugger?
If so, then debug the problem - there's no need to guess.
Have the debugger on the dev machine catch the exception, set a break point at the same place on the build machine. See what's different between the two.
A:
I've seen different "Regional and Language Options" on XP cause this sort of behavior. Do these match on both machines? Start | Settings | Control Panel | Regional and Language Options...
A:
I have a couple questions - do both machines have identical regional settings and where are your error logs? I would hope ;-) you have exceptions being handled and written to disk, event logs .. something to help with problems like this.
Where does the date come from that is being parsed? If it is in your db maybe you have bad data too.
A:
I had a similar problem once (except in C++). When I compared the sizes of the compiled executables, they were way off. Unfortunately, after days of searching, the best solution I found was to uninstall VS05 and re-install it.
A:
Why are you using a build server anyway for C# code, if I may ask?
The build times for C# when I was using it were hardly noticeable (<2s). Is the app really that big?
A:
The build system probably makes a release version, while the manual build on the dev PC makes a debug version. The debug version has more error checking in it. See if you can manually build a release version and see if there are still differences.
A:
The same source code rarely, if ever, builds the same program on different computers. You should always assume the programs are different; never expect them to be the same. In an environment like Linux with a good package manager and periodic or random updates, never expect the same source code to build the same program on the same computer either. The higher the language, the worse it gets. Building a program for the debugger is drastically different from building for release. The debugger version, even without the debugger, hides bugs that you won't find until you go to the release build. You basically get to debug the program twice if you rely too much on a debugger environment.
|
Same source code on two machines yield different executable behavior
|
Here's the scenario:
A C# Windows Application project stored in SVN is used to create an executable. Normally, a build server handles the build process and creates builds at regular intervals which are used by testing. In this particular instance I was asked to modify a specific build and create the executable.
I'm not entirely sure if the build server modifies the project files, but I know it creates a tag in SVN of the source code it used to compile the executables. Using that tag I've checked out the code on a second machine, which is a development machine. I then compiled the source on the development machine.
When executed, the application that was compiled on the development machine does not function exactly like the one compiled by the build server. For example, on the testing machines a DateTime Parse exception is detected by the application. However, the build machine's executable does not throw any exceptions. If I run the executable on the development machine no exceptions are thrown.
So in summary, both machines are theoretically using the same source code and projects.
The development machine's executable only works on the dev machine. The Build machine's executable works on every machine, including the dev machine.
Are the machine's Regional Settings or Time Zone stored in the compiled executable? Any idea what might cause this behaviour or how to check the executables to find the possible differences and correct them?
Unfortunately, I cannot take a testing machine and attach a debugger to it. As soon as I can I will.
|
[
"The app uses the Regional Settings of the machine it's running on, and it looks like it is your problem. You can force a thread to use a specific culture by setting System.Threading.Thread.CurrentThread.CurrentCulture and System.Threading.Thread.CurrentThread.CurrentUICulture to a specific value.\n",
"It's possible that the two machines have different versions of an underlying dll that isn't part of your build process. I've seen this happen when distributing services across our internal server farm.\n",
"Can you run the program on the build machine under a debugger?\nIf so, then debug the problem - there's no need to guess.\nHave the debugger on the dev machine catch the exception, set a break point at the same place on the build machine. See what's different between the two.\n",
"I've seen different \"Regional and Language Options\" on XP cause this sort of behavior. Do these match on both machines? Start | Settings | Control Panel | Regional and Language Options...\n",
"I have a couple questions - do both machines have identical regional settings and where are your error logs? I would hope ;-) you have exceptions being handled and written to disk, event logs .. something to help with problems like this.\nWhere does the date come from that is being parsed? If it is in your db maybe you have bad data too.\n",
"I had a similar problem once (except in C++) When I compared the sizes of the compiled executables, they were way off. Unfortunately, after days of searching, the best solution I found was to uninstall VS05 and re-install it.\n",
"Why are you using a build server anyways, for C# code, if I may ask?\nThe build times for C# when I was using it were hardly noticable (<2s). Is the app really that big?\n",
"The build system probably makes a release version, while the manual build on the dev PC makes a debug version. The debug version has more error checking in it. See if you can manually build a release version and see if there are still differences.\n",
"The same source code rarely if every builds the same program on different computers. You should always assume the programs are different, never expect them to be the same. In an environment like linux with a good package manager and periodic and or random updates, never expect the same source code to build the same program on the same computer either. The higher the language the worse it gets. Building a program for the debugger is drastically different than building for release. The debugger version even without the debugger hides bugs that you wont find until you go to the release build. You basically get to debug the program twice if you rely too much on a debugger environment.\n"
] |
[
4,
2,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"compiler_construction",
"datetime"
] |
stackoverflow_0000086959_.net_compiler_construction_datetime.txt
|
Q:
emulate unix 'cut' using standard windows command line/batch commands
Is there a way to emulate the unix cut command on windows XP, without resorting to cygwin or other non-standard windows capabilities?
Example: Use tasklist /v, find the specific task by the window title, then extract the PID from that list to pass to taskkill.
A:
FYI, tasklist and taskkill already have filtering capabilities:
tasklist /FI "imagename eq chrome.exe"
taskkill /F /FI "imagename eq iexplore.exe"
If you want more general functionality, batch scripts (ugh) can help. For example:
for /f "tokens=1,2 delims= " %%i in ('tasklist /v') do (
if "%%i" == "%~1" (
echo TASKKILL /PID %%j
)
)
There's a fair amount of help for the Windows command line. Type "help" to get a list of commands with a simple summary, then type "help <command>" for more information about that command (e.g. "help for").
|
emulate unix 'cut' using standard windows command line/batch commands
|
Is there a way to emulate the unix cut command on windows XP, without resorting to cygwin or other non-standard windows capabilities?
Example: Use tasklist /v, find the specific task by the window title, then extract the PID from that list to pass to taskkill.
|
[
"FYI, tasklist and taskkill already have filtering capabilities:\ntasklist /FI \"imagename eq chrome.exe\"\ntaskkill /F /FI \"imagename eq iexplore.exe\"\n\nIf you want more general functionality, batch scripts (ugh) can help. For example:\nfor /f \"tokens=1,2 delims= \" %%i in ('tasklist /v') do (\n if \"%%i\" == \"%~1\" (\n echo TASKKILL /PID %%j\n )\n)\n\nThere's a fair amount of help for the windows command-line. Type \"help\" to get a list of commands with a simple summary then type \"help \" for more information about that command (e.g. \"help for\").\n"
] |
[
10
] |
[] |
[] |
[
"command_line",
"unix",
"windows"
] |
stackoverflow_0000098363_command_line_unix_windows.txt
|
Q:
Why is there no main() function in vxWorks?
When using vxWorks as a development platform, we can't write our application with the standard main() function. Why can't we have a main function?
A:
Before the 6.0 version VxWorks only
supported kernel execution environment for tasks and did not support
processes, which is the traditional application execution environment
on OS like Unix or Windows. Tasks have an entry point which is the
address of the code to execute as a task. This address corresponds to
a C or assembly function. It can be a symbol named "main" but there
are C/C++ language assumptions about the main() function that are not
supported in the kernel environment (in particular the traditional
handling of the argc and argv parameters). Furthermore, prior to
VxWorks 6.0, all tasks execute kernel code. You can picture the kernel
as a common repository of code all linked together and then you'll see
that you cannot have several symbols of the same name ("main") since
this would create name collisions.
Now this is accurate only if you link your application code to the
kernel image. If you were to download your application code then the
module loader will accept to load several modules each with a main()
routine. However the last "main" symbol registered in the system
symbol table is the only one you can access via the target shell. If
you want to start tasks executing the code of one of the first loaded
modules you'd have to use the addresses of the previous main()
function. This is possible but not convenient. It is far more
practical to give different names to the entry points of tasks (maybe
like "xxxStart" where "xxx" is a name meaningful for what the task is
supposed to do).
Starting with VxWorks 6.0 the OS supports a process environment. This
means, among many other things, that you can have a traditional main()
routine and that its argc and argv parameters are properly handled,
and that the application code is executing in a context (user context)
which is different from the kernel context, thus ensuring the
isolation between application code (which can be flaky) and kernel
code (which is not supposed to be flaky).
PAD
|
Why is there no main() function in vxWorks?
|
When using vxWorks as a development platform, we can't write our application with the standard main() function. Why can't we have a main function?
|
[
"Before the 6.0 version VxWorks only\nsupported kernel execution environment for tasks and did not support\nprocesses, which is the traditional application execution environment\non OS like Unix or Windows. Tasks have an entry point which is the\naddress of the code to execute as a task. This address corresponds to\na C or assembly function. It can be a symbol named \"main\" but there\nare C/C++ language assumptions about the main() function that are not\nsupported in the kernel environment (in particular the traditional\nhandling of the argc and argv parameters). Furthermore, prior to\nVxWorks 6.0, all tasks execute kernel code. You can picture the kernel\nas a common repository of code all linked together and then you'll see\nthat you cannot have several symbols of the same name (\"main\") since\nthis would create name collisions.\nNow this is accurate only if you link your application code to the\nkernel image. If you were to download your application code then the\nmodule loader will accept to load several modules each with a main()\nroutine. However the last \"main\" symbol registered in the system\nsymbol table is the only one you can access via the target shell. If\nyou want to start tasks executing the code of one of the first loaded\nmodules you'd have to use the addresses of the previous main()\nfunction. This is possible but not convenient. It is far more\npractical to give different names to the entry points of tasks (may be\nlike \"xxxStart\" where \"xxx\" is a name meaningful for what the task is\nsupposed to do).\nStarting with VxWorks 6.0 the OS supports a process environment. This\nmeans, among many other things, that you can have a traditional main()\nroutine and that its argc and argv parameters are properly handled,\nand that the application code is executing in a context (user context)\nwhich is different from the kernel context, thus ensuring the\nisolation between application code (which can be flaky) and kernel\ncode (which is not supposed to be flaky). \nPAD\n"
] |
[
14
] |
[] |
[] |
[
"vxworks"
] |
stackoverflow_0000098465_vxworks.txt
|
Q:
Get list of records with multiple entries on the same date
I need to return a list of record id's from a table that may/may not have multiple entries with that record id on the same date. The same date criteria is key - if a record has three entries on 09/10/2008, then I need all three returned. If the record only has one entry on 09/12/2008, then I don't need it.
A:
SELECT id, datefield, count(*) FROM tablename GROUP BY datefield
HAVING count(*) > 1
A:
Since you mentioned needing all three records, I am assuming you want the data as well. If you just need the id's, you can just use the group by query. To return the data, just join to that as a subquery
select * from table
inner join (
select id, date
from table
group by id, date
having count(*) > 1) grouped
on table.id = grouped.id and table.date = grouped.date
A:
The top post (Leigh Caldwell) will not return duplicate records and needs to be down-modded. It will identify the duplicate keys. Furthermore, it will not work if your database doesn't allow the GROUP BY to omit selected fields (many do not).
If your date field includes a time stamp then you'll need to truncate that out using one of the methods documented above ( I prefer: dateadd(dd,0, datediff(dd,0,@DateTime)) ).
I think Scott Nichols gave the correct answer and here's a script to prove it:
declare @duplicates table (
id int,
datestamp datetime,
ipsum varchar(200))
insert into @duplicates (id,datestamp,ipsum) values (1,'9/12/2008','ipsum primis in faucibus')
insert into @duplicates (id,datestamp,ipsum) values (1,'9/12/2008','Vivamus consectetuer. ')
insert into @duplicates (id,datestamp,ipsum) values (2,'9/12/2008','condimentum posuere, quam.')
insert into @duplicates (id,datestamp,ipsum) values (2,'9/13/2008','Donec eu sapien vel dui')
insert into @duplicates (id,datestamp,ipsum) values (3,'9/12/2008','In velit nulla, faucibus sed')
select a.* from @duplicates a
inner join (select id,datestamp, count(1) as number
from @duplicates
group by id,datestamp
having count(1) > 1) b
on (a.id = b.id and a.datestamp = b.datestamp)
A:
SELECT RecordID
FROM aTable
WHERE SameDate IN
(SELECT SameDate
FROM aTable
GROUP BY SameDate
HAVING COUNT(SameDate) > 1)
A:
GROUP BY with HAVING is your friend:
select id, count(*) from records group by date having count(*) > 1
A:
select id from tbl where date in
(select date from tbl group by date having count(*)>1)
A:
For matching on just the date part of a Datetime:
select * from Table
where id in (
select alias1.id from Table alias1, Table alias2
where alias1.id != alias2.id
and datediff(day, alias1.date, alias2.date) = 0
)
I think. This is based on my assumption that you need them on the same day month and year, but not the same time of day, so I did not use a Group by clause. From the other posts it looks like I could have more cleverly used a Having clause. Can you use a having or group by on a datediff expression?
A:
If I understand your question correctly you could do something similar to:
select
  a.recordID
from
  tblwithrecords as a
  left join (
   select
    recordID,
    count(*) as recordcount
   from
    tblwithrecords
   where
    recorddate='9/10/08'
   group by
    recordID
  ) as b on a.recordID=b.recordID
where
  b.recordcount>1
A:
http://www.sql-server-performance.com/articles/dba/delete_duplicates_p1.aspx will get you going. Also, http://en.allexperts.com/q/MS-SQL-1450/2008/8/SQL-query-fetch-duplicate.htm
I found these by searching Google for 'sql duplicate data'. You'll see this isn't an unusual problem.
A:
SELECT * FROM the_table WHERE ROW(record_id,date) IN
( SELECT record_id, date FROM the_table
  GROUP BY record_id, date HAVING COUNT(*) > 1 )
A:
I'm not sure I understood your question, but maybe you want something like this:
SELECT id, COUNT(*) AS same_date FROM foo GROUP BY id, date HAVING same_date = 3;
This is just written from my mind and not tested in any way. Read the GROUP BY and HAVING section here. If this is not what you meant, please ignore this answer.
A:
Note that there's some extra processing necessary if you're using a SQL DateTime field. If you've got that extra time data in there, then you can't just use that column as-is. You've got to normalize the DateTime to a single value for all records contained within the day.
In SQL Server here's a little trick to do that:
SELECT CAST(FLOOR(CAST(CURRENT_TIMESTAMP AS float)) AS DATETIME)
You cast the DateTime into a float, which represents the Date as the integer portion and the Time as the fraction of a day that's passed. Chop off that decimal portion, then cast that back to a DateTime, and you've got midnight at the beginning of that day.
A:
SELECT id, count(*) as cnt
INTO #tmp
FROM tablename
WHERE date = @date
GROUP BY id
HAVING count(*) > 1
SELECT *
FROM tablename t
WHERE EXISTS (SELECT 1 FROM #tmp WHERE id = t.id)
DROP TABLE #tmp
A:
Without knowing the exact structure of your tables or what type of database you're using it's hard to answer. However if you're using MS SQL and if you have a true date/time field that has different times that the records were entered on the same date then something like this should work:
select record_id,
       convert(varchar, date_created, 101) as log_date,
count(distinct date_created) as num_of_entries
from record_log_table
group by convert(varchar, date_created, 101), record_id
having count(distinct date_created) > 1
Hope this helps.
A:
TrickyNixon writes;
The top post (Leigh Caldwell) will not return duplicate records and needs to be down modded.
Yet the question doesn't ask about duplicate records. It asks about duplicate record-ids on the same date...
GROUP-BY,HAVING seems good to me. I've used it in production before.
.
Something to watch out for:
SELECT ... FROM ... GROUP BY ... HAVING count(*)>1
Will, on most database systems, run in O(NlogN) time. It's a good solution. (Select is O(N), sort is O(NlogN), group by is O(N), having is O(N) -- worst case. Best case, date is indexed and the sort operation is more efficient.)
.
Select ... from ..., .... where a.date = b.date
Granted only idiots do a Cartesian join. But you're looking at O(N^2) time. For some databases, this also creates a "temporary" table. It's all insignificant when your table has only 10 rows. But it's gonna hurt when that table grows!
Ob link: http://en.wikipedia.org/wiki/Join_(SQL)
|
Get list of records with multiple entries on the same date
|
I need to return a list of record id's from a table that may/may not have multiple entries with that record id on the same date. The same date criteria is key - if a record has three entries on 09/10/2008, then I need all three returned. If the record only has one entry on 09/12/2008, then I don't need it.
|
[
"SELECT id, datefield, count(*) FROM tablename GROUP BY datefield\n HAVING count(*) > 1\n\n",
"Since you mentioned needing all three records, I am assuming you want the data as well. If you just need the id's, you can just use the group by query. To return the data, just join to that as a subquery\nselect * from table\ninner join (\n select id, date\n from table \n group by id, date \n having count(*) > 1) grouped \n on table.id = grouped.id and table.date = grouped.date\n\n",
"The top post (Leigh Caldwell) will not return duplicate records and needs to be down modded. It will identify the duplicate keys. Furthermore, it will not work if your database doesn't allows the group by to not include all select fields (many do not).\nIf your date field includes a time stamp then you'll need to truncate that out using one of the methods documented above ( I prefer: dateadd(dd,0, datediff(dd,0,@DateTime)) ).\nI think Scott Nichols gave the correct answer and here's a script to prove it:\ndeclare @duplicates table (\nid int,\ndatestamp datetime,\nipsum varchar(200))\n\ninsert into @duplicates (id,datestamp,ipsum) values (1,'9/12/2008','ipsum primis in faucibus')\ninsert into @duplicates (id,datestamp,ipsum) values (1,'9/12/2008','Vivamus consectetuer. ')\ninsert into @duplicates (id,datestamp,ipsum) values (2,'9/12/2008','condimentum posuere, quam.')\ninsert into @duplicates (id,datestamp,ipsum) values (2,'9/13/2008','Donec eu sapien vel dui')\ninsert into @duplicates (id,datestamp,ipsum) values (3,'9/12/2008','In velit nulla, faucibus sed')\n\nselect a.* from @duplicates a\ninner join (select id,datestamp, count(1) as number\n from @duplicates\n group by id,datestamp\n having count(1) > 1) b\n on (a.id = b.id and a.datestamp = b.datestamp)\n\n",
"SELECT RecordID\nFROM aTable\nWHERE SameDate IN\n (SELECT SameDate\n FROM aTable\n GROUP BY SameDate\n HAVING COUNT(SameDate) > 1)\n\n",
"GROUP BY with HAVING is your friend:\nselect id, count(*) from records group by date having count(*) > 1\n\n",
"select id from tbl where date in\n(select date from tbl group by date having count(*)>1)\n\n",
"For matching on just the date part of a Datetime:\nselect * from Table\nwhere id in (\n select alias1.id from Table alias1, Table alias2\n where alias1.id != alias2.id\n and datediff(day, alias1.date, alias2.date) = 0\n)\n\nI think. This is based on my assumption that you need them on the same day month and year, but not the same time of day, so I did not use a Group by clause. From the other posts it looks like I could have more cleverly used a Having clause. Can you use a having or group by on a datediff expression?\n",
"If I understand your question correctly you could do something similar to:\nselect\n recordID\nfrom\n tablewithrecords as a\n left join (\n select\n count(recordID) as recordcount\n from\n tblwithrecords\n where\n recorddate='9/10/08'\n ) as b on a.recordID=b.recordID\nwhere\n b.recordcount>1\n\n",
"http://www.sql-server-performance.com/articles/dba/delete_duplicates_p1.aspx will get you going. Also, http://en.allexperts.com/q/MS-SQL-1450/2008/8/SQL-query-fetch-duplicate.htm\nI found these by searching Google for 'sql duplicate data'. You'll see this isn't an unusual problem.\n",
"SELECT * FROM the_table WHERE ROW(record_id,date) IN \n ( SELECT record_id, date FROM the_table \n GROUP BY record_id, date WHERE COUNT(*) > 1 )\n\n",
"I'm not sure I understood your question, but maybe you want something like this:\nSELECT id, COUNT(*) AS same_date FROM foo GROUP BY id, date HAVING same_date = 3;\n\nThis is just written from my mind and not tested in any way. Read the GROUP BY and HAVING section here. If this is not what you meant, please ignore this answer.\n",
"Note that there's some extra processing necessary if you're using a SQL DateTime field. If you've got that extra time data in there, then you can't just use that column as-is. You've got to normalize the DateTime to a single value for all records contained within the day. \nIn SQL Server here's a little trick to do that:\nSELECT CAST(FLOOR(CAST(CURRENT_TIMESTAMP AS float)) AS DATETIME)\n\nYou cast the DateTime into a float, which represents the Date as the integer portion and the Time as the fraction of a day that's passed. Chop off that decimal portion, then cast that back to a DateTime, and you've got midnight at the beginning of that day.\n",
"\nSELECT id, count(*)\nINTO #tmp\nFROM tablename\nWHERE date = @date\nGROUP BY id\nHAVING count(*) > 1\n\nSELECT *\nFROM tablename t\nWHERE EXISTS (SELECT 1 FROM #tmp WHERE id = t.id)\n\nDROP TABLE tablename\n\n",
"Without knowing the exact structure of your tables or what type of database you're using it's hard to answer. However if you're using MS SQL and if you have a true date/time field that has different times that the records were entered on the same date then something like this should work:\nselect record_id, \n convert(varchar, date_created, 101) as log date, \n count(distinct date_created) as num_of_entries\nfrom record_log_table\ngroup by convert(varchar, date_created, 101), record_id\nhaving count(distinct date_created) > 1\n\nHope this helps.\n",
"TrickyNixon writes;\n\nThe top post (Leigh Caldwell) will not return duplicate records and needs to be down modded.\n\nYet the question doesn't ask about duplicate records. It asks about duplicate record-ids on the same date...\nGROUP-BY,HAVING seems good to me. I've used it in production before.\n.\nSomething to watch out for:\nSELECT ... FROM ... GROUP BY ... HAVING count(*)>1\nWill, on most database systems, run in O(NlogN) time. It's a good solution. (Select is O(N), sort is O(NlogN), group by is O(N), having is O(N) -- Worse case. Best case, date is indexed and the sort operation is more efficient.)\n.\nSelect ... from ..., .... where a.data = b.date\nGranted only idiots do a Cartesian join. But you're looking at O(N^2) time. For some databases, this also creates a \"temporary\" table. It's all insignificant when your table has only 10 rows. But it's gonna hurt when that table grows!\nOb link: http://en.wikipedia.org/wiki/Join_(SQL)\n"
] |
[
5,
2,
2,
2,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1
] |
[] |
[] |
[
"sql"
] |
stackoverflow_0000076724_sql.txt
|
Q:
OpenGL: texture and plain color respond differently to ambient light?
This is a rather old problem I've had with an OpenGL application.
I have a rather complex model: some polygons in it are untextured and colored using a plain color with glColor(), and others are textured. Some of the texture is the same color as the untextured polygons, and there should be no visible seam between the two.
The problem is that when I turn up the ambient component of the light source, a seam between the two kinds of polygons emerges.
See this image: http://www.shiny.co.il/shooshx/colorBug2.png
The left image is without any ambient light and the right image is with ambient light of (0.2,0.2,0.2).
The RGB value of the color on the texture is identical to the RGB value of the colored faces. The texture's alpha is set to 1.0 everywhere.
To shade the texture I use GL_MODULATE.
Can anyone think of a reason why that would happen and of a possible solution?
A:
You mention that you set the color with glColor(), so I assume that GL_COLOR_MATERIAL is on? What setting do you use for glColorMaterial()? In this case it should be GL_AMBIENT_AND_DIFFUSE, so that the glColor() call affects the ambient color as well as the diffuse color. (This is the default.)
You could also try to set all material colours to white (with glMaterial()) before rendering the texture mapped faces. With some settings (don't remember which), the texture itself gets modulated by the current color.
Hope this helps or at least points you into a useful direction.
|
OpenGL: texture and plain color respond differently to ambient light?
|
This is a rather old problem I've had with an OpenGL application.
I have a rather complex model: some polygons in it are untextured and colored using a plain color with glColor(), and others are textured. Some of the texture is the same color as the untextured polygons, and there should be no visible seam between the two.
The problem is that when I turn up the ambient component of the light source, a seam between the two kinds of polygons emerges.
See this image: http://www.shiny.co.il/shooshx/colorBug2.png
The left image is without any ambient light and the right image is with ambient light of (0.2,0.2,0.2).
The RGB value of the color on the texture is identical to the RGB value of the colored faces. The texture's alpha is set to 1.0 everywhere.
To shade the texture I use GL_MODULATE.
Can anyone think of a reason why that would happen and of a possible solution?
|
[
"You mention that you set the color with glColor(), so I assume that GL_COLOR_MATERIAL is on? What setting do you use for glColorMaterial()? In this case it should be GL_AMBIENT_AND_DIFFUSE, so that the glColor() call affects the ambient color as well as the diffuse color. (This is the default.)\nYou could also try to set all material colours to white (with glMaterial()) before rendering the texture mapped faces. With some settings (don't remember which), the texture itself gets modulated by the current color.\nHope this helps or at least points you into a useful direction.\n"
] |
[
2
] |
[] |
[] |
[
"opengl",
"textures"
] |
stackoverflow_0000098451_opengl_textures.txt
|
Q:
(N)Hibernate - is it possible to dynamically map multiple tables to the one class
I have the situation where I use GIS software which stores the information about GIS objects in a separate database table for each type/class of GIS object (road, river, building, sea, ...) and keeps a metadata table in which it stores info about the class name and its DB table.
Those GIS objects of different classes share some parameters, i.e. Description and ID. I'd like to represent all of these different GIS classes with one common C# class (let's call it GisObject), which is enough for what I need to do from the non-GIS part of the application, which lists GIS objects of the given GIS class.
The problem for me is how to map those objects using NHibernate: how to tell NHibernate, when creating a C# GisObject, to receive and use the table name as a parameter which will be read from the meta table (it can be in two steps; I can manually fetch the table name in the first step and then pass it down to NHibernate when pulling GisObject data).
Has anybody dealt with this kind of situation, and can it be done at all?
A:
It sounds like the simplest thing to do here may be to create an abstract base class with all of the common GIS members and then to inherit the other X classes that will have nothing more than the necessary NHibernate mappings. I would then use the Factory pattern to create the object of the specific type using your metadata.
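A rough sketch of that shape, with every name hypothetical since the real class list would come from your metadata:
public abstract class GisObject
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public class Road : GisObject { }
public class River : GisObject { }

public static class GisObjectFactory
{
    // className would be read from the metadata table.
    public static GisObject Create(string className)
    {
        switch (className)
        {
            case "road": return new Road();
            case "river": return new River();
            default: throw new ArgumentException("Unknown GIS class: " + className);
        }
    }
}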
A:
@Brian Chiasson
Unfortunately, it's not an option to create all classes of GIS data because classes are created dynamically in the application. Every GIS data type should be a class, but my user has the possibility to get a new set of data and put it in the database. I can't know up front which classes my user will have in the application. Therefore, the up-front per-class mapping model doesn't work, because tomorrow there will be another new database table, and a need to create a new class with a new mapping.
@all
There might be a possibility to write my own custom query in the XML config file of my GisObject class, then in the data access class fetching that query using the
string qs = getSession().getNamedQuery(queryName);
and use the string replace to inject database name (by replacing some placeholder string) which i will pass as a parameter.
qs = qs.replace(":tablename:", tableName);
How do you feel about that solution? I know it might be a security risk in an uncontrolled environment where the table name would be fetched as the user input, but in this case, i have a meta table containing right and valid table names for the GIS data classes which i will read before calling the query for fetching data for the specific class of GIS objects.
A:
One way you could do it is to declare an interface, say IGisObject, that has the common properties declared on it. Then implement a concrete class which maps to each table. That way they'll all still be of type IGisObject.
A:
You can have a look at what Ayende is saying here: MultiTable Entities.
But since you have separate tables, I don't think it will work.
You can also check out the nhuser group.
A:
I guess I'd ask why you are going after the GIS data directly in the database and not using the API that is typically provided as an abstraction for you. If this is an ESRI system, there are tools that allow you to create static database views into their GIS objects, and then maybe from that point it might be appropriate for data extraction.
A:
From the NHibernate documentation, you could use one of the inheritance mappings.
You might also have a separate class for each table, but have them all implement some common interface.
|
(N)Hibernate - is it possible to dynamically map multiple tables to the one class
|
I have the situation where I use GIS software which stores the information about GIS objects in a separate database table for each type/class of GIS object (road, river, building, sea, ...) and keeps a metadata table in which it stores info about the class name and its DB table.
Those GIS objects of different classes share some parameters, i.e. Description and ID. I'd like to represent all of these different GIS classes with one common C# class (let's call it GisObject), which is enough for what I need to do from the non-GIS part of the application, which lists GIS objects of the given GIS class.
The problem for me is how to map those objects using NHibernate: how to tell NHibernate, when creating a C# GisObject, to receive and use the table name as a parameter which will be read from the meta table (it can be in two steps; I can manually fetch the table name in the first step and then pass it down to NHibernate when pulling GisObject data).
Has anybody dealt with this kind of situation, and can it be done at all?
|
[
"It sounds like the simplest thing to do here may be to create an abstract base class with all of the common GIS members and then to inherit the other X classes that will have nothing more than the necessary NHibernate mappings. I would then use the Factory pattern to create the object of the specific type using your metadata.\n",
"@Brian Chiasson\nUnfortunately, it's not an option to create all classes of GIS data because classes are created dynamically in the application. Every GIS data of the same type should be a class, but my user has the possibility to get new set of data and put it in the database. I can't know in front which classes my user will have in the application. Therefore, the in-front per-class mapping model doesn't work because tomorrow there will be another new database table, and a need to create new class with new mapping.\n@all\nThere might be a possibility to write my own custom query in the XML config file of my GisObject class, then in the data access class fetching that query using the \nstring qs = getSession().getNamedQuery(queryName);\n\nand use the string replace to inject database name (by replacing some placeholder string) which i will pass as a parameter. \nqs = qs.replace(\":tablename:\", tableName);\n\nHow do you feel about that solution? I know it might be a security risk in an uncontrolled environment where the table name would be fetched as the user input, but in this case, i have a meta table containing right and valid table names for the GIS data classes which i will read before calling the query for fetching data for the specific class of GIS objects.\n",
"one way you could do it is to declare an interface say IGisObject that has the common properties declared on the interface. Then implement a concrete class which maps to each table. That way they'll still be all of type IGisObject.\n",
"You can have a look at what Ayende is saying here : MultiTable Entities.\nBut since you have separate tables , i don't think it will work. \nYou can also check out nhuser group\n",
"I guess I'd ask the question to why you are going after the GIS data directly in the database and not using what API that is typically provided as an abstraction for you. If this is an ESRI system there are tools that allow you to create static database views into their GIS objects and then maybe from that point it might be appropriate for data extract.\n",
"From the NHibernate documentation, you could use one of the inheritance mappings.\nYou might also have a separate class for each table, but have them all implement some common interface\n"
] |
[
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"c#",
"gis",
"hibernate",
"nhibernate",
"orm"
] |
stackoverflow_0000051654_c#_gis_hibernate_nhibernate_orm.txt
|
Q:
Mocking method results
I'm trying to find a way to fake the result of a method called from within another method.
I have a "LoadData" method which calls a separate helper to get some data and then it will transform it (I'm interested in testing the transformed result).
So I have code like this:
public class MyClass {
public void LoadData(){
SomeProperty = Helper.GetSomeData();
}
public object SomeProperty {get;set;}
}
I want to have a known result from the Helper.GetSomeData() method. Can I use a mocking framework (I've got fairly limited experience with Rhino Mocks but am open to anything) to force an expected result? If so, how?
*Edit - yeah as expected I couldn't achieve the hack I wanted, I'll have to work out a better way to set up the data.
A:
You have a problem there. I don't know if that's a simplified scenario of your code, but if the Helper class is used that way, then your code is not testable. First, the Helper class is used directly, so you can't replace it with a mock. Second, you're calling a static method. I don't know about C#, but in Java you can't override static methods.
You'll have to do some refactoring to be able to inject a mock object with a dummy GetSomeData() method.
With this simplified version of your code it is difficult to give you a straight answer. You have some options:
Create an interface for the Helper class and provide a way for the client to inject the Helper implementation into the MyClass class. But if Helper really is just a utility class, it doesn't make much sense.
Create a protected method in MyClass called getSomeData and make it only call Helper.LoadSomeData. Then replace the call to Helper.LoadSomeData in LoadData with a call to getSomeData. Now you can mock the getSomeData method to return the dummy value (see the sketch at the end of this answer).
Beware of simply creating an interface to the Helper class and injecting it via a method. This can expose implementation details. Why should a client provide an implementation of a utility class to call a simple operation? This will increase the complexity of MyClass's clients.
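For illustration, here is the subclass-and-override version of option 2 in C#; this is a sketch, and the virtual keyword stands in for what a partial-mocking framework would do for you:
public class MyClass
{
    public object SomeProperty { get; set; }

    public void LoadData()
    {
        SomeProperty = GetSomeData();
    }

    // The only line that touches the static helper, isolated and overridable.
    protected virtual object GetSomeData()
    {
        return Helper.GetSomeData();
    }
}

// Test-only subclass that feeds back a known value instead of real data.
public class TestableMyClass : MyClass
{
    protected override object GetSomeData()
    {
        return "dummy value";
    }
}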
A:
I would recommend converting what you have into something like this:
public class MyClass
{
private IHelper _helper;
public MyClass()
{
//Default constructor normal code would use.
this._helper = new Helper();
}
public MyClass(IHelper helper)
{
if(helper == null)
{
            throw new ArgumentNullException("helper"); //guard clause for the injected dependency
}
this._helper = helper;
}
public void LoadData()
{
SomeProperty = this._helper.GetSomeData();
}
public object SomeProperty {get;set;}
}
Now your class supports what is known as dependency injection. This allows you to inject the implementation of the helper class, and it ensures that your class need only depend on the interface. When you mock this, you just create a mock that implements the IHelper interface and pass it in to the constructor, and your class will use that as though it were the real Helper class.
Now if you're stuck using the Helper class as a static class then I would suggest that you use a proxy/adapter pattern and wrap the static class with another class that supports the IHelper interface (that you will also need to create).
If at some point you want to take this a step further, you could completely remove the default Helper implementation from the revised class and use IoC (Inversion of Control) containers. If this is new to you though, I would recommend focusing first on the fundamentals of why all of this extra hassle is worthwhile (it is IMHO).
Your unit tests will look something like this pseudo-code:
public Amazing_Mocking_Test()
{
//Mock object setup
MockObject mockery = new MockObject();
IHelper myMock = (IHelper)mockery.createMockObject<IHelper>();
mockery.On(myMock).Expect("GetSomeData").WithNoArguments().Return(Anything);
//The actual test
MyClass testClass = new MyClass(myMock);
testClass.LoadData();
//Ensure the mock had all of it's expectations met.
mockery.VerifyExpectations();
}
Feel free to comment if you have any questions. (By the way I have no clue if this code all works I just typed it in my browser, I'm mainly illustrating the concepts).
A:
As far as I know, you should create an interface or a base abstract class for the Helper object. With Rhino Mocks you can then return the value you want.
Alternatively, you can add an overload for LoadData that accepts as a parameter the data that you normally retrieve from the Helper object. This might even be easier.
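One possible shape for that overload (a sketch reusing the question's names):
public void LoadData()
{
    // Production path still uses the helper.
    LoadData(Helper.GetSomeData());
}

public void LoadData(object someData)
{
    // Tests can call this overload directly with known data.
    SomeProperty = someData;
}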
A:
You might want to look into Typemock Isolator, which can "fake" method calls without forcing you to refactor your code.
I am a dev in that company, but the solution is viable if you choose not to change your design (or are forced not to change it for testability).
it's at www.Typemock.com
Roy
blog: ISerializable.com
A:
Yes, a mocking framework is exactly what you're looking for. You can record/arrange what you want certain mocked-out/stubbed classes to return.
Rhino Mocks, Typemock, and Moq are all good options for doing this.
Steven Walther's post on using Rhino Mocks helped me a lot when I first started playing with Rhino Mocks.
A:
I would try something like this:
public class MyClass {
    public void LoadData(IHelper helper) {
        SomeProperty = helper.GetSomeData();
    }
}
This way you can mock the helper class using, for example, Moq.
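A minimal Moq usage sketch for that version (the IHelper interface is assumed):
using Moq;

var helperMock = new Mock<IHelper>();
helperMock.Setup(h => h.GetSomeData()).Returns("known result");

var myClass = new MyClass();
myClass.LoadData(helperMock.Object);
// myClass.SomeProperty now holds "known result"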
|
Mocking method results
|
I'm trying to find a way to fake the result of a method called from within another method.
I have a "LoadData" method which calls a separate helper to get some data and then it will transform it (I'm interested in testing the transformed result).
So I have code like this:
public class MyClass {
public void LoadData(){
SomeProperty = Helper.GetSomeData();
}
public object SomeProperty {get;set;}
}
I want to have a known result from the Helper.GetSomeData() method. Can I use a mocking framework (I've got fairly limited experience with Rhino Mocks but am open to anything) to force an expected result? If so, how?
*Edit - yeah as expected I couldn't achieve the hack I wanted, I'll have to work out a better way to set up the data.
|
[
"You have a problem there. I don't know if thats a simplified scenario of your code, but if the Helper class is used that way, then your code is not testable. First, the Helper class is used directly, so you can't replace it with a mock. Second, you're calling a static method. I don't know about C#, but in Java you can't override static methods.\nYou'll have to do some refactoring to be able to inject a mock object with a dummy GetSomeData() method.\nIn this simplified version of your code is difficult to give you a straight answer. You have some options:\n\nCreate an interface for the Helper class and provide a way for the client to inject the Helper implementation to the MyClass class. But if Helper is just really a utility class it doesn't make much sense.\nCreate a protected method in MyClass called getSomeData and make it only call Helper.LoadSomeData. Then replace the call to Helper.LoadSomeData in LoadData with for getSomeData. Now you can mock the getSomeData method to return the dummy value.\n\n\nBeware of simply creating an interface to Helper class and inject it via method. This can expose implementation details. Why a client should provide an implementation of a utility class to call a simple operation? This will increase the complexity of MyClass clients.\n",
"I would recommend converting what you have into something like this:\npublic class MyClass()\n{\n private IHelper _helper;\n\n public MyClass()\n {\n //Default constructor normal code would use.\n this._helper = new Helper();\n }\n\n public MyClass(IHelper helper)\n {\n if(helper == null)\n {\n throw new NullException(); //I forget the exact name but you get my drift ;)\n }\n this._helper = helper;\n }\n\n public void LoadData()\n {\n SomeProperty = this._helper.GetSomeData();\n }\n public object SomeProperty {get;set;}\n}\n\nNow your class supports what is known as dependency injection. This allows you to inject the implementation of the helper class and it ensures that your class need only depend on the interface. When you mock this know you just create a mock that uses the IHelper interface and pass it in to the constructor and your class will use that as though it is the real Helper class.\nNow if you're stuck using the Helper class as a static class then I would suggest that you use a proxy/adapter pattern and wrap the static class with another class that supports the IHelper interface (that you will also need to create).\nIf at some point you want to take this a step further you could completely remove the default Helper implementation from the revised class and use IoC (Inversion of Control) containers. If thiis is new to you though, I would recommend focusing first on the fundamentals of why all of this extra hassle is worth while (it is IMHO).\nYour unit tests will look something like this psuedo-code:\npublic Amazing_Mocking_Test()\n{\n //Mock object setup\n MockObject mockery = new MockObject();\n IHelper myMock = (IHelper)mockery.createMockObject<IHelper>();\n mockery.On(myMock).Expect(\"GetSomeData\").WithNoArguments().Return(Anything);\n\n //The actual test\n MyClass testClass = new MyClass(myMock);\n testClass.LoadData();\n\n //Ensure the mock had all of it's expectations met.\n mockery.VerifyExpectations();\n}\n\nFeel free to comment if you have any questions. (By the way I have no clue if this code all works I just typed it in my browser, I'm mainly illustrating the concepts).\n",
"As far as I know, you should create an interface or a base abstract class for the Helper object. With Rhino Mocks you can then return the value you want.\nAlternatively, you can add an overload for LoadData that accepts as parameters the data that you normally retrieve from the Helper object. This might even be easier.\n",
"You might want to look into Typemock Isolator, which can \"fake\" method calls without forcing you to refactor your code.\nI am a dev in that company, but the solution is viable if you would want to choose not to change your design (or forced not to change it for testability)\nit's at www.Typemock.com\nRoy\nblog: ISerializable.com\n",
"Yes, a mocking framework is exactly what you're looking for. You can record / arrange how you want certain mocked out / stubbed classes to return.\nRhino Mocks, Typemock, and Moq are all good options for doing this.\nSteven Walther's post on using Rhino Mocks helped me a lot when I first started playing with Rhino Mocks.\n",
"I would try something like this:\npublic class MyClass(){\n public void LoadData(IHelper helper){\n SomeProperty = helper.GetSomeData();\n }\n\nThis way you can mock up the helper class using for example MOQ.\n"
] |
[
10,
8,
2,
2,
0,
0
] |
[] |
[] |
[
"c#",
"mocking",
"rhino_mocks"
] |
stackoverflow_0000090657_c#_mocking_rhino_mocks.txt
|
Q:
Is there any way to change the .NET JIT compiler to favor performance over compile time?
I was wondering if there's any way to change the behavior of the .NET JIT compiler, by specifying a preference for more in-depth optimizations. Failing that, it would be nice if it could do some kind of profile-guided optimization, if it doesn't already.
A:
This is set when you compile your assembly. There are two types of optimizations:
IL optimization
JIT Native Code quality.
The default setting is this
/optimize- /debug-
This means unoptimized IL, and optimized native code.
/optimize- /debug(+/full/pdbonly)
This means unoptimized IL, and unoptimized native code (best debug settings).
Finally, to get the fastest performance:
/optimize+ /debug(-/+/full/pdbonly)
This produces optimized IL and optimized native code.
When producing unoptimized IL, the compiler will insert NOP instructions all over the code. This makes code easier to debug by allowing breakpoints to be set on control flow statements such as for, while, if, else, try, catch, etc.
The CLR does a remarkably good job of optimizing code regardless. Once a method is JIT'ed, the pointer on a call or a callvirt instruction is pointed directly to the native code.
Additionally, the CLR will take advantage of any architecture tricks available when JIT'ing your code. This means that an assembly run through the JIT will run faster than an assembly pre-compiled using Ngen (albeit with a slightly slower start-up time), as NGen compiles for all platforms and does not take advantage of any such tricks.
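As a side note, you can verify at runtime whether the JIT optimizer is enabled for an assembly by inspecting its DebuggableAttribute; a small sketch:
using System;
using System.Diagnostics;
using System.Reflection;

class OptCheck
{
    static void Main()
    {
        DebuggableAttribute attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            Assembly.GetExecutingAssembly(), typeof(DebuggableAttribute));
        // With /optimize+ the attribute is absent or reports the optimizer enabled.
        Console.WriteLine(attr == null
            ? "JIT optimizations enabled"
            : "JIT optimizer disabled: " + attr.IsJITOptimizerDisabled);
    }
}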
|
Is there any way to change the .NET JIT compiler to favor performance over compile time?
|
I was wondering if there's any way to change the behavior of the .NET JIT compiler, by specifying a preference for more in-depth optimizations. Failing that, it would be nice if it could do some kind of profile-guided optimization, if it doesn't already.
|
[
"This is set when you compile your assembly. There are two types of optimizations:\n\nIL optimization\nJIT Native Code quality.\n\nThe default setting is this\n /optimize- /debug-\n\nThis means unoptimized IL, and optimized native code.\n /optimize /debug(+/full/pdbonly) \n\nThis means unoptimized IL, and unoptimized native code (best debug settings).\nFinally, to get the fastest performance:\n/optimize+ /debug(-/+/full/pdbonly)\n\nThis produces optimized IL and optimized native code.\nWhen producing unoptimized IL, the compiler will insert NOP instructions all over the code. This makes code easier to debug by allowing breakpoints to be set on control flow instructions such as for, while,if,else, try, catch etc.\nThe CLR does a remarkably good job of optimizing code regardless. Once a method is JIT'ed, the pointer on a call or a callvirt instruction is pointed directly to the native code. \nAdditionally, the CLR will take advantage of any architecture tricks available when JIT'ing your code. This means that an assembly ran through the JIT will run faster than an assembly pre-compiled by using Ngen (albeit with a slightly slower start up time), as NGen will compile for all platforms, and not take advantage of any tricks.\n"
] |
[
20
] |
[] |
[] |
[
".net",
"clr",
"jit",
"ngen",
"performance"
] |
stackoverflow_0000098624_.net_clr_jit_ngen_performance.txt
|
Q:
Generating unique account numbers - recursive call
Hi, I need to generate 9-digit unique account numbers. Here is my pseudocode:
function generateAccNo()
generate an account number between 100,000,000 and 999,999,999
if the account number already exists in the DB
call generateAccNo() /* recursive call */
else
return new account number
end if
end function
The function seems to be working well; however, I am a bit worried about the recursive call.
Will this cause any memory leaks (PHP 5 under apache)?
Is this an acceptable way to tackle this problem?
Thanks for your input.
A:
You realize this could very well cause a stack overflow, right? As the number of customers increases, the probability of not finding an acceptable account number increases.
Also, why can't you just use sequential account numbers, increasing by one each time? With this approach, you'd just have to read the max id currently in the database and increment it.
Sorry to be so blunt, but your solution is a terrible way to tackle the problem. It'll use tons of memory (as the stack possibly grows infinitely) and it will make tons of expensive calls to the database.
You should really consider some other approach:
I strongly recommend just incrementing the customer number every time you create a customer. In fact, if you set up your db properly (with auto increment on the id column), you won't even have to set the id. The id will be set for you whenever you insert a new customer.
A:
I really don't think it comes down to recursion vs. looping, both are prone to problems as the dataset grows and if the random number generation is not correctly implemented. Two ideas come to mind:
1. GUID
If a truly unique id is required with as little effort as possible, consider a GUID; your DB will most likely be able to assign one for you on insert, and if not, you can create one in code. It is guaranteed to be unique, although it is not very user friendly. However, in combination with a sequential AccountRecordId generated by the DB on insert, you would have a solid combination.
2. Composite Key: Random + Sequential
One way to address all the needs, although at the surface it feels a bit kludgy, is to create a composite account number from a sequential db key of 5 digits (or more) and then another 5 digits of randomness. If the random number were duplicated, it would not matter, as the sequential id would guarantee the uniqueness of the entire account number.
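A quick illustration of the composite idea, sketched in C# for concreteness (the sequential part is assumed to come from an auto-increment column and to stay under 5 digits):
using System;

static string MakeAccountNumber(int dbSequence, Random rng)
{
    // The 5-digit sequential part guarantees uniqueness;
    // the 5 random digits make the numbers hard to guess.
    int randomPart = rng.Next(0, 100000);
    return dbSequence.ToString("D5") + randomPart.ToString("D5");
}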
A:
There's no need to use a recursive call here. Run a simple while loop in the function testing against non-existence as the conditional, e.g.
function generateAccNo()
generate an account number between 100,000,000 and 999,999,999
while ( the account number already exists in the DB ) {
generate new account number;
}
return new account number
end function
Randomly generating-and-testing is a sub-optimal approach to generating unique account numbers, though, if this code is for anything other than a toy.
A:
It seems fine, but I think you need some sort of die condition, how many times are you going to let this run before you give up?
I know this seems unlikely with the huge number range, but something could go wrong that just drops you back to the previous call, which will call itself again, ad nauseam.
A:
Generating account numbers sequentially is a security risk - you should find some other algorithm to do it.
A:
Alternately, you can maintain a separate table containing a buffer of generated, known to be unique account numbers. This table should have an auto-incrementing integer id. When you want an account number, simply pull the record with the lowest index in the buffer and remove it from that table. Have some process that runs regularly which replenishes the buffer and makes sure it has capacity >> normal usage. The advantage is that the amount of time experienced by the end user spent creating an account number will be essentially constant.
Also, I should note that beyond the processing overhead and risks of recursion or iteration, the real issue is determinism and the overhead of repeated database queries. I like TheZenker's solution of random + sequential. It is guaranteed to generate a unique id without adding unnecessary overhead.
A:
You do not need to use recursion here. A simple loop would be just as fast and consume less stack space.
A:
You could put it in a while loop:
function generateAccNo()
while (true) {
generate an account number between 100,000,000 and 999,999,999
if the account number already exists in the DB
/* do nothing */
else
return new account number
end if
}
end function
A:
Why not:
lock_db
do
account_num <= generate number
while account_num in db
put row with account_num in db
unlock_db
A:
Why not have the database handle this? In SQL Server, you can just have an identity column that starts at 100000000. Or you could use SQL in whatever db you have. Just get the max id plus 1.
|
Generating unique account numbers - recursive call
|
Hi, I need to generate 9-digit unique account numbers. Here is my pseudocode:
function generateAccNo()
generate an account number between 100,000,000 and 999,999,999
if the account number already exists in the DB
call generateAccNo() /* recursive call */
else
return new account number
end if
end function
The function seems to be working well; however, I am a bit worried about the recursive call.
Will this cause any memory leaks (PHP 5 under apache)?
Is this an acceptable way to tackle this problem?
Thanks for your input.
|
[
"You realize this could very well cause a stack overflow, right? As the number of customesr increases, the probability of not finding a an acceptable account number increases. \nAlso, why can't you just do sequential account numbers and just increase by one every time? With this approach, you'd just have to read the max id currently in the database and just increment it.\nSorry to be so blunt, but your solution is a terrible way to tackle the problem. It'll use tons of memory (as the stack possibly grows infinitely) and it will makes tons of expensive calls to the database. \nYou should really consider some other approach:\nI strongly recommend just incrementing the customer number every time you create a customer. In fact, if you set up your db properly (with auto increment on the id column), you won't even have to set the id. The id will be set for you whenever you insert a new customer.\n",
"I really don't think it comes down to recursion vs. looping, both are prone to problems as the dataset grows and if the random number generation is not correctly implemented. Two ideas come to mind:\n. GUID\nIf a truly unique id is required with as little effort as possible, consider a GUID, your DB will most likely be able to assign on for you on insert, if not create one in code. It is guaranteed to be unique although it is not very user friendly. However, in combination with a sequential AccountRecordId generated by the DB on insert you would have a solid combination\n. Composite Key: Random + Sequential\nOne way to address all the needs, although at the surface it feels a bit kludgy, is to create a composite account number from a sequential db key of 5 digits (or more) and then another 5 digits of randomness. If the random number was duplicated it would not matter as the sequential id would guarantee the uniqueness of the entire account number\n",
"There's no need to use a recursive call here. Run a simple while loop in the function testing against non-existence as the conditional, e.g.\nfunction generateAccNo()\n\n generate an account number between 100,000,000 and 999,999,999\n\n while ( the account number already exists in the DB ) {\n generate new account number;\n }\n return new account number\n\nend function\n\nRandomly generating-and-testing is a sub-optimal approach to generating unique account numbers, though, if this code is for anything other than a toy.\n",
"It seems fine, but I think you need some sort of die condition, how many times are you going to let this run before you give up?\nI know this seems unlikely with the huge number range, but something could go wrong that just drops you back to the previous call, which will call itself again, ad-nauseum.\n",
"Generating account numbers sequentially is a security risk - you should find some other algorithm to do it.\n",
"Alternately, you can maintain a separate table containing a buffer of generated, known to be unique account numbers. This table should have an auto-incrementing integer id. When you want an account number, simply pull the record with the lowest index in the buffer and remove it from that table. Have some process that runs regularly which replenishes the buffer and makes sure it has capacity >> normal usage. The advantage is that the amount of time experienced by the end user spent creating an account number will be essentially constant.\nAlso, I should note that the processing overhead or risks of recursion or iteration, the real issue is determinism and the overhead of repeating database queries. I like TheZenker's solution of random + sequential. Guaranteed to generate a unique id without adding unnecessary overhead.\n",
"You do not need to use recursion here. A simple loop would be just as fast and consume less stack space.\n",
"You could put it in a while loop:\nfunction generateAccNo()\n\n while (true) { \n\n generate an account number between 100,000,000 and 999,999,999\n\n if the account number already exists in the DB \n /* do nothing */\n else\n return new accout number\n end if\n }\n\nend function\n\n",
"Why not:\nlock_db\ndo\n account_num <= generate number\nwhile account_num in db\n\nput row with account_num in db\n\nunlock_db\n\n",
"Why not have the database handle this? IN SQL Server, you can just have an identity column that starts at 100000000. Or you could use sql in whatever db that you have. Just get the max id plus 1.\n"
] |
[
8,
3,
2,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"records",
"unique"
] |
stackoverflow_0000098497_records_unique.txt
|
Q:
Preallocating file space in C#?
I am creating a downloading application and I wish to preallocate room on the hard drive for the files before they are actually downloaded, as they could potentially be rather large, and no one likes to see "This drive is full, please delete some files and try again." So, in that light, I wrote this.
// Quick, and very dirty
System.IO.File.WriteAllBytes(filename, new byte[f.Length]);
It works, at least until you download a file that is several hundred MBs, or potentially even GBs, and you throw Windows into a thrashing frenzy, if not totally wipe out the pagefile and kill your system's memory altogether. Oops.
So, with a little more enlightenment, I set out with the following algorithm.
using (FileStream outFile = System.IO.File.Create(filename))
{
// 4194304 = 4MB; loops from 1 block in so that we leave the loop one
// block short
byte[] buff = new byte[4194304];
for (int i = buff.Length; i < f.Length; i += buff.Length)
{
outFile.Write(buff, 0, buff.Length);
}
outFile.Write(buff, 0, f.Length % buff.Length);
}
This works, well even, and doesn't suffer the crippling memory problem of the last solution. It's still slow, though, especially on older hardware, since it writes out (potentially GBs' worth of) data to the disk.
The question is this: Is there a better way of accomplishing the same thing? Is there a way of telling Windows to create a file of x size and simply allocate the space on the filesystem rather than actually write out a tonne of data? I don't care about initialising the data in the file at all (the protocol I'm using - bittorrent - provides hashes for the files it sends, hence the worst case for random uninitialised data is that I get a lucky coincidence and part of the file is correct).
A:
FileStream.SetLength is the one you want. The syntax:
public override void SetLength(
long value
)
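In the question's terms, usage would look something like this (a sketch reusing the question's filename and f.Length):
using (FileStream outFile = System.IO.File.Create(filename))
{
    // Extends the file to the full download size without writing the bytes.
    outFile.SetLength(f.Length);
}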
A:
If you have to create the file, I think that you can probably do something like this:
using (FileStream outFile = System.IO.File.Create(filename))
{
outFile.Seek(<length_to_write>-1, SeekOrigin.Begin);
outFile.WriteByte(0);
}
Where length_to_write would be the size in bytes of the file to write. I'm not sure that I have the C# syntax correct (not on a computer to test), but I've done similar things in C++ in the past and it's worked.
A:
Unfortunately, you can't really do this just by seeking to the end. That will set the file length to something huge, but may not actually allocate disk blocks for storage. So when you go to write the file, it will still fail.
|
Preallocating file space in C#?
|
I am creating a downloading application and I wish to preallocate room on the hard drive for the files before they are actually downloaded, as they could potentially be rather large, and no one likes to see "This drive is full, please delete some files and try again." So, in that light, I wrote this.
// Quick, and very dirty
System.IO.File.WriteAllBytes(filename, new byte[f.Length]);
It works, at least until you download a file that is several hundred MBs, or potentially even GBs, and you throw Windows into a thrashing frenzy, if not totally wipe out the pagefile and kill your system's memory altogether. Oops.
So, with a little more enlightenment, I set out with the following algorithm.
using (FileStream outFile = System.IO.File.Create(filename))
{
// 4194304 = 4MB; loops from 1 block in so that we leave the loop one
// block short
byte[] buff = new byte[4194304];
for (int i = buff.Length; i < f.Length; i += buff.Length)
{
outFile.Write(buff, 0, buff.Length);
}
outFile.Write(buff, 0, f.Length % buff.Length);
}
This works, well even, and doesn't suffer the crippling memory problem of the last solution. It's still slow, though, especially on older hardware, since it writes out (potentially GBs' worth of) data to the disk.
The question is this: Is there a better way of accomplishing the same thing? Is there a way of telling Windows to create a file of x size and simply allocate the space on the filesystem rather than actually write out a tonne of data? I don't care about initialising the data in the file at all (the protocol I'm using - bittorrent - provides hashes for the files it sends, hence the worst case for random uninitialised data is that I get a lucky coincidence and part of the file is correct).
|
[
"FileStream.SetLength is the one you want. The syntax:\npublic override void SetLength(\n long value\n)\n\n",
"If you have to create the file, I think that you can probably do something like this:\nusing (FileStream outFile = System.IO.File.Create(filename))\n{\n outFile.Seek(<length_to_write>-1, SeekOrigin.Begin);\n OutFile.WriteByte(0);\n}\n\nWhere length_to_write would be the size in bytes of the file to write. I'm not sure that I have the C# syntax correct (not on a computer to test), but I've done similar things in C++ in the past and it's worked.\n",
"Unfortunately, you can't really do this just by seeking to the end. That will set the file length to something huge, but may not actually allocate disk blocks for storage. So when you go to write the file, it will still fail.\n"
] |
[
30,
6,
2
] |
[] |
[] |
[
"c#",
"file",
"file_io"
] |
stackoverflow_0000098774_c#_file_file_io.txt
|
Q:
FinalBuilder Enumeration of files and folders
What's the best way to enumerate a set of files and folders using FinalBuilder?
The context of my question is, I want to compare a source folder with a destination folder, and replace any matching files in the destination folder that are older than the source folder.
Any suggestions?
A:
ok, for future reference, it turns out that under the category "Iterators" there are two very helpful actions.
File/Fileset Iterator
Folder Iterator
Further digging revealed the Robocopy Mirror action, which does exactly what I was looking for, namely syncing the destination folder with the source folder. No need to write my own file iteration routines.
|
FinalBuilder Enumeration of files and folders
|
What's the best way to enumerate a set of files and folders using FinalBuilder?
The context of my question is, I want to compare a source folder with a destination folder, and replace any matching files in the destination folder that are older than the source folder.
Any suggestions?
|
[
"ok, for future reference, it turns out that under the catgeory \"Iterators\" there are two very helpful actions.\n\nFile/Fileset Iterator\nFolder Iterator\n\nFurther digging revealed the Robocopy Mirror action, which does exactly what I was looking for, namely syncing the destination folder with the source folder. No need to write my own file iteration routines.\n"
] |
[
2
] |
[] |
[] |
[
"finalbuilder"
] |
stackoverflow_0000098722_finalbuilder.txt
|
Q:
Something wrong in the datetime format
When I compile the application on my laptop it runs fine, but when I run the same application on the server something is wrong with the date format. When I checked the system date/time format, my laptop and the server have the same format. Can anyone tell me what is wrong?
A:
You also have to check the default culture set in the laptop and the server. Are they running in the same culture? Different culture settings will have different default time formats.
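A quick way to see the effect (a sketch): format the same date under two cultures and compare the output.
using System;
using System.Globalization;

DateTime d = new DateTime(2008, 9, 18);
Console.WriteLine(d.ToString("d", new CultureInfo("en-US"))); // 9/18/2008
Console.WriteLine(d.ToString("d", new CultureInfo("de-DE"))); // 18.09.2008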
A:
Is your application running on the server under a different user account? Regional settings are per-user so date and time formats will depend on the user that the application is running under. You can log in as the application user and check the Regional settings to determine if that's the issue.
This is really common in ASP.NET where the developer builds the app on their workstation and upon deployment to QA or production they find that Regional settings differ because the App Pool is using a service identity.
A:
Is the system clock on the server running UTC, with the OS converting it to local time? This is a pretty common configuration, but it will break programs that bypass the OS's functions to retrieve the date/time.
|
Something wrong in the datetime format
|
When I compile the application on my laptop it runs fine, but when I run the same application on the server something is wrong with the date format. When I checked the system date/time format, my laptop and the server have the same format. Can anyone tell me what is wrong?
|
[
"You also have to check the default culture set in the laptop and the server. Are they running in the same culture? Different culture settings will have different default time formats.\n",
"Is your application running on the server under a different user account? Regional settings are per-user so date and time formats will depend on the user that the application is running under. You can log in as the application user and check the Regional settings to determine if that's the issue.\nThis is really common in ASP.NET where the developer builds the app on their workstation and upon deployment to QA or production they find that Regional settings differ because the App Pool is using a service identity.\n",
"Is the system clock running UTC, and the OS changing it to local time on the server. This is a pretty common configuration, but will break programs that bypass the OS's functions to retrieve the date/time.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"c#"
] |
stackoverflow_0000098805_c#.txt
|
Q:
Max value of int in ChucK
What is the maximum value of an int in ChucK? Is there a symbolic constant for it?
A:
New in the latest version!
<<<Math.INT_MAX>>>;
For reference though, it uses the "long" keyword in C++ to represent integers.
So on 32-bit computers the max should be 0x7FFFFFFF, or 2147483647.
On 64-bit computers it will be 0x7FFFFFFFFFFFFFFF, or 9223372036854775807.
Answer from Kassen and Stephen Sinclair on the chuck-users mailing list.
A:
The ChucK API reference uses the C int type, so the maximum value would depend on your local machine (2^31-1, around two billion on standard 32-bit x86). I don't see any references to retrieving limits, but if ChucK is extensible using C you could add a function that returns MAXINT.
|
Max value of int in ChucK
|
What is the maximum value of an int in ChucK? Is there a symbolic constant for it?
|
[
"New in the latest version!\n<<<Math.INT_MAX>>>;\n\nFor reference though, it uses the \"long\" keyword in C++ to represent integers.\nSo on 32-bit computers the max should be 0x7FFFFFFF, or 2147483647.\nOn 64-bit computers it will be 0x7FFFFFFFFFFFFFFFFF, or 9223372036854775807.\nAnswer from Kassen and Stephen Sinclair on the chuck-users mailing list.\n",
"The ChucK API reference uses the C int type, so the maximum value would depend on your local machine (2^31-1, around two billion on standard 32-bit x86). I don't see any references to retrieving limits, but if ChucK is extensible using C you could add a function that returns MAXINT.\n"
] |
[
6,
1
] |
[] |
[] |
[
"chuck"
] |
stackoverflow_0000098479_chuck.txt
|
Q:
Are there any free java/flash image editor plug-in/applets that i can include in my site?
I am currently re-developing an e-commerce back end system and I was wondering if anyone here had come across a free ajax script/plug-in/flash file which could allow our users to upload an image and offer some basic editing tools, i.e. crop, rotate, resize?
We currently have a PHP based image resizing tool using the GD library that simply reduces the image down to the correct dimensions but I was hoping we could add in some Flickr style functionality to spruce up their images a bit before placing them on the site.
I'm not too fussed what language it is in or how it is implemented. The important things are that it is free (or v. cheap) and that it is ok for commercial use.
A:
AJAX Image Editor seems like one of the more popular Free choices. It offers crop, resize, some effects, etc.
|
Are there any free java/flash image editor plug-in/applets that i can include in my site?
|
I am currently re-developing an e-commerce back end system and I was wondering if anyone here had come across a free ajax script/plug-in/flash file which could allow our users to upload an image and offer some basic editing tools, i.e. crop, rotate, resize?
We currently have a PHP based image resizing tool using the GD library that simply reduces the image down to the correct dimensions but I was hoping we could add in some Flickr style functionality to spruce up their images a bit before placing them on the site.
I'm not too fussed what language it is in or how it is implemented. The important things are that it is free (or v. cheap) and that it is ok for commercial use.
|
[
"AJAX Image Editor seems like one of the more popular Free choices. It offers crop, resize, some effects, etc.\n"
] |
[
1
] |
[] |
[] |
[
"ajax",
"image_editor",
"image_manipulation",
"web_based"
] |
stackoverflow_0000097484_ajax_image_editor_image_manipulation_web_based.txt
|
Q:
Managing the patch level of multiple windows systems
In an environment with multiple Windows servers, what is the best way to ensure patch compliance across all systems?
Is there a simple tool (some sort of client/server app?) that allows reports to be generated showing the status of all the systems, so any that aren't automatically patching themselves can be fixed without having to manually check each system every time an audit is needed?
A:
WSUS is good for Windows, it's for large distributed enterprises.
A:
Microsoft Windows Server Update Services
Microsoft Windows Server Update
Services (WSUS) enables information
technology administrators to deploy
the latest Microsoft product updates
to computers running the Windows
operating system. By using WSUS,
administrators can fully manage the
distribution of updates that are
released through Microsoft Update to
computers in their network.
A WSUS server will download all of the patches you specify, for the products you choose. You then configure your clients to get their updates from the local WSUS server instead of directly from Microsoft. You can group your client machines and approve/disapprove the patches that you want each group to install. The WSUS server will give you lots of reports as to which clients need what patches, etc.
It's pretty easy to setup, if you follow the Microsoft white paper.
|
Managing the patch level of multiple windows systems
|
In an environment with multiple Windows servers, what is the best way to ensure patch compliance across all systems?
Is there a simple tool (some sort of client/server app?) that allows reports to be generated showing the status of all the systems, so any that aren't automatically patching themselves can be fixed without having to manually check each system every time an audit is needed?
|
[
"WSUS is good for Windows, it's for large distributed enterprises.\n",
"Microsoft Windows Server Update Services\n\nMicrosoft Windows Server Update\n Services (WSUS) enables information\n technology administrators to deploy\n the latest Microsoft product updates\n to computers running the Windows\n operating system. By using WSUS,\n administrators can fully manage the\n distribution of updates that are\n released through Microsoft Update to\n computers in their network.\n\nA WSUS server will download all of the patches you specify, for the products you choose. You then configure your clients to get their updates from the local WSUS server instead of directly from Microsoft. You can group your client machines and approve/disapprove the patches that you want each group to install. The WSUS server will give you lots of reports as to which clients need what patches, etc.\nIt's pretty easy to setup, if you follow the Microsoft white paper.\n"
] |
[
1,
1
] |
[] |
[] |
[
"patch",
"sysadmin",
"system_administration",
"windows"
] |
stackoverflow_0000098930_patch_sysadmin_system_administration_windows.txt
|
Q:
Where can I find a QR (quick response) Code component/API for Windows Mobile?
I am looking for a 3rd party solution to integrate a QR code reader in Windows Mobile Applications (.NET Compact Framework). The component should integrate Reader (camera) and Decoder (algorithm).
I tried out the QuickMark reader, which can be called outside the application and communicates using Windows Messages. It works quite well, but doesn't give me every option I need (e.g. it has to be installed etc.).
Are there other good solutions which I may have missed? Anything Open Source? Tested on different devices?
A:
Here is an open source C# port of the Java QR Code library.
A:
Did you try this one: QRCode .NET Compact Framework Package ?
|
Where can I find a QR (quick response) Code component/API for Windows Mobile?
|
I am looking for a 3rd party solution to integrate a QR code reader in Windows Mobile Applications (.NET Compact Framework). The component should integrate Reader (camera) and Decoder (algorithm).
I tried out the QuickMark reader, which can be called outside the application and communicates using Windows Messages. It works quite well, but doesn't give me every option I need (e.g. it has to be installed etc.).
Are there other good solutions which I may have missed? Anything Open Source? Tested on different devices?
|
[
"Here is an open source C# port of the Java QR Code library.\n",
"Did you try this one: QRCode .NET Compact Framework Package ?\n"
] |
[
2,
0
] |
[] |
[] |
[
"compact_framework",
"components",
"qr_code",
"windows_mobile"
] |
stackoverflow_0000062173_compact_framework_components_qr_code_windows_mobile.txt
|
Q:
What is the point of the Lowered* columns in asp.net membership tables?
What is the architectural reason for the column names prefixed with "Lowered" in the SQL schema for ASP.Net membership and friends? Some examples of the columns in question are below:
aspnet_Applications.LoweredApplicationName
aspnet_users.LoweredUserName
aspnet_membership.LoweredEmail
I see that the lowered columns are indexed, but it seems to me that you could just index the associated non-lowered column and leave out the apparent duplication.
I'm sure there is a good reason for them to exist, but I can't figure it out.
A:
There is no purpose for this in a case-insensitive database like SQL Server. This is a reusable database schema regardless of the type of database you are using. E.g. Informix is case-sensitive for all string data which is stored. Using this database on an Informix server would be a good reason to have/use this column instead of lower()'ing the column yourself. I am not saying that you can't do case-sensitive searches in SQL Server by any means (varbinary, BINARY_CHECKSUM, runtime/declarative COLLATE, etc.). This would change the functionality of the out-of-the-box database.
The idea of any calculated column is to save cycles on doing those calculations during querying. Most especially during large queries. The other thought is one which you had in indexing those columns. Again, this is done to save cycles.
|
What is the point of the Lowered* columns in asp.net membership tables?
|
What is the architectural reason for the column names prefixed with "Lowered" in the SQL schema for ASP.Net membership and friends? Some examples of the columns in question are below:
aspnet_Applications.LoweredApplicationName
aspnet_users.LoweredUserName
aspnet_membership.LoweredEmail
I see that the lowered columns are indexed, but it seems to me that you could just index the associated non-lowered column and leave out the apparent duplication.
I'm sure there is a good reason for them to exist, but I can't figure it out.
|
[
"There is no purpose for this in a non case sensitive database like SQL Server. This is a reusable database regardless of the type of database in which you are using. E.g. Informix is case-sensitive for all string data which is stored. Using this database on an Informix server would be a good reason to have/use this column instead of lower()'ing the column yourself. I am not saying that you can't do case sensitive searches in SQL Server by any means (varbinary, BINARY_CHECKSUM, runtime/declarative COLLATE, etc.). This would change the functionality of the out of the box database. \nThe idea of any calculated column is to save cycles on doing those calculations during querying. Most especially during large queries. The other thought is one which you had in indexing those columns. Again, this is done to save cycles.\n"
] |
[
2
] |
[] |
[] |
[
"asp.net"
] |
stackoverflow_0000098908_asp.net.txt
|
Q:
WCF netTCPBinding - Is transport encryption enough?
I've got a WCF service which handles some sensitive data. I'd like to make sure I keep that data from being exposed and so I'm looking at netTCPBinding... primarily because I can control the network it runs across and performance is a high priority.
I recognize that there are two areas that can be encrypted: transport level and message level. I intend to use certificates to encrypt at the transport level, which I understand uses TLS over TCP.
The calling clients are also mine and so I control the transport level. Since I anticipate no change in the transport layer, do I need to bother with message level encryption? It seems unnecessary unless I want the flexibility of changing the transport.
A:
The message-level encryption is needed when you do not control an intermediary. Intermediary services need to be able to modify the soap headers and could peek at your sensitive data for malicious purposes. But if you control everything from initial sender to ultimate receiver, then you do not need encryption at that level.
I work on a project that uses netTCP for internal services, and I can confirm it works well.
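For reference, transport-level security with certificate credentials on netTcpBinding can be configured in code roughly like this (a sketch; the service host and endpoint wiring are omitted):
using System.ServiceModel;

NetTcpBinding binding = new NetTcpBinding(SecurityMode.Transport);
binding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Certificate;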
A:
In general terms, as long as you're dealing with point to point connections, and certificates are being validated on both sides (particularly if you're using mutual authentication), then yes, transport level security might be enough. Checking the certificates is useful to ensure that someone doesn't supplant the server (or no man-in-the-middle gets in the way).
Message-level security becomes more useful when you need to do content signing or you need non-repudiation and particularly when you have intermediaries (routers) between the client and server and want to make sure they can route the message without actually looking at its contents.
A:
I think you're spot on. If you don't plan on moving this to another transport mechanism, I can't see why you would need both message and transport encryption. If performance is a key factor, skipping message encryption will save you some overhead, since you don't have to add protection when sending/receiving each message.
|
WCF netTCPBinding - Is transport encryption enough?
|
I've got a WCF service which handles some sensitive data. I'd like to make sure I keep that data from being exposed and so I'm looking at netTCPBinding... primarily because I can control the network it runs across and performance is a high priority.
I recognize that there are two areas that can be encrypted: transport level and message level. I intend to use certificates to encrypt at the transport level, which I understand uses TLS over TCP.
The calling clients are also mine and so I control the transport level. Since I anticipate no change in the transport layer, do I need to bother with message level encryption? It seems unnecessary unless I want the flexibility of changing the transport.
|
[
"The message-level encryption is needed when you do not control an intermediary. Intermediary services need to be able to modify the soap headers and could peek at your sensitive data for malicious purposes. But if you control everything from initial sender to ultimate receiver, then you do not need encryption at that level.\nI work on a project that uses netTCP for internal services, and I can confirm it works well.\n",
"In general terms, as long as you're dealing with point to point connections, and certificates are being validated on both sides (particularly if you're using mutual authentication), then yes, transport level security might be enough. Checking the certificates is useful to ensure that someone doesn't supplant the server (or no man-in-the-middle gets in the way).\nMessage-level security becomes more useful when you need to do content signing or you need non-repudiation and particularly when you have intermediaries (routers) between the client and server and want to make sure they can route the message without actually looking at its contents.\n",
"I think you're spot on. If you don't plan on moving this to another transport mechanism I cant see why you would need both message- and transport encryption. If performance is a key factor skipping message encryption will save you some performance since you don't have to add protection on sending/receiving each messages.\n"
] |
[
5,
4,
2
] |
[] |
[] |
[
".net",
"encryption",
"networking",
"tcp",
"wcf"
] |
stackoverflow_0000099001_.net_encryption_networking_tcp_wcf.txt
|
Q:
Oracle ASP.net Provider-model objects performance
The Oracle 11g version of ODP.Net introduces the provider model objects (session state provider, identity provider, etc.), which let the application store this information in an Oracle DB without writing a custom provider implementation.
Has anyone done any performance benchmarking on these objects? How do they compare in performance to the SQL Server implementations provided with .NET? I am particularly interested in the performance of the session state provider.
A:
I would recommend you download a copy of Reflector and compare the codebases for the SQL Server and Oracle providers (They shouldn't be that complicated.)
I'm going to guess that they're going to look almost identical and perform (from a .NET runtime perspective) similarly.
Whether the Oracle backend is any faster...that's another story.
|
Oracle ASP.net Provider-model objects performance
|
The Oracle 11g version of ODP.Net introduces the provider model objects (session state provider, identity provider, etc.), which let the application store this information in an Oracle DB without writing a custom provider implementation.
Has anyone done any performance benchmarking on these objects? How do they compare in performance to the SQL Server implementations provided with .NET? I am particularly interested in the performance of the session state provider.
|
[
"I would recommend you download a copy of Reflector and compare the codebases for the SQL Server and Oracle providers (They shouldn't be that complicated.) \nI'm going to guess that they're going to look almost identical and perform (from a .NET runtime perspective) similarly. \nWhether the Oracle backend is any faster...that's another story.\n"
] |
[
1
] |
[] |
[] |
[
"asp.net",
"oracle",
"oracle11g",
"provider"
] |
stackoverflow_0000078952_asp.net_oracle_oracle11g_provider.txt
|
Q:
Assign variables with a regular expression
I'm looking for a method to assign variables with patterns in regular expressions with C++ .NET
something like
String^ speed;
String^ size;
"command SPEED=[speed] SIZE=[size]"
Right now I'm using IndexOf() and Substring() but it is quite ugly
A:
String^ speed; String^ size;
Match^ m;
Regex^ theregex = gcnew Regex(
    "SPEED=(?<speed>(.*?)) SIZE=(?<size>(.*?)) ",
    RegexOptions::ExplicitCapture);
m = theregex->Match(yourinputstring);
if (m->Success)
{
    if (m->Groups["speed"]->Success)
        speed = m->Groups["speed"]->Value;
    if (m->Groups["size"]->Success)
        size = m->Groups["size"]->Value;
}
else
    throw gcnew FormatException("Input options not recognized");
Apologies for syntax errors, I don't have a compiler to test with right now.
A:
If I understand your question correctly, you are looking for capturing groups. I'm not familiar with the .net api, but in java this would look something like:
Pattern pattern = Pattern.compile("command SPEED=(\\d+) SIZE=(\\d+)");
Matcher matcher = pattern.matcher(inputStr);
if (matcher.find()) {
speed = matcher.group(1);
size = matcher.group(2);
}
There are two capturing groups in the regex pattern above, designated by the two sets of parentheses. In java these must be referenced by number but in some other languages you are able to reference them by name.
A:
If you put all the variables in a class, you can use reflection to iterate over its fields, getting their names and values and plugging them into a string.
Given an instance inputArgs of some class named InputArgs, and a template string str such as "command SPEED=[speed] SIZE=[size]":
foreach (FieldInfo f in typeof(InputArgs).GetFields()) {
    str = Regex.Replace(str, "\\[" + f.Name + "\\]",
                        f.GetValue(inputArgs).ToString());
}
|
Assign variables with a regular expression
|
I'm looking for a method to assign variables with patterns in regular expressions with C++ .NET
something like
String^ speed;
String^ size;
"command SPEED=[speed] SIZE=[size]"
Right now I'm using IndexOf() and Substring() but it is quite ugly
|
[
"String^ speed; String^ size;\nMatch m;\nRegex theregex = new Regex (\n \"SPEED=(?<speed>(.*?)) SIZE=(?<size>(.*?)) \",\n RegexOptions::ExplicitCapture);\nm = theregex.Match (yourinputstring);\nif (m.Success)\n{\n if (m.Groups[\"speed\"].Success)\n speed = m.Groups[\"speed\"].Value;\n if (m.Groups[\"size\"].Success)\n size = m.Groups[\"size\"].Value;\n}\nelse\n throw new FormatException (\"Input options not recognized\");\n\nApologies for syntax errors, I don't have a compiler to test with right now.\n",
"If I understand your question correctly, you are looking for capturing groups. I'm not familiar with the .net api, but in java this would look something like:\nPattern pattern = Pattern.compile(\"command SPEED=(\\d+) SIZE=(\\d+)\");\nMatcher matcher = pattern.matcher(inputStr);\nif (matcher.find()) {\n speed = matcher.group(1);\n size = matcher.group(2);\n}\n\nThere are two capturing groups in the regex pattern above, designated by the two sets of parentheses. In java these must be referenced by number but in some other languages you are able to reference them by name.\n",
"If you put all the variables in a class, you can use reflection to iterate over its fields, getting their names and values and plugging them into a string.\nGiven an instance of some class named InputArgs:\nforeach (FieldInfo f in typeof(InputArgs).GetFields()) {\n string = Regex.replace(\"\\\\[\" + f.Name + \"\\\\]\",\n f.GetValue(InputArgs).ToString());\n}\n\n"
] |
[
3,
2,
0
] |
[] |
[] |
[
".net",
"parsing",
"regex",
"string"
] |
stackoverflow_0000098827_.net_parsing_regex_string.txt
|
Q:
Generate a current datestamp in Java
What is the best way to generate a current datestamp in Java?
YYYY-MM-DD:hh-mm-ss
A:
Using the standard JDK, you will want to use java.text.SimpleDateFormat
Date myDate = new Date();
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd:HH-mm-ss");
String myDateString = sdf.format(myDate);
However, if you have the option to use the Apache Commons Lang package, you can use org.apache.commons.lang.time.FastDateFormat
Date myDate = new Date();
FastDateFormat fdf = FastDateFormat.getInstance("yyyy-MM-dd:HH-mm-ss");
String myDateString = fdf.format(myDate);
FastDateFormat has the benefit of being thread safe, so you can use a single instance throughout your application. It is strictly for formatting dates and does not support parsing like SimpleDateFormat does in the following example:
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd:HH-mm-ss");
Date yourDate = sdf.parse("2008-09-18:22-03-15");
A:
Date d = new Date();
String formatted = new SimpleDateFormat ("yyyy-MM-dd:HH-mm-ss").format (d);
System.out.println (formatted);
A:
There's also
long timestamp = System.currentTimeMillis()
which is what new Date() (@John Millikin) uses internally. Once you have that, you can format it however you like.
A:
SimpleDateFormatter is what you want.
A:
final DateFormat formatter = new SimpleDateFormat("yyyy-MM-dd:hh-mm-ss");
formatter.format(new Date());
The JavaDoc for SimpleDateFormat provides information on date and time pattern strings.
|
Generate a current datestamp in Java
|
What is the best way to generate a current datestamp in Java?
YYYY-MM-DD:hh-mm-ss
|
[
"Using the standard JDK, you will want to use java.text.SimpleDateFormat\nDate myDate = new Date();\nSimpleDateFormat sdf = new SimpleDateFormat(\"yyyy-MM-dd:HH-mm-ss\");\nString myDateString = sdf.format(myDate);\n\nHowever, if you have the option to use the Apache Commons Lang package, you can use org.apache.commons.lang.time.FastDateFormat\nDate myDate = new Date();\nFastDateFormat fdf = FastDateFormat.getInstance(\"yyyy-MM-dd:HH-mm-ss\");\nString myDateString = fdf.format(myDate);\n\nFastDateFormat has the benefit of being thread safe, so you can use a single instance throughout your application. It is strictly for formatting dates and does not support parsing like SimpleDateFormat does in the following example:\nSimpleDateFormat sdf = new SimpleDateFormat(\"yyyy-MM-dd:HH-mm-ss\");\nDate yourDate = sdf.parse(\"2008-09-18:22-03-15\");\n\n",
"Date d = new Date();\nString formatted = new SimpleDateFormat (\"yyyy-MM-dd:HH-mm-ss\").format (d);\nSystem.out.println (formatted);\n\n",
"There's also\nlong timestamp = System.currentTimeMillis() \n\nwhich is what new Date() (@John Millikin) uses internally. Once you have that, you can format it however you like.\n",
"SimpleDateFormatter is what you want. \n",
"final DateFormat formatter = new SimpleDateFormat(\"yyyy-MM-dd:hh-mm-ss\");\n\nformatter.format(new Date());\n\nThe JavaDoc for SimpleDateFormat provides information on date and time pattern strings.\n"
] |
[
27,
8,
1,
0,
0
] |
[] |
[] |
[
"datestamp",
"java"
] |
stackoverflow_0000099098_datestamp_java.txt
|
Q:
Does such a procedure exist in a Scheme standard and if yes, how is it called?
I looked for the name of a procedure, which applies a tree structure of procedures to a tree structure of data, yielding a tree structure of results - all three trees having the same structure.
Such a procedure might have the signature:
(map-tree data functree)
Its return value would be the result of elementwise application of functree's elements on the corresponding data elements.
Examples (assuming that the procedure is called map-tree):
Example 1:
(define *2 (lambda (x) (* 2 x)))
; and similar definitions for *3 and *5
(map-tree '(100 (10 1)) '(*2 (*3 *5)))
would yield the result (200 (30 5))
Example 2:
(map-tree '(((aa . ab) (bb . bc)) (cc . (cd . ce)))
'((car cdr) cadr))
yields the result ((aa bc) cd)
However I did not find such a function in the SLIB documentation, which I consulted.
Does such a procedure already exist?
If not, what would be a suitable name for the procedure, and how would you order its arguments?
A:
I don't have a very good name for the function. I'm pasting my implementation below (I've called it map-traversing; others should suggest a better name). I've made the argument order mirror that of map itself.
(define (map-traversing func data)
(if (list? func)
(map map-traversing func data)
(func data)))
Using your sample data, we have:
(map-traversing `((,car ,cdr) ,cadr) '(((aa . ab) (bb . bc)) (cc cd . ce)))
The second sample requires SRFI 26. (Allows writing (cut * 2 <>) instead of (lambda (x) (* 2 x)).)
(map-traversing `(,(cut * 2 <>) (,(cut * 3 <>) ,(cut * 5 <>))) '(100 (10 1)))
The most important thing is that your functions must all be unquoted, unlike your example.
A:
I found that with the following definition of map-traversing, you don't need to unquote the functions:
(define (map-traversing func data)
(if (list? func)
(map map-traversing func data)
(apply (eval func (interaction-environment)) (list data))))
Note: in my installed version of Guile, for some reason, only (interaction-environment) does not raise the Unbound variable error. The other environments, i.e. (scheme-report-environment 5) and (null-environment 5), raise this error.
Note 2: Subsequently, I found in [1] that for (scheme-report-environment 5) and (null-environment 5) to work, you need first to (use-modules (ice-9 r5rs))
[1]: http://www.mail-archive.com/[email protected]/msg04368.html 'Re: guile -c "(scheme-report-environment 5)" ==> ERROR: Unbound variable: scheme-report-environment'
Q:
Stored procedures a no-go in the php/mysql world?
I'm quoting part of an answer which I received for another question of mine:
In the PHP/MySQL world I would say
stored procedures are no-go
I would like to know: Is that so? Why? Why not?
[edit]I mean this as a general question without a specific need in mind[/edit]
A:
I develop and maintain a large PHP/MySQL application. Here is my experience with stored procedures.
Over time our application has grown very complex, and with all the logic on the PHP side, some operations would hit the database with over 100 short queries.
MySQL is so quick that the performance was still acceptable, but not great.
We made the decision in our latest version of the software to move some of the logic to stored procedures for complex operations.
We did achieve a significant performance gain due to the fact that we did not have to send data back and forth between PHP and MySQL.
I do agree with the other posters here that PL/SQL is not a modern language and is difficult to debug.
Bottom Line: Stored Procedures are a great tool for certain situations. But I would not recommend using them unless you have a good reason. For simple applications, stored procedures are not worth the hassle.
A:
When using stored procedures with MySQL, you will often need to use the mysqli interface in PHP rather than the regular mysql interface.
The reason is that stored procedures often return more than one result set. If they do, the old mysql API cannot handle it and you will get errors.
The mysqli interface has functions for handling these multiple result sets, such as mysqli_more_results and mysqli_next_result.
Keep in mind that if you return any result set at all from the stored procedure, you need to use these APIs: the stored procedure generates one result set for the actual execution, plus one additional result set for each result set intentionally returned from within it.
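A minimal sketch of that loop with mysqli (the connection details and the get_order_summary procedure are made up for illustration):
<?php
$db = new mysqli('localhost', 'user', 'password', 'shop');

// CALL yields one result set per SELECT in the procedure,
// plus a final status result for the call itself.
if ($db->multi_query('CALL get_order_summary(42)')) {
    do {
        if ($result = $db->store_result()) {
            while ($row = $result->fetch_assoc()) {
                print_r($row);
            }
            $result->free();
        }
    } while ($db->more_results() && $db->next_result());
}
$db->close();
?>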
A:
Do you have a specific need in mind which makes you consider them? Stored procedures are much less portable than "plain" SQL, that's usually why people don't want to use them. Also, having written a fair share of PL/SQL, I must say that the procedural way of writing code adds complexity and it's just not very modern or testable. They might be handy in some special cases where you need to optimize, but I'd certainly think twice. Jeff has similar opinions.
A:
I generally stay away from stored procedures because they add load to the database, which is, 99% of the time, your biggest bottleneck. Adding a new PHP server is nothing compared to making your MySQL DB replicate.
A:
This is a subjective question.
I would personally keep all calculations in PHP and only really use MySQL as a plain data store.
But, If you feel that it is easier to use stored procedures then by all means, go ahead and do it.
A:
There's possibly a phobia of stored procedures with MySQL, partly because they are not overwhelmingly powerful (compared to PostgreSQL and even MSSQL, MySQL's stored procedures are greatly lacking).
On the plus side: they make interfacing with the database from more than one language easier.
If somebody states that "using stored procedures is bad because it's not portable to different databases", they of course think you're likely to switch databases, which in turn means they think you shouldn't be using MySQL.
It is popular to use ORMs these days, but I personally think ORM is a BadThing (Question:82882)
A:
I would not say "stored procedures are a no-go", I would say "Don't use them without a good reason".
MySQL stored procedures have a particularly horrible syntax (Oracle's and MSSQL's are pretty awful too), and maintaining them just complicates your application.
Do use a stored procedure if you have a real (measurable) reason to do so, otherwise don't. That's my opinion anyway.
A:
I think that using stored procedures can offer some abstraction in certain applications: anywhere you would use the same chunk of SQL to update or add the same data, you could create one sproc, save_user($attr.....), rather than repeating yourself all over the place.
Agreed, the syntax is hairy, and if you're used to MSSQL and Oracle sprocs there are differences that can frustrate.
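As a sketch of that save_user idea in MySQL (the users table and its columns are hypothetical):
DELIMITER //
-- One place to encapsulate "save a user", whether new or existing.
CREATE PROCEDURE save_user(IN p_id INT, IN p_name VARCHAR(100), IN p_email VARCHAR(255))
BEGIN
  INSERT INTO users (id, name, email)
  VALUES (p_id, p_name, p_email)
  ON DUPLICATE KEY UPDATE name = p_name, email = p_email;
END //
DELIMITER ;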
A:
You should also be aware that stored procedures were not supported in MySQL before version 5.0. http://dev.mysql.com/doc/refman/5.0/en/stored-routines.html Also, stored procedures tended to be a bit weird in that implementation. Now that MySQL 5.1 is starting to crop up in the wild, I see more use of stored procedures with MySQL.
A:
I make limited use of stored procedures, and it works well. I am the lead dev for one of my company's clients, working on their e-comm website. The client has a stock system; we implemented a set of stored procedures on their system and built an API to communicate with it. This allowed us to abstract their database, and they could implement logic in the stored procedures. Simple, but it met the business requirement very well.
Q:
Black-box vs White-box Reuse
What are the pros/cons of using black-box reuse over white-box reuse?
A:
In my experience, white-box reuse is normally done through inheritance, and black-box reuse through composition.
White Box Reuse
Pro: You can customize the module to fit the specific situation, this allows reuse in more situations
Con: You now own the customized result, so it adds to your code complexity.
Black Box Reuse
Pro: Simplicity and Cleanliness
Con: Many times it is just not possible
Verdict:
I prefer Black Box whenever possible.
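To make the inheritance-vs-composition distinction concrete, a minimal sketch in Python (the class names are made up for illustration):
class Stack:
    """The reusable component."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()


# White-box reuse: inherit and rely on the parent's internals.
class BoundedStack(Stack):
    def __init__(self, limit):
        super().__init__()
        self._limit = limit

    def push(self, item):
        if len(self._items) >= self._limit:  # reaches into Stack's state
            raise OverflowError("stack is full")
        super().push(item)


# Black-box reuse: compose, talking only to the public interface.
class UndoHistory:
    def __init__(self):
        self._stack = Stack()

    def record(self, action):
        self._stack.push(action)

    def undo(self):
        return self._stack.pop()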
A:
White-box:
pros:
simple (very natural concept)
you have more control over things
cons:
requires intimate knowledge of component internals
can be difficult to implement (OO inheritance constraints)
sometimes it leads to broken/incorrect inheritance chains
Black-box:
pros:
low coupling (gives late binding and other goodies)
cons:
not obvious (code is much harder to understand)
interfaces are more fragile than classes (i.e. interfaces vs inheritance)
A:
I'm not sure what those specific terms mean, so I'll take a stab at defining what they are before I continue:
Black box reuse is using a class/function/code unmodified in a different project
White box reuse is taking a class/function/code from one project and modifying it to suit the needs of another project.
The pros to black-box reuse are that once the code has been written, debugged, and tested, you can reuse it countless times in different circumstances. The downside is that truly black-box-reusable code is rare and can take time and effort to format the API and calling code and make it consistent with the black box approach (no context leaking).
The pros to white-box reuse are that you can indeed use your code more than once without having to first extricate it from the original project. You simply copy and modify and you're on your way. This type of reuse is much more common, but it also has a few downsides. Mostly, if you discover a bug in one implementation, you need to check to make sure that it's fixed in all the other implementations. This can be difficult if they diverge widely, as often happens.
A:
@Kyle,
Black-box reuse means that you use a component without knowing its internals. All you have is the component's interface.
White-box reuse means that you know how the component is implemented. Usually white-box reuse means class inheritance.
Q:
Windows Server 2003 NLB drainstop notification on stop
How would I drainstop one of the nodes in a MS NLB via command line and then get notified of its completion?
If there's no way to callback, is there an easy way to poll?
A:
http://technet.microsoft.com/en-us/library/cc772833.aspx has it all.
Run the drainstop and then query until it is drained.
nlb query yourServer
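Putting the two together, a minimal polling sketch as a batch file (the host name "yourServer" and the matched status text are assumptions; check the exact wording that nlb query prints on your cluster):
@echo off
nlb drainstop yourServer

:poll
nlb query yourServer | find /i "stopped" >nul
if errorlevel 1 (
    rem ping serves as a roughly 10-second delay on Server 2003
    ping -n 11 127.0.0.1 >nul
    goto poll
)
echo Drain complete.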
Q:
Anyone using WPF for real LOB applications?
Anyone using WPF for real LOB applications?
We have all seen the clever demos of WPF showing videos mapped onto 3D elements. These look great, but what about the real world of line-of-business applications that make up the majority of developers' efforts? Is WPF just nice eye candy?
A:
While we're discussing it, smart guys are building amazing apps:
Lawson Smart Office brings WPF goodness to the enterprise
IGT’s Next-Generation UI with WPF
Billy Hollis on Getting Smart with WPF
A:
Just rolling out a WPF LOB application to about 400 municipal locations. Not heavy on the eye-candy but very heavy on databinding.
WPF is custom made for LOB!
Many drawbacks (e.g. no refactoring) were recently fixed in SP1, but the tools are still, to put it mildly, immature.
I find that ironic, seeing that XAML was invented for easy tooling.
To use WPF, you really need to understand some fundamentals in the WPF object model, and I don't see the designer/developer workflow happening anytime soon.
There's a really steep learning curve, but it is worth it.
Tasks that used to be huge are trivial now, and conversely, tasks that used to be dead simple are near impossible.
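For a taste of the databinding that makes WPF attractive for LOB work, a minimal sketch (the Customers collection and its Name/Balance properties are hypothetical; StringFormat requires .NET 3.5 SP1):
<!-- Bind a list of customer objects supplied via the DataContext. -->
<ListBox ItemsSource="{Binding Customers}">
  <ListBox.ItemTemplate>
    <DataTemplate>
      <StackPanel Orientation="Horizontal">
        <TextBlock Text="{Binding Name}" Margin="0,0,8,0"/>
        <TextBlock Text="{Binding Balance, StringFormat=C}"/>
      </StackPanel>
    </DataTemplate>
  </ListBox.ItemTemplate>
</ListBox>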
A:
I worked on the Helios product in this setup. WPF on top of lots of other stuff, including C++.
WPF is what I would recommend if you were developing in .NET and wanted a smart client application with a heavily customized UI. If you were thinking about using a simple Windows-y UI, go with Windows Forms.
A:
I am a member of a Danish architecture group in which many of the members are focused solely on building WinForms apps (I'm a web guy myself). During our meetings the topic of building Windows apps in WinForms vs. WPF has come up a number of times, and each time, after much discussion, the conclusion is that while WPF does allow you to build some very nice-looking apps, they go for WinForms because they lose too much productivity at this point.
The main reason for sticking with WinForms is tools. They are improving, though.
A:
IMO WPF is just starting to become a viable path for real software companies. Companies that have to maintain existing install bases are just now easing into .NET 3.5 in their next-gen projects, and as part of that WPF is being considered.
I think the real issue is that WPF isn't for web apps; it's for distributed apps, and as such there is a longer timeframe involved in getting it to market. .NET 3.5 may be used in a lot of hosted web apps, but it is just starting to show up in distributed apps, and with it WCF, WPF, etc.
I would argue that within the next two years you will see many WPF applications pop up. We are developing WPF apps right now for back-end bank processing - so yes, it is viable and being used for real apps - they just might not be out yet ;)
A:
I feel the eye candy demos are targeted mostly towards designers. Having said that, there is a huge potential in improving usability of LOB apps using WPF. Check this article about the potential of Silverlight.
Line-of-business applications have a notorious reputation for being all business and no pleasure. The fact is that "user experience" has never really been a top concern when developing line-of-business (LOB) applications. While many LOB-style applications are putting an increasing emphasis on usability, they often fall short on appeal. User experience is actually a combination of both usability and appeal.
A:
We have started using it in peripherals to the main application, sort of as a POC as well as a learning opportunity.
It's looking OK, but we only have one graphic artist here, who is overworked, and without him WPF apps still look like developer-designed apps.
As well as us coders not being graphical, we're largely still building Forms-style apps in WPF rather than fully leveraging the power of WPF. I'm sure we could do wonders with more resources and more experience, and I am looking forward to doing so.
We are also considering using Silverlight to appease the boss's belief that there is nothing you can do in a Forms app that can't be done on the web. It's a dangerous line though, as he might start believing he's right and we were all just complaining about nothing (actually, he already does :) )
A:
A friend of mine used WPF for some darn cool tree (as in tree-view) rendering, where it did a little better than showing a simple sliding view. I might be able to talk him into putting it into the public domain or something.