Q:
Stored Procedures in MS-SQL Server 2005 and Oracle
How do I translate MS-SQL Server 2005 stored procedures into Oracle stored procedures?
A chart contrasting the corresponding features of each environment would be helpful.
A:
http://vyaskn.tripod.com/oracle_sql_server_differences_equivalents.htm is a good resource
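In the meantime, here is a rough sketch of one row such a chart might contain: returning a result set from a parameterized procedure in each dialect (table and procedure names are hypothetical):
-- T-SQL (SQL Server 2005): result sets are returned directly
CREATE PROCEDURE GetEmployeeName @id INT
AS
BEGIN
    SELECT name FROM employees WHERE id = @id;
END;

-- PL/SQL (Oracle): result sets usually go out through a ref cursor
CREATE OR REPLACE PROCEDURE get_employee_name (
    p_id     IN  NUMBER,
    p_result OUT SYS_REFCURSOR
) AS
BEGIN
    OPEN p_result FOR SELECT name FROM employees WHERE id = p_id;
END;
The ref-cursor OUT parameter is the usual Oracle substitute for T-SQL's bare SELECT at the end of a procedure; the client fetches from the cursor instead of from an implicit result set.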
Q:
Best way to deal with RoutingError in Rails 2.1.x?
I'm playing with the routing.rb code in Rails 2.1, and trying to get it to the point where I can do something useful with the RoutingError exception that is thrown when it can't find the appropriate path.
This is a somewhat tricky problem, because there are some classes of URLs which are just plain BAD: the /azenv.php bot attacks, the people typing /bar/foo/baz into the URL, etc... we don't want those.
Then there are subtle routing problems where we do want to be notified: /artists/ for example, or ///. In these situations, we may or may not want an error thrown... or we get Google sending us URLs which used to be valid but no longer are because people deleted them.
In each of these situations, I want a way to contain, analyze and filter the path that we get back, or at least some Railsy way to manage routing past the normal 'fallback catchall' URL. Does this exist?
EDIT:
So the code here is:
# File vendor/rails/actionpack/lib/action_controller/rescue.rb, line 141
def rescue_action_without_handler(exception)
  log_error(exception) if logger
  erase_results if performed?

  # Let the exception alter the response if it wants.
  # For example, MethodNotAllowed sets the Allow header.
  if exception.respond_to?(:handle_response!)
    exception.handle_response!(response)
  end

  if consider_all_requests_local || local_request?
    rescue_action_locally(exception)
  else
    rescue_action_in_public(exception)
  end
end
So our best option is to override log_error(exception) so that we can filter the exceptions by type. So in ApplicationController:
def log_error(exception)
  message = '...'
  if should_log_exception_as_debug?(exception)
    logger.debug(message)
  else
    logger.error(message)
  end
end

def should_log_exception_as_debug?(exception)
  return (ActionController::RoutingError === exception)
end
Salt for additional logic where we want different controller logic, routes, etc.
A:
Nooooo!!! Don't implement method_missing on your controller! And please try to avoid action_missing as well.
The frequently touted pattern is to add a route:
map.connect '*path', :controller => 'error', :action => 'not_found'
Where you can show an appropriate error.
Rails also has a mechanism called rescue_action_in_public where you can write your own error handling logic -- we really should clean it up and encourage people to use it. PDI! :-)
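A minimal sketch of such an override in ApplicationController (assuming Rails 2.x and the stock public/404.html page):
def rescue_action_in_public(exception)
  case exception
  when ActionController::RoutingError, ActionController::UnknownAction
    # bad bot URLs and dead links both end up here; log quietly, serve a 404
    render :file => "#{RAILS_ROOT}/public/404.html", :status => 404
  else
    super
  end
end
This is the hook mentioned above: since it only runs for non-local requests, your development error pages stay intact while production traffic gets the filtered handling.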
A:
There's the method_missing method. You could implement that in your Application Controller and catch all missing actions, maybe logging those and redirecting to the index action of the relevant controller. This approach would ignore everything that can't be routed to a controller, which is pretty close to what you want.
Alternatively, I'd just log all errors, extract the URL, and sort by the number of times each occurred.
Q:
profile-guided optimization (C)
Does anyone know this compiler feature? It seems GCC supports it. How does it work? What is the potential gain? In which cases is it good? Inner loops?
(this question is specific, not about optimization in general, thanks)
A:
It works by placing extra code to count the number of times each codepath is taken. When you compile a second time, the compiler uses the knowledge gained about the execution of your program that it could only guess at before. There are a couple of things PGO can work toward:
Deciding which functions should be inlined or not, depending on how often they are called.
Deciding how to place hints about which branch of an "if" statement should be predicted, based on the percentage of calls going one way or the other.
Deciding how to optimize loops based on how many iterations get taken each time the loop is called.
You never really know how much these things can help until you test it.
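With GCC specifically, the usual two-pass workflow looks roughly like this (a sketch; the file name and workload are hypothetical):
gcc -O2 -fprofile-generate prog.c -o prog   # pass 1: instrumented build
./prog < representative_input.dat           # run it to collect .gcda profile data
gcc -O2 -fprofile-use prog.c -o prog        # pass 2: rebuild using the profile
The gain depends heavily on how representative that middle run is; profile a workload unlike production and the hints can even hurt.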
A:
PGO gives about a 5% speed boost when compiling x264, the project I work on, and we have a built-in system for it (make fprofiled). It's a nice free speed boost in some cases, and probably helps more in applications that, unlike x264, are less made up of handwritten assembly.
A:
Jason's advice is right on. The best speedups you are going to get come from "discovering" that you let an O(n^2) algorithm slip into an inner loop somewhere, or that you can cache certain computations outside of expensive functions.
Compared to the micro-optimizations that PGO can trigger, these are the big winners. Once you've done that level of optimization, PGO might be able to help. We never had much luck with it though - the cost of the instrumentation was such that our application became unusably slow (by several orders of magnitude).
I like using Intel VTune as a profiler primarily because it is non-invasive compared to instrumenting profilers which change behaviour too much.
A:
The fun thing about optimization is that speed gains are found in the unlikeliest of places.
It's also the reason you need a profiler, rather than guessing where the speed problems are.
I recommend starting with a profiler (gprof if you're using GCC) and just start poking around the results of running your application through some normal operations.
Q:
Referenced Assemblies in Web Site
Am I correct in assuming that I always need to explicitly deploy referenced assemblies when their source changes?
A:
Yes, you do. If you use the publish command in Visual Studio, it will include all the assemblies you need in the folder you selected to publish your site.
If a .dll has changed and you need to update your site, you can just publish again or copy the .dll.
Q:
Is it possible to concat arguments to a command line using backquotes?
I would like to execute an OS command from my Ruby script, but I want to add an argument from a Ruby variable.
I know that's possible with the system keyword, like this:
#!/usr/bin/env ruby
directory = '/home/paulgreg/'
system 'ls ' + directory
but is that possible using the "backquotes or backticks syntax"?
(I mean by using that syntax: `ls`)
A:
No, that will just concatenate the output from ls and the contents of directory.
But you can do this:
#!/usr/bin/env ruby
directory = '/home/paulgreg/'
`ls #{directory}`
A:
`ls #{directory}`
isn't very safe because you're going to run into problems with path names that have spaces in them.
It's safer to do something like this:
directory = '/home/paulgreg/'
args = []
args << "/bin/ls"
args << directory
system(*args)
A:
Nick is right, but there is no need to assemble the args piecewise:
directory = '/Volumes/Omg a space/'
system('/bin/ls', directory)
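If you specifically want backticks (say, to capture the command's output), the standard shellwords library can escape the argument first; a minimal sketch, assuming Ruby 1.8.7+ where Shellwords.escape is available:
#!/usr/bin/env ruby
require 'shellwords'

directory = '/Volumes/Omg a space/'
# backticks return the command's stdout; escaping keeps the space intact
listing = `ls #{Shellwords.escape(directory)}`
puts listing
That gives you the output-capturing behavior of backticks with the same safety the multi-argument system(...) form provides.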
Q:
How would I change an Access DB from ISO-8859-2 to UTF-8 in a connection string?
I have a database in ISO-8859-2 format, but I need to create XML in UTF-8. This means that I must encode the data before printing it in UTF-8. I know very little about ASP.Net, so I'm hoping someone can help.
In PHP I would do something like this:
db_connect();
mysql_query("SET NAMES 'UTF8'");
mysql_query("SET character_set_client='UTF8'");
This is my ASP.Net code for database connection:
'CONNECTION TO DATABASE
dim dbconn,sql,dbcomm
dbconn=New OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;" & _
"Data Source=" & Server.MapPath("../baze/test.mdb"))
dbconn.Open()
sql="SELECT * FROM nekretnine, tipovinekretnina WHERE nekretnine.idtipnekretnine = tipovinekretnina.idtipnekretnine ORDER BY nekretnine.idnekretnine"
dbcomm=New OleDbCommand(sql,dbconn)
dbread=dbcomm.ExecuteReader()
while dbread.Read()
Where and how do I encode to UTF-8?
A:
Assuming you have a value string in str, this is the pure-.NET way of doing this.
var encoding = System.Text.Encoding.GetEncoding("iso-8859-2");
var bytes = System.Text.Encoding.Convert(encoding, System.Text.Encoding.Default, encoding.GetBytes(str));
var newString = System.Text.Encoding.Default.GetString(bytes);
A:
The .NET Framework's internal string type is UTF-16. All database access will convert to UTF-16 so that you can view the data appropriately: the database, or the OLE DB provider, will convert to UTF-16.
The XML writer classes (you are using XmlDocument or XmlWriter, right?) will then convert to UTF-8 on the output.
Basically, you shouldn't need to do anything extra.
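For instance, a minimal C# sketch of letting the XML writer do the conversion (the output file name and element names are hypothetical):
using System.Text;
using System.Xml;

class XmlUtf8Example
{
    static void Main()
    {
        // .NET strings are UTF-16 internally; the writer converts on output
        XmlWriterSettings settings = new XmlWriterSettings();
        settings.Encoding = Encoding.UTF8;
        settings.Indent = true;
        using (XmlWriter writer = XmlWriter.Create("nekretnine.xml", settings))
        {
            writer.WriteStartElement("nekretnine");
            writer.WriteElementString("naziv", "Šišmiš");  // sample ISO-8859-2-representable text
            writer.WriteEndElement();
        }
    }
}
The declaration encoding="utf-8" and the byte encoding then come out consistent without any manual Encoding.Convert step.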
Q:
JS error for JQuery in IE 8.0
I have developed a simple page using jQuery. It works fine in almost all browsers (i.e. Firefox, IE, Chrome), but whenever the page is opened in IE, it reports a JavaScript error like:
'guid' is null or not an object on line 1834
Do you have any idea?
A:
Thanks guys for your messages.
The error was on my part. For the hover event, I was not passing a function for "out". Therefore the handler was passed as undefined to the jQuery.event function, and that caused an error in the statement
if ( !handler.guid )
written at line 1834 of the jquery-1.2.6.js file.
While using it, I thought the out handler was not mandatory to specify, but I guess I was wrong.
Strangely, FF/Chrome do not report the error but IE does :), which is a bit different from what it used to be.
Regards,
Jatan
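For reference, jQuery 1.2.x's hover expects both handlers; a minimal sketch (the selector and class name are hypothetical):
// .hover(over, out) - pass both, even if "out" does nothing
$('#menu').hover(
  function () { $(this).addClass('active'); },     // over
  function () { $(this).removeClass('active'); }   // out
);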
A:
Firefox removed the javascript error indication by default because there are a lot of pages that throw javascript errors. To an average user, the error messages aren't useful - only confusing. If you are a web developer, you should definitely install Firebug.
A:
Maybe you're using the parentNode or parentElement property? There are some issues with that in IE vs other browsers.
A:
Sorry, FF/Chrome both report this error, but in a very silent way. You need to open Firefox 3.0's JavaScript errors dialog to see whether there is any error, and for Chrome you need to go to the JavaScript console.
In my view, there should be at least some UI indication (like the icon turning red) for such errors in FF 3.0 as well as Chrome. In FF 2.0, I believe the icon would turn into a red cross if there was an error, but that does not happen in FF 3.0!
Q:
Managing large volumes of data - stored procedures or datasets or other...?
I have an application that imports large volumes of data daily, several hundred thousand records.
Data comes from different sources. The data is read using C#, then bulk inserted into the database.
This data is then processed:
different tables are linked
new tables are generated
data is corrected using complicated algorithms (totals of certain tables have to total zero)
Most of this processing is done in stored procedures.
Although some of the complex processing would be simpler in C#, the extraction of the data into a dataset and its reinjection would slow things down considerably.
You may ask why I do not process the data before inserting it into the database, but I do not think it practical to manipulate hundreds of thousands of records in memory, and SQL's set-based commands help when creating lots of records.
This will probably spark up the age old question of using stored procedures and their pros and cons.
(eg. How do you unit test stored procedures?)
What I would like in response, is your experience with large volumes of data and how you tackled the problem.
A:
I would use SSIS or DTS (assuming you are talking about MSSQL). They are made for that purpose and work with SPs if you need them.
Another option is to preprocess the data using Perl. Even though it sounds like a weird suggestion, Perl is actually extremely fast in these scenarios. I've used it in the past to process billions of records in reasonable time (i.e. days instead of weeks).
Regarding "How do you unit test stored procedures": you unit test them with MBUnit like anything else. Only bit of advice: the setup and rollback of the data can be tricky; you can either use a DTS transaction or explicit SQL statements.
A:
I would generally have to agree with Skliwz when it comes to doing things in MSSQL. SSIS and DTS are the way to go, but if you are unfamiliar with those technologies they can be cumbersome to work with. However, there is an alternative that would allow you to do the processing in C#, and still keep your data inside of SQL Server.
If you really think the processing would be simpler in C# then you may want to look into using a SQL Server Project to create database objects using C#. There are a lot of really powerful things you can do with CLR objects inside of SQL Server, and this would allow you to write and unit test the code before it ever touches the database. You can unit test your CLR code inside of VS using any of the standard unit testing frameworks (NUnit, MSTest), and you don't have to write a bunch of set up and tear down scripts that can be difficult to manage.
As far as testing your stored procedures I would honestly look into DBFit for that. Your database doesn't have to be a black hole of untested functionality any more :)
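For illustration, a minimal sketch of such a CLR object (assuming a SQL Server Project; the function name and logic are hypothetical):
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;

public partial class UserDefinedFunctions
{
    // Deployed as a T-SQL scalar function, but unit-testable as plain C# first
    [SqlFunction]
    public static SqlBoolean TotalsToZero(SqlDecimal total)
    {
        if (total.IsNull)
            return SqlBoolean.False;      // treat NULL totals as failing the check
        return total.Value == 0m;         // implicit bool -> SqlBoolean conversion
    }
}
Once deployed, T-SQL calls it like any other function (SELECT dbo.TotalsToZero(SUM(amount)) FROM ...), so the set-based bulk work stays in the database while the tricky logic lives in tested C#.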
A:
Where you process data depends greatly on what you're doing. If you need, for example, to discard data which you don't want in your database, then you would process that in your C# code. However, data to process in the database should generally be data which should be "implementation agnostic". So if someone else wants to insert data from a Java client, the database should be able to reject bad data. If you put that logic into your C# code, the Java code won't know about it.
Some people object and say "but I'll never use another language for the database!" Even if that's true, you'll still have DBAs or developers working with the database and they'll make mistakes if the logic isn't there. Or your new C# developer will try to shove in data and not know about (or just ignore) data pre-processors written in C#.
In short, the logic you put in your database should be enough to guarantee that the data is correct without relying on external software.
Q:
Drawing a view hierarchy into a specific context in Cocoa
For part of my application I have a need to create an image of a certain view and all of its subviews.
To do this I'm creating a context that wraps a bitmap with the same size as the view, but I'm unsure how to draw the view hierarchy into it. I can draw a single view just by setting the context and explicitly calling drawRect, but this does not deal with all of the subviews.
I can't see anything in the NSView interface that could help with this so I suspect the solution may lie at a higher level.
A:
You can use -[NSView dataWithPDFInsideRect:] to render the entire hierarchy of the view you send it to into a PDF, returned as an NSData object. You can then do whatever you wish with that, including render it into a bitmap.
Are you sure you want a bitmap representation though? After all, that PDF could be (at least in theory) resolution-independent.
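A minimal sketch of that approach (assuming 'view' is the NSView in question, in the manual retain/release era):
// Render the whole hierarchy as PDF data, then wrap it in an NSImage
NSData *pdfData = [view dataWithPDFInsideRect:[view bounds]];
NSImage *image = [[[NSImage alloc] initWithData:pdfData] autorelease];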
A:
I found that writing the drawing code myself was the best way to:
deal with potential transparency issues (some of the other options do add a white background to the whole image)
performance was much better
The code below is not perfect, because it does not deal with scaling issues when going from bounds to frames, but it does take into account the isFlipped state, and works very well for what I used it for. Note that it only draws the subviews (and the subsubviews,... recursively), but getting it to also draw itself is very easy, just add a [self drawRect:[self bounds]] in the implementation of imageWithSubviews.
- (void)drawSubviews
{
    BOOL flipped = [self isFlipped];

    for ( NSView *subview in [self subviews] ) {

        // changes the coordinate system so that the local coordinates of the subview (bounds) become the coordinates of the superview (frame)
        // the transform assumes bounds and frame have the same size, and bounds origin is (0,0)
        // handling of 'isFlipped' also probably unreliable
        NSAffineTransform *transform = [NSAffineTransform transform];
        if ( flipped ) {
            [transform translateXBy:subview.frame.origin.x yBy:NSMaxY(subview.frame)];
            [transform scaleXBy:+1.0 yBy:-1.0];
        } else
            [transform translateXBy:subview.frame.origin.x yBy:subview.frame.origin.y];
        [transform concat];

        // recursively draw the subview and sub-subviews
        [subview drawRect:[subview bounds]];
        [subview drawSubviews];

        // reset the transform to get back a clean graphics context for the rest of the drawing
        [transform invert];
        [transform concat];
    }
}

- (NSImage *)imageWithSubviews
{
    NSImage *image = [[[NSImage alloc] initWithSize:[self bounds].size] autorelease];
    [image lockFocus];
    // it seems NSImage cannot use flipped coordinates the way NSView does (the method 'setFlipped:' does not seem to help)
    // Use instead an NSAffineTransform
    if ( [self isFlipped] ) {
        NSAffineTransform *transform = [NSAffineTransform transform];
        [transform translateXBy:0 yBy:NSMaxY(self.bounds)];
        [transform scaleXBy:+1.0 yBy:-1.0];
        [transform concat];
    }
    [self drawSubviews];
    [image unlockFocus];
    return image;
}
A:
You can use -[NSBitmapImageRep initWithFocusedViewRect:] after locking focus on a view to have the view render itself (and its subviews) into the given rectangle.
A:
What you want to do is available explicitly already. See the section "NSView Drawing Redirection API" in the 10.4 AppKit release notes.
Make an NSBitmapImageRep for caching and clear it:
NSGraphicsContext *bitmapGraphicsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:cacheBitmapImageRep];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:bitmapGraphicsContext];
[[NSColor clearColor] set];
NSRectFill(NSMakeRect(0, 0, [cacheBitmapImageRep size].width, [cacheBitmapImageRep size].height));
[NSGraphicsContext restoreGraphicsState];
Cache to it:
-[NSView cacheDisplayInRect:toBitmapImageRep:]
If you want to more generally draw into a specified context handling view recursion and transparency correctly,
-[NSView displayRectIgnoringOpacity:inContext:]
Q:
C++ Template Ambiguity
A friend and I were discussing C++ templates. He asked me what this should do:
#include <iostream>

template <bool>
struct A {
    A(bool) { std::cout << "bool\n"; }
    A(void*) { std::cout << "void*\n"; }
};

int main() {
    A<true> *d = 0;
    const int b = 2;
    const int c = 1;
    new A< b > (c) > (d);
}
The last line in main has two reasonable parses. Is 'b' the template argument or is b > (c) the template argument?
Although, it is trivial to compile this, and see what we get, we were wondering what resolves the ambiguity?
A:
AFAIK it would be compiled as new A<b>(c) > d. This is the only reasonable way to parse it, IMHO. If the parser couldn't assume under normal circumstances that a > ends a template argument list, that would result in much more ambiguity. If you want it the other way, you should have written:
new A<(b > c)>(d);
A:
As stated by Leon & Lee, 14.2/3 (C++ '03) explicitly defines this behaviour.
C++ '0x adds to the fun with a similar rule applying to >>. The basic concept is that when parsing a template-argument-list, a non-nested >> will be treated as two distinct > > tokens and not the right shift operator:
template <bool>
struct A {
    A(bool);
    A(void*);
};

template <typename T>
class C
{
public:
    C (int);
};

int main() {
    A<true> *d = 0;
    const int b = 2;
    const int c = 1;
    new C <A< b >> (c) > (d);  // #1
    new C <A< b > > (c) > (d); // #2
}
'#1' and '#2' are equivalent in the above.
This of course fixes that annoyance with having to add spaces in nested specializations:
C<A<false>> c; // Parse error in C++ '98, '03 due to "right shift operator"
A:
The C++ standard defines that for a template name followed by a <, the < is always the beginning of the template argument list, and the first non-nested > is taken as the end of the template argument list.
If you intended the result of the > operator to be the template argument, then you'd need to enclose the expression in parentheses. You don't need parentheses if the argument is part of a static_cast<> or another template expression.
A:
The greediness of the lexer is probably the determining factor in the absence of parentheses to make it explicit. I'd guess that the lexer isn't greedy.
Q:
Are there any advantages compiling for .NET Framework 3.5 instead of 2.0?
Are there any advantages compiling for .NET Framework 3.5 instead of 2.0?
For example less memory consumption, faster startup, better performance...
Personally I don't think so however, I may have missed something.
Edits
Of course there are more features in the 3.5 framework, but these are not the focus of this question.
There seem to be no advantages.
Yes I meant targeting the Framework. I have installed the latest 3.5 SP1 and VS 2008 so what's the difference between compiling with and targeting a framework? I can target the framework in the project options but how do I 'compile with' a specific framework version? I did not know that there is a difference.
So for now we agree that there are no advantages.
A:
There's a difference between compiling and targeting.
Compiling the code with the (for example) C# 3.0 compiler will probably give you a boost in performance (a very little one anyway), as some optimizations for the generated IL code might have been included. It also allows you to use some of the new features like automatic properties or lambda expressions.
Targeting a given framework will ensure your assembly works for that framework (and later ones) and will fail if you target 2.0 and are using a 3.5 library. No performance improvements will be directly related to that unless you're substituting a class from one framework with a faster one from another. For example, targeting .NET 1.1 won't allow you to use generics, and therefore you'll have to use the ArrayList, which is considerably slower than List<T> (due to boxing and unboxing).
A:
There are two things to remember in regards to .NET 2.0 and .NET 3.5.
The .NET Framework 3.5 is just a few libraries that run on top of .NET 2.0.
When developing in Visual Studio 2008 and targeting .NET 2.0 you can still use certain C# 3.0 language features such as Extension Methods since they are in fact a feature of the C# 3.0 (or .NET 3.5) compiler. See this link: http://www.codethinked.com/post/2008/02/Using-Extension-Methods-in-net-20.aspx
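A minimal sketch of that trick (assuming the C# 3.0 compiler with the project targeting .NET 2.0; the extension method itself is hypothetical):
// The C# 3.0 compiler only requires that an attribute with this exact
// name exists; it does not have to come from .NET 3.5's System.Core.dll.
namespace System.Runtime.CompilerServices
{
    public class ExtensionAttribute : Attribute { }
}

public static class StringExtensions
{
    // Usable as "abc".IsNullOrBlank() even on a .NET 2.0 target
    public static bool IsNullOrBlank(this string s)
    {
        return s == null || s.Trim().Length == 0;
    }
}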
A:
I haven't found any. An obvious disadvantage, if you don't need the 3.5-specific features, is that the 3.5 code base is younger, and it is therefore possible, albeit unlikely, that there is some bug lurking around.
A:
There is no benefit to compiling to the 3.5 framework if you are not using any classes from that version of the framework.
A:
I presume that you must mean targeting the .NET 3.5 framework for your compilation? If so then as others have said I don't believe you will see much difference.
However, if you're talking about using the updated compilers, then there are various changes and breaking changes described for both C# and VB at the following links:
C# 3.0 and SP1 compiler changes
What's New in the Visual Basic Compiler
Visual Basic 2008 Breaking Changes
A:
I believe that a different compiler ships with each version of Visual Studio. For example in the case of C# the 2.0 compiler shipped with Visual Studio 2005 and the C# 3.0 shipped with Visual Studio 2008. Depending upon which version of Visual Studio you use you end up with a different compiler.
Targeting a framework refers to specifically which version of the framework you wish to target during the compilation process; targeting frameworks is a new feature of Visual Studio 2008. For instance I could have a solution open in Visual Studio 2008 and target v2.0 of .Net. The result would be that I wouldn't have any of the 3.0 or 3.5 .Net features available to me during that compilation, for example WPF.
A:
If your .NET assembly targets .NET 3.5, the resulting application will look for and require the .NET 3.5 libraries, and that's that. These libraries come with numerous additional classes not found in the .NET 2.0 framework, so that would be the advantage of targeting those libraries.
If you however compile C# code with the C# 3.0 compiler shipping with e.g. Visual Studio 2008 and suited for .NET 3.5, but have your assembly target .NET 2.0, you will still only need the regular .NET 2.0 libraries, and despite this actually use certain .NET 3.5 compiler features, as a number of those features are only making use of .NET 2.0 code in the end. Read more about this here: http://weblogs.asp.net/shahar/archive/2008/01/23/use-c-3-features-from-c-2-and-net-2-0-code.aspx
A:
3.5 has classes that 2.0 doesn't. Func<...> for instance. If you aim for 2.0, you can't use them.
Q:
Is it possible to reference control templates defined in microsoft's assemblies?
I have a scenario where I have to provide my own control template for a few WPF controls, e.g. GridViewHeader. When you take a look at the control template for GridViewHeader in Blend, it is aggregated from several other controls, which in some cases are styled for that control only, e.g. the splitter between columns.
Those templates, obviously, are resources hidden somewhere in system...dll (or somewhere in the themes DLLs).
So, my question is: is there a way to reference those predefined templates? So far, I've ended up having my own copies of them in my resources, but I don't like that approach.
Here is a sample scenario:
I have a GridViewColumnHeader:
<Style TargetType="{x:Type GridViewColumnHeader}" x:Key="gridViewColumnStyle">
    <Setter Property="HorizontalContentAlignment" Value="Stretch"/>
    <Setter Property="VerticalContentAlignment" Value="Stretch"/>
    <Setter Property="Background" Value="{StaticResource GridViewHeaderBackgroundColor}"/>
    <Setter Property="BorderBrush" Value="{StaticResource GridViewHeaderForegroundColor}"/>
    <Setter Property="BorderThickness" Value="0"/>
    <Setter Property="Padding" Value="2,0,2,0"/>
    <Setter Property="Foreground" Value="{StaticResource GridViewHeaderForegroundColor}"/>
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="{x:Type GridViewColumnHeader}">
                <Grid SnapsToDevicePixels="true" Tag="Header" Name="Header">
                    <ContentPresenter Name="HeaderContent" Margin="0,0,0,1" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" RecognizesAccessKey="True" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" />
                    <Canvas>
                        <Thumb x:Name="PART_HeaderGripper" Style="{StaticResource GridViewColumnHeaderGripper}"/>
                    </Canvas>
                </Grid>
                <ControlTemplate.Triggers>
                    <Trigger Property="IsMouseOver" Value="true">
                    </Trigger>
                    <Trigger Property="IsPressed" Value="true">
                        <Setter TargetName="HeaderContent" Property="Margin" Value="1,1,0,0"/>
                    </Trigger>
                    <Trigger Property="Height" Value="Auto">
                        <Setter Property="MinHeight" Value="20"/>
                    </Trigger>
                    <Trigger Property="IsEnabled" Value="false">
                        <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.GrayTextBrushKey}}"/>
                    </Trigger>
                </ControlTemplate.Triggers>
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>
So far, nothing interesting. But say I want to add some extra functionality straight into the template: I'd leave the content presenter as is, add my controls next to it, and I'd like to leave the Thumb with the defaults from the framework. I've found the themes provided by Microsoft here:
The theme for Thumb looks like this:
<Style x:Key="GridViewColumnHeaderGripper" TargetType="{x:Type Thumb}">
    <Setter Property="Canvas.Right" Value="-9"/>
    <Setter Property="Width" Value="18"/>
    <Setter Property="Height" Value="{Binding Path=ActualHeight,RelativeSource={RelativeSource TemplatedParent}}"/>
    <Setter Property="Padding" Value="0"/>
    <Setter Property="Background" Value="{StaticResource GridViewColumnHeaderBorderBackground}"/>
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="{x:Type Thumb}">
                <Border Padding="{TemplateBinding Padding}" Background="Transparent">
                    <Rectangle HorizontalAlignment="Center" Width="1" Fill="{TemplateBinding Background}"/>
                </Border>
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>
So far I have to copy & paste that style, while I'd prefer to get a reference to it from the resources.
A:
Referencing internal resources that are 100% subject to change isn't serviceable - better to just copy it.
A:
It is possible to reference them, but as paulbetts said, it's not recommended as they could change. Also consider whether what you are doing is truly 'correct'. Can you edit your question to explain why you need to do this exactly?
|
Is it possible to reference control templates defined in microsoft's assemblies?
|
i have scenario where i have to provide my own control template for a few WPF controls - i.e. GridViewHeader. when you take a look at control template for GridViewHEader in blend, it is agregated from several other controls, which in some cases are styled for that control only - i.e. this splitter between columns.
those templates, obviously are resources hidden somewhere in system...dll (or somewhwere in themes dll's).
so, my question is - is there a way to reference those predefined templates? so far, i've ended up having my own copies of them in my resources, but i don't like that approach.
here is sample scenario:
i have a GridViewColumnHeader:
<Style TargetType="{x:Type GridViewColumnHeader}" x:Key="gridViewColumnStyle">
<Setter Property="HorizontalContentAlignment" Value="Stretch"/>
<Setter Property="VerticalContentAlignment" Value="Stretch"/>
<Setter Property="Background" Value="{StaticResource GridViewHeaderBackgroundColor}"/>
<Setter Property="BorderBrush" Value="{StaticResource GridViewHeaderForegroundColor}"/>
<Setter Property="BorderThickness" Value="0"/>
<Setter Property="Padding" Value="2,0,2,0"/>
<Setter Property="Foreground" Value="{StaticResource GridViewHeaderForegroundColor}"/>
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="{x:Type GridViewColumnHeader}">
<Grid SnapsToDevicePixels="true" Tag="Header" Name="Header">
<ContentPresenter Name="HeaderContent" Margin="0,0,0,1" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" RecognizesAccessKey="True" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" />
<Canvas>
<Thumb x:Name="PART_HeaderGripper" Style="{StaticResource GridViewColumnHeaderGripper}"/>
</Canvas>
</Grid>
<ControlTemplate.Triggers>
<Trigger Property="IsMouseOver" Value="true">
</Trigger>
<Trigger Property="IsPressed" Value="true">
<Setter TargetName="HeaderContent" Property="Margin" Value="1,1,0,0"/>
</Trigger>
<Trigger Property="Height" Value="Auto">
<Setter Property="MinHeight" Value="20"/>
</Trigger>
<Trigger Property="IsEnabled" Value="false">
<Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.GrayTextBrushKey}}"/>
</Trigger>
</ControlTemplate.Triggers>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
so far - nothing interesting, but say, i want to add some extra functionality straight in the template - i'd leave content presenter as is, add my controls next to it and i'd like to leave Thumb with defaults from framework. i've found themes provided by microsoft here:
the theme for Thumb looks like that:
<Style x:Key="GridViewColumnHeaderGripper"
TargetType="{x:Type Thumb}">
<Setter Property="Canvas.Right"
Value="-9"/>
<Setter Property="Width"
Value="18"/>
<Setter Property="Height"
Value="{Binding Path=ActualHeight,RelativeSource={RelativeSource TemplatedParent}}"/>
<Setter Property="Padding"
Value="0"/>
<Setter Property="Background"
Value="{StaticResource GridViewColumnHeaderBorderBackground}"/>
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="{x:Type Thumb}">
<Border Padding="{TemplateBinding Padding}"
Background="Transparent">
<Rectangle HorizontalAlignment="Center"
Width="1"
Fill="{TemplateBinding Background}"/>
</Border>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
so far - i have to copy & paste that style, while i'd prefer to get reference to it from resources.
|
[
"Referencing internal resources that are 100% subject to change isn't serviceable - better to just copy it. \n",
"It is possible to reference them, but as paulbetts said, its not recommended as they could change. Also consider if what you are doing is truely 'correct'. Can you edit your question to explain why you need to do this exactly?\n"
] |
[
2,
0
] |
[] |
[] |
[
"wpf",
"xaml"
] |
stackoverflow_0000069398_wpf_xaml.txt
|
Q:
Why don't we get a compile time error even if we don't include stdio.h in a C program?
How does the compiler know the prototype of the sleep function or even the printf function, when I did not include any header file in the first place?
Moreover, if I specify sleep(1,1,"xyz") or any arbitrary number of arguments, the compiler still compiles it.
But the strange thing is that gcc is able to find the definition of this function at link time. I don't understand how this is possible, because the actual sleep() function takes a single argument only, but our program mentioned three arguments.
/********************************/
int main()
{
short int i;
for(i = 0; i<5; i++)
{
printf("%d",i);`print("code sample");`
sleep(1);
}
return 0;
}
A:
Lacking a more specific prototype, the compiler will assume that the function returns int and takes whatever number of arguments you provide.
Depending on the CPU architecture arguments can be passed in registers (for example, a0 through a3 on MIPS) or by pushing them onto the stack as in the original x86 calling convention. In either case, passing extra arguments is harmless. The called function won't use the registers passed in nor reference the extra arguments on the stack, but nothing bad happens.
Passing in fewer arguments is more problematic. The called function will use whatever garbage happened to be in the appropriate register or stack location, and hijinks may ensue.
A:
In classic C, you don't need a prototype to call a function. The compiler will infer that the function returns an int and takes an unknown number of parameters. This may work on some architectures, but it will fail if the function returns something other than int, like a structure, or if there are any parameter conversions.
In your example, sleep is seen and the compiler assumes a prototype like
int sleep();
Note that the argument list is empty. In C, this is NOT the same as void. This actually means "unknown". If you were writing K&R C code, you could have unknown parameters through code like
int sleep(t)
int t;
{
/* do something with t */
}
This is all dangerous, especially on some embedded chips where the way parameters are passed for an unprototyped function differs from one with a prototype.
Note: prototypes aren't needed for linking. Usually, the linker automatically links with a C runtime library like glibc on Linux. The association between your use of sleep and the code that implements it happens at link time long after the source code has been processed.
I'd suggest that you use the feature of your compiler to require prototypes to avoid problems like this. With GCC, it's the -Wstrict-prototypes command line argument. In the CodeWarrior tools, it was the "Require Prototypes" flag in the C/C++ Compiler panel.
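For illustration, here is the same program with the prototypes actually in scope - a sketch assuming a POSIX system, where sleep() is declared in unistd.h. Compiled with gcc -Wall -Wstrict-prototypes, the bogus three-argument call now fails at compile time instead of slipping through to the linker:
#include <stdio.h>      /* declares printf */
#include <unistd.h>     /* declares unsigned int sleep(unsigned int) */

int main(void)
{
    short int i;
    for (i = 0; i < 5; i++)
    {
        printf("%d", i);
        sleep(1);               /* checked against the real prototype */
        /* sleep(1, 1, "xyz");     would now be a compile-time error */
    }
    return 0;
}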
A:
C will guess int for unknown types. So, it probably thinks sleep has this prototype:
int sleep(int);
As for giving multiple parameters and linking...I'm not sure. That does surprise me. If that really worked, then what happened at run-time?
A:
This is to do with something called 'K & R C' and 'ANSI C'.
In good old K & R C, if something is not declared, it is assumed to be int.
So anything that looks like a function call, but is not declared as a function,
will automatically take a return value of 'int' and argument types depending
on the actual call.
However people later figured out that this can be very bad sometimes. So
several compilers added warnings. C++ made this an error. I think gcc has some
flag ( -ansi or -pedantic? ) which makes this condition an error.
So, in a nutshell, this is historical baggage.
A:
Other answers cover the probable mechanics (all guesses as compiler not specified).
The issue that you have is that your compiler and linker have not been set to enable every possible error and warning. For any new project there is (virtually) no excuse for not doing so. For legacy projects there is more excuse, but you should still strive to enable as many as possible.
A:
Depends on the compiler, but with gcc (for example, since that's the one you referred to), some of the standard (both C and POSIX) functions have builtin "compiler intrinsics". This means that the compiler library shipped with your compiler (libgcc in this case) contains an implementation of the function. The compiler will allow an implicit declaration (i.e., using the function without a header), and the linker will find the implementation in the compiler library because you're probably using the compiler as a linker front-end.
Try compiling your objects with the '-c' flag (compile only, no link), and then link them directly using the linker. You will find that you get the linker errors you expect.
Alternatively, gcc supports options to disable the use of intrinsics: -fno-builtin or for granular control, -fno-builtin-function. There are further options that may be useful if you're doing something like building a homebrew kernel or some other kind of on-the-metal app.
A:
In a non-toy example another file may include the one you missed. Reviewing the output from the pre-processor is a nice way to see what you end up compiling.
|
Why don't we get a compile time error even if we don't include stdio.h in a C program?
|
How does the compiler know the prototype of the sleep function or even the printf function, when I did not include any header file in the first place?
Moreover, if I specify sleep(1,1,"xyz") or any arbitrary number of arguments, the compiler still compiles it.
But the strange thing is that gcc is able to find the definition of this function at link time. I don't understand how this is possible, because the actual sleep() function takes a single argument only, but our program mentioned three arguments.
/********************************/
int main()
{
short int i;
for(i = 0; i<5; i++)
{
printf("%d",i);`print("code sample");`
sleep(1);
}
return 0;
}
|
[
"Lacking a more specific prototype, the compiler will assume that the function returns int and takes whatever number of arguments you provide.\nDepending on the CPU architecture arguments can be passed in registers (for example, a0 through a3 on MIPS) or by pushing them onto the stack as in the original x86 calling convention. In either case, passing extra arguments is harmless. The called function won't use the registers passed in nor reference the extra arguments on the stack, but nothing bad happens.\nPassing in fewer arguments is more problematic. The called function will use whatever garbage happened to be in the appropriate register or stack location, and hijinks may ensue.\n",
"In classic C, you don't need a prototype to call a function. The compiler will infer that the function returns an int and takes a unknown number of parameters. This may work on some architectures, but it will fail if the function returns something other than int, like a structure, or if there are any parameter conversions.\nIn your example, sleep is seen and the compiler assumes a prototype like\nint sleep();\n\nNote that the argument list is empty. In C, this is NOT the same as void. This actually means \"unknown\". If you were writing K&R C code, you could have unknown parameters through code like\nint sleep(t)\nint t;\n{\n /* do something with t */\n}\n\nThis is all dangerous, especially on some embedded chips where the way parameters are passed for a unprototyped function differs from one with a prototype.\nNote: prototypes aren't needed for linking. Usually, the linker automatically links with a C runtime library like glibc on Linux. The association between your use of sleep and the code that implements it happens at link time long after the source code has been processed.\nI'd suggest that you use the feature of your compiler to require prototypes to avoid problems like this. With GCC, it's the -Wstrict-prototypes command line argument. In the CodeWarrior tools, it was the \"Require Prototypes\" flag in the C/C++ Compiler panel.\n",
"C will guess int for unknown types. So, it probably thinks sleep has this prototype:\nint sleep(int);\n\nAs for giving multiple parameters and linking...I'm not sure. That does surprise me. If that really worked, then what happened at run-time?\n",
"This is to do with something called 'K & R C' and 'ANSI C'.\nIn good old K & R C, if something is not declared, it is assumed to be int. \nSo any thing that looks like a function call, but not declared as function\nwill automatically take return value of 'int' and argument types depending\non the actuall call.\nHowever people later figured out that this can be very bad sometimes. So \nseveral compilers added warning. C++ made this error. I think gcc has some \nflag ( -ansic or -pedantic? ) , which make this condition an error.\nSo, In a nutshell, this is historical baggage.\n",
"Other answers cover the probable mechanics (all guesses as compiler not specified).\nThe issue that you have is that your compiler and linker have not been set to enable every possible error and warning. For any new project there is (virtually) no excuse for not doing so. for legacy projects more excuse - but should strive to enable as many as possible\n",
"Depends on the compiler, but with gcc (for example, since that's the one you referred to), some of the standard (both C and POSIX) functions have builtin \"compiler intrinsics\". This means that the compiler library shipped with your compiler (libgcc in this case) contains an implementation of the function. The compiler will allow an implicit declaration (i.e., using the function without a header), and the linker will find the implementation in the compiler library because you're probably using the compiler as a linker front-end.\nTry compiling your objects with the '-c' flag (compile only, no link), and then link them directly using the linker. You will find that you get the linker errors you expect.\nAlternatively, gcc supports options to disable the use of intrinsics: -fno-builtin or for granular control, -fno-builtin-function. There are further options that may be useful if you're doing something like building a homebrew kernel or some other kind of on-the-metal app.\n",
"In a non-toy example another file may include the one you missed. Reviewing the output from the pre-processor is a nice way to see what you end up with compiling.\n"
] |
[
10,
5,
2,
2,
2,
1,
1
] |
[] |
[] |
[
"c",
"compiler_construction"
] |
stackoverflow_0000068843_c_compiler_construction.txt
|
Q:
Leaving your harddrive shared
The leaving your wireless network open question reminded me of this.
I typically share the root drive on my machines across my network, and tie login authorization to the machine's NT ID, so there is at least some form of protection.
My question: how easy is it to gain access to these drives with ill intent? Is the authorization enough, or should I lock things down more?
A:
If this is a home network with no wifi or secured wifi, it's probably not an issue. Your ISP will almost certainly prevent anyone from trying anything via the larger web.
If you have open wifi, then there's a little more cause for concern. If it's properly secured so that some authentication is required, you're probably okay. I mean, a determined hacker could probably break in, but you're not likely to find a determined hacker in wi-fi range. But the risk (if small) is there. You will want to make sure the administrative shares (the \\yourmachine\c$ or \\yourmachine\admin$ mentioned earlier) are disabled if you have open wifi. No sense making it too easy.
A:
I can't answer the main question, but do keep in mind that Windows, by default, is always sharing the roots of your drives. Try:
\\yourmachine\c$
(And then try not to freak out.)
A:
Windows generally protects shares via two methods - permissions on the share itself, and then NTFS file permissions. Good practice would be to have the share permissions as "Authenticated User" and remove the "Everyone" group.
Personally I would make sure that usernames and passwords match up on each computer, and control permissions like that, rather than using computer name.
|
Leaving your harddrive shared
|
The leaving your wireless network open question reminded me of this.
I typically share the root drive on my machines across my network, and tie login authorization to the machine's NT ID, so there is at least some form of protection.
My question: how easy is it to gain access to these drives with ill intent? Is the authorization enough, or should I lock things down more?
|
[
"If this is a home network with no wifi or secured wifi, it's probably not an issue. Your isp will almost certainly prevent anyone from trying anything via the larger web.\nIf you have open wifi, then there's a little more cause for concern. If it's properly secured so that some authentication is required, you're probably okay. I mean, a determined hacker could probably break in, but you're not likely to find a determined hacker in wi-fi range. But the risk (if small) is there. You will want to make sure the administrative shares (the \\\\yourmachine\\c$ or \\\\yourmachine\\admin$ mentioned earlier) are disabled if you have open wifi. No sense making it too easy.\n",
"I can't answer the main question, but do keep in mind that Windows, by default, is always sharing the roots of your drives. Try:\n\\\\yourmachine\\c$\n\n(And then try not to freak out.)\n",
"Windows generally protects shares via two methods - permissions on the share itself, and then NTFS file permissions. Good practice would be to have the share permissions as \"Authenticated User\" and remove the \"Everyone\" group.\nPersonally I would make sure that usernames and passwords match up on each computer, and control permissions like that, rather than using computer name.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"networking",
"security",
"sysadmin",
"windows"
] |
stackoverflow_0000032991_networking_security_sysadmin_windows.txt
|
Q:
Missing classes in WMI when non-admin
I'd like to be able to see Win32_PhysicalMedia information when logged in as
a Limited User in Windows XP (no admin rights). It works ok when logged in as Admin,
WMIDiag has just given a clean bill of health, and Win32_DiskDrive class
produces information correctly, but Win32_PhysicalMedia produces a count of 0
for this code
set WMI = GetObject("winmgmts:\\.\root\cimv2")
set objs = WMI.InstancesOf("Win32_PhysicalMedia")
wscript.echo objs.count
Alternatively, if the hard disk serial number as found on the SerialNumber
property of the physical drives is available in another class which I can
read as a limited user please let me know. I am not attempting to write to
any property with WMI, but I can't read this when running as a Limited User.
Interestingly, DiskDrive misses out the Signature property, which would do for
my application when run as a Limited User but is present when run from an
Admin account.
A:
WMI does not give limited users this information.
If you can access Win32 functions from your language, you can call GetVolumeInformation.
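If the volume serial number (the value GetVolumeInformation reports, which changes on reformat and is not the manufacturer's drive serial) is enough for your purposes, a sketch like this reads it via Win32_LogicalDisk, which limited users can normally query:
Set WMI = GetObject("winmgmts:\\.\root\cimv2")
Set disks = WMI.InstancesOf("Win32_LogicalDisk")
For Each disk In disks
    ' VolumeSerialNumber is the volume serial, not the drive's
    ' manufacturer serial from Win32_PhysicalMedia.
    WScript.Echo disk.DeviceID & " " & disk.VolumeSerialNumber
Next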
|
Missing classes in WMI when non-admin
|
I'd like to be able to see Win32_PhysicalMedia information when logged in as
a Limited User in Windows XP (no admin rights). It works ok when logged in as Admin,
WMIDiag has just given a clean bill of health, and Win32_DiskDrive class
produces information correctly, but Win32_PhysicalMedia produces a count of 0
for this code
set WMI = GetObject("winmgmts:\\.\root\cimv2")
set objs = WMI.InstancesOf("Win32_PhysicalMedia")
wscript.echo objs.count
Alternatively, if the hard disk serial number as found on the SerialNumber
property of the physical drives is available in another class which I can
read as a limited user please let me know. I am not attempting to write to
any property with WMI, but I can't read this when running as a Limited User.
Interestingly, DiskDrive misses out the Signature property, which would do for
my application when run as a Limited User but is present when run from an
Admin account.
|
[
"WMI does not give limited users this information.\nIf you can access Win32 functions from your language, you can call GetVolumeInformation.\n"
] |
[
1
] |
[] |
[] |
[
"vbscript",
"wmi"
] |
stackoverflow_0000063940_vbscript_wmi.txt
|
Q:
How To Discover RSS Feeds for a given URL
I get a URL from a user. I need to know:
a) is the URL a valid RSS feed?
b) if not is there a valid feed associated with that URL
using PHP/Javascript or something similar
(Ex. http://techcrunch.com fails a), but b) would return their RSS feed)
A:
Found something that I wanted:
Google's AJAX Feed API has a load feed and lookup feed function (Docs here).
a) Load feed provides the feed (and feed status) in JSON
b) Lookup feed provides the RSS feed for a given URL
There's also a find feed function that searches for RSS feeds based on a keyword.
Planning to use this with JQuery's $.getJSON
A:
The Zend Feed class of the Zend-framework can automatically parse a webpage and list the available feeds.
Example:
$feedArray = Zend_Feed::findFeeds('http://www.example.com/news.html');
A:
This link will allow you to validate the link against the RSS/Atom specifications using the W3C specs, but does require you to manually enter the url.
There are a number of ways to do this programmatically, depending on your choice of language - in PHP, parsing the file as valid XML is a good way to start, then compare it to the relevant DTD.
For b), if the link itself isn't a feed, you can parse it and look for a specified feed in the <head> section of the page, searching for a link whose type is "application/rss+xml", e.g:
<link rel="alternate" title="RSS Feed"
href="http://www.example.com/rss-feed.xml" type="application/rss+xml" />
This type of link is the one used by most browsers to "auto-discover" feeds (causing the RSS icon to appear in your address bar)
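A minimal PHP sketch of that autodiscovery step (assuming allow_url_fopen is enabled; error handling omitted):
<?php
$html = file_get_contents('http://techcrunch.com/');
$doc = new DOMDocument();
@$doc->loadHTML($html); // real-world markup is rarely valid; silence warnings
foreach ($doc->getElementsByTagName('link') as $link) {
    $type = strtolower($link->getAttribute('type'));
    if ($link->getAttribute('rel') == 'alternate' &&
        ($type == 'application/rss+xml' || $type == 'application/atom+xml')) {
        echo $link->getAttribute('href'), "\n"; // candidate feed URL
    }
}
?>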
A:
a) Retrieve it and try to parse it. If you can parse it, it's valid.
b) Test if it's an HTML document (server sent text/html) MIME-type. If so, run it through an HTML parser and look for <link> elements with RSS feed relations.
A:
For Perl, there is Feed::Find, which automates the discovery of syndication feeds from a webpage. The usage is quite simple:
use Feed::Find;
my @feeds = Feed::Find->find('http://example.com/');
It first tries the link tags and then scans the a tags for files named .rss and something like that.
A:
Are you doing this in a specific language, or do you just want details about the RSS specification?
In general, look for the XML prolog:
<?xml version="1.0" encoding="UTF-8"?>
followed by an <rss> element, but you might want to validate it as XML, fully validate it against a DTD, or verify that - for example, each URL referred to is valid, etc. More detail would help.
UPDATE: Ah - PHP. I've found this library to be pretty useful: MagpieRSS
|
How To Discover RSS Feeds for a given URL
|
I get a URL from a user. I need to know:
a) is the URL a valid RSS feed?
b) if not is there a valid feed associated with that URL
using PHP/Javascript or something similar
(Ex. http://techcrunch.com fails a), but b) would return their RSS feed)
|
[
"Found something that I wanted:\nGoogle's AJAX Feed API has a load feed and lookup feed function (Docs here).\na) Load feed provides the feed (and feed status) in JSON\nb) Lookup feed provides the RSS feed for a given URL\nTheres also a find feed function that searches for RSS feeds based on a keyword.\nPlanning to use this with JQuery's $.getJSON\n",
"The Zend Feed class of the Zend-framework can automatically parse a webpage and list the available feeds.\nExample: \n$feedArray = Zend_Feed::findFeeds('http://www.example.com/news.html');\n\n",
"This link will allow you to validate the link against the RSS/Atom specifications using the W3C specs, but does require you to manually enter the url.\nThere are a number of ways to do this programmatically, depending on your choice of language - in PHP, parsing the file as valid XML is a good way to start, then compare it to the relevant DTD.\nFor b), if the link itself isn't a feed, you can parse it and look for a specified feed in the <head> section of the page, searching for a link whose type is \"application/rss+xml\", e.g:\n<link rel=\"alternate\" title=\"RSS Feed\" \n href=\"http://www.example.com/rss-feed.xml\" type=\"application/rss+xml\" />\n\nThis type of link is the one used by most browsers to \"auto-discover\" feeds (causing the RSS icon to appear in your address bar)\n",
"a) Retrieve it and try to parse it. If you can parse it, it's valid.\nb) Test if it's an HTML document (server sent text/html) MIME-type. If so, run it through an HTML parser and look for <link> elements with RSS feed relations.\n",
"For Perl, there is Feed::Find , which does automate the discovery of syndication feeds from the webpage. The usage is quite simplicistic:\nuse Feed::Find;\nmy @feeds = Feed::Find->find('http://example.com/');\n\nIt first tries the link tags and then scans the a tags for files named .rss and something like that.\n",
"Are you doing this in a specific language, or do you just want details about the RSS specification?\nIn general, look for the XML prolog:\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\nfollowed by an <rss> element, but you might want to validate it as XML, fully validate it against a DTD, or verify that - for example, each URL referred to is valid, etc. More detail would help.\nUPDATE: Ah - PHP. I've found this library to be pretty useful: MagpieRSS\n"
] |
[
20,
10,
6,
5,
4,
2
] |
[] |
[] |
[
"atom_feed",
"feed",
"php",
"rss"
] |
stackoverflow_0000061535_atom_feed_feed_php_rss.txt
|
Q:
Programmatically handling the Vista Sidebar
Is there an api to bring the vista sidebar to the front (Win+Space) programmatically and to do the reverse (send it to the background).
A:
Probably using SetWindowPos you can change it to be placed at the top / bottom of the z-order, or even as the top-most window. You would need to find the handle to the sidebar using FindWindow or an application like WinSpy.
But after that, something like this:
Sets the window on top, but not top-most.
SetWindowPos(sidebarHandle, HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
Sets the window at the bottom.
SetWindowPos(sidebarHandle, HWND_BOTTOM, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
This is my best guess on achieving what you asked, hopefully it helps.
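Filling in the FindWindow step, a fragment like this would do it. Note the class name here is a placeholder, not the sidebar's documented class - look up the real one with WinSpy or Spy++ first:
// Placeholder class name -- verify with WinSpy/Spy++ before relying on it.
HWND sidebarHandle = FindWindow(TEXT("SidebarWindowClass"), NULL);
if (sidebarHandle != NULL)
{
    // Send the sidebar to the back of the z-order without moving/resizing it.
    SetWindowPos(sidebarHandle, HWND_BOTTOM, 0, 0, 0, 0,
                 SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
}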
A:
You probably shouldn't do it at all, since such action may annoy the user when executed at the wrong time (95% of cases*), just like stealing focus with a "Yes/No" prompt.
Unless your product's task is to toggle the sidebar of course. ;)
There's no official API for that anyway.
*Purely hypothetical figure
|
Programmatically handling the Vista Sidebar
|
Is there an api to bring the vista sidebar to the front (Win+Space) programmatically and to do the reverse (send it to the background).
|
[
"Probably using SetWindowPos you can change it to be placed the top / bottom of the z-order or even as the top-most window. You would need to find the handle to the sidebar using FindWindow or an application like WinSpy.\nBut after that something like.\nSets the window on top, but not top most.\nSetWindowPos(sidebarHandle, HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NORESIZE);\n\nSets the window at the bottom.\nSetWindowPos(sidebarHandle, HWND_BOTTOM, 0, 0, 0, 0, SWP_NOMOVE | SWP_NORESIZE);\n\nThis is my best guess on achieving what you asked, hopefully it helps.\n",
"You probably shouldn't do it at all, since such action may annoy the user when executed at the wrong time (95% of cases*), just like stealing focus with a \"Yes/No\" prompt.\nUnless your product's task is to toggle the sidebar of course. ;)\nThere's no official API for that anyway.\n*Purely hypothetical figure\n"
] |
[
1,
0
] |
[] |
[] |
[
"c#",
"sidebar",
"windows_vista"
] |
stackoverflow_0000071694_c#_sidebar_windows_vista.txt
|
Q:
In Lucene how do terms get used in calculating scores, can I override it with a CustomScoreQuery?
Has someone successfully overridden the scoring of documents in a query so that the "relevancy" of a term to the field contents can be determined through one's own function? If so, was it by implementing a CustomScoreQuery and overriding the customScore(int, float, float)? I cannot seem to find a way to build either a custom sort or a custom scorer that can rank exact term matches much higher than other prefix term matches. Any suggestions would be appreciated.
A:
I don't know lucene directly, but I can tell you that Solr, an application based on lucene, has got this feature:
Boosting query via functions
Let me know if it helps you.
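For raw Lucene, the CustomScoreQuery route from the question does work: subclass it and override customScore. A bare-bones sketch against the Lucene 2.x-era API - the exact-match test is a stub, since detecting exactness (e.g. via a stored flag field or a FieldCache lookup) depends on how your index is built, and the 100f boost factor is arbitrary:
import org.apache.lucene.search.Query;
import org.apache.lucene.search.function.CustomScoreQuery;

public class ExactMatchBoostQuery extends CustomScoreQuery {
    public ExactMatchBoostQuery(Query subQuery) {
        super(subQuery);
    }

    @Override
    public float customScore(int doc, float subQueryScore, float valSrcScore) {
        // Boost documents judged to be exact matches well above prefix matches.
        return isExactMatch(doc) ? subQueryScore * 100.0f : subQueryScore;
    }

    private boolean isExactMatch(int doc) {
        return false; // stub: implement with your own exactness signal
    }
}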
|
In Lucene how do terms get used in calculating scores, can I override it with a CustomScoreQuery?
|
Has someone successfully overridden the scoring of documents in a query so that the "relevancy" of a term to the field contents can be determined through one's own function? If so, was it by implementing a CustomScoreQuery and overriding the customScore(int, float, float)? I cannot seem to find a way to build either a custom sort or a custom scorer that can rank exact term matches much higher than other prefix term matches. Any suggestions would be appreciated.
|
[
"I don't know lucene directly, but I can tell you that Solr, an application based on lucene, has got this feature:\nBoosting query via functions\nLet me know if it helps you.\n"
] |
[
1
] |
[] |
[] |
[
"lucene",
"scoring"
] |
stackoverflow_0000045002_lucene_scoring.txt
|
Q:
How do you find out which NIC is connected to the internet?
Consider the following setup:
A windows PC with a LAN interface and a WiFi interface (the standard for any new laptop). Each of the interfaces might be connected or disconnected from a network. I need a way to determine which one of the adapters is the one connected to the internet - specifically, in case they are both connected to different networks, one with connection to the internet and one without.
My current solution involves using IPHelper's "GetBestInterface" function and supplying it with the IP address "0.0.0.0".
Do you have any other solutions you might suggest to this problem?
Following some of the answers, let me elaborate:
I need this because I have a product that has to choose which adapter to bind to. I have no way of controlling the setup of the network or the host where the product will run and so I need a solution that is as robust as possible, with as few assumptions as possible.
I need to do this in code, since this is part of a product.
@Chris Upchurch: This makes me dependent on google.com being up (usually not a problem) and on any personal firewall that might be installed to allow pinging.
@Till: Like Steve Moon said, relying on the adapter's address is kind of risky because you make a lot of assumptions on the internal network setup.
@Steve Moon: Looking at the routing table sounds like a good idea, but instead of applying the routing logic myself, I am trying to use "GetBestInterface" as described above. I believe what it should do is exactly what you outlined in your answer, but I am not really sure. The reason I'm reluctant to implement my own "routing logic" is that there's a better chance that I'll get it wrong than if I use a library/API written and tested by more "hard-core" network people.
A:
Technically, there is no "connected to the Internet". The real question is, which interface is routeable to a desired address. Right now, you're querying for the "default route" - the one that applies if no specific route to destination exists. But, you're ignoring any specific routes.
Fortunately, for 99.9% of home users, that'll do the trick. They're not likely to have much of a routing table, and GetBestInterface will automatically prefer wired over wireless - so you should be good. Throw in an override option for the .1% of cases you screw up, and call it a day.
But, for corporate use, you should be using GetBestInterface for a specific destination - otherwise, you'll have issues if someone is on the same LAN as your destination (which means you should take the "internal" interface, not the "external") or has a specific route to your destination (my internal network could peer with your destination's network, for instance).
Then again, I'm not sure what you plan to do with this adapter "connected to the Internet", so it might not be a big deal.
A:
Apparently, in Vista there are new interfaces that enable querying for internet connectivity and more. Take a look at the NLM Interfaces and specifically at INetworkConnection - you can specifically query if the network connection has internet connectivity using the GetConnectivity method.
See also: Network Awareness on Windows Vista
Unfortunately, this is only available on Vista, so for XP I'd have to keep my original heuristic.
A:
I'd look at the routing table. Whichever NIC has an 0.0.0.0 route AND is enabled AND has the lowest metric, is the nic that's currently sending packets to the internet.
So in my case, the top one is the 'internet nic'.
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 10.0.0.10 10.0.0.51 20
0.0.0.0 0.0.0.0 10.0.0.10 10.0.0.50 25
(much other stuff deleted)
Another alternative is to ping or GetBestInterface 4.2.2.2 - this is an old and venerable DNS server, currently held by GTEI; formerly by Sprint if I remember right.
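For completeness, a minimal C sketch of the GetBestInterface call (4.2.2.2 is only an example destination; swap in whatever address matters to you):
#include <winsock2.h>
#include <iphlpapi.h>
#include <stdio.h>

#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    DWORD ifIndex = 0;
    IPAddr dest = inet_addr("4.2.2.2");  /* example public address */

    if (GetBestInterface(dest, &ifIndex) == NO_ERROR)
        printf("Best interface index: %lu\n", (unsigned long)ifIndex);
    else
        printf("GetBestInterface failed\n");
    return 0;
}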
A:
Start > Run > cmd.exe (this works in XP and Vista): ipconfig /all
This displays all info about the interfaces in your computer. The "public" facing interface should have a public IP address. For starters, it should not be 192.168.x.x or 10.x.x.x :)
A:
Look at the routing table? Generally, unless you're routing between the networks in windows (which is possible, but unusual for a client computer these days) the interface that holds the default route is going to have the Internet connection.
Your question didn't detail why or what you're doing this with so I can't provide any specifics. The command line tool "route" may be of some help, but there are probably libraries for whatever programming language you're using to look at the routing table.
You can't rely on the IP address of the interface (e.g., assuming an RFC-1918 address [192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8] isn't the internet) since most sites have some kind of NATed firewall or proxy setup and the "internet" interface is really on a "private" lan that gets you out to the Internet.
UPDATE: Based on your further information, it sounds like you have a decent solution. I'm not so sure about the choice of 0.0.0.0 since that's a boundary case for IP address -- might be OK on your particular mix of platform/language. Sounds (from the API description) like you could just specify an address, so why not some address known to be on the Internet, e.g. the IP address of your web site, or something more random like 65.66.67.68? Just make sure not to pick one of the rfc-1918 addresses, or the localhost range (127.0.0.0/8), or multicast, any other reserved range, and any address that resolves to a .mil or .gov (while it doesn't sound like getbestinterface sends any traffic, it would suck to find out by having the feds break your door down... :)
A:
running traceroute to some public site will show you. Of course, there may be more than one interface that would get you there.
A:
Looking at it from the network point of view, either could be routing to the "internet" at any time. If things like spanning tree protocol are enabled on a switch then you may find that what was the routing card to begin with may not be anymore.
|
How do you find out which NIC is connected to the internet?
|
Consider the following setup:
A windows PC with a LAN interface and a WiFi interface (the standard for any new laptop). Each of the interfaces might be connected or disconnected from a network. I need a way to determine which one of the adapters is the one connected to the internet - specifically, in case they are both connected to different networks, one with connection to the internet and one without.
My current solution involves using IPHelper's "GetBestInterface" function and supplying it with the IP address "0.0.0.0".
Do you have any other solutions you might suggest to this problem?
Following some of the answers, let me elaborate:
I need this because I have a product that has to choose which adapter to bind to. I have no way of controlling the setup of the network or the host where the product will run and so I need a solution that is as robust as possible, with as few assumptions as possible.
I need to do this in code, since this is part of a product.
@Chris Upchurch: This makes me dependent on google.com being up (usually not a problem) and on any personal firewall that might be installed to allow pinging.
@Till: Like Steve Moon said, relying on the adapter's address is kind of risky because you make a lot of assumptions on the internal network setup.
@Steve Moon: Looking at the routing table sounds like a good idea, but instead of applying the routing logic myself, I am trying to use "GetBestInterface" as described above. I believe what it should do is exactly what you outlined in your answer, but I am not really sure. The reason I'm reluctant to implement my own "routing logic" is that there's a better chance that I'll get it wrong than if I use a library/API written and tested by more "hard-core" network people.
|
[
"Technically, there is no \"connected to the Internet\". The real question is, which interface is routeable to a desired address. Right now, you're querying for the \"default route\" - the one that applies if no specific route to destination exists. But, you're ignoring any specific routes.\nFortunately, for 99.9% of home users, that'll do the trick. They're not likely to have much of a routing table, and GetBestInterface will automatically prefer wired over wireless - so you should be good. Throw in an override option for the .1% of cases you screw up, and call it a day. \nBut, for corporate use, you should be using GetBestInterface for a specific destination - otherwise, you'll have issues if someone is on the same LAN as your destination (which means you should take the \"internal\" interface, not the \"external\") or has a specific route to your destination (my internal network could peer with your destination's network, for instance).\nThen again, I'm not sure what you plan to do with this adapter \"connected to the Internet\", so it might not be a big deal.\n",
"Apparently, in Vista there are new interfaces that enable querying for internet connectivity and more. Take a look at the NLM Interfaces and specifically at INetworkConnection - you can specifically query if the network connection has internet connectivity using the GetConnectivity method.\nSee also: Network Awareness on Windows Vista\nUnfortunately, this is only available on Vista, so for XP I'd have to keep my original heuristic.\n",
"I'd look at the routing table. Whichever NIC has an 0.0.0.0 route AND is enabled AND has the lowest metric, is the nic that's currently sending packets to the internet.\nSo in my case, the top one is the 'internet nic'.\nIPv4 Route Table\n===========================================================================\nActive Routes:\nNetwork Destination Netmask Gateway Interface Metric\n 0.0.0.0 0.0.0.0 10.0.0.10 10.0.0.51 20\n 0.0.0.0 0.0.0.0 10.0.0.10 10.0.0.50 25\n(much other stuff deleted)\n\nAnother alternative is to ping or GetBestInterface 4.2.2.2 - this is an old and venerable DNS server, currently held by GTEI; formerly by Sprint if I remember right.\n",
"Start > Run > cmd.exe (this works in XP and Vista): ipconfig /all\nThis displays all info about the interfaces in your computer. The \"public\" facing interface should have a public IP address. For starters, it should not be 192.168.x.x or 10.x.x.x :)\n",
"Look at the routing table? Generally, unless you're routing between the networks in windows (which is possible, but unusual for a client computer these days) the interface that holds the default route is going to have the Internet connection.\nYour question didn't detail why or what you're doing this with so I can't provide any specifics. The command line tool \"route\" may be of some help, but there are probably libraries for whatever programming language you're using to look at the routing table.\nYou can't rely on the IP address of the interface (e.g., assuming an RFC-1918 address [192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8] isn't the internet) since most sites have some kind of NATed firewall or proxy setup and the \"internet\" interface is really on a \"private\" lan that gets you out to the Internet.\nUPDATE: Based on your further information, it sounds like you have a decent solution. I'm not so sure about the choice of 0.0.0.0 since that's a boundary case for IP address -- might be OK on your particular mix of platform/language. Sounds (from the API description) like you could just specify an address, so why not some address known to be on the Internet, e.g. the IP address of your web site, or something more random like 65.66.67.68? Just make sure not to pick one of the rfc-1918 addresses, or the localhost range (127.0.0.0/8), or multicast, any other reserved range, and any address that resolves to a .mil or .gov (while it doesn't sound like getbestinterface sends any traffic, it would suck to find out by having the feds break your door down... :)\n",
"running traceroute to some public site will show you. Of course, there may be more than one interface that would get you there.\n",
"Looking at the network point of view, either could be routing to the \"internet\" at any time. If things like spanning tree protocol are enabled on a switch then you may find that what may have been the routing card to begin with may not be anymore.\n"
] |
[
4,
1,
1,
0,
0,
0,
0
] |
[
"Ping google.com though each NIC.\n"
] |
[
-1
] |
[
"iphelper",
"networking",
"windows"
] |
stackoverflow_0000038864_iphelper_networking_windows.txt
|
Q:
How do I make the lights stay fixed in the world with Direct3D
I've been using OpenGL for years, but after trying to use D3D for the first time, I wasted a significant amount of time trying to figure out how to make my scene lights stay fixed in the world rather than fixed on my objects.
In OpenGL light positions get transformed just like everything else with the MODELVIEW matrix, so to get lights fixed in space, you set up your MODELVIEW the way you want for the lights, and call glLightPosition then set it up for your geometry and make geometry calls. In D3D that doesn't help.
(Comment -- I eventually figured out the answer to this one, but I couldn't find anything helpful on the web or in the MSDN. It would have saved me a few hours of head scratching if I could have found this answer then.)
A:
The answer I discovered eventually was that while OpenGL only has its one amalgamated MODELVIEW matrix, in D3D the "world" and "view" transforms are kept separate, and placing lights seems to be the major reason for this. So the answer is you use D3DTS_VIEW to set up matrices that should apply to your lights, and D3DTS_WORLD to set up matrices that apply to the placement of your geometry in the world.
So actually the D3D system kinda makes more sense than the OpenGL way. It allows you to specify your light positions whenever and wherever the heck you feel like it once and for all, without having to constantly reposition them so that they get transformed by your current "view" transform. OpenGL has to work that way because it simply doesn't know what you think your "view" is vs your "model". It's all just a modelview to GL.
(Comment - apologies if I'm not supposed to answer my own questions here, but this was a real question that I had a few weeks ago and thought it was worth posting here to help others making the shift from OpenGL to D3D. Basic overviews of the D3D lighting and rendering pipeline seem hard to come by.)
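To make the split concrete, here is a minimal fixed-function sketch (device is assumed to be an IDirect3DDevice9*; the matrices and the light's numbers are illustrative):
// Camera: the view transform does NOT move the lights.
device->SetTransform(D3DTS_VIEW, &viewMatrix);

// Place a point light once, directly in world coordinates.
D3DLIGHT9 light;
ZeroMemory(&light, sizeof(light));
light.Type         = D3DLIGHT_POINT;
light.Diffuse.r    = light.Diffuse.g = light.Diffuse.b = 1.0f;
light.Position.x   = 10.0f;  // world-space position
light.Position.y   = 20.0f;
light.Position.z   = 0.0f;
light.Range        = 100.0f;
light.Attenuation0 = 1.0f;
device->SetLight(0, &light);
device->LightEnable(0, TRUE);
device->SetRenderState(D3DRS_LIGHTING, TRUE);

// Per object: only the world transform changes.
device->SetTransform(D3DTS_WORLD, &objectWorldMatrix);
// device->DrawPrimitive(...);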
A:
For the fixed function pipeline, the light's position and direction are set in world space. The docs for the light structures do tell you that, but I'm not surprised that you missed it in the docs. There's not much information on the fixed function pipeline anymore as the focus has moved to programmable shaders.
|
How do I make the lights stay fixed in the world with Direct3D
|
I've been using OpenGL for years, but after trying to use D3D for the first time, I wasted a significant amount of time trying to figure out how to make my scene lights stay fixed in the world rather than fixed on my objects.
In OpenGL light positions get transformed just like everything else with the MODELVIEW matrix, so to get lights fixed in space, you set up your MODELVIEW the way you want for the lights, and call glLightPosition then set it up for your geometry and make geometry calls. In D3D that doesn't help.
(Comment -- I eventually figured out the answer to this one, but I couldn't find anything helpful on the web or in the MSDN. It would have saved me a few hours of head scratching if I could have found this answer then.)
|
[
"The answer I discovered eventually was that while OpenGL only has its one amalgamated MODELVIEW matrix, in D3D the \"world\" and \"view\" transforms are kept separate, and placing lights seems to be the major reason for this. So the answer is you use D3DTS_VIEW to set up matrices that should apply to your lights, and D3DTS_WORLD to set up matrices that apply to the placement of your geometry in the world.\nSo actually the D3D system kinda makes more sense than the OpenGL way. It allows you to specify your light positions whenever and wherever the heck you feel like it once and for all, without having to constantly reposition them so that they get transformed by your current \"view\" transform. OpenGL has to work that way because it simply doesn't know what you think your \"view\" is vs your \"model\". It's all just a modelview to GL.\n(Comment - apologies if I'm not supposed to answer my own questions here, but this was a real question that I had a few weeks ago and thought it was worth posting here to help others making the shift from OpenGL to D3D. Basic overviews of the D3D lighting and rendering pipeline seem hard to come by.)\n",
"For the fixed function pipeline, the lights position and direction are set in world space. The docs for the light structures do tell you that, but I'm not surprised that you missed it in the docs. There's not much information on the fixed function pipeline anymore as the focus has to programmable shaders.\n"
] |
[
2,
0
] |
[] |
[] |
[
"direct3d",
"directx",
"lighting"
] |
stackoverflow_0000071720_direct3d_directx_lighting.txt
|
Q:
Effective strategy for leaving an audit trail/change history for DB applications?
What are some strategies that people have had success with for maintaining a change history for data in a fairly complex database. One of the applications that I frequently use and develop for could really benefit from a more comprehensive way of tracking how records have changed over time. For instance, right now records can have a number of timestamp and modified user fields, but we currently don't have a scheme for logging multiple changes, for instance if an operation is rolled back. In a perfect world, it would be possible to reconstruct the record as it was after each save, etc.
Some info on the DB:
Needs to have the capacity to grow by thousands of records per week
50-60 Tables
Main revisioned tables may have several million records each
Reasonable amount of foreign keys and indexes set
Using PostgreSQL 8.x
A:
One strategy you could use is MVCC, Multi-Version Concurrency Control. In this scheme, you never do updates to any of your tables, you just do inserts, maintaining version numbers for each record. This has the advantage of providing an exact snapshot from any point in time, and it also completely sidesteps the update lock problems that plague many databases.
But it makes for a huge database, and all selects require an extra clause to select the current version of a record.
A:
If you are using Hibernate, take a look at JBoss Envers. From the project homepage:
The Envers project aims to enable easy versioning of persistent JPA classes. All that you have to do is annotate your persistent class or some of its properties, that you want to version, with @Versioned. For each versioned entity, a table will be created, which will hold the history of changes made to the entity. You can then retrieve and query historical data without much effort.
This is somewhat similar to Eric's approach, but probably much less effort. Don't know, what language/technology you use to access the database, though.
A:
In the past I have used triggers to construct db update/insert/delete logging.
You could insert a record each time one of the above actions is done on a specific table into a logging table that keeps track of the action, what db user did it, timestamp, table it was performed on, and previous value.
There is probably a better answer though as this would require you to cache the value before the actual delete or update was performed I think. But you could use this to do rollbacks.
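For the PostgreSQL 8.x mentioned in the question, a minimal sketch of that trigger approach (table and column names are illustrative; the OLD::text/NEW::text row casts need 8.3 or later):
CREATE TABLE audit_log (
    audit_id   serial PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,               -- INSERT / UPDATE / DELETE
    changed_by text        NOT NULL DEFAULT current_user,
    changed_at timestamptz NOT NULL DEFAULT now(),
    old_row    text,                               -- row image before the change
    new_row    text                                -- row image after the change
);

CREATE OR REPLACE FUNCTION audit_row() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO audit_log (table_name, operation, old_row)
        VALUES (TG_TABLE_NAME, TG_OP, OLD::text);
        RETURN OLD;
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO audit_log (table_name, operation, old_row, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, OLD::text, NEW::text);
        RETURN NEW;
    ELSE
        INSERT INTO audit_log (table_name, operation, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, NEW::text);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

-- One trigger per audited table ("orders" is an example):
CREATE TRIGGER orders_audit
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW EXECUTE PROCEDURE audit_row();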
A:
The only problem with using triggers is that it adds to the performance overhead of any insert/update/delete. For higher scalability and performance, you would like to keep the database transaction to a minimum. Auditing via triggers increases the time required to do the transaction and depending on the volume may cause performance issues.
Another way is to explore if the database provides any way of mining the "Redo" logs, as is the case in Oracle. Redo logs are what the database uses to recreate the data in case it fails and has to recover.
A:
Similar to a trigger (or even with) you can have every transaction fire a logging event asynchronously and have another process (or just thread) actually handle the logging. There would be many ways to implement this depending upon your application. I suggest having the application fire the event so that it does not cause unnecessary load on your first transaction (which sometimes leads to locks from cascading audit logs).
In addition, you may be able to improve performance to the primary database by keeping the audit database in a separate location.
A:
I use SQL Server, not PostgreSQL, so I'm not sure if this will work for you or not, but Pop Rivett had a great article on creating an audit trail here:
Pop rivett's SQL Server FAQ No.5: Pop on the Audit Trail
Build an audit table, then create a trigger for each table you want to audit.
Hint: use Codesmith to build your triggers.
|
Effective strategy for leaving an audit trail/change history for DB applications?
|
What are some strategies that people have had success with for maintaining a change history for data in a fairly complex database. One of the applications that I frequently use and develop for could really benefit from a more comprehensive way of tracking how records have changed over time. For instance, right now records can have a number of timestamp and modified user fields, but we currently don't have a scheme for logging multiple changes, for instance if an operation is rolled back. In a perfect world, it would be possible to reconstruct the record as it was after each save, etc.
Some info on the DB:
Needs to have the capacity to grow by thousands of records per week
50-60 Tables
Main revisioned tables may have several million records each
Reasonable amount of foreign keys and indexes set
Using PostgreSQL 8.x
|
[
"One strategy you could use is MVCC, Multi-Value Concurrency Control. In this scheme, you never do updates to any of your tables, you just do inserts, maintaining version numbers for each record. This has the advantage of providing an exact snapshot from any point in time, and it also completely sidesteps the update lock problems that plague many databases.\nBut it makes for a huge database, and selects all require an extra clause to select the current version of a record.\n",
"If you are using Hibernate, take a look at JBoss Envers. From the project homepage:\n\nThe Envers project aims to enable easy versioning of persistent JPA classes. All that you have to do is annotate your persistent class or some of its properties, that you want to version, with @Versioned. For each versioned entity, a table will be created, which will hold the history of changes made to the entity. You can then retrieve and query historical data without much effort. \n\nThis is somewhat similar to Eric's approach, but probably much less effort. Don't know, what language/technology you use to access the database, though.\n",
"In the past I have used triggers to construct db update/insert/delete logging. \nYou could insert a record each time one of the above actions is done on a specific table into a logging table that keeps track of the action, what db user did it, timestamp, table it was performed on, and previous value. \nThere is probably a better answer though as this would require you to cache the value before the actual delete or update was performed I think. But you could use this to do rollbacks. \n",
"The only problem with using Triggers is that it adds to performance overhead of any insert/update/delete. For higher scalability and performance, you would like to keep the database transaction to a minimum. Auditing via triggers increase the time required to do the transaction and depending on the volume may cause performance issues. \nanother way is to explore if the database provides any way of mining the \"Redo\" logs as is the case in Oracle. Redo logs is what the database uses to recreate the data in case it fails and has to recover. \n",
"Similar to a trigger (or even with) you can have every transaction fire a logging event asynchronously and have another process (or just thread) actually handle the logging. There would be many ways to implement this depending upon your application. I suggest having the application fire the event so that it does not cause unnecessary load on your first transaction (which sometimes leads to locks from cascading audit logs). \nIn addition, you may be able to improve performance to the primary database by keeping the audit database in a separate location.\n",
"I use SQL Server, not PostgreSQL, so I'm not sure if this will work for you or not, but Pop Rivett had a great article on creating an audit trail here:\nPop rivett's SQL Server FAQ No.5: Pop on the Audit Trail\nBuild an audit table, then create a trigger for each table you want to audit. \nHint: use Codesmith to build your triggers.\n"
] |
[
24,
12,
11,
4,
4,
2
] |
[] |
[] |
[
"audit_trail",
"crud",
"database",
"database_design",
"postgresql"
] |
stackoverflow_0000023770_audit_trail_crud_database_database_design_postgresql.txt
|
Q:
Is there a custom FxCop rule that will detect unused PUBLIC methods?
I just tried FxCop. It does detect unused private methods, but not unused public. Is there a custom rule that I can download, plug-in that will detect public methods that aren't called from within the same assembly?
A:
Corey, my answer of using FxCop had assumed you were interested in removing unused private members, however to solve the problem with other cases you can try using NDepend. Here is some CQL to detect unused public members (adapted from an article listed below):
// <Name>Potentially unused methods</Name>
WARN IF Count > 0 IN SELECT METHODS WHERE
MethodCa == 0 AND // Ca=0 -> No Afferent Coupling -> The method
// is not used in the context of this
// application.
IsPublic AND // Check for unused public methods
!IsEntryPoint AND // Main() method is not used by-design.
!IsExplicitInterfaceImpl AND // The IL code never explicitely calls
// explicit interface methods implementation.
!IsClassConstructor AND // The IL code never explicitely calls class
// constructors.
!IsFinalizer // The IL code never explicitely calls
// finalizers.
Source: Patrick Smacchia's "Code metrics on Coupling, Dead Code, Design flaws and Re-engineering". The article also goes over detecting dead fields and types.
(EDIT: made answer more understandable)
EDIT 11th June 2012: Explain new NDepend facilities concerning unused code. Disclaimer: I am one of the developer of this tool.
Since NDepend v4 released in May 2012, the tool proposes to write Code Rule over LINQ Query (CQLinq). Around 200 default code rules are proposed, 3 of them being dedicated to unused/dead code detection:
Potentially dead Types (hence detect unused class, struct, interface, delegate...)
Potentially dead Methods (hence detect unused method, ctor, property getter/setter...)
Potentially dead Fields
These CQLinq code rules are more powerful than the previous CQL ones. If you click these 3 links above toward the source code of these rules, you'll see that the ones concerning types and methods are a bit complex. This is because they detect not only unused types and methods, but also types and methods used only by unused dead types and methods (recursive).
This is static analysis, hence the prefix Potentially in the rule names. If a code element is used only through reflection, these rules might consider it as unused which is not the case.
In addition to using these 3 rules, I'd advise measuring code coverage by tests and striving for having full coverage. Often, you'll see that code that cannot be covered by tests, is actually unused/dead code that can be safely discarded. This is especially useful in complex algorithms where it is not clear if a branch of code is reachable or not.
A:
If a method is unused and public FxCop assumes that you have made it public for external things to access.
If unused public methods lead to FxCop warnings writing APIs and the like would be a pain - you'd get loads of FxCop warnings for methods you intend others to use.
If you don't need anything external to access your assembly/exe consider find-replacing public with internal. Your application will run the same and FxCop will be able to find the unreferenced internal methods.
If you do need external access find which methods are really needed to be external, and make all the rest internal.
Any methods you make externally visible could have unit tests too.
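If unit tests are the only legitimate outside callers keeping those methods public, one option is to make them internal and grant your test assembly access (the assembly name below is illustrative; strongly-named assemblies need the full public key in the attribute):
// In AssemblyInfo.cs of the assembly under test:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyProduct.Tests")]

FxCop's unused-member analysis then covers those members again, since they are no longer externally visible.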
A:
NDepend is your friend for this kind of thing
A:
How would it know that the public methods are unused?
By marking a method as public it can be accessed by any application which references your library.
|
Is there a custom FxCop rule that will detect unused PUBLIC methods?
|
I just tried FxCop. It does detect unused private methods, but not unused public. Is there a custom rule that I can download, plug-in that will detect public methods that aren't called from within the same assembly?
|
[
"Corey, my answer of using FxCop had assumed you were interested in removing unused private members, however to solve the problem with other cases you can try using NDepend. Here is some CQL to detect unused public members (adapted from an article listed below):\n// <Name>Potentially unused methods</Name>\nWARN IF Count > 0 IN SELECT METHODS WHERE\n MethodCa == 0 AND // Ca=0 -> No Afferent Coupling -> The method \n // is not used in the context of this\n // application.\n\n IsPublic AND // Check for unused public methods\n\n !IsEntryPoint AND // Main() method is not used by-design.\n\n !IsExplicitInterfaceImpl AND // The IL code never explicitely calls \n // explicit interface methods implementation.\n\n !IsClassConstructor AND // The IL code never explicitely calls class\n // constructors.\n\n !IsFinalizer // The IL code never explicitely calls\n // finalizers.\n\nSource: Patrick Smacchia's \"Code metrics on Coupling, Dead Code, Design flaws and Re-engineering. The article also goes over detecting dead fields and types.\n(EDIT: made answer more understandable)\n\nEDIT 11th June 2012: Explain new NDepend facilities concerning unused code. Disclaimer: I am one of the developer of this tool.\nSince NDepend v4 released in May 2012, the tool proposes to write Code Rule over LINQ Query (CQLinq). Around 200 default code rules are proposed, 3 of them being dedicated to unused/dead code detection:\n\nPotentially dead Types (hence detect unused class, struct, interface, delegate...)\nPotentially dead Methods (hence detect unused method, ctor, property getter/setter...)\nPotentially dead Fields\n\nThese CQLinq code rules are more powerful than the previous CQL ones. If you click these 3 links above toward the source code of these rules, you'll see that the ones concerning types and methods are a bit complex. This is because they detect not only unused types and methods, but also types and methods used only by unused dead types and methods (recursive).\nThis is static analysis, hence the prefix Potentially in the rule names. If a code element is used only through reflection, these rules might consider it as unused which is not the case. \nIn addition to using these 3 rules, I'd advise measuring code coverage by tests and striving for having full coverage. Often, you'll see that code that cannot be covered by tests, is actually unused/dead code that can be safely discarded. This is especially useful in complex algorithms where it is not clear if a branch of code is reachable or not.\n",
"If a method is unused and public FxCop assumes that you have made it public for external things to access.\nIf unused public methods lead to FxCop warnings writing APIs and the like would be a pain - you'd get loads of FxCop warnings for methods you intend others to use.\nIf you don't need anything external to access your assembly/exe consider find-replacing public with internal. Your application will run the same and FxCop will be able to find the unreferenced internal methods.\nIf you do need external access find which methods are really needed to be external, and make all the rest internal.\nAny methods you make externally visible could have unit tests too.\n",
"NDepend is your friend for this kind of thing\n",
"How would it know that the public methods are unused?\nBy marking a method as public it can be accessed by any application which references your library.\n"
] |
[
15,
8,
3,
1
] |
[] |
[] |
[
".net",
"code_analysis",
"fxcop",
"public_method"
] |
stackoverflow_0000071518_.net_code_analysis_fxcop_public_method.txt
|
Q:
How can I detect from a Swing app that the PC is being shut-down?
Well-behaved Windows programs need to allow users to save their work when the PC is shutting down.
How can I make my app detect the shutdown event? Any solution should allow the user to abort the shutdown if the user selects, say, "Cancel".
The normal Swing window closing hook doesn't work, nor does adding a shutdown hook.
On testing, the methods of WindowListener (windowClosing,windowClosed, etc) do not get called.
The answer I have accepted requires the use of platform specific code (JNI to register for WM_QUERYENDSESSION ). Isn't this a bug on Swing?
See http://forums.sun.com/thread.jspa?threadID=481807&messageID=2246870
A:
Write some JNI code to handle the WM_QUERYENDSESSION message. You can get details for this from the MSDN documentation or by googling it.
If you don't want to write too much C++ code to do this, I can recommend the JNA library, which gives you some nice Java abstractions over C code.
A:
how-do-i-get-my-java-application-to-shutdown-nicely-in-windows
That might be of help
A:
The above seems to be the better answer.
I can't find any good information on detecting Windows shutdown events. I guess the best possible method would be to detect whether your application is trying to close, using a window closing event or the like, and then ask the question.
http://www.javalobby.org/java/forums/t17933
A:
Look into signal handling in Java. When Windows shuts down it will send a signal to the application asking it to terminate, most likely a SIGTERM.
See here for more about this (I am not the owner of the website).
|
How can I detect from a Swing app that the PC is being shut-down?
|
Well-behaved Windows programs need to allow users to save their work when the PC is shutting down.
How can I make my app detect the shutdown event? Any solution should allow the user to abort the shutdown if the user selects, say, "Cancel".
The normal Swing window closing hook doesn't work, nor does adding a shutdown hook.
On testing, the methods of WindowListener (windowClosing,windowClosed, etc) do not get called.
The answer I have accepted requires the use of platform specific code (JNI to register for WM_QUERYENDSESSION ). Isn't this a bug on Swing?
See http://forums.sun.com/thread.jspa?threadID=481807&messageID=2246870
|
[
"Write some JNI code to WM_QUERYENDSESSION message. You can get details for this from the MSDN documentation or by googling it.\nIf you don't want to write too much C++ code to do this I can recommend the JNA library click here. Which gives you some nice Java abstractions for C code.\n",
"how-do-i-get-my-java-application-to-shutdown-nicely-in-windows\nThat might be of help\n",
"The above seems to be the better answer.\nI can't find any good information on detecting window shutdown events. I guess the best possible method would be to detect weather your application is trying to close, using a window closing event or the like then ask the question.\nhttp://www.javalobby.org/java/forums/t17933\n",
"Look for signal handling in java. when Windows closes it will send a signal to the application asking it to terminate most likely a sigterm\nsee here for more about this (I am not the owner of the website)\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"java",
"operating_system",
"swing"
] |
stackoverflow_0000071842_java_operating_system_swing.txt
|
Q:
sqlserver express database copy options
Why can I not see an option for copying database objects when I right click > tasks on my database?
A:
MS SQL Server Express doesn't come with SSIS, which is what you will need to import/export objects out of your database.
You can also manually script this process. One way is to use BCP (http://msdn.microsoft.com/en-us/library/ms162802.aspx)
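For example, a typical pair of bcp invocations might look like this (the database and table names are placeholders; -T uses a trusted connection, -n keeps SQL Server's native data format):
bcp MyDatabase.dbo.Products out Products.dat -S .\SQLEXPRESS -T -n
bcp MyDatabase.dbo.Products in Products.dat -S TargetServer -T -n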
A:
Have a look at Red Gate SQL Compare and SQL Data Compare.
You can download the trial and use them to build a script that will dump your objects to a .sql file.
|
sqlserver express database copy options
|
Why can I not see an option for copying database objects when I right click > tasks on my database?
|
[
"MS Sql Server Express doesn't come with SSIS which is what you will need to import/export objects out of your database.\nYou can also manually script this process. One way is to use BCP (http://msdn.microsoft.com/en-us/library/ms162802.aspx)\n",
"Have a look at Red Gate SQL Compare and SQL Data Compare.\nYou can download the trial and use them to build a script that will dump your objects to a .sql file.\n"
] |
[
2,
0
] |
[] |
[] |
[
"sql_server",
"sql_server_express"
] |
stackoverflow_0000071204_sql_server_sql_server_express.txt
|
Q:
What is the best implementation for DB Audit Trail?
A DB Audit Trail captures the User Last Modified, Modified Date, and Created Date.
There are several possible implementations:
SQL Server Triggers
Add UserModified, ModifiedDate, CreatedDate columns to the database and include logic in Stored Procedures or Insert, Update statements accordingly.
It would be nice if you include implementation (or link to) in your answer.
A:
Depending on what you're doing, you might want to move the audit out of the data layer into the data access layer. It gives you more control.
I asked a similar question wrt NHibernate and SqlServer here.
A:
I totally second @IainMH (and voted him up).
You want to have it in your DAL and ideally tied to some kind of aspect/interceptor/code injection mechanism.
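To make that concrete, here is a minimal C# sketch of stamping audit columns in one place in the DAL; the IAuditable interface and Save method are invented for illustration, not part of any particular framework:
using System;
using System.Threading;

public interface IAuditable
{
    string UserModified { get; set; }
    DateTime ModifiedDate { get; set; }
}

public class Repository
{
    public void Save(IAuditable entity)
    {
        // Stamp the audit columns once, here, rather than in every
        // trigger or stored procedure.
        entity.UserModified = Thread.CurrentPrincipal.Identity.Name;
        entity.ModifiedDate = DateTime.UtcNow;
        // ... hand off to the usual persistence code.
    }
}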
A:
+2 for implementation of when/how to audit in the DAL.
As for where the audit entries themselves should live, it depends on how it will be visible. I'd do a separate table if users can view a separate "audit trail report," but tag existing tables if you want to display last modified-type audits inline.
A:
Here is the implementation I use to audit tables:
Pop Rivett's SQL Server FAQ No.5: Pop on the Audit Trail
|
What is the best implementation for DB Audit Trail?
|
A DB Audit Trail captures the User Last Modified, Modified Date, and Created Date.
There are several possible implementations:
SQL Server Triggers
Add UserModified, ModifiedDate, CreatedDate columns to the database and include logic in Stored Procedures or Insert, Update statements accordingly.
It would be nice if you include implementation (or link to) in your answer.
|
[
"Depending on what you're doing, you might want to move the audit out of the data layer into the data access layer. It give you more control.\nI asked a similar question wrt NHibernate and SqlServer here.\n",
"I totally second @IainMH (and voted him up).\nYou want to have it in your DAL and ideally tied to some kind of aspect/interceptor/code injection mechanism.\n",
"+2 for implementation of when/how to audit in the DAL.\nAs for where the audit entries themselves should live, it depends on how it will be visible. I'd do a separate table if users can view a separate \"audit trail report,\" but tag existing tables if you want to display last modified-type audits inline.\n",
"Here is the implementation I use to audit tables:\nPop Rivett's SQL Server FAQ No.5: Pop on the Audit Trail\n"
] |
[
6,
1,
1,
0
] |
[] |
[] |
[
"audit",
"sql_server"
] |
stackoverflow_0000060920_audit_sql_server.txt
|
Q:
Of Memory Management, Heap Corruption, and C++
So, I need some help. I am working on a project in C++. However, I think I have somehow managed to corrupt my heap. This is based on the fact that, after I added an std::string to a class, assigning it a value from another std::string:
std::string hello = "Hello, world.\n";
/* exampleString = "Hello, world.\n" would work fine. */
exampleString = hello;
crashes on my system with a stack dump. So basically I need to stop and go through all my code and memory management stuff and find out where I've screwed up. The codebase is still small (about 1000 lines), so this is easily do-able.
Still, I'm over my head with this kind of stuff, so I thought I'd throw it out there. I'm on a Linux system and have poked around with valgrind, and while not knowing completely what I'm doing, it did report that the std::string's destructor was an invalid free. I have to admit to getting the term 'Heap Corruption' from a Google search; any general purpose articles on this sort of stuff would be appreciated as well.
(In before rm -rf ProjectDir, do again in C# :D)
EDIT:
I haven't made it clear, but what I'm asking for are ways and advice for diagnosing these sorts of memory problems. I know the std::string stuff is right, so it's something I've done (or a bug, but there's Not A Problem With Select). I'm sure I could check the code I've written up and you very smart folks would see the problem in no time, but I want to add this kind of code analysis to my 'toolbox', as it were.
A:
These are relatively cheap mechanisms for possibly solving the problem:
Keep an eye on my heap corruption question - I'm updating with the answers as they shake out. The first was balancing new[] and delete[], but you're already doing that.
Give valgrind more of a go; it's an excellent tool, and I only wish it was available under Windows. It only slows your program down by about half, which is pretty good compared to the Windows equivalents. (A sample invocation follows this list.)
Think about using the Google Performance Tools as a replacement malloc/new.
Have you cleaned out all your object files and started over? Perhaps your make file is... "suboptimal"
You're not assert()ing enough in your code. How do I know that without having seen it? Like flossing, no-one assert()s enough in their code. Add in a validation function for your objects and call that on method start and method end.
Are you compiling with -Wall? If not, do so.
Find yourself a lint tool like PC-Lint. A small app like yours might fit in the PC-lint demo page, meaning no purchase for you!
Check you're NULLing out pointers after deleteing them. Nobody likes a dangling pointer. Same gig with declared but unallocated pointers.
Stop using arrays. Use a vector instead.
Don't use raw pointers. Use a smart pointer. Don't use auto_ptr! That thing is... surprising; its semantics are very odd. Instead, choose one of the Boost smart pointers, or something out of the Loki library.
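As promised above, a typical valgrind invocation for this kind of hunt (the program name is a placeholder):
valgrind --leak-check=full --show-reachable=yes ./yourprogram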
A:
We once had a bug which eluded all of the regular techniques, valgrind, purify etc. The crash only ever happened on machines with lots of memory and only on large input data sets.
Eventually we tracked it down using debugger watch points. I'll try to describe the procedure here:
1) Find the cause of the failure. It looks from your example code that the memory for "exampleString" is being corrupted, and so cannot be written to. Let's continue with this assumption.
2) Set a breakpoint at the last known location that "exampleString" is used or modified without any problem.
3) Add a watch point to the data member of 'exampleString'. With my version of g++, the string is stored in _M_dataplus._M_p. We want to know when this data member changes. The GDB technique for this is:
(gdb) p &exampleString._M_dataplus._M_p
$3 = (char **) 0xbfccc2d8
(gdb) watch *$3
Hardware watchpoint 1: *$3
I'm obviously using Linux with g++ and gdb here, but I believe that memory watch points are available with most debuggers.
4) Continue until the watch point is triggered:
Continuing.
Hardware watchpoint 2: *$3
Old value = 0xb7ec2604 ""
New value = 0x804a014 ""
0xb7e70a1c in std::string::_M_mutate () from /usr/lib/libstdc++.so.6
(gdb) where
The gdb where command will give a back trace showing what resulted in the modification. This is either a perfectly legal modification, in which case just continue - or if you're lucky it will be the modification due to the memory corruption. In the latter case, you should now be able to review the code that is really causing the problem and hopefully fix it.
The cause of our bug was an array access with a negative index. The index was the result of a cast of a pointer to an 'int' modulo the size of the array. The bug was missed by valgrind et al. as the memory addresses allocated when running under those tools were never "> MAX_INT" and so never resulted in a negative index.
A:
Oh, if you want to know how to debug the problem, that's simple. First, get a dead chicken. Then, start shaking it.
Seriously, I haven't found a consistent way to track these kinds of bugs down. Because there's so many potential problems, there's not a simple checklist to go through. However, I would recommend the following:
Get comfortable in a debugger.
Start tromping around in the debugger to see if you can find anything that looks fishy. Check especially to see what's happening during the exampleString = hello; line.
Check to make sure it's actually crashing on the exampleString = hello; line, and not when exiting some enclosing block (which could cause destructors to fire).
Check any pointer magic you might be doing. Pointer arithmetic, casting, etc.
Check all of your allocations and deallocations to make sure they are matched (no double-deallocations).
Make sure you aren't returning any references or pointers to objects on the stack.
There are lots of other things to try, too. I'm sure some other people will chime in with ideas as well.
A:
Some places to start:
If you're on Windows and using Visual C++ 6 (I hope to god nobody still uses it these days), its implementation of std::string is not thread-safe, and can lead to this kind of thing.
Here's an article I found which explains a lot of the common causes of memory leaks and corruption.
At my previous workplace we used Compuware Boundschecker to help with this. It's commercial and very expensive, so may not be an option.
Here's a couple of free libraries which may be of some use
http://www.codeguru.com/cpp/misc/misc/memory/article.php/c3745/
http://www.codeproject.com/KB/cpp/MemLeakDetect.aspx
Hope that helps. Memory corruption is a sucky place to be in!
A:
It could be heap corruption, but it's just as likely to be stack corruption. Jim's right. We really need a bit more context. Those two lines of source don't tell us much in isolation. There could be any number of things causing this (which is the real joy of C/C++).
If you're comfortable posting your code, you could even throw all of it up on a server and post a link. I'm sure you'd get lots more advice that way (some of it undoubtedly unrelated to your question).
A:
As far as I can see, your code has no errors. As has been said, more context is needed.
If you haven't already tried, install gdb (the gcc debugger) and compile the program with -g. This will compile in debugging symbols which gdb can use. Once you have gdb installed, run it with the program (gdb <your_program>). This is a useful cheatsheet for using gdb.
Set a breakpoint for the function that is producing the bug, and see what the value of exampleString is. Also do the same for whatever parameter you are passing to exampleString. This should at least tell you if the std::strings are valid.
I found the answer from this article to be a good guide about pointers.
A:
The code was simply an example of where my program was failing (it was allocated on the stack, Jim). I'm not actually looking for 'what have I done wrong', but rather 'how do I diagnose what I've done wrong'. Teach a man to fish and all that. Though looking at the question, I haven't made that clear enough. Thank goodness for the edit function. :')
Also, I actually fixed the std::string problem. How? By replacing it with a vector, compiling, then replacing the string again. It was consistently crashing there, and that fixed it even though it...couldn't have. There's something nasty there, and I'm not sure what. I did want to check the one time I manually allocate memory on the heap, though:
this->map = new Area*[largestY + 1];            // one new[] for the row pointers
for (int i = 0; i < largestY + 1; i++) {
    this->map[i] = new Area[largestX + 1];      // one new[] per row
}
and deleting it:
for (int i = 0; i < largestY + 1; i++) {
    delete [] this->map[i];    // matches the per-row new[]
}
delete [] this->map;           // matches the row-pointer new[]
I haven't allocated a 2d array with C++ before. It seems to work.
A:
Also, I actually fixed the std::string problem. How? By replacing it with a vector, compiling, then replacing the string again. It was consistently crashing there, and that fixed even though it...couldn't. There's something nasty there, and I'm not sure what.
That sounds like you really did shake a chicken at it. If you don't know why it's working now, then it's still broken, and pretty much guaranteed to bite you again later (after you've added even more complexity).
A:
Run Purify.
It is a near-magical tool that will report when you are clobbering memory you shouldn't be touching, leaking memory by not freeing things, double-freeing, etc.
It works at the machine code level, so you don't even have to have the source code.
One of the most enjoyable vendor conference calls I was ever on was when Purify found a memory leak in their code, and we were able to ask, "is it possible you're not freeing memory in your function foo()" and hear the astonishment in their voices.
They thought we were debugging gods but then we let them in on the secret so they could run Purify before we had to use their code. :-)
http://www-306.ibm.com/software/awdtools/purify/unix/
(It's pretty pricey but they have a free eval download)
A:
One of the debugging techniques that I use frequently (except in cases of the most extreme weirdness) is to divide and conquer. If your program currently fails with some specific error, then divide it in half in some way and see if it still has the same error. Obviously the trick is to decide where to divide your program!
Your example as given doesn't show enough context to determine where the error might be. If anybody else were to try your example, it would work fine. So, in your program, try removing as much of the extra stuff you didn't show us and see if it works then. If so, then add the other code back in a bit at a time until it starts failing. Then, the thing you just added is probably the problem.
Note that if your program is multithreaded, then you probably have larger problems. If not, then you should be able to narrow it down in this way. Good luck!
A:
Other than tools like Boundschecker or Purify, your best bet at solving problems like this is to just get really good at reading code and become familiar with the code that you're working on.
Memory corruption is one of the most difficult things to troubleshoot and usually these types of problems are solved by spending hours/days in a debugger and noticing something like "hey, pointer X is being used after it was deleted!".
If it helps any, it's something you get better at as you gain experience.
Your memory allocation for the array looks correct, but make sure you check all the places where you access the array too.
A:
As far as I can tell your code is correct. Assuming exampleString is an std::string that has class scope like you describe, you ought to be able to initialize/assign it that way. Perhaps there is some other issue? Maybe a snippet of actual code would help put it in context.
Question: Is exampleString a pointer to a string object created with new?
|
Of Memory Management, Heap Corruption, and C++
|
So, I need some help. I am working on a project in C++. However, I think I have somehow managed to corrupt my heap. This is based on the fact that, after I added an std::string to a class, assigning it a value from another std::string:
std::string hello = "Hello, world.\n";
/* exampleString = "Hello, world.\n" would work fine. */
exampleString = hello;
crashes on my system with a stack dump. So basically I need to stop and go through all my code and memory management stuff and find out where I've screwed up. The codebase is still small (about 1000 lines), so this is easily do-able.
Still, I'm over my head with this kind of stuff, so I thought I'd throw it out there. I'm on a Linux system and have poked around with valgrind, and while not knowing completely what I'm doing, it did report that the std::string's destructor was an invalid free. I have to admit to getting the term 'Heap Corruption' from a Google search; any general purpose articles on this sort of stuff would be appreciated as well.
(In before rm -rf ProjectDir, do again in C# :D)
EDIT:
I haven't made it clear, but what I'm asking for are ways and advice for diagnosing these sorts of memory problems. I know the std::string stuff is right, so it's something I've done (or a bug, but there's Not A Problem With Select). I'm sure I could check the code I've written up and you very smart folks would see the problem in no time, but I want to add this kind of code analysis to my 'toolbox', as it were.
|
[
"These are relatively cheap mechanisms for possibly solving the problem:\n\nKeep an eye on my heap corruption question - I'm updating with the answers as they shake out. The first was balancing new[] and delete[], but you're already doing that.\nGive valgrind more of a go; it's an excellent tool, and I only wish it was available under Windows. I only slows your program down by about half, which is pretty good compared to the Windows equivalents.\nThink about using the Google Performance Tools as a replacement malloc/new.\nHave you cleaned out all your object files and started over? Perhaps your make file is... \"suboptimal\"\nYou're not assert()ing enough in your code. How do I know that without having seen it? Like flossing, no-one assert()s enough in their code. Add in a validation function for your objects and call that on method start and method end.\nAre you compiling -wall? If not, do so.\nFind yourself a lint tool like PC-Lint. A small app like yours might fit in the PC-lint demo page, meaning no purchase for you!\nCheck you're NULLing out pointers after deleteing them. Nobody likes a dangling pointer. Same gig with declared but unallocated pointers.\nStop using arrays. Use a vector instead.\nDon't use raw pointers. Use a smart pointer. Don't use auto_ptr! That thing is... surprising; its semantics are very odd. Instead, choose one of the Boost smart pointers, or something out of the Loki library.\n\n",
"We once had a bug which eluded all of the regular techniques, valgrind, purify etc. The crash only ever happened on machines with lots of memory and only on large input data sets.\nEventually we tracked it down using debugger watch points. I'll try to describe the procedure here:\n1) Find the cause of the failure. It looks from your example code, that the memory for \"exampleString\" is being corrupted, and so cannot be written to. Let's continue with this assumption.\n2) Set a breakpoint at the last known location that \"exampleString\" is used or modified without any problem.\n3) Add a watch point to the data member of 'exampleString'. With my version of g++, the string is stored in _M_dataplus._M_p. We want to know when this data member changes. The GDB technique for this is:\n(gdb) p &exampleString._M_dataplus._M_p\n$3 = (char **) 0xbfccc2d8\n(gdb) watch *$3\nHardware watchpoint 1: *$3\n\nI'm obviously using linux with g++ and gdb here, but I believe that memory watch points are available with most debuggers.\n4) Continue until the watch point is triggered:\nContinuing.\nHardware watchpoint 2: *$3\n\nOld value = 0xb7ec2604 \"\"\nNew value = 0x804a014 \"\"\n0xb7e70a1c in std::string::_M_mutate () from /usr/lib/libstdc++.so.6\n(gdb) where\n\nThe gdb where command will give a back trace showing what resulted in the modification. This is either a perfectly legal modification, in which case just continue - or if you're lucky it will be the modification due to the memory corruption. In the latter case, you should now be able to review the code that is really causing the problem and hopefully fix it.\nThe cause of our bug was an array access with a negative index. The index was the result of a cast of a pointer to an 'int' modulos the size of the array. The bug was missed by valgrind et al. as the memory addresses allocated when running under those tools was never \"> MAX_INT\" and so never resulted in a negative index.\n",
"Oh, if you want to know how to debug the problem, that's simple. First, get a dead chicken. Then, start shaking it.\nSeriously, I haven't found a consistent way to track these kinds of bugs down. Because there's so many potential problems, there's not a simple checklist to go through. However, I would recommend the following:\n\nGet comfortable in a debugger.\nStart tromping around in the debugger to see if you can find anything that looks fishy. Check especially to see what's happening during the exampleString = hello; line.\nCheck to make sure it's actually crashing on the exampleString = hello; line, and not when exiting some enclosing block (which could cause destructors to fire).\nCheck any pointer magic you might be doing. Pointer arithmetic, casting, etc.\nCheck all of your allocations and deallocations to make sure they are matched (no double-deallocations).\nMake sure you aren't returning any references or pointers to objects on the stack.\n\nThere are lots of other things to try, too. I'm sure some other people will chime in with ideas as well.\n",
"Some places to start:\nIf you're on windows, and using visual C++6 (I hope to god nobody still uses it these days) it's implentation of std::string is not threadsafe, and can lead to this kind of thing.\nHere's an article I found which explains a lot of the common causes of memory leaks and corruption.\nAt my previous workplace we used Compuware Boundschecker to help with this. It's commercial and very expensive, so may not be an option.\nHere's a couple of free libraries which may be of some use\nhttp://www.codeguru.com/cpp/misc/misc/memory/article.php/c3745/\nhttp://www.codeproject.com/KB/cpp/MemLeakDetect.aspx\nHope that helps. Memory corruption is a sucky place to be in!\n",
"It could be heap corruption, but it's just as likely to be stack corruption. Jim's right. We really need a bit more context. Those two lines of source don't tell us much in isolation. There could be any number of things causing this (which is the real joy of C/C++).\nIf you're comfortable posting your code, you could even throw all of it up on a server and post a link. I'm sure you'd gets lots more advice that way (some of it undoubtedly unrelated to your question).\n",
"Your code as I can see has no errors. As has been said more context is needed.\nIf you haven't already tried, install gdb (the gcc debugger) and compile the program with -g. This will compile in debugging symbols which gdb can use. Once you have gdb installed run it with the program (gdb <your_program>). This is a useful cheatsheat for using gdb.\nSet a breakpoint for the function that is producing the bug, and see what the value of exampleString is. Also do the same for whatever parameter you are passing to exampleString. This should at least tell you if the std::strings are valid.\nI found the answer from this article to be a good guide about pointers.\n",
"The code was simply an example of where my program was failing (it was allocated on the stack, Jim). I'm not actually looking for 'what have I done wrong', but rather 'how do I diagnose what I've done wrong'. Teach a man to fish and all that. Though looking at the question, I haven't made that clear enough. Thank goodness for the edit function. :')\nAlso, I actually fixed the std::string problem. How? By replacing it with a vector, compiling, then replacing the string again. It was consistently crashing there, and that fixed even though it...couldn't. There's something nasty there, and I'm not sure what. I did want to check the one time I manually allocate memory on the heap, though:\n this->map = new Area*[largestY + 1];\n for (int i = 0; i < largestY + 1; i++) {\n this->map[i] = new Area[largestX + 1];\n }\n\nand deleting it:\nfor (int i = 0; i < largestY + 1; i++) {\n delete [] this->map[i];\n}\ndelete [] this->map;\n\nI haven't allocated a 2d array with C++ before. It seems to work.\n",
"\nAlso, I actually fixed the std::string problem. How? By replacing it with a vector, compiling, then replacing the string again. It was consistently crashing there, and that fixed even though it...couldn't. There's something nasty there, and I'm not sure what.\n\nThat sounds like you really did shake a chicken at it. If you don't know why it's working now, then it's still broken, and pretty much guaranteed to bite you again later (after you've added even more complexity).\n",
"Run Purify.\nIt is a near-magical tool that will report when you are clobbering memory you shouldn't be touching, leaking memory by not freeing things, double-freeing, etc.\nIt works at the machine code level, so you don't even have to have the source code.\nOne of the most enjoyable vendor conference calls I was ever on was when Purify found a memory leak in their code, and we were able to ask, \"is it possible you're not freeing memory in your function foo()\" and hear the astonishment in their voices.\nThey thought we were debugging gods but then we let them in on the secret so they could run Purify before we had to use their code. :-)\nhttp://www-306.ibm.com/software/awdtools/purify/unix/\n(It's pretty pricey but they have a free eval download)\n",
"One of the debugging techniques that I use frequently (except in cases of the most extreme weirdness) is to divide and conquer. If your program currently fails with some specific error, then divide it in half in some way and see if it still has the same error. Obviously the trick is to decide where to divide your program!\nYour example as given doesn't show enough context to determine where the error might be. If anybody else were to try your example, it would work fine. So, in your program, try removing as much of the extra stuff you didn't show us and see if it works then. If so, then add the other code back in a bit at a time until it starts failing. Then, the thing you just added is probably the problem.\nNote that if your program is multithreaded, then you probably have larger problems. If not, then you should be able to narrow it down in this way. Good luck!\n",
"Other than tools like Boundschecker or Purify, your best bet at solving problems like this is to just get really good at reading code and become familiar with the code that you're working on.\nMemory corruption is one of the most difficult things to troubleshoot and usually these types of problems are solved by spending hours/days in a debugger and noticing something like \"hey, pointer X is being used after it was deleted!\".\nIf it helps any, it's something you get better at as you gain experience.\nYour memory allocation for the array looks correct, but make sure you check all the places where you access the array too. \n",
"As far as I can tell your code is correct. Assuming exampleString is an std::string that has class scope like you describe, you ought to be able to initialize/assign it that way. Perhaps there is some other issue? Maybe a snippet of actual code would help put it in context.\nQuestion: Is exampleString a pointer to a string object created with new?\n"
] |
[
24,
10,
7,
3,
1,
1,
1,
1,
1,
1,
1,
0
] |
[] |
[] |
[
"c++",
"heap_memory",
"memory",
"stack"
] |
stackoverflow_0000007525_c++_heap_memory_memory_stack.txt
|
Q:
How do I get the text size of a string on a WPF canvas?
I'm trying to find the amount of space/width that a string would take when its drawn on a WPF canvas?
A:
I may have found an answer to my own question. The FormattedText class seems to have what I'm after.
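Something along these lines works - a minimal sketch, where the text, typeface and size are placeholders:
// assumes: using System.Globalization, System.Windows and System.Windows.Media
FormattedText ft = new FormattedText(
    "Hello, world",
    CultureInfo.CurrentCulture,
    FlowDirection.LeftToRight,
    new Typeface("Verdana"),
    16,                        // em size, in device-independent pixels
    Brushes.Black);

double width = ft.Width;       // width the string would occupy on the canvas
double height = ft.Height;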
|
How do I get the text size of a string on a WPF canvas?
|
I'm trying to find the amount of space/width that a string would take when its drawn on a WPF canvas?
|
[
"I may have found an answer to my own question. The FormattedText class seems to have what I'm after.\n"
] |
[
3
] |
[] |
[] |
[
".net",
"wpf"
] |
stackoverflow_0000071919_.net_wpf.txt
|
Q:
Zend PHP debugger: How can I start debugging a page using a get argument?
I am trying out the debugger built into Zend Studio. It seems great! One thing, though: when I start a page using the debugger, does anyone know how I can set a GET request argument within the page?
For example, I don't want to debug runtests.php
I want to debug runtests.php?test=10
I assume its a simple configuration and I just can't find it.
A:
I recommend getting the Zend Studio Toolbar. The extension allows you to control which pages are debugged from within the browser instead of from Zend Studio. The options for debugging let you debug the next page, the next form post or all pages. When you debug like this it runs the PHP just like it will from your server instead of from within Zend Studio. It's an essential tool when using Zend Studio.
A:
Just start the debugger on another page, and then change the browser url to what you wanted.
It's not ideal but it should work.
I have high hopes for their next release.
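For what it's worth, the Zend debugger is normally kicked off by extra GET parameters appended to the URL, so something along these lines may also work - treat the exact parameter names as an assumption, since they can vary between Zend versions:
runtests.php?test=10&start_debug=1&debug_host=127.0.0.1&debug_port=10137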
|
Zend PHP debugger: How can I start debugging a page using a get argument?
|
I am trying out the debugger built into Zend Studio. It seems great! One thing, though: when I start a page using the debugger, does anyone know how I can set a GET request argument within the page?
For example, I don't want to debug runtests.php
I want to debug runtests.php?test=10
I assume its a simple configuration and I just can't find it.
|
[
"I recommend getting the Zend Studio Toolbar. The extension allows you to control which pages are debugged from within the browser instead of from Zend Studio. The options for debugging let you debug the next page, the next form post or all pages. When you debug like this it runs the PHP just like it will from your server instead of from within Zend Studio. It's an essential tool when using Zend Studio. \n",
"Just start the debugger on another page, and then change the browser url to what you wanted.\nIt's not ideal but it should work.\nI have high hopes for their next release.\n"
] |
[
3,
2
] |
[] |
[] |
[
"debugging",
"php",
"zend_debugger",
"zend_studio"
] |
stackoverflow_0000069330_debugging_php_zend_debugger_zend_studio.txt
|
Q:
One method for creating several objects or several methods for creating single objects?
If I have the following:
Public Class Product
Public Id As Integer
Public Name As String
Public AvailableColours As List(Of Colour)
Public AvailableSizes As List(Of Size)
End Class
and I want to get a list of products from the database and display them on a page along with their available sizes and colours, should I
have one method (GetProducts()) which makes use of a single view that joins the relevant tables, that then loops through each row and creates the objects as required? Or…
have several methods which are responsible only for creating one object each? eg. GetProducts(), GetAvailableColoursForProduct(id), etc
I'm currently doing a) but as I add other properties (multiple images, optional tassels, etc.) the code is getting very messy (having to check that this isn't the same product as the previous row, has this colour already been added, etc.), so I'm tempted to go with b); however, this will really ramp up the number of round trips to the database.
A:
You got it. Solution b won't scale up, so solution a is key as far as performance is concerned. At the same time, why should you constrain the GetProductDetails() method to grab all the data in a single request (hence the SQL view approach)? Why not have this method perform 3 requests and say goodbye to your messy logic:
One for id and name retrieval
One for the colors list
One for sizes list
Depending on the SQL engine you use, these 3 requests could be grouped into a single batch query (one round trip) or would require 3 round trips. When adding additional properties to your class, you will have to decide whether to enhance the first request or to add a new one.
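For instance, a rough ADO.NET sketch (in C#) of the single-batch variant; the connection string, table and column names are made up, and the Colour/Size constructors are assumed:
// assumes: using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "SELECT Name FROM Products WHERE Id = @id; " +
    "SELECT ColourName FROM ProductColours WHERE ProductId = @id; " +
    "SELECT SizeName FROM ProductSizes WHERE ProductId = @id", conn))
{
    cmd.Parameters.AddWithValue("@id", id);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        if (reader.Read())
            product.Name = reader.GetString(0);

        reader.NextResult();              // move to the colours result set
        while (reader.Read())
            product.AvailableColours.Add(new Colour(reader.GetString(0)));

        reader.NextResult();              // move to the sizes result set
        while (reader.Read())
            product.AvailableSizes.Add(new Size(reader.GetString(0)));
    }
}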
A:
You're probably best off benchmarking both and finding out. I've seen situations where just doing multiple queries (MySQL likes this) is faster than JOINs and one big slow query that takes a lot of memory and causes the DB server to thrash. I say benchmark because it's going to depend on your database server, how much memory and concurrent connections it has, sizes of your tables, how your indexes are optimized and the size of your typical recordsets. JOINs on large unindexed columns are very expensive (so you should either not do them or add indexes).
You will probably also learn a bit more/be more satisfied in the end if you write at least a little of both implementations (you don't need to write the methods, just a bunch of test queries) and then benchmark, vs. just going with one or the other. The trickiest (but important) part of testing though is simulating concurrent users hitting the DB at the same time -- realistic production memory and cpu load.
Keep in mind you are dealing with 2 issues: One is the DBA issue, how do I make it fastest and most efficient. The second is the programmer who wants pretty, maintainable code. (b) makes your code more readable and extensible than just having giant queries with complicated JOINs, so you may decide to prefer it over (a) as long as it isn't drastically slower.
A:
Personally, I'd get more data from the database through fewer methods and then bind the UI against only those parts of the data set that I currently want to display. Managing lots of small methods that get out specific chunks of data is harder than getting out large chunks and using only those parts you need.
A:
In the case above I would probably just have a single static load method especially if all or most of the properties are normally needed:
Public Shared Function Load(Id As Integer) As Product
    ' fetch the row for Id, populate and return the Product
End Function
' usage: Product.Load(Id)
If, say, the color property is rarely used and fairly expensive to load, then you may still want to use the above but not load the color property from there - instead load it lazily from the getter, like so:
Private _Colors As List(Of Color)
Public ReadOnly Property Colors() As List(Of Color)
    Get
        If _Colors Is Nothing Then
            ' ... load colors from the database here
        End If
        Return _Colors
    End Get
End Property
A:
Go for option b) - it makes your attributes independent from the presentation of the data (e.g. a table).
I think you would benefit from learning more about the MVC architecture. It stands for Model (your data -> Product), View (the presentation -> a table) and Controller (a new class that gathers the data from the Model and processes it for View output).
Confused? It isn't that complicated. Which language is your code snippet from? Many frameworks like Ruby on Rails, Java Struts or CakePHP practice this separation of program layers.
A:
b would be faster (performance-wise) from reading your setup, but it will require more maintenance code when you update your class (updating each function).
Now if performance is your true goal, just benchmark it. Write both a and b, load your DB with a few (hundred) thousand records and test. Then select your best solution. :)
/Vey
A:
If you are using any of the agile tenets in your coding practices then "a" is fine for now, but as the complexity of your query grows you should consider refactoring - that is, build your code based on what you know now and refactor when necessary.
If you do refactor I would suggest introducing the factory pattern into your code. The factory pattern manages the creation of complex objects and allows you to hide the details of object construction from the code that consumes the object (your UI in this case). This also means that as your object becomes more complex the consumers will be protected from the changes that you may need to make to manage the complexity.
A:
You should look into Castle's ActiveRecord ORM, which works on top of NHibernate. You develop a data model (like you've done with 'Product') which inherits from AR's base class, which provides great querying abilities. AR/NHibernate provide aggressive query optimization and caching. Performance and scalability problems may disappear. At the very least you have a decent framework within which to tweak. You could easily split your data models up to test B.
|
One method for creating several objects or several methods for creating single objects?
|
If I have the following:
Public Class Product
Public Id As Integer
Public Name As String
Public AvailableColours As List(Of Colour)
Public AvailableSizes As List(Of Size)
End Class
and I want to get a list of products from the database and display them on a page along with their available sizes and colours, should I
have one method (GetProducts()) which makes use of a single view that joins the relevant tables, that then loops through each row and creates the objects as required? Or…
have several methods which are responsible only for creating one object each? eg. GetProducts(), GetAvailableColoursForProduct(id), etc
I'm currently doing a) but as I add other properties (multiple images, optional tassels, etc.) the code is getting very messy (having to check that this isn't the same product as the previous row, has this colour already been added, etc.), so I'm tempted to go with b); however, this will really ramp up the number of round trips to the database.
|
[
"You got it. Solution b won't scale up so solution a is key, as far as performance are of concern. By the same time, why should you constrain GetProductDetails() method to grab every data in a single request (hence the SQL view approach) ? Why not have this method perform 3 requests and say goodbye to your messy logic :\n\nOne for id and name retrieval\nOne for the colors list\nOne for sizes list\n\nDepending on the SQL engine you use, these 3 requests could be grouped in a single batch query (one round trip) or would require 3 reound-trips. When adding additional properties to your class, you will have to decide whether to enhance the first request or to add a new one.\n",
"You're probably best off benchmarking both and finding out. I've seen situations where just doing multiple queries (MySQL likes this) is faster than JOINs and one big slow query that takes a lot memory and causes the DB server to thrash. I say benchmark because it's going to depend on your database server, how much memory and concurrent connections it has, sizes of your tables, how your indexes are optimized and the size of your typical recordsets. JOINs on large unindexed columns are very expensive (so you should either not do them or add indexes).\nYou will probably also learn a bit more/be more satisfied in the end if you write at least a little of both implementations (you don't need to write the methods, just a bunch of test queries) and then benchmark, vs. just going with one or the other. The trickiest (but important) part of testing though is simulating concurrent users hitting the DB at the same time -- realistic production memory and cpu load.\nKeep in mind you are dealing with 2 issues: One is the DBA issue, how do I make it fastest and most efficient. The second is the programmer who wants pretty, maintainable code. (b) makes your code more readable and extensible than just having giant queries with complicated JOINs, so you may decide to prefer it over (a) as long as it isn't drastically slower.\n",
"Personally, I'd get more data from the database through fewer methods and then bind the UI against only those parts of the data set that I currently want to display. Managing lots of small methods that get out specific chunks of data is harder than getting out large chunks and using only those parts you need.\n",
"In the case above I would probably just have a single static load method especially if all or most of the properties are normally needed:\nPublic static function Load(Id as integer) as Product\n\nProduct.Load(Id)\n\nIf say the color property is rarly used and fairly expensive to load then you may want to still use the above but not load the color property from there but dynamically load it from the getter like so:\nprivate _Colors as list(Of Color)\npublic property Colors() as List(Of Color)\n get\n if _Colors is nothing \n . .. . load colors here\n end if\n end get. . . . .\n\n",
"Go for Option b) it makes your attributes independent from the Presentation of the Data (e.g. a table)\nI think you would benefit from learning more about the MVC-Architecture. It stands for Model (Your Data -> Product), View (the Presentation -> Table) and Controller (a new Class that will gather the Data from the Model and processes it for View output)\nConfused? It isn't that complicated. Which language is your code snippet from? Many Frameworks like Ruby on Rails, Java Struts or CakePHP practice this seperation of Program layers.\n",
"b would be faster (performance wise) while reading your setup but it will require you more maintenance code when you will update your class (updating each function).\nNow if performance is your true goal, just benchmark it. Write both a and b, load your DB with a few (hundreds of) thousands record and test. Then select your best solution. :)\n/Vey\n",
"If you are using any of the agile tenants in your coding practises then \"a\" is fine for now but as the complexity of your query grows you should consider refactoring, that is, build your code based on what you know now and refactor when necessary.\nIf you do refactor I would suggest introducing the factory pattern into your code. The factory pattern manages the creation of complex objects and allows you to hide the details of object construction from the code that consumes the object (your UI in this case). This also means that as your object becomes more complex the consumers will be protected from the changes that you may need to make to manage the complexity.\n",
"You should look into Castle's ActiveRecord ORM, which works on top of NHibernate. You develop a data model (like you've done with 'Product') which inherits from AR's base class, which provides great querying abilities. AR/NHibernate provide aggressive query optimization and caching. Performance and scalability problems may disappear. At the very least you have a decent framework within which to tweak. You could easily split your data models up to test B.\n"
] |
[
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"database",
"performance"
] |
stackoverflow_0000071565_.net_database_performance.txt
|
Q:
How to conduct blackbox testing on an AJAX application?
What's the best, cross-platform way to perform blackbox tests on AJAX web applications?
Ideally, the solution should have the following attributes:
Able to integrate into a continuous integration build loop
Cross platform so you can run it on Windows laptops and Linux continuous integration servers
Easy way to script the interactions
Free-as-in-freedom so you can adapt it into your tool chain if necessary
I've looked into HttpUnit but I'm not convinced it can handle AJAX-heavy websites.
A:
Selenium might be what you're looking for: http://selenium.openqa.org/
It allows you to script actions and evaluate the results. It's open-source (Apache 2.0), cross platform, and has nice tools.
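For a flavour of the scripting, a small sketch using the Selenium RC .NET client; the URL and locators are placeholders, and a Selenium server is assumed to be running on localhost:4444:
// assumes: using Selenium;  (the Selenium RC .NET client)
ISelenium selenium = new DefaultSelenium(
    "localhost", 4444, "*firefox", "http://localhost/myapp/");
selenium.Start();
selenium.Open("/");
selenium.Click("id=searchButton");
// for AJAX, wait on a condition instead of a page load
selenium.WaitForCondition(
    "selenium.browserbot.getCurrentWindow().document" +
    ".getElementById('results') != null", "30000");
Console.WriteLine(selenium.GetText("id=results"));
selenium.Stop();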
A:
I have used Selenium for exactly this task, but found it to be brittle.
Check out this talk by two Googlers: Does my button look big in this? Building testable AJAX applications
They isolate the testable javascript (non DOM-interaction) and test that using the Rhino javascript engine.
|
How to conduct blackbox testing on an AJAX application?
|
What's the best, cross-platform way to perform blackbox tests on AJAX web applications?
Ideally, the solution should have the following attributes:
Able to integrate into a continuous integration build loop
Cross platform so you can run it on Windows laptops and Linux continuous integration servers
Easy way to script the interactions
Free-as-in-freedom so you can adapt it into your tool chain if necessary
I've looked into HttpUnit but I'm not convinced it can handle AJAX-heavy websites.
|
[
"Selenium might be what you're looking for: http://selenium.openqa.org/\nIt allows you to script actions and evaluate the results. It's open-source (Apache 2.0), cross platform, and has nice tools.\n",
"I have used Selenium for exactly this task, but found it to be brittle.\nCheck out this talk by two Googlers: Does my button look big in this? Building testable AJAX applications\nThey isolate the testable javascript (non DOM-interaction) and test that using the Rhino javascript engine.\n"
] |
[
5,
3
] |
[] |
[] |
[
".net",
"ajax",
"java",
"javascript",
"testing"
] |
stackoverflow_0000070554_.net_ajax_java_javascript_testing.txt
|
Q:
How do you set up your .NET development tree?
How do you set up your .NET development tree? I use a structure like this:
-projectname
--config (where I put the configuration files)
--doc (where I put all the document concerning the project: e-mails, documentation)
--tools (all the tools I use: Nunit, Moq)
--lib (all the libraries used by the solution: ninject or autofac)
--src
---app (sourcefiles)
---test (unittests)
solutionfile.sln
build.csproj
The sign "-" marks directories.
I think it's very important to have a good structure on this stuff. You should be able to get the source code from the source control system and then build the solution without opening Visual Studio or installing any third party libraries.
Any thoughts on this?
A:
We use a very similar layout as covered in JP Boodhoo's blog post titled Directory Structure For Projects.
A:
Check out these other StackOverflow questions...
Structure of Projects in Version Control
Best Practice: Collaborative Environment, Bin Directory, SVN
A:
TreeSurgeon is a tool that will set up a directory tree for you, with all the required dependencies and a skeleton nant file. At that link, you can also find a series of blog posts by its original creator, Mike Roberts, explaining some of the deliberate choices behind the structure that TreeSurgeon gives you, e.g. why it's OK to have duplication between lib and tools, why it's important to have all dependencies present etc.
I haven't used it in a while so can't remember if I still agree with all the choices it makes, but I don't think you can go far wrong with it.
A:
We use a structure like this:
CompanyNameOrCoreProjectName
Branch
BranchName
CopyOfTrunk
Trunk
Desktop
ReferencedAssemblies
Shared
Solutions
Test
Webs
Then just make sure that all project/solution files only use relative paths, and branching works well. Desktop/Webs are for projects of the respective types, Test is for any unit test projects, and the Solutions folder has a folder for each solution with only the solution file in it. ReferencedAssemblies holds all of the assemblies that we don't include in the solution (these are sometimes local projects that we just don't want to build every time we build the solution, and sometimes third-party assemblies like Rhino Mocks or log4net). Shared is for any of the core libraries (data access, business logic, etc.) that are used across several solutions.
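As an aside, keeping references working across branches is mostly a matter of the relative HintPath in each project file - for example (the path shown is illustrative):
<Reference Include="log4net">
  <HintPath>..\..\ReferencedAssemblies\log4net.dll</HintPath>
</Reference>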
A:
At my place of work we have multiple projects, where each project gets its own sub-directory, like so:
-proj1
--proj1.csproj
-proj2
--proj2.csproj
-proj3
--proj3.csproj
solutionfile.sln
The rest of your setup looks okay, but I think you should figure out how you would incorporate multiple projects, for example a shared source library between multiple solutions.
A:
If I understand your structure correctly, I think you are going to have many duplicates in your dev tree related to "tools" and "lib". Most likely these are external tools and libraries that might be shared by different projects.
Something that works well for us is:
solutionfile.sln
-src
--projectname
---config
---doc
---source files (structure representing namespaces)
-test
--testprojectname (usually, a test project per source project)
---unit test files (structure mirroing the structure in the source project)
-lib
--libraryname (containing the libraries)
-tools
A:
I don't have tools within the project. Tools are in a network share. Yes disk space is cheap these days but... come on :)
Also I have a database script folder below projectname (when it's a data driven app)
Of course it doesn't matter so much exactly how you're set up; what matters is that a logical, organised standard suited to the project is used and adhered to with good discipline. This is useful whether you're solo or on a team.
A:
We also use TreeSurgeon and are quite happy with it. Our structure looks like:
Branch
build
lib
src
(various src directories for apps, tests, db migrations, etc.)
tools
Trunk
Same as above
|
How do you set up your .NET development tree?
|
How do you set up your .NET development tree? I use a structure like this:
-projectname
--config (where I put the configuration files)
--doc (where I put all the document concerning the project: e-mails, documentation)
--tools (all the tools I use: Nunit, Moq)
--lib (all the libraries used by the solution: ninject or autofac)
--src
---app (sourcefiles)
---test (unittests)
solutionfile.sln
build.csproj
The sign "-" marks directories.
I think it's very important to have a good structure on this stuff. You should be able to get the source code from the source control system and then build the solution without opening Visual Studio or installing any third party libraries.
Any thoughts on this?
|
[
"We use a very similar layout as covered in JP Boodhoo's blog post titled Directory Structure For Projects.\n",
"Check out these other StackOverflow questions...\n\nStructure of Projects in Version Control\nBest Practice: Collaborative Environment, Bin Directory, SVN\n\n",
"TreeSurgeon is a tool that will set up a directory tree for you, with all the required dependencies and a skeleton nant file. At that link, you can also find a series of blog posts by its original creator, Mike Roberts, explaining some of the deliberate choices behind the structure that TreeSurgeon gives you, e.g. why it's OK to have duplication between lib and tools, why it's important to have all dependencies present etc.\nI haven't used it in a while so can't remember if I still agree with all the choices it makes, but I don't think you can go far wrong with it.\n",
"We use a structure like this:\n\nCompanyNameOrCoreProjectName\n \nBranch\n \nBranchName\n \nCopyOfTrunk\n\n\n\n\nTrunk\n \nDesktop\nReferencedAssemblies\nShared\nSolutions\nTest\nWebs\n\n\n\n\n\nThen just make sure that all project/solution files only use relative paths and branching works well. Desktop/Webs are for projects of the respective types, Test is for any unit test projects, Solutions folder has a folder for each solution with only the solution file in it. ReferencedAssemblies holds all of the assemblies that we don't include in the solution (these are sometimes local projects that we just don't want to build every time we build the solution or third party assemblies like rhinomocks or log4net, etc. Shared is for any of the core libraries (data access, business logic, etc) that are used across several solutions.\n",
"At my place of work we have multiple projects, where each project gets its own sub-directory, like so:\n -proj1\n--proj1.csproj\n-proj2\n--proj2.csproj\n-proj3\n--proj3.csproj\nsolutionfile.sln \nThe rest of your setup looks okay, but I think you should figure out how you would incorporate multiple projects, for example a shared source library between multiple solutions.\n",
"If I understand your structure correctly, I think you are going to have many duplicates in your dev tree related to \"tools\" and \"lib\". Most likely these are external tools and libraries that might be shared by different projects.\nSomething that works well for us is:\n\nsolutionfile.sln\n-src\n--projectname\n---config\n---doc\n---source files (structure representing namespaces)\n-test\n--testprojectname (usually, a test project per source project)\n---unit test files (structure mirroing the structure in the source project)\n-lib\n--libraryname (containing the libraries)\n-tools\n\n",
"I don't have tools within the project. Tools are in a network share. Yes disk space is cheap these days but... come on :)\nAlso I have a database script folder below projectname (when it's a data driven app) \nOf course it doesn't matter so much how you're set-up, but the fact that a logical organised standard is used to suit the project and adhered to with good discipline. This is useful whether you're solo or on a team. \n",
"We also use TreeSurgeon and are quite happy with it. Our structure looks like:\nBranch\n\nbuild\nlib\nsrc\n\n\n<\nvarious src directories for apps, tests, db migrations, etc.)\n\ntools\n\nTrunk\n\nSame as above\n\n"
] |
[
8,
5,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"development_environment"
] |
stackoverflow_0000071608_.net_c#_development_environment.txt
|
Q:
How do I get my Java application to shutdown nicely in windows?
I have a Java application which I want to shutdown 'nicely' when the user selects Start->Shutdown. I've tried using JVM shutdown listeners via Runtime.addShutdownHook(...) but this doesn't work as I can't use any UI elements from it.
I've also tried using the exit handler on my main application UI window but it has no way to pause or halt shutdown as far as I can tell. How can I handle shutdown nicely?
A:
As far as I know you need to start using JNI to set up a message handler for the Windows WM_QUERYENDSESSION message.
To do this (if you're new to Windows programming like me) you'll need to create a new class of window with a new message handling function (as described here) and handle the WM_QUERYENDSESSION from the message handler.
NB: You'll need to use the JNIEnv::GetJavaVM(...) and then JavaVM::AttachCurrentThread(...) on the message handling thread before you can call any Java methods from your native message handling code.
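A rough sketch of what that native handler might look like in C (all names here are illustrative; registering the window class and caching the JavaVM and global references are assumed to happen elsewhere, e.g. in JNI_OnLoad):
#include <windows.h>
#include <jni.h>

static JavaVM *g_jvm;            /* cached in JNI_OnLoad */
static jobject g_callbackObj;    /* global ref to a Java listener object */
static jmethodID g_onShutdownMid;/* its no-arg callback method */

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_QUERYENDSESSION) {
        JNIEnv *env;
        /* The message arrives on the native message-pump thread,
           so attach it to the VM before calling into Java. */
        (*g_jvm)->AttachCurrentThread(g_jvm, (void **)&env, NULL);
        (*env)->CallVoidMethod(env, g_callbackObj, g_onShutdownMid);
        return TRUE; /* TRUE = this application is OK with the session ending */
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}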
A:
The previously mentioned JNI approach will likely work.
You can use JNA which is basically a wrapper around JNI to make it easier to use. An added bonus is that it (in my opinion at least) generally is faster and more maintainable than raw JNI. You can find JNA at https://jna.dev.java.net/
If you're just starting the application in the start menu because you're trying to make it behave like a service in windows, you can use the java service wrapper which is found here:
http://wrapper.tanukisoftware.org/doc/english/download.jsp
|
How do I get my Java application to shutdown nicely in windows?
|
I have a Java application which I want to shutdown 'nicely' when the user selects Start->Shutdown. I've tried using JVM shutdown listeners via Runtime.addShutdownHook(...) but this doesn't work as I can't use any UI elements from it.
I've also tried using the exit handler on my main application UI window but it has no way to pause or halt shutdown as far as I can tell. How can I handle shutdown nicely?
|
[
"As far as I know you need to start using JNI to set up a message handler for the Windows WM_QUERYENDSESSION message.\nTo do this (if you're new to Windows programming like me) you'll need to create a new class of window with a new message handling function (as described here) and handle the WM_QUERYENDSESSION from the message handler.\nNB: You'll need to use the JNIEnv::GetJavaVM(...) and then JavaVM::AttachCurrentThread(...) on the message handling thread before you can call any Java methods from your native message handling code. \n",
"The previously mentioned JNI approach will likely work.\nYou can use JNA which is basically a wrapper around JNI to make it easier to use. An added bonus is that it (in my opinion at least) generally is faster and more maintainable than raw JNI. You can find JNA at https://jna.dev.java.net/\nIf you're just starting the application in the start menu because you're trying to make it behave like a service in windows, you can use the java service wrapper which is found here:\nhttp://wrapper.tanukisoftware.org/doc/english/download.jsp\n"
] |
[
4,
4
] |
[] |
[] |
[
"java",
"shutdown",
"windows"
] |
stackoverflow_0000061692_java_shutdown_windows.txt
|
Q:
Which 3D cards support full scene antialiasing?
Is there a list of 3D cards available that provide full scene antialiasing as well as which are able to do it in hardware (decent performance)?
A:
Pretty much all cards since DX7-level technology (GeForce 2 / Radeon 7000) can do it. Most notable exceptions are Intel cards (Intel 945 aka GMA 950 and earlier can't do it; I think Intel 965 aka GMA X3100 can't do it either).
Older cards (GeForce 2 / 4MX, Radeon 7000-9250) were using supersampling (render everything into internally larger buffer, downsample at the end). All later cards have multisampling, where this expensive process is only performed at polygon edges (simply speaking, shaders are run for each pixel, while depth/coverage is stored for each sample).
A:
Off the top of my head, pretty much any card since a geforce 2 or something can do it. There's always a performance hit, but this varies on the card and AA mode (of which there are about 100 different kinds) but generally it's quite a performance hit.
A:
Agree with Orion Edwards, pretty much everything new can. Performance also depends greatly on the resolution you run at.
A:
Integrated GPUs are going to be really poor performers with games FSAA or no. If you want even moderate performance, buy a separate video card.
For something that's not crazy expensive go with either a nVidia Geforce 8000 series card or an ATI 3000 series card. Even as a nVidia 8800 GTS owner, I will tell you the ATIs have better support for older games.
Although I personally still like FSAA, it is becoming less important with higher resolution screens. Also, more and more games are using deferred rendering which makes FSAA impossible.
A:
Yes, of course integrated cards are awful. :) But this wasn't a question about gaming, but rather about an application that we are writing that will use OpenGL/D3D for 3D rendering. The 3D scene is relatively small, but antialiasing makes a dramatic difference in terms of the quality of the rendering. We are curious if there is some way to easily determine which cards support these features fully and which do not.
With the exception of the 3100, so far all of the cards we've found that do antialiasing are plenty fast for our purposes (as is my GeForce 9500).
|
Which 3D cards support full scene antialiasing?
|
Is there a list of 3D cards available that provide full scene antialiasing as well as which are able to do it in hardware (decent performance)?
|
[
"Pretty much all cards since DX7-level technology (GeForce 2 / Radeon 7000) can do it. Most notable exceptions are Intel cards (Intel 945 aka GMA 950 and earlier can't do it; I think Intel 965 aka GMA X3100 can't do it either).\nOlder cards (GeForce 2 / 4MX, Radeon 7000-9250) were using supersampling (render everything into internally larger buffer, downsample at the end). All later cards have multisampling, where this expensive process is only performed at polygon edges (simply speaking, shaders are run for each pixel, while depth/coverage is stored for each sample).\n",
"Off the top of my head, pretty much any card since a geforce 2 or something can do it. There's always a performance hit, but this varies on the card and AA mode (of which there are about 100 different kinds) but generally it's quite a performance hit.\n",
"Agree with Orion Edwards, pretty much everything new can. Performance also depends greatly on the resolution you run at.\n",
"Integrated GPUs are going to be really poor performers with games FSAA or no. If you want even moderate performance, buy a separate video card. \nFor something that's not crazy expensive go with either a nVidia Geforce 8000 series card or an ATI 3000 series card. Even as a nVidia 8800 GTS owner, I will tell you the ATIs have better support for older games. \nAlthough I personally still like FSAA, it is becoming less important with higher resolution screens. Also, more and more games are using deferred rendering which makes FSAA impossible.\n",
"Yes, of course integrated cards are awful. :) But this wasn't a question about gaming, but rather about an application that we are writing that will use OpenGL/D3D for 3D rendering. The 3D scene is relatively small, but antialiasing makes a dramatic difference in terms of the quality of the rendering. We are curious if there is some way to easily determine which cards support these features fully and which do not.\nWith the exception of the 3100, so far all of the cards we've found that do antialiasing are plenty fast for our purposes (as is my GeForce 9500).\n"
] |
[
4,
3,
2,
1,
1
] |
[
"Having seen a pile of machines recently that don't do it, I don't think that's quite true. The GMA 950 integrated ones don't do it to start with, and I don't think that the 3100/X3100 do either (at least not in hardware... the 3100 was enormously slow in a demo). Also, I don't believe that the GeForce MX5200 supported it either.\nOr perhaps I'm just misunderstanding what you mean when you refer to \"AA mode\". Are there a lot of cards which support modes that are virtually unnoticable? :)\n"
] |
[
-1
] |
[
"antialiasing",
"opengl"
] |
stackoverflow_0000029311_antialiasing_opengl.txt
|
Q:
Setting Excel Number Format via xlcFormatNumber in an xll
I'm trying to set the number format of a cell but the call to xlcFormatNumber fails leaving the cell number format as "General". I can successfully set the value of the cell using xlSet.
XLOPER xRet;
XLOPER xRef;
//try to set the format of cell A1
xRef.xltype = xltypeSRef;
xRef.val.sref.count = 1;
xRef.val.sref.ref.rwFirst = 0;
xRef.val.sref.ref.rwLast = 0;
xRef.val.sref.ref.colFirst = 0;
xRef.val.sref.ref.colLast = 0;
XLOPER xFormat;
xFormat.xltype = xltypeStr;
xFormat.val.str = "\4#.00"; //I've tried various formats
Excel4( xlcFormatNumber, &xRet, 2, (LPXLOPER)&xRef, (LPXLOPER)&xFormat);
I haven't managed to find any documentation regarding the usage of this command. Any help here would be greatly appreciated.
A:
Thanks to Simon Murphy for the answer:-
Smurf on Spreadsheets
//It is necessary to select the cell to apply the formatting to
Excel4 (xlcSelect, 0, 1, &xRef);
//Then we apply the formatting
Excel4( xlcFormatNumber, 0, 1, &xFormat);
|
Setting Excel Number Format via xlcFormatNumber in an xll
|
I'm trying to set the number format of a cell but the call to xlcFormatNumber fails leaving the cell number format as "General". I can successfully set the value of the cell using xlSet.
XLOPER xRet;
XLOPER xRef;
//try to set the format of cell A1
xRef.xltype = xltypeSRef;
xRef.val.sref.count = 1;
xRef.val.sref.ref.rwFirst = 0;
xRef.val.sref.ref.rwLast = 0;
xRef.val.sref.ref.colFirst = 0;
xRef.val.sref.ref.colLast = 0;
XLOPER xFormat;
xFormat.xltype = xltypeStr;
xFormat.val.str = "\4#.00"; //I've tried various formats
Excel4( xlcFormatNumber, &xRet, 2, (LPXLOPER)&xRef, (LPXLOPER)&xFormat);
I haven't managed to find any documentation regarding the usage of this command. Any help here would be greatly appreciated.
|
[
"Thanks to Simon Murphy for the answer:-\nSmurf on Spreadsheets\n//It is necessary to select the cell to apply the formatting to\nExcel4 (xlcSelect, 0, 1, &xRef);\n\n//Then we apply the formatting\nExcel4( xlcFormatNumber, 0, 1, &xFormat);\n\n"
] |
[
6
] |
[] |
[] |
[
"c++",
"excel",
"xll"
] |
stackoverflow_0000070643_c++_excel_xll.txt
|
Q:
Best way to implement multiple Default Buttons on an ASP.NET Webform
What is the best way to implement multiple Default Buttons on an ASP.NET Webform?
I have what I think is a pretty standard page. There is a login area with user/pass field and a login button. Then elsewhere on the same page there is a single search field with a search button.
A:
asp:Panel has a property named DefaultButton. You just need to encapsulate your markup portions with appropriate panels and set the default buttons for each.
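A minimal sketch of that markup (the IDs and click-handler name are illustrative):
<asp:Panel ID="LoginPanel" runat="server" DefaultButton="LoginButton">
    <asp:TextBox ID="UserName" runat="server" />
    <asp:TextBox ID="Password" runat="server" TextMode="Password" />
    <asp:Button ID="LoginButton" runat="server" Text="Log in" OnClick="LoginButton_Click" />
</asp:Panel>
Pressing Enter inside either textbox then raises LoginButton's click event; a second panel around the search field does the same for the search button.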
A:
Capture the enter key press for each area of the screen and then fire the corresponding button's click event.
A:
Use a helper function like this one to tie the textboxes to their associated buttons.
|
Best way to implement multiple Default Buttons on an ASP.NET Webform
|
What is the best way to implement multiple Default Buttons on an ASP.NET Webform?
I have what I think is a pretty standard page. There is a login area with user/pass field and a login button. Then elsewhere on the same page there is a single search field with a search button.
|
[
"asp:Panel has a property named DefaultButton. You just need to encapsulate your markup portions with appropriate panels and set the default buttons for each.\n",
"Capture the enter key press for each area of the screen and then fire the corresponding button's click even. \n",
"Use a helper function like this one to tie the textboxes to their associated buttons.\n"
] |
[
8,
1,
1
] |
[] |
[] |
[
"asp.net",
"webforms"
] |
stackoverflow_0000072036_asp.net_webforms.txt
|
Q:
ASP.net - How can one differentiate Page-Processing Time from Client-Transmission Time
The single timing column in the weblog naturally includes client transmission timing. For anomaly analysis, I want to differentiate pages that took excessive construction time from requests that simply had a slow client.
For buffered pages, I've looked at the ASP.NET page lifecycle model and do not see where I can tap in and codewise measure just the page-processing time before the page is flushed to the client.
I probably should have mentioned that my goal is production monitoring (not test or dev). In addition, the intent is to annotate the weblogs with this measurement for later analysis. Currently we liberally annotate the weblogs with Response.AppendToLog(). I believe the desire to use Response.AppendToLog() somewhat limits my potential logpoints as, for instance, the Response object is no longer usable in Application_EndRequest.
Any insight would be appreciated.
A:
You could use a Stopwatch in the BeginRequest and the PreSendRequestContent as mentioned in the other two answers, or you could just use the request's Timestamp in the PreSendRequestContent.
For example, on SingingEels, I added this to the bottom of my Master Page (yes, it's a hack) : <%=DateTime.Now.Subtract(HttpContext.Current.Timestamp).TotalSeconds %>
That way I can see how long any page took to actually execute on the server, including hitting the database, etc.
A:
the easiest way would probably be to use the following events in the global.asax file:
protected void Application_BeginRequest(Object sender, EventArgs e)
protected void Application_EndRequest(Object sender, EventArgs e)
You could also implement a custom httpmodule
A:
This depends on the feature set of the performance tools you have. But if you just need to log the processing time then you could follow this approach.
Log the starting time in the HttpApplication.BeginRequest event.
Log the elapsed time in the HttpApplication.PreSendRequestContent event.
If you just want a specific page then you could check for this in the BeginRequest event.
The application events can be attached in Global.asax.
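A minimal Global.asax sketch of this approach (the Items key and the appended log field name are made up), which also ties back to the Response.AppendToLog() annotation mentioned in the question:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpContext.Current.Items["ReqTimer"] = System.Diagnostics.Stopwatch.StartNew();
}

protected void Application_PreSendRequestContent(object sender, EventArgs e)
{
    System.Diagnostics.Stopwatch sw =
        HttpContext.Current.Items["ReqTimer"] as System.Diagnostics.Stopwatch;
    if (sw != null)
    {
        sw.Stop();
        // Annotate the weblog entry with the server-side processing time (ms)
        HttpContext.Current.Response.AppendToLog(" srvms=" + sw.ElapsedMilliseconds);
    }
}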
A:
If you want to log on a specific page, I believe asp.net pages' lifecycle begin with PreInit and end with Disposed, so you can log anything you want in those events.
Or, if you want to log on every page, as Bob Dizzle pointed out, you can use the Global.asax file, which has a thousand events to choose from : http://msdn.microsoft.com/en-us/library/2027ewzw.aspx
A:
You could also do your testing right there on the web server. Then ClientTransmission time becomes effectively 0.
|
ASP.net - How can one differentiate Page-Processing Time from Client-Transmission Time
|
The single timing column in the weblog naturally includes client transmission timing. For anomaly analysis, I want to differentiate pages that took excessive construction time from requests that simply had a slow client.
For buffered pages, I've looked at the ASP.NET page lifecycle model and do not see where I can tap in and codewise measure just the page-processing time before the page is flushed to the client.
I probably should have mentioned that my goal is production monitoring (not test or dev). In addition, the intent is to annotate the weblogs with this measurement for later analysis. Currently we liberally annotate the weblogs with Response.AppendToLog(). I believe the desire to use Response.AppendToLog() somewhat limits my potential logpoints as, for instance, the Response object is no longer usable in Application_EndRequest.
Any insight would be appreciated.
|
[
"You could use a Stopwatch in the BeginRequest and the PreSendRequestContent as mentioned in the other two answers, or you could just use the request's Timestamp in the PreSendRequestContent.\nFor example, on SingingEels, I added this to the bottom of my Master Page (yes, it's a hack) : <%=DateTime.Now.Subtract(HttpContext.Current.Timestamp).TotalSeconds %>\nThat way I can see how long any page took to actually execute on the server, including hitting the database, etc.\n",
"the easist way would probably be to use the follow events in the global.asax file:\nprotected void Application_BeginRequest(Object sender, EventArgs e)\nprotected void Application_EndRequest(Object sender, EventArgs e)\nYou could also implement a custom httpmodule\n",
"This depends on the feature set of the performance tools you have. But if you just need to log the processing time then you could follow this approach.\n\nLog the starting time in the HttpApplication.BeginRequest event.\nLog the elapsed time in the HttpApplication.PreSendRequestContent event.\n\nIf you just want a specific page then you could check for this in the BeginRequest event.\nThe application events can be attached in Global.asax.\n",
"If you want to log on a specific page, I believe asp.net pages' lifecycle begin with PreInit and end with Disposed, so you can log anything you want in those events.\nOr, if you want to log on every page, as Bob Dizzle pointed out, you can use the Global.asax file, which has a thousand events to choose from : http://msdn.microsoft.com/en-us/library/2027ewzw.aspx\n",
"You could also do your testing right there on the web server. Then ClientTransmission time becomes effectively 0.\n"
] |
[
2,
0,
0,
0,
0
] |
[] |
[] |
[
"asp.net",
"measurement",
"page_lifecycle",
"transmission"
] |
stackoverflow_0000071306_asp.net_measurement_page_lifecycle_transmission.txt
|
Q:
Are there any JSF component libraries that generate semantic and cross-browser html markup?
I'm using RichFaces per a client requirement, but the markup it (and the stock JSF controls) generates is an awful mess of nested tables. Are there any control libraries out there that generate nicer markup? AJAX support is a huge plus!
A:
There is ICEfaces, which provides more semantic support than RichFaces. You can also try the Nitobi suite, which provides a similar kind of solution. If you are not satisfied with either of these, I suggest trying to write your own renderers, extending the Sun reference implementation.
A:
Short answer: No I have not yet found one.
Your options include using less complicated controls and knowing what HTML the standard controls emit. Things like h:panelGrid render as a table. There is nothing stopping you writing your own rendering family which produces more standards-compliant HTML, but this would be a big time investment.
As for using RichFaces, if you stick more to the a4j: namespace of tags you will still be getting the cross-browser AJAX without all the markup you don't like.
|
Are there any JSF component libraries that generate semantic and cross-browser html markup?
|
I'm using RichFaces per a client requirement, but the markup it (and the stock JSF controls) generates is an awful mess of nested tables. Are there any control libraries out there that generate nicer markup? AJAX support is a huge plus!
|
[
"There is ICEFaces which provides more semantic support than RichFaces .Also you can try Nitobi suite which also provides similar kinda solution.If you are not satisfied with any of these I suggest try to write your own part extending the Sun faces\n",
"Short answer: No I have not yet found one.\nYour options include using less complicated controls and know what html the standard controls emit. Thing like h:panelGrid render as a table. There is nothing stopping you writing your own rendering family which produces more standards compliment html, but this would be a big time investment. \nAs for using RichFaces if you stick more to the a4j: namespace of tags you will still be getting the cross browser ajax with out all the mark up you don't like.\n"
] |
[
1,
0
] |
[] |
[] |
[
"jsf",
"semantic_markup"
] |
stackoverflow_0000063509_jsf_semantic_markup.txt
|
Q:
Delete all but the 4 newest directories
I want to delete all but the 4 newest directories in my parent directory. How would you do this in Bash?
A:
ls -atrd */ | head --lines=-4 | xargs rm -rf
Edit: added 'a' argument to ls
A:
Please clarify if you mean “delete all directories but the four newest ones” or “delete everything (files and directories) except for the four newest directories”.
Please also note that creation times are not known for directories. One can only tell when a directory was last modified, that is, had files added, removed or renamed.
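If last-modified time is an acceptable stand-in for "newest", here is a sketch of the same pipeline as the accepted answer (GNU tools assumed; like the accepted answer it breaks on directory names containing spaces or newlines):
# newest-first listing; skip the first 4 lines; remove the rest
ls -1td */ | tail -n +5 | xargs rm -rf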
A:
you could do the following:
#!/bin/bash
#store the listing of current directory in var
mydir=`ls -t`
it=1
for file in $mydir
do
if [ $it -gt 5 ]
then
echo file $it will be deleted: $file
#rm -rf $file
fi
it=$((it+1))
done
(remove the # before rm to make it really happen ;) )
A:
Another, BSD-safe, way to do it, with arrays (why not?)
#!/bin/bash
ARRAY=( `ls -td */` )
ELEMENTS=${#ARRAY[@]}
COUNTER=4
while [ $COUNTER -lt $ELEMENTS ]; do
echo ${ARRAY[${COUNTER}]}
let COUNTER=COUNTER+1
done
|
Delete all but the 4 newest directories
|
I want to delete all but the 4 newest directories in my parent directory. How would you do this in Bash?
|
[
"ls -atrd */ | head --lines=-4 | xargs rm -rf\n\nEdit: added 'a' argument to ls\n",
"Please clarify if you mean “delete all directories but the four newst ones” or “delete everything (files and directories) except for the four newest directories”.\nPlease also note that creation times are not known for directories. One can only tell when a directory was last modified, that is, had files added, removed or renamed.\n",
"you could do the following:\n#!/bin/bash\n\n#store the listing of current directory in var\nmydir=`ls -t`\nit=1\n\nfor file in $mydir\n do\n if [ $it -gt 5 ]\n then\n echo file $it will be deleted: $file\n #rm -rf $file\n fi\n it=$((it+1))\n done\n\n(remove the # before rm to make it really happen ;) )\n",
"Another, BSD-safe, way to do it, with arrays (why not?)\n#!/bin/bash\nARRAY=( `ls -td */` )\nELEMENTS=${#ARRAY[@]}\nCOUNTER=4\nwhile [ $COUNTER -lt $ELEMENTS ]; do\n echo ${ARRAY[${COUNTER}]}\n let COUNTER=COUNTER+1\ndone\n\n"
] |
[
9,
1,
1,
0
] |
[] |
[] |
[
"bash",
"shell"
] |
stackoverflow_0000071864_bash_shell.txt
|
Q:
Split a list by distinct date
Another easy one hopefully.
Let's say I have a collection like this:
List<DateTime> allDates;
I want to turn that into
List<List<DateTime>> dividedDates;
where each List in 'dividedDates' contains all of the dates in 'allDates' that belong to a distinct year.
Is there a bit of LINQ trickery that my tired mind can't pick out right now?
Solution
The Accepted Answer is correct.
Thanks, I don't think I was aware of the 'into' bit of GroupBy and I was trying to use the .GroupBy() sort of methods rather than the SQL like syntax. And thanks for confirming the ToList() amendment and including it in the Accepted Answer :-)
A:
var q = from date in allDates
group date by date.Year into datesByYear
select datesByYear.ToList();
q.ToList(); //returns List<List<DateTime>>
A:
Here's the methods form.
allDates
.GroupBy(d => d.Year)
.Select(g => g.ToList())
.ToList();
|
Split a list by distinct date
|
Another easy one hopefully.
Let's say I have a collection like this:
List<DateTime> allDates;
I want to turn that into
List<List<DateTime>> dividedDates;
where each List in 'dividedDates' contains all of the dates in 'allDates' that belong to a distinct year.
Is there a bit of LINQ trickery that my tired mind can't pick out right now?
Solution
The Accepted Answer is correct.
Thanks, I don't think I was aware of the 'into' bit of GroupBy and I was trying to use the .GroupBy() sort of methods rather than the SQL like syntax. And thanks for confirming the ToList() amendment and including it in the Accepted Answer :-)
|
[
"var q = from date in allDates \n group date by date.Year into datesByYear\n select datesByYear.ToList();\nq.ToList(); //returns List<List<DateTime>>\n\n",
"Here's the methods form.\n\nallDates\n .GroupBy(d => d.Year)\n .Select(g => g.ToList())\n .ToList();\n\n"
] |
[
8,
7
] |
[] |
[] |
[
"c#",
"linq"
] |
stackoverflow_0000069748_c#_linq.txt
|
Q:
How to run TAP::Harness tests written in Guile?
The usual approach of
test:
$(PERL) "-MExtUtils::Command::MM" "-e" "test_harness($(TEST_VERBOSE), '$(INCDIRS)')" $(TEST_FILES)
fails to run Guile scripts, because it passes to Guile the extra parameter "-w".
A:
One possible approach is to set up your project as follows.
Your directory structure is as follows:
./project Your project files
./project/t/*.t Your unit test scripts
./project/t/scripts/* Auxiliary scripts used by your unit tests
Your ./project/Makefile contains the following:
PERL = /usr/bin/perl
TEST_LIBDIRS = ./lib
RUN_GUILE_TESTS = ./t/scripts/RunGuileTests.pl
TEST_FILES = ./t/*.t
test:
$(PERL) -I$(TEST_LIBDIRS) $(RUN_GUILE_TESTS) $(TEST_FILES)
Your ./project/t/scripts/RunGuileTests.pl contents are:
#!/usr/bin/perl -w
# Run Guile tests - filenames are given as arguments to the script.
use TAP::Harness;
my @tests = @ARGV;
my %args = (
verbosity => 0,
timer => 1,
show_count => 1,
exec => ['/usr/bin/guile', '-s'],
);
my $harness = TAP::Harness->new( \%args );
$harness->runtests(@tests);
# End of RunGuileTests.pl
Your Guile test scripts should start with:
#!/usr/bin/guile -s
!#
; Description of your tests
|
How to run TAP::Harness tests written in Guile?
|
The usual approach of
test:
$(PERL) "-MExtUtils::Command::MM" "-e" "test_harness($(TEST_VERBOSE), '$(INCDIRS)')" $(TEST_FILES)
fails to run Guile scripts, because it passes to Guile the extra parameter "-w".
|
[
"One possible approach is to set up your project as follows.\nYour directory structure is as follows:\n./project Your project files\n./project/t/*.t Your unit test scripts\n./project/t/scripts/* Auxiliary scripts used by your unit tests\nYour ./project/Makefile contains the following:\nPERL = /usr/bin/perl\nTEST_LIBDIRS = ./lib\nRUN_GUILE_TESTS = ./t/scripts/RunGuileTests.pl\nTEST_FILES = ./t/*.t\n\ntest:\n $(PERL) -I$(TEST_LIBDIRS) $(RUN_GUILE_TESTS) $(TEST_FILES)\nYour ./project/t/scripts/RunGuileTests.pl contents are:\n#!/usr/bin/perl -w\n# Run Guile tests - filenames are given as arguments to the script.\n\nuse TAP::Harness;\nmy @tests = @ARGV;\nmy %args = (\n verbosity => 0,\n timer => 1,\n show_count => 1,\n exec => ['/usr/bin/guile', '-s'],\n );\nmy $harness = TAP::Harness->new( \\%args );\n $harness->runtests(@tests);\n\n# End of RunGuileTests.pl\nYour Guile test scripts should start with:\n#!/usr/bin/guile -s\n!#\n; Description of your tests\n"
] |
[
1
] |
[] |
[] |
[
"guile",
"unit_testing"
] |
stackoverflow_0000071989_guile_unit_testing.txt
|
Q:
Task Scheduler Problem Starting MSSQLSERVER
I am trying to create a Task Scheduler task to start my SQL Server 2005 instance every morning, because something stops it every night. This is a temporary solution until I can diagnose the stoppage.
I created a task to run under my admin user, and to start the program, cmd with the arguments /c net start mssqlserver. When I manually run the command, in a console under my admin user, it runs, but when I try to manually execute the task, it logs the following message, and the service remains stopped:
action "C:\Windows\system32\cmd.EXE" with return code 2.
Any suggestions?
A:
Use the NET command:
To start a service, type: net start <service name>
To stop a service, type: net stop <service name>
To pause a service, type: net pause <service name>
To resume a service, type: net continue <service name>
See this Microsoft article for additional details:
Microsoft Article
In addition I would look at the Windows Event logs (Application and System) for details as to why SQLServer is stopping in the first place.
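For the scheduled task itself, a sketch of an equivalent schtasks command (the task name and start time are illustrative):
schtasks /Create /TN "Start SQL Server" /TR "net start MSSQLSERVER" /SC DAILY /ST 06:00 /RU SYSTEM
Running the task as SYSTEM sidesteps per-user credential problems, which are a common reason a command works in an interactive console but fails as a scheduled task.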
A:
I would recommend opening the Services MMC snap-in (just run services.msc), finding the service and modifying the properties of the service to restart automatically when the service fails.
Open the Services MMC snap-in (run services.msc)
Locate the service. If you installed a default instance of SQL Server 2005 that would be "SQL Server (MSSQLSERVER)". If you installed a named instance the name would be in the parenthesis.
Right-click on the service and select "Properties".
Switch to the "Recovery" tab.
Set the options for first, second and subsequent failures as desired.
Click "OK".
And John Dyer is also right about looking in the Windows Event logs for details on why SQL Server stopped (run eventvwr.exe).
|
Task Scheduler Problem Starting MSSQLSERVER
|
I am trying to create a Task Scheduler task to start my SQL Server 2005 instance every morning, because something stops it every night. This is a temporary solution until I can diagnose the stoppage.
I created a task to run under my admin user, and to start the program, cmd with the arguments /c net start mssqlserver. When I manually run the command, in a console under my admin user, it runs, but when I try to manually execute the task, it logs the following message, and the service remains stopped:
action "C:\Windows\system32\cmd.EXE" with return code 2.
Any suggestions?
|
[
"Use the NET command:\nTo start a service, type: net startservice\nTo stop a service, type: net stopservice\nTo pause a service, type: net pauseservice\nTo resume a service, type: net continueservice\nSee this Microsoft article on additional details:\nMicrosoft Article\nIn addition I would look at the Windows Event logs (Application and System) for details as to why SQLServer is stopping in the first place.\n",
"I would recommend opening the Services MMC snap-in (just run services.msc), finding the service and modifying the properties of the service to restart automatically when the service fails.\n\nOpen the Services MMC snap-in (run\nservices.msc)\nLocate the service. If you\ninstalled a default instance of SQL\nServer 2005 that would be \"SQL\nServer (MSSQLSERVER)\". If you\ninstalled a named instance the name\nwould be in the parenthesis.\nRight-click on the service and\nselect \"Properties\".\nSwitch to the \"Recovery\" tab.\nSet the options for first, second\nand subsequent failures as desired.\nClick \"OK\".\n\nAnd John Dyer is also right about looking in the Windows Event logs for details on why SQL Server stopped (run eventvwr.exe).\n"
] |
[
1,
1
] |
[] |
[] |
[
"scheduled_tasks",
"sql_server",
"sql_server_2005"
] |
stackoverflow_0000070694_scheduled_tasks_sql_server_sql_server_2005.txt
|
Q:
Mixing jsp and jsf
I will elaborate somewhat. JSF is kind-of extremely painful to work with from a designer's perspective, somewhat in the range of trying to draw a picture with your hands tied behind your back, but it is good for chewing up forms and listing lots of data. So the sites we are making in my company are JSF admin pages and JSP user pages. The problem occurs when user pages have some complicated forms and stuff and JSF starts kickin' in.
Here is the question: I'm on a pure JSP page. I need to access some JSF page that uses a session bean. How can I initialize that bean? If I was on a JSF page, I could have some commandLink which would prepare the data. The only thing I can come up with is having a dummy JSF page that will do the work and redirect me to the needed JSF page, but that's kind of ugly, and I don't want to end up with 50 dummy pages. I would rather find some mechanism to reinitialize a bean that is already in the session with some wanted parameters.
Edit: some more details. In this specific situation, I have tests that are either full or filtered. It's the same test with the same logic and everything, except if the test is filtered, it should eliminate some questions depending on answers. Upon clicking a link, it should start the requested test in one of the two modes. Links are parts of the main menu-tree and are visible on many sibling JSP pages. My task is to have 4 links: testA full, testA filtered, testB full, testB filtered, that all lead to the same JSF page, and TestFormBean should be reinitialized accordingly.
Edit: I've researched facelets a bit, and while it won't help me now, I'll definitely keep that in mind for next project.
A:
have you looked into using facelets? It lets you get rid of the whole JSF / JSP differences (it's an alternate and superior view controller).
It also supports great design-time semantics with the jsfc tag...
<input type="text" jsfc="#{SomeBean.property}" class="foo" />
gets translated internally to the correct JSF stuff, so you can work with your existing tools.
A:
You can retrieve a managed bean inside of a tag library using something like this:
FacesContext context = FacesContext.getCurrentInstance();
Object myBean = context.getELContext().getELResolver().getValue(context.getELContext(), null, "myBeanName");
However, you'd need to use the tag library from one of your JSF pages. FacesContext.getCurrentInstance() returns null when it's called outside of the FacesServlet.
A:
To solve this one I'd probably create a JSF fragment that only includes your form, then use a <c:import> tag to include it in my JSF page.
That solution is probably a little fragile depending on your environment though.
EDIT: See Chris Hall's answer, FacesContext is not available outside the FacesServlet.
A:
Create a custom JSP tag handler. You can then retrieve the bean from session scope and then initialize it on the fly. See this tutorial for more details.
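A rough sketch of such a tag handler (the class, attribute, and setter names are hypothetical; TestFormBean is the bean from the question):
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.PageContext;
import javax.servlet.jsp.tagext.SimpleTagSupport;

public class InitTestBeanTag extends SimpleTagSupport {
    private String mode; // e.g. "full" or "filtered"

    public void setMode(String mode) { this.mode = mode; }

    @Override
    public void doTag() throws JspException {
        PageContext ctx = (PageContext) getJspContext();
        TestFormBean bean = new TestFormBean();
        bean.setFiltered("filtered".equals(mode)); // assumed setter
        // Store it under the managed-bean name so JSF resolves this instance
        ctx.setAttribute("testFormBean", bean, PageContext.SESSION_SCOPE);
    }
}
Each of the four JSP links would invoke the tag with the appropriate mode before linking to the JSF page.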
A:
Actually, I've resolved this by removing the bean from the session, so it has to be generated again when the JSF page is called. Then I pick up the GET parameters from the request in the constructor.
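A sketch of what that constructor might look like (the request parameter name is illustrative):
import javax.faces.context.FacesContext;

public TestFormBean() {
    FacesContext fc = FacesContext.getCurrentInstance();
    String mode = (String) fc.getExternalContext()
                            .getRequestParameterMap().get("mode");
    this.filtered = "filtered".equals(mode);
}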
|
Mixing jsp and jsf
|
I will elaborate somewhat. JSF is kind-of extremely painful to work with from a designer's perspective, somewhat in the range of trying to draw a picture with your hands tied behind your back, but it is good for chewing up forms and listing lots of data. So the sites we are making in my company are JSF admin pages and JSP user pages. The problem occurs when user pages have some complicated forms and stuff and JSF starts kickin' in.
Here is the question: I'm on a pure JSP page. I need to access some JSF page that uses a session bean. How can I initialize that bean? If I was on a JSF page, I could have some commandLink which would prepare the data. The only thing I can come up with is having a dummy JSF page that will do the work and redirect me to the needed JSF page, but that's kind of ugly, and I don't want to end up with 50 dummy pages. I would rather find some mechanism to reinitialize a bean that is already in the session with some wanted parameters.
Edit: some more details. In this specific situation, I have tests that are either full or filtered. It's the same test with the same logic and everything, except if the test is filtered, it should eliminate some questions depending on answers. Upon clicking a link, it should start the requested test in one of the two modes. Links are parts of the main menu-tree and are visible on many sibling JSP pages. My task is to have 4 links: testA full, testA filtered, testB full, testB filtered, that all lead to the same JSF page, and TestFormBean should be reinitialized accordingly.
Edit: I've researched facelets a bit, and while it won't help me now, I'll definitely keep that in mind for next project.
|
[
"have you looked into using facelets? It lets you get rid of the whole JSF / JSP differences (it's an alternate and superior view controller).\nIt also supports great design-time semantics with the jsfc tag...\n<input type=\"text\" jsfc=\"#{SomeBean.property}\" class=\"foo\" />\n\ngets translated internally to the correct JSF stuff, so you can work with your existing tools.\n",
"You can retrieve a managed bean inside of a tag library using something like this:\nFacesContext context = FacesContext.getCurrentInstance();\nObject myBean = context.getELContext().getELResolver().getValue(context.getELContext(), null, \"myBeanName\");\n\nHowever, you'd need to use the tag library from one of your JSF pages. FacesContext.getCurrentInstance() returns null when it's called outside of the FacesServlet.\n",
"To solve this one I'd probably create a JSF fragment that only includes your form, then use a <c:import> tag to include it in my JSF page. \nThat solution is probably a little fragile depending on your environment though.\nEDIT: See Chris Hall's answer, FacesContext is not available outside the FacesServlet.\n",
"Create a custom JSP tag handler. You can then retrieve the bean from session scope and then initialize it on the fly. See this tutorial for more details.\n",
"Actually, I've resolved this by removing bean from session, so it has to be generated again when jsf page is called. Then I pick up get parameters from a request in constructor.\n"
] |
[
4,
3,
1,
0,
0
] |
[] |
[] |
[
"java",
"jsf",
"jsp"
] |
stackoverflow_0000042828_java_jsf_jsp.txt
|
Q:
What's a good design pattern for web method return values?
When coding web services, how do you structure your return values? How do you handle error conditions (expected ones and unexpected ones)? If you are returning something simple like an int, do you just return it, or embed it in a more complex object? Do all of the web methods within one service return an instance of a single class, or do you create a custom return value class for each method?
A:
I like the Request/Response object pattern, where you encapsulate your arguments into a single [Operation]Request class, which has simple public properties on it.
Something like AddCustomerRequest, which would return AddCustomerResponse.
The response can include information on the success/failure of the operation, any messages that might be used by the UI, possibly the ID of the customer that was added, for example.
Another good pattern is to make these all derive from a simple IMessage interface, where your general end-point is something like Process(params IMessage[] messages)... this way you can pass in multiple operations in the same web request.
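A bare-bones sketch of that shape (the property names are illustrative, not from the answer):
public class AddCustomerRequest
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public class AddCustomerResponse
{
    public bool Success { get; set; }
    public string Message { get; set; }   // UI-friendly status text
    public int? CustomerId { get; set; }  // populated when the add succeeds
}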
A:
+1 for Ben's answer.
In addition, I suggest considering that the generic response allow for multiple error/warning items, to allow the reply to be as comprehensive and actionable as possible. (Would you want to use a compiler that stopped after the first error message, or one that told you as much as possible?)
A:
If you're using SOAP web services then SOAP faults are the standard way to return error details, where the fault messages can return whatever additional detail you like.
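For reference, the general shape of a SOAP 1.1 fault (the detail element can carry whatever structure your service defines):
<soap:Fault xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <faultcode>soap:Server</faultcode>
  <faultstring>Customer could not be added</faultstring>
  <detail>
    <errorId>1234</errorId>
  </detail>
</soap:Fault>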
A:
Soap faults are a standard practice where the calling application is a Soap client. There are cases, such as a COM client using XMLHTTP, where the Soap is parsed as XML and Soap faults cannot be easily handled. Can't vote yet but another +1 for @Ben Scheirman.
|
What's a good design pattern for web method return values?
|
When coding web services, how do you structure your return values? How do you handle error conditions (expected ones and unexpected ones)? If you are returning something simple like an int, do you just return it, or embed it in a more complex object? Do all of the web methods within one service return an instance of a single class, or do you create a custom return value class for each method?
|
[
"I like the Request/Response object pattern, where you encapsulate your arguments into a single [Operation]Request class, which has simple public properties on it.\nSomething like AddCustomerRequest, which would return AddCustomerResponse.\nThe response can include information on the success/failure of the operation, any messages that might be used by the UI, possibly the ID of the customer that was added, for example.\nAnother good pattern is to make these all derive from a simple IMessage interface, where your general end-point is something like Process(params IMessage[] messages)... this way you can pass in multiple operations in the same web request.\n",
"+1 for Ben's answer.\nIn addition, I suggest considering that the generic response allow for multiple error/warning items, to allow the reply to be as comprehensive and actionable as possible. (Would you want to use a compiler that stopped after the first error message, or one that told you as much as possible?)\n",
"If you're using SOAP web services then SOAP faults are the standard way to return error details, where the fault messages can return whatever additional detail you like.\n",
"Soap faults are a standard practice where the calling application is a Soap client. There are cases, such as a COM client using XMLHTTP, where the Soap is parsed as XML and Soap faults cannot be easily handled. Can't vote yet but another +1 for @Ben Scheirman.\n"
] |
[
8,
1,
1,
0
] |
[] |
[] |
[
"soap",
"web_services",
"wsdl"
] |
stackoverflow_0000039585_soap_web_services_wsdl.txt
|
Q:
IIS is keeping hold of my generated files
My web application generates pdf files and either e-mails or faxes them to our customers. Somehow IIS6 is keeping hold of the file and blocking any other requests for it claiming the old '..the process cannot access the file 'xxx.pdf' because it is being used by another process.'
When I recycle the application pool all is OK. Does anybody know why this is happening and how I can stop it?
Thanks
A:
As everyone else said, do call the Close and Dispose methods on any IO objects you have open when reading/writing the PDF files.
But I suppose you've incorporated a 3rd party component to do the PDF writing for you? If that's the case you might want to check with the vendor and/or its documentation to make sure that you are doing things the way the vendor intended. Don't trust the black box you got from someone else unless it has proven itself.
Another place to look might be what happens during multiple web requests to the PDF files. Are you sure that the file is not written simultaneously from multiple places, e.g. 2-3 requests generating PDFs simultaneously, or 2-3 pages along the PDF generation process?
And lastly, you might want to check the exception logs to make sure that nothing is crashing or thread-exiting and leaving the file handle open without you noticing it. It happens a lot in multithreading scenarios; sometimes the thread just crashes and exits - which could happen especially if you use 3rd party components. They might be performing some magic tricks, you'd never know.
A:
Sounds like, the files - after being created - are still locked by the worker process. Make sure that you close all the connections for your file.
(remember, using using blocks'll take care of that)
A:
I'd look through your code and make sure all handles to open (generated) files have been closed properly. Sometimes you just can't rely on the garbage collector to sort these things out.
A:
Check that all the code writing files on disk properly closes every handle, using .Close() in the finally clause or through the "using" clause of C#:
byte[] pdfBytes = getPdf(...);
BinaryWriter bw = null;
try {
    bw = new BinaryWriter(File.Create(filename));
    bw.Write(pdfBytes);
}
finally {
    if (null != bw)
        bw.Close();
}
Use the Response and the Content-Disposition clause to send the file
Response.ContentType = "application/pdf";
Response.AppendHeader("Content-disposition", "attachment; filename=" + PDID + ".pdf");
Response.WriteFile(filename);
Response.Flush();
The code shown has been creating and sending PDF files to customers for about 18 months, and we've never seen a file locked.
A:
Like mentioned before: Take care that you close all open handles.
Sometimes the indexing service of Microsoft blocks files. Exclude your directory
|
IIS is keeping hold of my generated files
|
My web application generates pdf files and either e-mails or faxes them to our customers. Somehow IIS6 is keeping hold of the file and blocking any other requests for it claiming the old '..the process cannot access the file 'xxx.pdf' because it is being used by another process.'
When I recycle the application pool all is OK. Does anybody know why this is happening and how I can stop it?
Thanks
|
[
"As with everyone said, do call the Close and Dispose method on any IO objects you have open when reading/writing the PDF files.\nBut I suppose you'd incorporated a 3rd party component? to do the PDF writing for you? If that's the case you might want to check with the vendor and/or its documentation to make sure that you are doing things in the way the vendors intended them to be. Don't trust the black box you got from someone else unless it has proven itself.\nAnother place to look might be what happens during multiple web request to the PDF files, are you sure that the file is not written simultaneously from multiple places? e.g. 2-3 requests genrating PDF simultaneously? or 2-3 pages along the PDF generation process?\nAnd lastly, you might want to check the exception logs to make sure that nothing is crashing/thread exiting and leaving the file handle open without you noticing it. It happens a lot in multiple threading scenarios, sometimes the thread just crashes and exits - which could happen especially if you use 3rd party components, they might be performing some magic tricks, you'd never know.\n",
"Sounds like, the files - after being created - are still locked by the worker process. Make sure that you close all the connections for your file.\n(remember, using using blocks'll take care of that)\n",
"I'd look through your code and make sure all handles to open (generated) files have been closed properly. Sometimes you just can't rely on the garbage collector to sort these things out.\n",
"Check that all the code writing files on disk properly close every handle using the proper .Close() in the finally clause or trough the \"using\" clause of C#\nbyte[] asciiBytes = getPdf(...);\ntry{\nBinaryWriter bw = new BinaryWriter(File.Create(filename));\nbw.Write(pdfBytes);\n}\nfinally {\n if(null != bw)\n bw.Close();\n}\n\nUse the Response and the Content-Disposition clause to send the file\nResponse.ContentType = \"application/pdf\";\nResponse.AppendHeader(\"Content-disposition\", \"attachment; filename=\" + PDID + \".pdf\");\nResponse.WriteFile(filename);\nResponse.Flush();\n\nThe code shown creates and send Pdf files to customer from about 18 months and we've never seen a file locked.\n",
"\nLike mentioned before: Take care that you close all open handlers.\nSometimes the indexing service of Microsoft blocks files. Exclude your directory\n\n"
] |
[
6,
1,
0,
0,
0
] |
[] |
[] |
[
"asp.net",
"iis"
] |
stackoverflow_0000071971_asp.net_iis.txt
|
Q:
What are the names given to these 2 LINQ expressions
I'm trying to find the correct names for these 2 "types" of coding expressions in LINQ so that I can refer to them correctly. I want to say that the first is called "Fluent Style"?
var selectVar = arrayVar.Select( (a,i) => new { Line = a });
var selectVar =
from s in arrayVar
select new { Line = s };
A:
First - calling an extension method.
This style of coding is called "fluent interface" as you mentioned.
The second method is called language-integrated query syntax
A:
The first isn't even really LINQ, it's a lambda expression, with an anonymous-type object created.
(a) => new { blah = b}
The second is a LINQ query filling an on the fly class that has a property Line.
There is no hashrocket operator in this one, so this one is just plain old linq.
A:
The name of the second form is "query comprehension syntax", which the compiler translates into the first form.
|
What are the names given to these 2 LINQ expressions
|
I'm trying to find the correct names for these 2 "types" of coding expressions in LINQ so that I can refer to them correctly. I want to say that the first is called "Fluent Style"?
var selectVar = arrayVar.Select( (a,i) => new { Line = a });
var selectVar =
from s in arrayVar
select new { Line = s };
|
[
"\nFirst - calling an extension method. \nThis style of coding is called \"fluent interface\" as you mentioned.\nSecond method is called language integrated query\n\n",
"The first isn't even really LINQ, it's a lambda expression, with a type invariant object created. \n(a) => new { blah = b}\n\nThe second is a LINQ query filling an on the fly class that has a property Line.\n There is no hashrocket operator in this one, so this one is just plain old linq.\n",
"The name of the second form is \"query comprehesion syntax\", which the compiler translates into the first form.\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"linq"
] |
stackoverflow_0000046096_linq.txt
|
Q:
How to implement "DOM Ready" event in a GreaseMonkey script?
I'm trying to modify my GreaseMonkey script from firing on window.onload to window.DOMContentLoaded, but this event never fires.
I'm using FireFox 2.0.0.16 / GreaseMonkey 0.8.20080609
This is the full script that I'm trying to modify, changing:
window.addEventListener ("load", doStuff, false);
to
window.addEventListener ("DOMContentLoaded", doStuff, false);
A:
So I googled greasemonkey dom ready and the first result seemed to say that the greasemonkey script is actually running at "DOM ready" so you just need to remove the onload call and run the script straight away.
I removed the window.addEventListener ("load", function() { and }, false); wrapping and it worked perfectly. It's much more responsive this way, the page appears straight away with your script applied to it and all the unseen questions highlighted, no flicker at all. And there was much rejoicing.... yea.
A:
GreaseMonkey scripts are themselves executed on DOMContentLoaded, so it's unnecessary to add a load event handler - just have your script do whatever it needs to to immediately.
http://wiki.greasespot.net/DOMContentLoaded
A:
@Sam: yeah, I was trying the same:
// ==UserScript==
// @name Stack Overflow highlight viewed questions
// @namespace *
// @include http://stackoverflow.com/questions
// @include http://stackoverflow.com/questions?*
// @include http://stackoverflow.com/questions
// @include http://stackoverflow.com/questions?*
// @version 0.55 (DOM-Ready instead of onload)
// ==/UserScript==
(function() {
// Customizable items
// var fav_tags = ["python", "database", "mysql"]; // Your favorite tags
const UNSEEN_BACK_COLOR = "rgb(225,210,210)"; // Backcolor for the question already seen
const FAV_TAG_BACK_COLOR = "rgb(210,210,225)"; // Backcolor for the favorite tags
// Internal to the DOM
// const QUESTION_URL = "http:\/\/stackoverflow.com\/questions\/([0-9]+)\/";
const QUESTION_URL = "http:\/\/stackoverflow.com\/questions\/([0-9]+)\/";
const TAG_PREFIX = "show questions tagged ";
const SEEN_MARK = "x";
//
var seen_q = [];
var seen_q_str = "";
var seen_q_str = GM_getValue ("seen_q", "");
var seen_q = seen_q_str.split("|");
var fav_tags_str = GM_getValue ("fav_tags", "")
var fav_tags = fav_tags_str.split(" ")
var already_run = false;
GM_registerMenuCommand ("Set favorite tags", askTags);
// window.addEventListener ("DOMContentLoaded", doStuff, false);
if (! doStuff()) {
window.addEventListener ("load", doStuff, false);
}
function doStuff() {
var elements = window.document.getElementsByTagName('A');
if (! elements || already_run) {
return false;
} else {
already_run = true;
}
GM_log ("here");
for (elem = 0; elem < elements.length; elem++) {
if (elements[elem].href.match (QUESTION_URL)) {
curr_q = RegExp.$1;
// Already seen?
if ((seen_q.length < curr_q) || (seen_q [curr_q] != SEEN_MARK)) {
elements[elem].style.backgroundColor = UNSEEN_BACK_COLOR;
seen_q [curr_q] = SEEN_MARK;
}
// Is a favorite tag?
node = elements[elem].parentNode.parentNode;
for (tag = 0; tag <= fav_tags.length; tag++) {
if (node.innerHTML.match ("'" + fav_tags[tag] + "'")) {
node.style.backgroundColor = FAV_TAG_BACK_COLOR;
break;
}
}
// return (0);
}
}
seen_q_str = seen_q.join("|");
GM_setValue ("seen_q", seen_q_str);
return true;
}
function askTags() {
fav_tags_str = prompt("Favorite tags (separated by spaces)", fav_tags_str);
GM_setValue ("fav_tags", fav_tags_str)
}
})();
|
How to implement "DOM Ready" event in a GreaseMonkey script?
|
I'm trying to modify my GreaseMonkey script from firing on window.onload to window.DOMContentLoaded, but this event never fires.
I'm using FireFox 2.0.0.16 / GreaseMonkey 0.8.20080609
This is the full script that I'm trying to modify, changing:
window.addEventListener ("load", doStuff, false);
to
window.addEventListener ("DOMContentLoaded", doStuff, false);
|
[
"So I googled greasemonkey dom ready and the first result seemed to say that the greasemonkey script is actually running at \"DOM ready\" so you just need to remove the onload call and run the script straight away.\nI removed the window.addEventListener (\"load\", function() { and }, false); wrapping and it worked perfectly. It's much more responsive this way, the page appears straight away with your script applied to it and all the unseen questions highlighted, no flicker at all. And there was much rejoicing.... yea.\n",
"GreaseMonkey scripts are themselves executed on DOMContentLoaded, so it's unnecessary to add a load event handler - just have your script do whatever it needs to to immediately.\nhttp://wiki.greasespot.net/DOMContentLoaded\n",
"@Sam: yeah, I was trying the same:\n// ==UserScript==\n// @name Stack Overflow highlight viewed questions\n// @namespace *\n// @include http://stackoverflow.com/questions\n// @include http://stackoverflow.com/questions?*\n// @include http://stackoverflow.com/questions\n// @include http://stackoverflow.com/questions?*\n// @version 0.55 (DOM-Ready instead of onload)\n// ==/UserScript==\n\n(function() {\n\n // Customizable items\n // var fav_tags = [\"python\", \"database\", \"mysql\"]; // Your favorite tags\n const UNSEEN_BACK_COLOR = \"rgb(225,210,210)\"; // Backcolor for the question already seen\n const FAV_TAG_BACK_COLOR = \"rgb(210,210,225)\"; // Backcolor for the favorite tags\n\n // Internal to the DOM\n // const QUESTION_URL = \"http:\\/\\/stackoverflow.com\\/questions\\/([0-9]+)\\/\";\n const QUESTION_URL = \"http:\\/\\/stackoverflow.com\\/questions\\/([0-9]+)\\/\";\n const TAG_PREFIX = \"show questions tagged \";\n\n const SEEN_MARK = \"x\";\n //\n\n var seen_q = [];\n var seen_q_str = \"\";\n\n var seen_q_str = GM_getValue (\"seen_q\", \"\");\n var seen_q = seen_q_str.split(\"|\");\n\n var fav_tags_str = GM_getValue (\"fav_tags\", \"\")\n var fav_tags = fav_tags_str.split(\" \")\n\n var already_run = false;\n\n GM_registerMenuCommand (\"Set favorite tags\", askTags);\n\n // window.addEventListener (\"DOMContentLoaded\", doStuff, false);\n if (! doStuff()) {\n window.addEventListener (\"load\", doStuff, false);\n }\n\n function doStuff() {\n\n var elements = window.document.getElementsByTagName('A');\n\n if (! elements || already_run) {\n return false;\n } else {\n already_run = true;\n }\n\n GM_log (\"here\");\n\n for (elem = 0; elem < elements.length; elem++) {\n if (elements[elem].href.match (QUESTION_URL)) {\n curr_q = RegExp.$1;\n\n // Already seen?\n if ((seen_q.length < curr_q) || (seen_q [curr_q] != SEEN_MARK)) {\n elements[elem].style.backgroundColor = UNSEEN_BACK_COLOR;\n seen_q [curr_q] = SEEN_MARK;\n }\n\n // Is a favorite tag?\n node = elements[elem].parentNode.parentNode;\n for (tag = 0; tag <= fav_tags.length; tag++) {\n if (node.innerHTML.match (\"'\" + fav_tags[tag] + \"'\")) {\n node.style.backgroundColor = FAV_TAG_BACK_COLOR;\n break;\n }\n }\n\n // return (0);\n }\n }\n\n seen_q_str = seen_q.join(\"|\");\n GM_setValue (\"seen_q\", seen_q_str);\n\n return true;\n }\n\n\n function askTags() {\n fav_tags_str = prompt(\"Favorite tags (separated by spaces)\", fav_tags_str);\n GM_setValue (\"fav_tags\", fav_tags_str)\n }\n\n})();\n\n"
] |
[
27,
11,
1
] |
[] |
[] |
[
"firefox",
"greasemonkey",
"javascript"
] |
stackoverflow_0000072090_firefox_greasemonkey_javascript.txt
|
Q:
How to compile a Java application which uses Google's WebDriver from the command line without Ant
I want to compile some example code which uses Google's WebDriver.
I saved WebDriver into /home/iyo/webdriver. My code is:
package com.googlecode.webdriver.example;
import com.googlecode.webdriver.By;
import com.googlecode.webdriver.WebDriver;
import com.googlecode.webdriver.WebElement;
import com.googlecode.webdriver.htmlunit.HtmlUnitDriver;
public class FirstTest {
public static void main(String[] args) {
WebDriver driver = new HtmlUnitDriver();
driver.get("http://www.google.com");
WebElement element =
driver.findElement(By.xpath("//input[@name = 'q']"));
element.sendKeys("Cheese!");
element.submit();
System.out.println("Page title is: " + driver.getTitle());
}
}
But with javac -cp /home/iyo/webdriver FirstTest.java I got errors like this: FirstTest.java:5: cannot find symbol
symbol : class By
location: package com.googlecode.webdriver
import com.googlecode.webdriver.By;
^
FirstTest.java:7: cannot find symbol
symbol : class WebDriver
location: package com.googlecode.webdriver
import com.googlecode.webdriver.WebDriver;
^
FirstTest.java:9: cannot find symbol
symbol : class WebElement
location: package com.googlecode.webdriver
import com.googlecode.webdriver.WebElement;
^
FirstTest.java:11: package com.googlecode.webdriver.htmlunit does not exist
import com.googlecode.webdriver.htmlunit.HtmlUnitDriver;
^
FirstTest.java:19: cannot find symbol
symbol : class WebDriver
location: class com.googlecode.webdriver.example.FirstTest
WebDriver driver = new HtmlUnitDriver();
^
FirstTest.java:19: cannot find symbol
symbol : class HtmlUnitDriver
location: class com.googlecode.webdriver.example.FirstTest
WebDriver driver = new HtmlUnitDriver();
^
FirstTest.java:27: cannot find symbol
symbol : class WebElement
location: class com.googlecode.webdriver.example.FirstTest
WebElement element =
^
FirstTest.java:29: cannot find symbol
symbol : variable By
location: class com.googlecode.webdriver.example.FirstTest
driver.findElement(By.xpath("//input[@name = 'q']"));
^
8 errors
Is it possible to use it without Ant? (The code works well in NetBeans or Eclipse, but I don't want to use them.) Only with javac?
Thanks.
A:
On the webdriver homepage one can read
Add $WEBDRIVER_HOME/common/build/webdriver-common.jar to the CLASSPATH
Add $WEBDRIVER_HOME/htmlunit/build/webdriver-htmlunit.jar to the CLASSPATH
Add all the Jar files under $WEBDRIVER_HOME/htmlunit/lib/runtime to the CLASSPATH
So you have to put all the jar files behind -cp like that
javac -cp /home/iyo/webdriver/common/build/webdriver-common.jar:/home/iyo/webdriver/htmlunit/build/webdriver-htmlunit.jar FirstTest.java
You probably have to add all the jar files from htmlunit/lib/runtime to the classpath as well.
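If you'd rather not type every jar by hand, a small shell loop can build the classpath for you (a sketch, assuming the directory layout quoted above):
CP=/home/iyo/webdriver/common/build/webdriver-common.jar
CP=$CP:/home/iyo/webdriver/htmlunit/build/webdriver-htmlunit.jar
# Append every HtmlUnit runtime dependency
for jar in /home/iyo/webdriver/htmlunit/lib/runtime/*.jar; do
  CP=$CP:$jar
done
javac -d . -cp "$CP" FirstTest.java
java -cp ".:$CP" com.googlecode.webdriver.example.FirstTest

The -d . flag makes javac create the com/googlecode/webdriver/example directory structure that the package declaration requires.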
|
How to compile a Java application which uses Google WebDriver from the command line without Ant
|
I want to compile some example code which uses Google's WebDriver.
I saved WebDriver into /home/iyo/webdriver. My code is:
package com.googlecode.webdriver.example;
import com.googlecode.webdriver.By;
import com.googlecode.webdriver.WebDriver;
import com.googlecode.webdriver.WebElement;
import com.googlecode.webdriver.htmlunit.HtmlUnitDriver;
public class FirstTest {
public static void main(String[] args) {
WebDriver driver = new HtmlUnitDriver();
driver.get("http://www.google.com");
WebElement element =
driver.findElement(By.xpath("//input[@name = 'q']"));
element.sendKeys("Cheese!");
element.submit();
System.out.println("Page title is: " + driver.getTitle());
}
}
But with javac -cp /home/iyo/webdriver FirstTest.java I get errors like this: FirstTest.java:5: cannot find symbol
symbol : class By
location: package com.googlecode.webdriver
import com.googlecode.webdriver.By;
^
FirstTest.java:7: cannot find symbol
symbol : class WebDriver
location: package com.googlecode.webdriver
import com.googlecode.webdriver.WebDriver;
^
FirstTest.java:9: cannot find symbol
symbol : class WebElement
location: package com.googlecode.webdriver
import com.googlecode.webdriver.WebElement;
^
FirstTest.java:11: package com.googlecode.webdriver.htmlunit does not exist
import com.googlecode.webdriver.htmlunit.HtmlUnitDriver;
^
FirstTest.java:19: cannot find symbol
symbol : class WebDriver
location: class com.googlecode.webdriver.example.FirstTest
WebDriver driver = new HtmlUnitDriver();
^
FirstTest.java:19: cannot find symbol
symbol : class HtmlUnitDriver
location: class com.googlecode.webdriver.example.FirstTest
WebDriver driver = new HtmlUnitDriver();
^
FirstTest.java:27: cannot find symbol
symbol : class WebElement
location: class com.googlecode.webdriver.example.FirstTest
WebElement element =
^
FirstTest.java:29: cannot find symbol
symbol : variable By
location: class com.googlecode.webdriver.example.FirstTest
driver.findElement(By.xpath("//input[@name = 'q']"));
^
8 errors
Is it possible to use it without Ant? (The code works well in NetBeans or Eclipse, but I don't want to use them.) Only with javac?
Thanks.
|
[
"On the webdriver homepage one can read\n\nAdd $WEBDRIVER_HOME/common/build/webdriver-common.jar to the CLASSPATH\nAdd $WEBDRIVER_HOME/htmlunit/build/webdriver-htmlunit.jar to the CLASSPATH\nAdd all the Jar files under $WEBDRIVER_HOME/htmlunit/lib/runtime to the CLASSPATH \n\nSo you have to put all the jar files behind -cp like that\njavac -cp /home/iyo/webdriver/common/build/webdriver-common.jar:/home/iyo/webdriver/common/build/webdriver-htmlunit.jar FirstTest.java\n\nYou probably have to add all the jar files from htmlunit/lib/runtime to the classpath as well.\n"
] |
[
1
] |
[] |
[] |
[
"ant",
"compiler_construction",
"java",
"webdriver"
] |
stackoverflow_0000072201_ant_compiler_construction_java_webdriver.txt
|
Q:
Preventing the loss of keystrokes between pages in a web application
My current project is to write a web application that is an equivalent of an existing desktop application.
In the desktop app at certain points in the workflow the user might click on a button and then be shown a form to fill in. Even if it takes a little time for the app to display the form, expert users know what the form will be and will start typing, knowing that the app will "catch up with them".
In a web application this doesn't happen: when the user clicks a link their keystrokes are then lost until the form on the following page is displayed. Does anyone have any tricks for preventing this? Do I have to move away from using separate pages and use AJAX to embed the form in the page using something like GWT, or will that still have the problem of lost keystrokes?
A:
Keystrokes won't have an effect until the page has loaded, javascript has been processed and the text field is then focused.
Basically what you are really asking is: how do I speed up a web application to increase response times? Your answer is AJAX!
Carefully think about the most common actions in the application and use AJAX to minimise the reloading of webpages. Remember, don't over-use AJAX. Using too much javascript can hinder usability just as much as it can improve it.
Related reading material:
Response Times: The Three Important Limits - Great article from the usability king, Jakob Nielsen.
Ajax Usability Mistakes
AJAX Usability Checklist
A:
Perhaps I am under-thinking the problem but I'll throw this out there... You could just put your form inside a hidden div or similar container that you show (perhaps give it a modal look/behavior?) on the click event of the link. That way the form is already loaded as part of the page. It should appear almost instantly.
You can find modal div tutorials all over the place, shouldn't be too tricky. If you're using ASP.NET there's even one included in Microsoft's AJAX library.
A:
AJAX or plugin are your only chances.
A:
I think it will be quite hard to do what you want. I presume that the real problem is that the new page takes too long to load. You should look at caching the page or doing partial caching on the static components such as pictures etc. to improve the load time or preloading the page and making it invisible. (see Simple Tricks for More Usable Forms for some ideas)
For coding options you could use javascript to capture the keystrokes (see Detecting various Keystroke)
<html><head>
<script language=javascript>
IE=document.all;
NN=document.layers;
kys="";
if (NN){document.captureEvents(Event.KEYPRESS)}
document.onkeypress=katch
function katch(e){
if (NN){kys+=e.which}
if (IE){kys+=event.keyCode}
document.forms[0].elements[0].value=kys
}
</script>
</head>
<body>
<form><input></form>
</body>
</html>
You will need to save and then transfer them to the new page after control passes from the current page. (see Save Changes on Close of Browser or when exiting the page)
For some general info on problems with detecting keystrokes in the various browsers have a look at Javascript - Detecting keystrokes.
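As a rough modern sketch of the same buffering idea (placed early in the new page's head; the element lookup is illustrative):
<script>
// Buffer keystrokes typed while the page is still loading,
// then replay them into the form's first field once the DOM is ready.
var buffered = "";
document.onkeypress = function (e) {
  e = e || window.event;
  buffered += String.fromCharCode(e.which || e.keyCode);
};
document.addEventListener("DOMContentLoaded", function () {
  var field = document.forms[0].elements[0];
  field.value = buffered;
  field.focus();
  document.onkeypress = null; // hand key handling back to the field
});
</script>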
|
Preventing the loss of keystrokes between pages in a web application
|
My current project is to write a web application that is an equivalent of an existing desktop application.
In the desktop app at certain points in the workflow the user might click on a button and then be shown a form to fill in. Even if it takes a little time for the app to display the form, expert users know what the form will be and will start typing, knowing that the app will "catch up with them".
In a web application this doesn't happen: when the user clicks a link their keystrokes are then lost until the form on the following page is displayed. Does anyone have any tricks for preventing this? Do I have to move away from using separate pages and use AJAX to embed the form in the page using something like GWT, or will that still have the problem of lost keystrokes?
|
[
"Keystrokes won't have an effect until the page has loaded, javascript has been processed and the text field is then focused.\nBasically what you are really asking is; how do I speed up a web application to increase response times? Your anwser is AJAX! \nCarefully think about the most common actions in the application and use AJAX to minimise the reloading of webpages. Remember, don't over-use AJAX. Using too much javascript can hinder usability just as much as it can improve it.\nRelated reading material: \n\nResponse Times: The Three Important Limits - Great article from the usability king, Jacon Neilson. \nAjax Usability Mistakes \nAJAX Usability Checklist\n\n",
"Perhaps I am under-thinking the problem but I'll throw this out there... You could just put your form inside a hidden div or similar container that you show (perhaps give it a modal look/behavior?) on the click event of the link. That way the form is already loaded as part of the page. It should appear almost instantly.\nYou can find modal div tutorials all over the place, shouldn't be too tricky. If you're using ASP.NET there's even one included in Microsoft's AJAX library.\n",
"AJAX or plugin are your only chances.\n",
"I think it will be quite hard to do what you want. I presume that the real problem is that the new page takes too long to load. You should look at caching the page or doing partial caching on the static components such as pictures etc. to improve the load time or preloading the page and making it invisible. (see Simple Tricks for More Usable Forms for some ideas)\nFor coding options you could use javascript to capture the keystrokes (see Detecting various Keystroke)\n<html><head>\n<script language=javascript>\nIE=document.all;\nNN=document.layers;\nkys=\"\";\nif (NN){document.captureEvents(Event.KEYPRESS)}\ndocument.onkeypress=katch\nfunction katch(e){\nif (NN){kys+=e.which}\nif (IE){kys+=event.keyCode}\ndocument.forms[0].elements[0].value=kys\n}\n</script>\n</head>\n<body>\n<form><input></form>\n</body>\n</html>\n\nYou will need to save and then transfer them to the new page after control passes from the current page. (see Save Changes on Close of Browser or when exiting the page)\nFor some general info on problems with detecting keystrokes in the various browsers have a look at Javascript - Detecting keystrokes.\n"
] |
[
6,
1,
0,
0
] |
[] |
[] |
[
"javascript",
"user_input"
] |
stackoverflow_0000061688_javascript_user_input.txt
|
Q:
Terminate MySQL connections on shared host?
I'm using MediaTemple's Grid Server (shared/grid hosting) to run some MySQL/PHP sites I'm writing and noticed that I wasn't closing one of my MySQL connections, which caused my site to error out:
"Too Many Connections"
I can't log in anywhere to close the connections manually.
Is there any way to close open connections using a script or some other type of command?
Should I just wait?
A:
If you can't log into MySQL at all, you will probably have to contact your hosting provider to kill the connections.
If you can use the MySQL shell, you can use the show processlist command to view connections, then use the kill command to remove the connections.
It's been my experience that hung SQL connections tend to stay that way, unfortunately.
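For example, from the MySQL shell (the connection id 123 is illustrative):
SHOW PROCESSLIST;  -- lists each connection with its Id, User, Host and State
KILL 123;          -- terminates the connection whose Id is 123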
A:
Blindly going in and terminating connections is not the way to solve this problem. First you need to understand why you are running out of connections. Is your max_connections setting selected to correctly match the number of max/anticipated users? Are you using persistent connections when you really don't need them? etc.
A:
Make sure that you're closing the connections with your PHP code. Also, you could increase the maximum connections allowed in /etc/my.cnf.
max_connections=500
Finally, you can login to a mysql prompt and type show status or show processlist to view various statistics with your server.
If all else fails, restarting the server daemon should clear the persistent connections.
A:
Well, if you cannot ever sneak in with a connection, I dunno', but if you can occasionally sneak in, in Ruby it would be close to:
require 'mysql'
mysql = Mysql.new(ip, user, pass)
processlist = mysql.query("show full processlist")
killed = 0
processlist.each { | process |
  mysql.query("KILL #{process[0].to_i}")
  killed += 1
}
puts "#{Time.new} -- killed: #{killed} connections"
A:
If you can access the command line with enough privileges, restart the MySQL server or the Apache (assuming that you use Apache) server - because probably it is keeping the connections open. After you have successfully closed the connections, make sure that you are not using persistent connections from PHP (the general opinion seems to be that it doesn't create any significant performance gain, but it has all kinds of problems - like you've experienced - and in some cases - like using it with PostgreSQL - it can even significantly slow down your site!).
|
Terminate MySQL connections on shared host?
|
I'm using MediaTemple's Grid Server (shared/grid hosting) to run some MySQL/PHP sites I'm writing and noticed that I wasn't closing one of my MySQL connections, which caused my site to error out:
"Too Many Connections"
I can't log in anywhere to close the connections manually.
Is there any way to close open connections using a script or some other type of command?
Should I just wait?
|
[
"If you can't log into MySQL at all, you will probably have to contact your hosting provider to kill the connections.\nIf you can use the MySQL shell, you can use the show processlist command to view connections, then use the kill command to remove the connections.\nIt's been my experience that hung SQL connections tend to stay that way, unfortunately.\n",
"blindly going in an terminating connections is not the way to solve this problem. first you need to understand why you are running out of connections. is your max_connections setting selected to correctly match the number of max/anticipated users? are you using persistent connections when you really don't need them? etc.\n",
"Make sure that you're closing the connections with your PHP code. Also, you could increase the maximum connections allowed in /etc/my.cnf.\nmax_connections=500\n\nFinally, you can login to a mysql prompt and type show status or show processlist to view various statistics with your server.\nIf all else fails, restarting the server daemon should clear the persistent connections.\n",
"Well, if you cannot ever sneak in with a connection, I dunno', but if you can occasionally sneak in, in Ruby it would be close to:\nrequire 'mysql'\n\nmysql = Mysql.new(ip, user, pass)\nprocesslist = mysql.query(\"show full processlist\")\nkilled = 0\nprocesslist.each { | process |\n mysql.query(\"KILL #{process[0].to_i}\")\n} \nputs \"#{Time.new} -- killed: #{killed} connections\"\n\n",
"If you can access the command line with enough privileges, restart the MySQL server or the Apache (assuming that you use Apache) server - because probably it is keeping the connections open. After you successfully closed the connections, make sure that you are not using persistent connections from PHP (the general opinion seems to be that it doesn't create any significant performance gain, but it has all kinds of problems - like you've experienced - and in some cases - like using it PostgreSQL - it can even significantly slow down your site!).\n"
] |
[
1,
1,
0,
0,
0
] |
[] |
[] |
[
"connection",
"database",
"mysql",
"sql"
] |
stackoverflow_0000069159_connection_database_mysql_sql.txt
|
Q:
Are there any noted differences in appearance rendering of html and xhtml in Google Chrome from other browsers?
Are there any noted differences in appearance rendering of HTML and XHTML in Google Chrome from Firefox? From IE? From other browsers? What browser does it render the code the most similar to?
A:
Since it's based on WebKit, its rendering will most closely resemble Safari and Konqueror.
A:
Google's Chrome uses the WebKit rendering engine, which is what Safari uses. So, I would guess it renders most closely to Safari.
A:
There are anti-aliasing differences between Safari 3.1 and Google Chrome, for whatever that's worth. This will doubtless be because Safari on Windows uses its own text-rendering and anti-aliasing layer instead of Windows's GDI.
A:
There are additional minor differences that I have attributed to Chrome using a different (older?) version of Webkit (525.13) than the current release of Safari uses (525.21 for me).
Example:
https://woot.campfirenow.com/login
In Safari, the password label and input box are directly below the email label and input box, while in Chrome the password label and input box are indented approximately 75 pixels to the right.
|
Are there any noted differences in appearance rendering of html and xhtml in Google Chrome from other browsers?
|
Are there any noted differences in appearance rendering of HTML and XHTML in Google Chrome from Firefox? From IE? From other browsers? What browser does it render the code the most similar to?
|
[
"Since it's based on WebKit, its rendering will most closely resemble Safari and Konqueror.\n",
"Google's Chrome uses the WebKit rendering engine, which is what Safari uses. So, I would guess it renders most closely to Safari.\n",
"There are anti-aliasing differences between Safari 3.1 and Google Chrome, for whatever that's worth. This will doubtless be because Safari on Windows uses its own text-rendering and anti-aliasing layer instead of Windows's GDI.\n",
"There are additional minor differences that I have attributed to Chrome using a different (older?) version of Webkit (525.13) than the current release of Safari uses (525.21 for me). \nExample:\nhttps://woot.campfirenow.com/login\nIn Safari, the password label and input box are directly below the email label and input box, while in Chrome the password label and input box are indented approximately 75 pixels to the right.\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"browser",
"google_chrome",
"html",
"xhtml"
] |
stackoverflow_0000069890_browser_google_chrome_html_xhtml.txt
|
Q:
How to play a standard windows sound?
How do I find out which sound files the user has configured in the control panel?
Example: I want to play the sound for "Device connected".
Which API can be used to query the control panel sound settings?
I see that there are some custom entries made by third party programs in the control panel dialog, so there has to be a way for these programs to communicate with the global sound settings.
Edit: Thank you. I did not know that PlaySound also just played appropriate sound file when specifying the name of the registry entry.
To play the "Device Conntected" sound:
::PlaySound( TEXT("DeviceConnect"), NULL, SND_ALIAS|SND_ASYNC );
A:
PlaySound is the API.
Also see Play System Sounds.
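If you are calling from managed code but still want the alias-based Win32 behaviour, a P/Invoke sketch might look like this (the "DeviceConnect" alias comes from the question's edit):
using System;
using System.Runtime.InteropServices;

class AliasSoundDemo
{
    const uint SND_ASYNC = 0x00000001; // play asynchronously
    const uint SND_ALIAS = 0x00010000; // name is a registry sound alias

    [DllImport("winmm.dll", CharSet = CharSet.Auto)]
    static extern bool PlaySound(string pszSound, IntPtr hmod, uint fdwSound);

    static void Main()
    {
        // "DeviceConnect" is the alias for the "Device Connected" event sound
        PlaySound("DeviceConnect", IntPtr.Zero, SND_ALIAS | SND_ASYNC);
    }
}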
A:
Not Win32, but for .net anyway, you can do this using the following in C#:
System.Media.SystemSounds.Asterisk.Play();
// Plays the Asterisk sound (used for Information (i))
// Also available:
// Exclamation (Warning /!\)
// Hand (aka Critical Stop - Error (X))
// Question (?)
// Beep (aka Default Beep)
|
How to play a standard windows sound?
|
How do I find out which sound files the user has configured in the control panel?
Example: I want to play the sound for "Device connected".
Which API can be used to query the control panel sound settings?
I see that there are some custom entries made by third party programs in the control panel dialog, so there has to be a way for these programs to communicate with the global sound settings.
Edit: Thank you. I did not know that PlaySound also just played appropriate sound file when specifying the name of the registry entry.
To play the "Device Conntected" sound:
::PlaySound( TEXT("DeviceConnect"), NULL, SND_ALIAS|SND_ASYNC );
|
[
"PlaySound is the API.\nAlso see Play System Sounds.\n",
"Not Win32, but for .net anyway, you can do this using the following in C#:\nSystem.Media.SystemSounds.Asterisk.Play();\n// Plays the Asterisk sound (used for Information (i))\n// Also available:\n// Exclamation (Warning /!\\)\n// Hand (aka Critical Stop - Error (X))\n// Question (?)\n// Beep (aka Default Beep)\n\n"
] |
[
15,
13
] |
[] |
[] |
[
"audio",
"winapi"
] |
stackoverflow_0000072167_audio_winapi.txt
|
Q:
In Visual Studio 2008, is it possible to mix vertical tab groups with horizontal tab groups?
I have a 1920x1200 screen and would like to customize VS2008 code windows to have some areas split vertically and horizontally (tab groups).
I can only seem to do all vertical or all horizontal in VS2008. Is there any crafty way of mixing both?
A:
I don't believe this is possible in Visual Studio. However, you should check out this nice product for splitting applications: http://www.winsplit-revolution.com/
|
In Visual Studio 2008, is it possible to mix vertical tab groups with horizontal tab groups?
|
I have a 1920x1200 screen and would like to customize VS2008 code windows to have some areas split vertically and horizontally (tab groups).
I can only seem to do all vertical or all horizontal in VS2008. Is there any crafty way of mixing both?
|
[
"I don't believe this is possible in Visual Studio. However, you should check out this nice product for splitting applications: http://www.winsplit-revolution.com/\n"
] |
[
0
] |
[] |
[] |
[
"multiple_monitors",
"visual_studio_2008"
] |
stackoverflow_0000072372_multiple_monitors_visual_studio_2008.txt
|
Q:
How to use the "is" operator in System.Type variables?
Here is what I'm doing:
object ReturnMatch(System.Type type)
{
foreach(object obj in myObjects)
{
if (obj == type)
{
return obj;
}
}
}
However, if obj is a subclass of type, it will not match. But I would like the function to return the same way as if I was using the operator is.
I tried the following, but it won't compile:
if (obj is type) // won't compile in C# 2.0
The best solution I came up with was:
if (obj.GetType().Equals(type) || obj.GetType().IsSubclassOf(type))
Isn't there a way to use operator is to make the code cleaner?
A:
I've used the IsAssignableFrom method when faced with this problem.
Type theTypeWeWant; // From argument or whatever
foreach (object o in myCollection)
{
if (theTypeWeWant.IsAssignableFrom(o.GetType()))
return o;
}
Another approach that may or may not work with your problem is to use a generic method:
private T FindObjectOfType<T>() where T: class
{
foreach(object o in myCollection)
{
if (o is T)
return (T) o;
}
return null;
}
(Code written from memory and is not tested)
A:
Not using the is operator, but the Type.IsInstanceOfType Method appears to be what you're looking for.
http://msdn.microsoft.com/en-us/library/system.type.isinstanceoftype.aspx
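Applied to the original method, that might look like this (a sketch; myObjects is the collection from the question, and returning null when nothing matches is an assumption):
object ReturnMatch(System.Type type)
{
    foreach (object obj in myObjects)
    {
        // True for exact matches and for subclasses of 'type'
        if (type.IsInstanceOfType(obj))
        {
            return obj;
        }
    }
    return null;
}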
A:
Perhaps
type.IsAssignableFrom(obj.GetType())
A:
The is operator indicates whether or not it would be 'safe' to cast one object as another object (often a super class).
if(obj is type)
if obj is of type 'type' or a subclass thereof, then the if statement will succeed, as it is 'safe' to cast obj as (type)obj.
see: http://msdn.microsoft.com/en-us/library/scekt9xw(VS.71).aspx
A:
Is there a reason why you cannot use the "is" keyword itself?
foreach(object obj in myObjects)
{
if (obj is type)
{
return obj;
}
}
EDIT - I see what I was missing. Isak's suggestion is the correct one; I have tested and confirmed it.
class Level1
{
}
class Level2A : Level1
{
}
class Level2B : Level1
{
}
class Level3A2A : Level2A
{
}
class Program
{
static void Main(string[] args)
{
object[] objects = new object[] {"testing", new Level1(), new Level2A(), new Level2B(), new Level3A2A(), new object() };
ReturnMatch(typeof(Level1), objects);
Console.ReadLine();
}
static void ReturnMatch(Type arbitraryType, object[] objects)
{
foreach (object obj in objects)
{
Type objType = obj.GetType();
Console.Write(arbitraryType.ToString() + " is ");
if (!arbitraryType.IsAssignableFrom(objType))
Console.Write("not ");
Console.WriteLine("assignable from " + objType.ToString());
}
}
}
|
How to use the "is" operator in System.Type variables?
|
Here is what I'm doing:
object ReturnMatch(System.Type type)
{
foreach(object obj in myObjects)
{
if (obj == type)
{
return obj;
}
}
}
However, if obj is a subclass of type, it will not match. But I would like the function to return the same way as if I was using the operator is.
I tried the following, but it won't compile:
if (obj is type) // won't compile in C# 2.0
The best solution I came up with was:
if (obj.GetType().Equals(type) || obj.GetType().IsSubclassOf(type))
Isn't there a way to use operator is to make the code cleaner?
|
[
"I've used the IsAssignableFrom method when faced with this problem.\nType theTypeWeWant; // From argument or whatever\nforeach (object o in myCollection)\n{\n if (theTypeWeWant.IsAssignableFrom(o.GetType))\n return o;\n}\n\nAnother approach that may or may not work with your problem is to use a generic method:\nprivate T FindObjectOfType<T>() where T: class\n{\n foreach(object o in myCollection)\n {\n if (o is T)\n return (T) o;\n }\n return null;\n}\n\n(Code written from memory and is not tested)\n",
"Not using the is operator, but the Type.IsInstanceOfType Method appears to be what you're looking for.\nhttp://msdn.microsoft.com/en-us/library/system.type.isinstanceoftype.aspx\n",
"Perhaps \ntype.IsAssignableFrom(obj.GetType())\n\n",
"the is operator indicates whether or not it would be 'safe' to cast one object as another obeject (often a super class).\nif(obj is type)\n\nif obj is of type 'type' or a subclass thereof, then the if statement will succeede as it is 'safe' to cast obj as (type)obj.\nsee: http://msdn.microsoft.com/en-us/library/scekt9xw(VS.71).aspx\n",
"Is there a reason why you cannot use the \"is\" keyword itself?\nforeach(object obj in myObjects)\n{\n if (obj is type)\n {\n return obj;\n }\n}\n\nEDIT - I see what I was missing. Isak's suggestion is the correct one; I have tested and confirmed it.\n class Level1\n {\n }\n\n class Level2A : Level1\n {\n }\n\n class Level2B : Level1\n {\n }\n\n class Level3A2A : Level2A\n {\n }\n\n\n class Program\n {\n static void Main(string[] args)\n {\n object[] objects = new object[] {\"testing\", new Level1(), new Level2A(), new Level2B(), new Level3A2A(), new object() };\n\n\n ReturnMatch(typeof(Level1), objects);\n Console.ReadLine();\n }\n\n\n static void ReturnMatch(Type arbitraryType, object[] objects)\n {\n foreach (object obj in objects)\n {\n Type objType = obj.GetType();\n\n Console.Write(arbitraryType.ToString() + \" is \");\n\n if (!arbitraryType.IsAssignableFrom(objType))\n Console.Write(\"not \");\n\n Console.WriteLine(\"assignable from \" + objType.ToString());\n\n }\n }\n }\n\n"
] |
[
5,
3,
2,
0,
0
] |
[] |
[] |
[
"c#"
] |
stackoverflow_0000072360_c#.txt
|
Q:
How do I quickly do something in RMagick to test that it works
I need to be able to quickly convert an image (inside a Rails controller) so that the hosting company managing our application can quickly test at any time that RMagick is not only successfully installed, but can also be called through the Rails stack. What is the quickest, cleanest code I can use to do this?
A:
require 'RMagick'
image = Magick::Image.new(110, 30){ self.background_color = 'white' }
image.write('/tmp/test.jpg')
A:
I wanted to do this so that I can easily hit it with a web browser, as I'm deploying to managed servers to which I do not have shell access (for increased security).
So this is what I did
class DiagnosticsController < ApplicationController
require 'RMagick'
def rmagick
images_path = "public/images"
file_name = "rmagick_generated_thumb.jpg"
file_path = images_path + "/"+ file_name
File.delete file_path if File.exists? file_path
img = Magick::Image.read("lib/sample_images/magic.jpg").first
thumb = img.scale(0.25)
@path = file_name
thumb.write file_path
end
end #------
and then in rmagick.html.erb
<%= image_tag @path %>
Now I can hit the controller, and if I see an image, I know RMagick is installed.
A:
I'd log on to the server and try out your code in script/console. This will still go through the rails stack, but will allow you to quickly check that your code works the way you expect and that RMagick and ImageMagick are correctly installed without having to deploy anything.
When the time comes to write your actual code, I'd suggest putting the image conversion code inside a model, so you can call it outside the context of a controller.
A:
Use script/console, and call code in a model or a controller that does something like the following:
require 'RMagick'
include Magick
img = ImageList.new('myfile.jpg')
img.crop(0,0,10,10) # or whatever
img.write('mycroppedfile.jpg')
|
How do I quickly do something in RMagick to test that it works
|
I need to be able to quickly convert an image (inside a Rails controller) so that the hosting company managing our application can quickly test at any time that RMagick is not only successfully installed, but can also be called through the Rails stack. What is the quickest, cleanest code I can use to do this?
|
[
"require 'RMagick'\n\nimage = Magick::Image.new(110, 30){ self.background_color = 'white' }\nimage.write('/tmp/test.jpg')\n\n",
"I wanted to do this so that I can easily hit it with a web browser, as I'm deployng to managed servers, which I do not have shell access onto (for increased security).\nSo this is what I did\nclass DiagnosticsController < ApplicationController\n require 'RMagick'\n\n def rmagick\n images_path = \"public/images\"\n file_name = \"rmagick_generated_thumb.jpg\"\n file_path = images_path + \"/\"+ file_name\n\n File.delete file_path if File.exists? file_path\n img = Magick::Image.read(\"lib/sample_images/magic.jpg\").first\n thumb = img.scale(0.25)\n @path = file_name\n thumb.write file_path\n end\nend #------\n\nand then in rmagick.html.erb\n<%= image_tag @path %>\n\nNow I can hit the controller, and if I see an image, I know rmagic is installed.\n",
"I'd log on to the server and try out your code in script/console. This will still go through the rails stack, but will allow you to quickly check that your code works the way you expect and that RMagick and ImageMagick are correctly installed without having to deploy anything.\nWhen the time comes to write your actual code, I'd suggest putting the image conversion code inside a model, so you can call it outside the context of a controller.\n",
"Use script/console, and call code in a model or a controller that does something like the following:\nrequire 'RMagick'\ninclude Magick\nimg = ImageList.new('myfile.jpg')\nimg.crop(0,0,10,10) # or whatever\nimg.write('mycroppedfile.jpg')\n\n"
] |
[
14,
4,
0,
0
] |
[] |
[] |
[
"rmagick",
"ruby"
] |
stackoverflow_0000070779_rmagick_ruby.txt
|
Q:
Why is a method call shown as not covered when the code within the method is covered with emma?
I am writing a unit test to check that a private method will close a stream.
The unit test calls methodB and the variable something is null
The unit test doesn't mock the class on test
The private method is within a public method that I am calling.
Using emma in eclipse (via the eclemma plugin) the method call is displayed as not being covered even though the code within the method is
e.g
public void methodA() {
if (something==null) {
methodB(); //Not displayed as covered
}
}
private void methodB() {
lineCoveredByTest; //displayed as covered
}
Why would the method call not be highlighted as being covered?
A:
I have found that the eclipse plugin for EMMA is quite buggy, and have had similar experiences to the one you describe. Better to just use EMMA on its own (via ANT if required). Make sure you always regenerate the metadata files produced by EMMA, to avoid merging confusion (which I suspect is the problem with the eclipse plugin).
A:
I assume when you say 'the unit test calls methodB()', you mean not directly and via methodA().
So, is it possible methodB() is being called elsewhere, by another unit test or methodC() maybe?
|
Why is a method call shown as not covered when the code within the method is covered with emma?
|
I am writing a unit test to check that a private method will close a stream.
The unit test calls methodB and the variable something is null
The unit test doesn't mock the class on test
The private method is within a public method that I am calling.
Using emma in eclipse (via the eclemma plugin) the method call is displayed as not being covered even though the code within the method is
e.g
public void methodA() {
if (something==null) {
methodB(); //Not displayed as covered
}
}
private void methodB() {
lineCoveredByTest; //displayed as covered
}
Why would the method call not be highlighted as being covered?
|
[
"I have found that the eclipse plugin for EMMA is quite buggy, and have had similar experiences to the one you describe. Better to just use EMMA on its own (via ANT if required). Make sure you always regenerate the metadata files produced by EMMA, to avoid merging confusion (which I suspect is the problem with the eclipse plugin).\n",
"I assume when you say 'the unit test calls methodB()', you mean not directly and via methodA(). \nSo, is it possible methodB() is being called elsewhere, by another unit test or methodC() maybe?\n"
] |
[
2,
0
] |
[] |
[] |
[
"code_coverage",
"emma",
"junit",
"unit_testing"
] |
stackoverflow_0000056373_code_coverage_emma_junit_unit_testing.txt
|
Q:
What is the best way to extract a version string from a file?
I want to use a file to store the current version number for a piece of customer software which can be used by a start-up script to run the binary in the correct directory.
For Example, if the run directory looks like this:
.
..
1.2.1
1.2.2
1.3.0
run.sh
current_version
And current_version contains:
1.2.2
I want run.sh to descend into 1.2.2 and run the program foo.
The current solution is this:
#!/bin/sh
version=`cat current_version`
cd $version
./foo
It works but is not very robust. It does not check for file existence, cannot cope with multiple lines, leading spaces, commented lines, blank files, etc.
What is the most survivable way to do this with either a shell or perl script?
A:
That's a common approach. You can check for dir/file existence using test operators such as "if [ -d "somedirectory" ]; then" or [ -t "somefile" ]
I use symbolic links more often, though. Then, you can just change your symbolic link to use the version you want.
For example,
$ ln -s 1.2.2 current_version
$ ls -al
total 20
drwxr-xr-x 5 dbreese dbreese 4096 2008-09-15 13:34 .
drwxr-xr-x 3 dbreese dbreese 4096 2008-09-15 13:34 ..
drwxr-xr-x 2 dbreese dbreese 4096 2008-09-15 13:34 1.2.1
drwxr-xr-x 2 dbreese dbreese 4096 2008-09-15 13:34 1.2.2
drwxr-xr-x 2 dbreese dbreese 4096 2008-09-15 13:34 1.3.0
lrwxrwxrwx 1 dbreese dbreese 6 2008-09-15 13:34 current_version -> 1.2.2/
Then your script can just use "cd current_version".
A:
I would change the script to accept an argument. The argument should be a filename. Open the file using whatever scripting language you prefer [perl, python] and traverse the file until you find a match for your version.
I would use a regex... something along the lines of /\d\.\d\.\d/ . then have it execute the application through your script.
A:
You can check for existence using the && and || operators:
#!/bin/sh
version=`cat current_version`
cd $version && ./foo || echo "Bad version"
The && operator causes the second statement to only execute if the first one succeeds (exit status 0), and the || operator causes the second statement to execute only if the first one fails (exit status non-zero).
I'm not sure what you mean by coping with multiple lines, leading spaces, or commented lines.
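For those cases, a slightly more defensive sketch (the sed expressions are just one way to strip comments and whitespace):
#!/bin/sh
# Drop comments and whitespace, skip blank lines, take the first entry
version=`sed -e 's/#.*//' -e 's/[[:space:]]//g' -e '/^$/d' current_version | head -n 1`
if [ -n "$version" ] && [ -d "$version" ]; then
    cd "$version" && exec ./foo
else
    echo "Bad or missing version: '$version'" >&2
    exit 1
fi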
A:
If the versioning will always be in #.#.# format, you could do this:
ls | grep '^[0-9]\.[0-9]\.[0-9]$' | sort -nr | head -n 1
That will list the versions in descending numerical order, then selects the first of those
A:
What about:
#!/usr/bin/perl -w
use strict;
use warnings;
my $version_file = 'current_version';
open my $fh, '<', $version_file or die "Can't open $version_file: $!";
# Read version from file
my $version = <$fh>;
chomp $version;
# Remove whitespace (and match version)
die "Version format not recognized" if $version !~ m/(\d+\.\d+\.\d+)/;
my $dir = $1;
die "Directory not found: $dir" unless -d $dir;
# Execute program in versioned directory.
chdir $dir;
system("./foo");
|
What is the best way to extract a version string from a file?
|
I want to use a file to store the current version number for a piece of customer software which can be used by a start-up script to run the binary in the correct directory.
For Example, if the run directory looks like this:
.
..
1.2.1
1.2.2
1.3.0
run.sh
current_version
And current_version contains:
1.2.2
I want run.sh to descend into 1.2.2 and run the program foo.
The current solution is this:
#!/bin/sh
version=`cat current_version`
cd $version
./foo
It works but is not very robust. It does not check for file existence, cannot cope with multiple lines, leading spaces, commented lines, blank files, etc.
What is the most survivable way to do this with either a shell or perl script?
|
[
"That's a common approach. You can check for dir/file existence using test operators such as \"if [ -d \"somedirectory\" ]; then\" or [ -t \"somefile\" ]\nI use symbolic links more often, though. Then, you can just change your symbolic link to use the version you want.\nFor example,\n\n$ ln -s 1.2.2 current_version\n$ ls -al\ntotal 20\ndrwxr-xr-x 5 dbreese dbreese 4096 2008-09-15 13:34 .\ndrwxr-xr-x 3 dbreese dbreese 4096 2008-09-15 13:34 ..\ndrwxr-xr-x 2 dbreese dbreese 4096 2008-09-15 13:34 1.2.1\ndrwxr-xr-x 2 dbreese dbreese 4096 2008-09-15 13:34 1.2.2\ndrwxr-xr-x 2 dbreese dbreese 4096 2008-09-15 13:34 1.3.0\nlrwxrwxrwx 1 dbreese dbreese 6 2008-09-15 13:34 current_version -> 1.2.2/\n\nThen your script can just use \"cd current_version\".\n",
"I would change the script to accept an argument. The argument should be a filename. Open the file using whatever scripting language you prefer [perl, python] and traverse the file until you find a match for your version.\nI would use a regex... something along the lines of /\\d\\.\\d\\.\\d/ . then have it execute the application through your script.\n",
"You can check for existence using the && and || operators:\n\n!$/bin/sh\nversion = `cat current_version`\ncd $version && ./foo || echo \"Bad version\"\n\nThe && operator causes the second statement to only execute if the first one succeeds (exit status 0), and the || operator causes the second statement to execute only if the first one fails (exit status non-zero).\nI'm not sure what you mean by coping with multiple lines, leading spaces, or commented lines.\n",
"If the versioning will always be in #.#.# format, you could do this:\nls | grep ^[0-9]\\.[0-9]\\.[0-9]$ | sort -nr | head -n 1\n\nThat will list the versions in descending numerical order, then selects the first of those\n",
"What about:\n#!/usr/bin/perl -w\nuse strict;\nuse warnings;\n\nmy $version_file = 'current_version';\nopen my $fh, '<', $version_file or die \"Can't open $version_file: $!\";\n\n# Read version from file\nmy $version = <$fh>;\nchomp $version;\n\n# Remove whitespace (and match version)\ndie \"Version format not recognized\" if $version !~ m/(\\d+\\.\\d+\\.\\d+)/;\n\nmy $dir = $1;\ndie \"Directory not found: $dir\" unless -d $dir;\n\n# Execute program in versioned directory.\nchdir $dir;\nsystem(\"./foo\");\n\n"
] |
[
5,
0,
0,
0,
0
] |
[
"!#/bin/sh\n\nif [ -e 'current_version' ]; then\n version=`cat current_version`;\n version=`echo $version | tr -ds [[:blank:]]`\n if [ -n \"$version\" ]; then\n if [ -d \"$version\" ]; then\n cd \"$version\"\n else\n echo $version is not a directory\n fi\n else\n echo version_file contained only blanks\n fi\nelse \n No file named current_version exists\nfi\n\n"
] |
[
-4
] |
[
"bash",
"perl",
"scripting"
] |
stackoverflow_0000065994_bash_perl_scripting.txt
|
Q:
How do you use gnuplot's built-in fonts?
The gnuplot docs have this to say about fonts:
Five basic fonts are supported directly by the gd library. These are
`tiny` (5x8 pixels), `small` (6x12 pixels), `medium`, (7x13 Bold),
`large` (8x16) or `giant` (9x15 pixels).
But when I try to use one:
gnuplot> set terminal png font tiny
I get:
Could not find/open font when opening font tiny, using default
How do I use these seemingly built-in fonts?
A:
The problem was that, for these fonts for some reason, you don't use the standard syntax I tried above:
gnuplot> set terminal png font tiny
But instead, you drop the word "font" for these five special fonts:
gnuplot> set terminal png tiny
|
How do you use gnuplot's built-in fonts?
|
The gnuplot docs have this to say about fonts:
Five basic fonts are supported directly by the gd library. These are
`tiny` (5x8 pixels), `small` (6x12 pixels), `medium`, (7x13 Bold),
`large` (8x16) or `giant` (9x15 pixels).
But when I try to use one:
gnuplot> set terminal png font tiny
I get:
Could not find/open font when opening font tiny, using default
How do I use these seemingly built-in fonts?
|
[
"The problem was that, for these fonts for some reason, you don't use the standard syntax I tried above:\ngnuplot> set terminal png font tiny\n\nBut instead, you drop the word \"font\" for these five special fonts:\ngnuplot> set terminal png tiny\n\n"
] |
[
4
] |
[] |
[] |
[
"gnuplot"
] |
stackoverflow_0000069814_gnuplot.txt
|
Q:
equivalent vb code for a java code
Can anyone tell me what exactly this Java code does?
SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
byte[] bytes = new byte[20];
synchronized (random)
{
random.nextBytes(bytes);
}
return Base64.encode(bytes);
Step by step explanation will be useful so that I can recreate this code in VB. Thanks
A:
Using code snippets you can get to something like this
Dim randomNumGen As RandomNumberGenerator = RNGCryptoServiceProvider.Create()
Dim randomBytes(19) As Byte ' VB.NET array bounds are inclusive: 0..19 gives 20 bytes
randomNumGen.GetBytes(randomBytes)
return Convert.ToBase64String(randomBytes)
A:
This creates a random number generator (SecureRandom). It then creates a byte array (byte[] bytes), length 20 bytes, and populates it with random data.
This is then encoded using BASE64 and returned.
So, in a nutshell,
Generate 20 random bytes
Encode using Base 64
A:
It creates a SHA1 based random number generator (RNG), then Base64 encodes the next 20 bytes returned by the RNG.
I can't tell you why it does this however without some more context :-).
A:
This code gets a cryptographically strong random number that is 20 bytes in length, then Base64 encodes it. There's a lot of Java library code here, so your guess is as good as mine as to how to do it in VB.
SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
byte[] bytes = new byte[20];
synchronized (random) { random.nextBytes(bytes); }
return Base64.encode(bytes);
The first line creates an instance of the SecureRandom class. This class provides a cryptographically strong pseudo-random number generator.
The second line declares a byte array of length 20.
The third line reads the next 20 random bytes into the array created in line 2. It synchronizes on the SecureRandom object so that there are no conflicts from other threads that may be using the object. It's not apparent from this code why you need to do this.
The fourth line Base64 encodes the resulting byte array. This is probably for transmission, storage, or display in a known format.
A:
Basically the code above:
Creates a secure random number generator (for VB see link below)
Fills a bytearray of length 20 with random bytes
Base64 encodes the result (you can probably use Convert.ToBase64String(...))
You should find some help here:
http://msdn.microsoft.com/en-us/library/system.security.cryptography.rngcryptoserviceprovider.aspx
|
equivalent vb code for a java code
|
Can anyone tell me what exactly this Java code does?
SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
byte[] bytes = new byte[20];
synchronized (random)
{
random.nextBytes(bytes);
}
return Base64.encode(bytes);
Step by step explanation will be useful so that I can recreate this code in VB. Thanks
|
[
"Using code snippets you can get to something like this\n\nDim randomNumGen As RandomNumberGenerator = RNGCryptoServiceProvider.Create()\nDim randomBytes(20) As Byte\nrandomNumGen.GetBytes(randomBytes)\nreturn Convert.ToBase64String(randomBytes)\n\n",
"This creates a random number generator (SecureRandom). It then creates a byte array (byte[] bytes), length 20 bytes, and populates it with random data.\nThis is then encoded using BASE64 and returned.\nSo, in a nutshell,\n\nGenerate 20 random bytes\nEncode using Base 64\n\n",
"It creates a SHA1 based random number generator (RNG), then Base64 encodes the next 20 bytes returned by the RNG.\nI can't tell you why it does this however without some more context :-).\n",
"This code gets a cryptographically strong random number that is 20 bytes in length, then Base64 encodes it. There's a lot of Java library code here, so your guess is as good as mine as to how to do it in VB.\nSecureRandom random = SecureRandom.getInstance(\"SHA1PRNG\");\nbyte[] bytes = new byte[20];\nsynchronized (random) { random.nextBytes(bytes); }\nreturn Base64.encode(bytes);\n\nThe first line creates an instance of the SecureRandom class. This class provides a cryptographically strong pseudo-random number generator.\nThe second line declares a byte array of length 20.\nThe third line reads the next 20 random bytes into the array created in line 2. It synchronizes on the SecureRandom object so that there are no conflicts from other threads that may be using the object. It's not apparent from this code why you need to do this.\nThe fourth line Base64 encodes the resulting byte array. This is probably for transmission, storage, or display in a known format.\n",
"Basically the code above:\n\nCreates a secure random number generator (for VB see link below)\nFills a bytearray of length 20 with random bytes\nBase64 encodes the result (you can probably use Convert.ToBase64String(...))\n\nYou should find some help here:\nhttp://msdn.microsoft.com/en-us/library/system.security.cryptography.rngcryptoserviceprovider.aspx\n"
] |
[
5,
3,
1,
1,
0
] |
[] |
[] |
[
"java",
"random",
"vba"
] |
stackoverflow_0000072479_java_random_vba.txt
|
Q:
How do I change the type of control that is used in a .NET PropertyGrid
I have a Windows application that uses a .NET PropertyGrid control. Is it possible to change the type of control that is used for the value field of a property?
I would like to be able to use a RichTextBox to allow better formatting of the input value.
Can this be done without creating a custom editor class?
A:
To add your own custom editing when the user selects a property grid value you need to implement a class that derives from UITypeEditor. You then have the choice of showing just a small popup window below the property area or a full blown dialog box.
What is nice is that you can reuse the existing implementations. So to add the ability to multiline edit a string you just do this...
[Editor(typeof(MultilineStringEditor), typeof(UITypeEditor))]
public override string Text
{
get { return _string; }
set { _string = value; }
}
Another nice one they provide for you is the ability to edit an array of strings...
[Editor("System.Windows.Forms.Design.StringArrayEditor,
System.Design, Version=2.0.0.0,
Culture=neutral,
PublicKeyToken=b03f5f7f11d50a3a",
typeof(UITypeEditor))]
public string[] Lines
{
get { return _lines; }
set { _lines = value; }
}
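For the RichTextBox case specifically, a custom editor might look roughly like this (a sketch rather than a finished implementation; it assumes using directives for System, System.ComponentModel, System.Drawing.Design and System.Windows.Forms):
public class RichTextUITypeEditor : UITypeEditor
{
    public override UITypeEditorEditStyle GetEditStyle(ITypeDescriptorContext context)
    {
        // Show the "..." button next to the property value
        return UITypeEditorEditStyle.Modal;
    }

    public override object EditValue(ITypeDescriptorContext context,
                                     IServiceProvider provider, object value)
    {
        using (Form form = new Form())
        using (RichTextBox box = new RichTextBox())
        {
            box.Dock = DockStyle.Fill;
            box.Text = value as string;
            form.Controls.Add(box);
            form.ShowDialog();
            return box.Text;
        }
    }
}

You would then attach it with [Editor(typeof(RichTextUITypeEditor), typeof(UITypeEditor))], just like the examples above.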
A:
You can control whether the PropertyGrid displays a simple edit box, a drop-down arrow, or an ellipsis control.
Look up EditorAttribute, and follow it on from there. I did have a sample somewhere; I'll try to dig it out.
A:
I think what you are looking for is Custom Type Descriptors.
You could read up a bit and get started here: http://www.codeproject.com/KB/miscctrl/bending_property.aspx
I am not sure you can do any control you want, but that article got me started on propertygrids.
|
How do I change the type of control that is used in a .NET PropertyGrid
|
I have a Windows application that uses a .NET PropertyGrid control. Is it possible to change the type of control that is used for the value field of a property?
I would like to be able to use a RichTextBox to allow better formatting of the input value.
Can this be done without creating a custom editor class?
|
[
"To add your own custom editing when the user selects a property grid value you need to implement a class that derives from UITypeEditor. You then have the choice of showing just a small popup window below the property area or a full blown dialog box.\nWhat is nice is that you can reuse the existing implementations. So to add the ability to multiline edit a string you just do this...\n[Editor(typeof(MultilineStringEditor), typeof(UITypeEditor))]\npublic override string Text\n{\n get { return _string; }\n set { _string = value; }\n}\n\nAnother nice one they provide for you is the ability to edit an array of strings...\n[Editor(\"System.Windows.Forms.Design.StringArrayEditor, \n System.Design, Version=2.0.0.0, \n Culture=neutral, \n PublicKeyToken=b03f5f7f11d50a3a\", \n typeof(UITypeEditor))]\npublic string[] Lines\n{\n get { return _lines; }\n set { _lines = value; }\n}\n\n",
"You can control whether the PropertyGrid displays a simple edit box, a drop-down arrow, or an ellipsis control.\nLook up EditorAttribute, and follow it on from there. I did have a sample somewhere; I'll try to dig it out.\n",
"I think what you are looking for is Custom Type Descriptors.\nYou could read up a bit and get started here: http://www.codeproject.com/KB/miscctrl/bending_property.aspx\nI am not sure you can do any control you want, but that article got me started on propertygrids.\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
".net",
"c#",
"windows"
] |
stackoverflow_0000072515_.net_c#_windows.txt
|
Q:
Is there an easy way to create two columns in a popup text window?
This seemed like an easy thing to do. I just wanted to pop up a text window and display two columns of data -- a description on the left side and a corresponding value displayed on the right side. I haven't worked with Forms much so I just grabbed the first control that seemed appropriate, a TextBox. I thought using tabs would be an easy way to create the second column, but I discovered things just don't work that well.
There seem to be two problems with the way I tried to do this (see below). First, I read on numerous websites that the MeasureString function isn't very precise due to how complex fonts are, with kerning issues and all. The second is that I have no idea what the TextBox control is using as its StringFormat underneath.
Anyway, the result is that I invariably end up with items in the right column that are off by a tab. I suppose I could roll my own text window and do everything myself, but gee, isn't there a simple way to do this?
TextBox textBox = new TextBox();
textBox.Font = new Font("Calibri", 11);
textBox.Dock = DockStyle.Fill;
textBox.Multiline = true;
textBox.WordWrap = false;
textBox.ScrollBars = ScrollBars.Vertical;
Form form = new Form();
form.Text = "Recipe";
form.Size = new Size(400, 600);
form.FormBorderStyle = FormBorderStyle.Sizable;
form.StartPosition = FormStartPosition.CenterScreen;
form.Controls.Add(textBox);
Graphics g = form.CreateGraphics();
float targetWidth = 230;
foreach (PropertyInfo property in properties)
{
string text = String.Format("{0}:\t", Description);
while (g.MeasureString(text,textBox.Font).Width < targetWidth)
text += "\t";
textBox.AppendText(text + value.ToString() + "\n");
}
g.Dispose();
form.ShowDialog();
A:
Thanks Matt, your solution worked great for me. Here's my version of your code...
// This is a better way to pass in what tab stops I want...
SetTabStops(textBox, new int[] { 12,120 });
// And the code for the SetTabsStops method itself...
private const uint EM_SETTABSTOPS = 0x00CB;
[DllImport("User32.dll")]
private static extern uint SendMessage(IntPtr hWnd, uint wMsg, int wParam, int[] lParam);
public static void SetTabStops(TextBox textBox, int[] tabs)
{
SendMessage(textBox.Handle, EM_SETTABSTOPS, tabs.Length, tabs);
}
A:
If you want, you can translate this VB.Net code to C#. The theory here is that you change the size of a tab in the control.
Private Declare Function SendMessage _
Lib "user32" Alias "SendMessageA" _
(ByVal handle As IntPtr, ByVal wMsg As Integer, _
ByVal wParam As Integer, ByRef lParam As Integer) As Integer
Private Sub SetTabStops(ByVal ctlTextBox As TextBox)
Const EM_SETTABSTOPS As Integer = &HCB
Dim tabs() As Integer = {20, 40, 80}
SendMessage(ctlTextBox.Handle, EM_SETTABSTOPS, _
tabs.Length, tabs(0))
End Sub
I converted a version to C# for you, too. Tested and working in VS2005.
Add this using statement to your form:
using System.Runtime.InteropServices;
Put this right after the class declaration:
private const int EM_SETTABSTOPS = 0x00CB;
[DllImport("User32.dll", CharSet = CharSet.Auto)]
public static extern IntPtr SendMessage(IntPtr h, int msg, int wParam, int[] lParam);
Call this method when you want to set the tabstops:
private void SetTabStops(TextBox ctlTextBox)
{
const int EM_SETTABSTOPS = 203;
int[] tabs = { 100, 40, 80 };
SendMessage(ctlTextBox.Handle, EM_SETTABSTOPS, tabs.Length, tabs);
}
To use it, here is all I did:
private void Form1_Load(object sender, EventArgs e)
{
SetTabStops(textBox1);
textBox1.Text = "Hi\tWorld";
}
A:
If you want something truly tabular, Mr. Haren's answer is a good one. The DataGridView will give you a very Excel spreadsheet type of look.
If you just want a two column layout (similar to HTML's table), then try out the TableLayoutPanel. It'll give you the layout you desire with the ability to use standard controls within each table cell.
|
Is there an easy way to create two columns in a popup text window?
|
This seemed like an easy thing to do. I just wanted to pop up a text window and display two columns of data -- a description on the left side and a corresponding value displayed on the right side. I haven't worked with Forms much so I just grabbed the first control that seemed appropriate, a TextBox. I thought using tabs would be an easy way to create the second column, but I discovered things just don't work that well.
There seem to be two problems with the way I tried to do this (see below). First, I read on numerous websites that the MeasureString function isn't very precise due to how complex fonts are, with kerning issues and all. The second is that I have no idea what the TextBox control is using as its StringFormat underneath.
Anyway, the result is that I invariably end up with items in the right column that are off by a tab. I suppose I could roll my own text window and do everything myself, but gee, isn't there a simple way to do this?
TextBox textBox = new TextBox();
textBox.Font = new Font("Calibri", 11);
textBox.Dock = DockStyle.Fill;
textBox.Multiline = true;
textBox.WordWrap = false;
textBox.ScrollBars = ScrollBars.Vertical;
Form form = new Form();
form.Text = "Recipe";
form.Size = new Size(400, 600);
form.FormBorderStyle = FormBorderStyle.Sizable;
form.StartPosition = FormStartPosition.CenterScreen;
form.Controls.Add(textBox);
Graphics g = form.CreateGraphics();
float targetWidth = 230;
foreach (PropertyInfo property in properties)
{
string text = String.Format("{0}:\t", Description);
while (g.MeasureString(text,textBox.Font).Width < targetWidth)
text += "\t";
textBox.AppendText(text + value.ToString() + "\n");
}
g.Dispose();
form.ShowDialog();
|
[
"Thanks Matt, your solution worked great for me. Here's my version of your code...\n// This is a better way to pass in what tab stops I want...\nSetTabStops(textBox, new int[] { 12,120 });\n\n// And the code for the SetTabsStops method itself...\nprivate const uint EM_SETTABSTOPS = 0x00CB;\n\n[DllImport(\"User32.dll\")]\nprivate static extern uint SendMessage(IntPtr hWnd, uint wMsg, int wParam, int[] lParam);\n\npublic static void SetTabStops(TextBox textBox, int[] tabs)\n{\n SendMessage(textBox.Handle, EM_SETTABSTOPS, tabs.Length, tabs);\n}\n\n",
"If you want, you can translate this VB.Net code to C#. The theory here is that you change the size of a tab in the control.\nPrivate Declare Function SendMessage _\n Lib \"user32\" Alias \"SendMessageA\" _\n (ByVal handle As IntPtr, ByVal wMsg As Integer, _\n ByVal wParam As Integer, ByRef lParam As Integer) As Integer\n\n\nPrivate Sub SetTabStops(ByVal ctlTextBox As TextBox)\n\n Const EM_SETTABSTOPS As Integer = &HCBS\n\n Dim tabs() As Integer = {20, 40, 80}\n\n SendMessage(ctlTextBox.Handle, EM_SETTABSTOPS, _\n tabs.Length, tabs(0))\n\nEnd Sub\n\nI converted a version to C# for you, too. Tested and working in VS2005.\nAdd this using statement to your form: \nusing System.Runtime.InteropServices;\n\nPut this right after the class declaration:\n private const int EM_SETTABSTOPS = 0x00CB;\n [DllImport(\"User32.dll\", CharSet = CharSet.Auto)]\n public static extern IntPtr SendMessage(IntPtr h, int msg, int wParam, int[] lParam);\n\nCall this method when you want to set the tabstops:\n private void SetTabStops(TextBox ctlTextBox)\n {\n const int EM_SETTABSTOPS = 203;\n int[] tabs = { 100, 40, 80 };\n SendMessage(textBox1.Handle, EM_SETTABSTOPS, tabs.Length, tabs);\n }\n\nTo use it, here is all I did:\n private void Form1_Load(object sender, EventArgs e)\n {\n SetTabStops(textBox1);\n\n textBox1.Text = \"Hi\\tWorld\";\n }\n\n",
"If you want something truly tabular, Mr. Haren's answer is a good one. The DataGridView will give you a very Excel spreadsheet type of look.\nIf you just want a two column layout (similar to HTML's table), then try out the TableLayoutPanel. It'll give you the layout you desire with the ability to use standard controls within each table cell.\n"
] |
[
1,
0,
0
] |
[
"I believe the only way is to do something similar to what you are doing, but use a fixed font and do your own padding with spaces so that you don't have to worry about tab expansion.\n",
"Don't the text boxes allow HTML usage? If that is the case, just use HTML to format the text into a table. Otherwise, try adding the text to a datagrid and then adding that to the form.\n"
] |
[
-1,
-2
] |
[
"c#",
"controls",
"formatting",
"winforms"
] |
stackoverflow_0000072198_c#_controls_formatting_winforms.txt
|
Q:
JSF Lifecycle and Custom components
There are a couple of things that I am having a difficult time understanding with regards to developing custom components in JSF. For the purposes of these questions, you can assume that all of the custom controls are using valuebindings/expressions (not literal bindings), but I'm interested in explanations on them as well.
Where do I set the value for the valuebinding? Is this supposed to happen in decode? Or should decode do something else and then have the value set in encodeBegin?
Read from the Value Binding - When do I read data from the valuebinding vs. reading it from submittedvalue and putting it into the valuebinding?
When are action listeners on forms called in relation to all of this? The JSF lifecycle pages all mention events happening at various steps, but it's not completely clear to me when just a simple listener for a commandbutton is being called
I've tried a few combinations, but always end up with hard to find bugs that I believe are coming from basic misunderstandings of the event lifecycle.
A:
There is a pretty good diagram in the JSF specification that shows the request lifecycle - essential for understanding this stuff.
The steps are:
Restore View. The UIComponent tree is rebuilt.
Apply Request Values. Editable components should implement EditableValueHolder. This phase walks the component tree and calls the processDecodes methods. If the component isn't something complex like a UIData, it won't do much except call its own decode method. The decode method doesn't do much except find its renderer and invoke its decode method, passing itself as an argument. It is the renderer's job to get any submitted value and set it via setSubmittedValue.
Process Validations. This phase calls processValidators which will call validate. The validate method takes the submitted value, converts it with any converters, validates it with any validators and (assuming the data passes those tests) calls setValue. This will store the value as a local variable. While this local variable is not null, it will be returned and not the value from the value binding for any calls to getValue.
Update Model Values. This phase calls processUpdates. In an input component, this will call updateModel which will get the ValueExpression and invoke it to set the value on the model.
Invoke Application. Button event listeners and so on will be invoked here (as will navigation if memory serves).
Render Response. The tree is rendered via the renderers and the state saved.
If any of these phases fail (e.g. a value is invalid), the lifecycle skips to Render Response.
Various events can be fired after most of these phases, invoking listeners as appropriate (like value change listeners after Process Validations).
This is a somewhat simplified version of events. Refer to the specification for more details.
I would question why you are writing your own UIComponent. This is a non-trivial task and a deep understanding of the JSF architecture is required to get it right. If you need a custom control, it is better to create a concrete control that extends an existing UIComponent (like HtmlInputText does) with an equivalent renderer.
If contamination isn't an issue, there is an open-source JSF implementation in the form of Apache MyFaces.
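To make the Apply Request Values step concrete, here is a minimal sketch of a renderer decode method (a generic illustration, not code from any particular component library):
public void decode(FacesContext context, UIComponent component) {
    // The renderer, not the component, pulls the raw request parameter.
    Map requestMap = context.getExternalContext().getRequestParameterMap();
    String clientId = component.getClientId(context);
    String newValue = (String) requestMap.get(clientId);
    if (newValue != null) {
        ((EditableValueHolder) component).setSubmittedValue(newValue);
    }
}
The submitted value set here is what Process Validations later converts, validates and pushes into the local value.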
A:
Action listeners, such as for a CommandButton, are called during the Invoke Application phase, which is the last phase before the final Render Response phase. This is shown in The JSF Lifecycle - figure 1.
A:
It is the only framework that I've
ever used where component creation is
a deep intricate process like this.
None of the other web frameworks
(whether in the .net world or not)
make this so painful, which is
completely inexplicable to me.
Some of the design decisions behind JSF start to make a little more sense when you consider the goals. JSF was designed to be tooled - it exposes lots of metadata for IDEs. JSF is not a web framework - it is a MVP framework that can be used as a web framework. JSF is highly extensible and configurable - you can replace 90% of the implementation on a per-application basis.
Most of this stuff just makes your job more complicated if all you want to do is slip in an extra HTML control.
The component is a composition of
several inputtext (and other) base
components, btw.
I'm assuming JSP-includes/tooling-based page fragments don't meet your requirements.
I would consider using your UIComponentELTag.createComponent to create a composite control with a UIPanel base and creating all its children from existing implementations. (I'm assuming you're using JSPs/taglibs and making a few other guesses.) You'd probably want a custom renderer if none of the existing UIPanel renderers did the job, but renderers are easy.
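As a rough sketch of that composite approach (the component types come from the standard JSF API; the id and structure are invented for illustration):
Application app = context.getApplication();
UIPanel panel = (UIPanel) app.createComponent(UIPanel.COMPONENT_TYPE);
HtmlInputText first = (HtmlInputText) app.createComponent(HtmlInputText.COMPONENT_TYPE);
first.setId("first");
panel.getChildren().add(first);
// ...add the other child inputs the same way, then return the panel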
A:
The best article I've found is Jsf Component Writing.
As for (2), where do I read the value for a value binding: in your component you have a getter that looks like this
public String getBar() {
    if (null != this.bar) {
        return this.bar;
    }
    ValueBinding _vb = getValueBinding("bar");
    return (_vb != null) ? (String) _vb.getValue(getFacesContext()) : null;
}
How did this get into the value binding? In your tag class's setProperties method:
if (bar != null) {
    if (isValueReference(bar)) {
        ValueBinding vb = Util.getValueBinding(bar);
        foo.setValueBinding("bar", vb);
    } else {
        throw new IllegalStateException("The value for 'bar' must be a ValueBinding.");
    }
}
|
JSF Lifecycle and Custom components
|
There are a couple of things that I am having a difficult time understanding with regards to developing custom components in JSF. For the purposes of these questions, you can assume that all of the custom controls are using valuebindings/expressions (not literal bindings), but I'm interested in explanations on them as well.
Where do I set the value for the valuebinding? Is this supposed to happen in decode? Or should decode do something else and then have the value set in encodeBegin?
Read from the Value Binding - When do I read data from the valuebinding vs. reading it from submittedvalue and putting it into the valuebinding?
When are action listeners on forms called in relation to all of this? The JSF lifecycle pages all mention events happening at various steps, but it's not completely clear to me when just a simple listener for a commandbutton is being called
I've tried a few combinations, but always end up with hard to find bugs that I believe are coming from basic misunderstandings of the event lifecycle.
|
[
"There is a pretty good diagram in the JSF specification that shows the request lifecycle - essential for understanding this stuff.\nThe steps are:\n\nRestore View. The UIComponent tree is rebuilt.\nApply Request Values. Editable components should implement EditableValueHolder. This phase walks the component tree and calls the processDecodes methods. If the component isn't something complex like a UIData, it won't do much except call its own decode method. The decode method doesn't do much except find its renderer and invokes its decode method, passing itself as an argument. It is the renderer's job to get any submitted value and set it via setSubmittedValue.\nProcess Validations. This phase calls processValidators which will call validate. The validate method takes the submitted value, converts it with any converters, validates it with any validators and (assuming the data passes those tests) calls setValue. This will store the value as a local variable. While this local variable is not null, it will be returned and not the value from the value binding for any calls to getValue.\nUpdate Model Values. This phase calls processUpdates. In an input component, this will call updateModel which will get the ValueExpression and invoke it to set the value on the model.\nInvoke Application. Button event listeners and so on will be invoked here (as will navigation if memory serves).\nRender Response. The tree is rendered via the renderers and the state saved.\nIf any of these phases fail (e.g. a value is invalid), the lifecycle skips to Render Response.\nVarious events can be fired after most of these phases, invoking listeners as appropriate (like value change listeners after Process Validations).\n\nThis is a somewhat simplified version of events. Refer to the specification for more details.\nI would question why you are writing your own UIComponent. This is a non-trivial task and a deep understanding of the JSF architecture is required to get it right. If you need a custom control, it is better to create a concrete control that extends an exisiting UIComponent (like HtmlInputText does) with an equivalent renderer.\nIf contamination isn't an issue, there is an open-source JSF implementation in the form of Apache MyFaces.\n",
"Action listeners, such as for a CommandButton, are called during the Invoke Application phase, which is the last phase before the final Render Response phase. This is shown in The JSF Lifecycle - figure 1.\n",
"\nIt is the only framework that I've\n ever used where component creation is\n a deep intricate process like this.\n None of the other web frameworks\n (whether in the .net world or not)\n make this so painful, which is\n completely inexplicable to me.\n\nSome of the design decisions behind JSF start to make a little more sense when you consider the goals. JSF was designed to be tooled - it exposes lots of metadata for IDEs. JSF is not a web framework - it is a MVP framework that can be used as a web framework. JSF is highly extensible and configurable - you can replace 90% of the implementation on a per-application basis.\nMost of this stuff just makes your job more complicated if all you want to do is slip in an extra HTML control.\n\nThe component is a composition of\n several inputtext (and other) base\n components, btw.\n\nI'm assuming JSP-includes/tooling-based page fragments don't meet your requirements.\nI would consider using your UIComponentELTag.createComponent to create a composite control with a UIPanel base and creating all its children from existing implementations. (I'm assuming you're using JSPs/taglibs and making a few other guesses.) You'd probably want a custom renderer if none of the existing UIPanel renderers did the job, but renderers are easy.\n",
"The best article I've found is Jsf Component Writing, \nas for 2 where do I read the value for a value binding in your component you have a getter that looks like this\n\npublic String getBar() { \n if (null != this.bar) { \n return this.bar ; \n } \n ValueBinding _vb = getValueBinding(\"bar\"); \n return (_vb != null) ? (bar) _vb.getValue(getFacesContext()) : null; \n}\n \n\nhow did this get into the getValueBinding?\nIn your tag class setProperties method\n if (bar!= null) { \n if (isValueReference(bar)) { \n ValueBinding vb = Util.getValueBinding(bar); \n foo.setValueBinding(\"bar\", vb); \n } else { \n throw new IllegalStateException(\"The value for 'bar' must be a ValueBinding.\"); \n } \n } \n\n"
] |
[
20,
4,
3,
1
] |
[] |
[] |
[
"custom_component",
"jakarta_ee",
"java",
"jsf"
] |
stackoverflow_0000033476_custom_component_jakarta_ee_java_jsf.txt
|
Q:
DataGridView.HitTestInfo equivalent in Infragistics.Win.UltraWinGrid.UltraGrid?
Does anyone know if the Infragistics UltraGrid control provides functionality similar to that of DataGridView.HitTestInfo?
A:
Check this out.
They don't convert the coordinates, but they use a special Infragistics grid event (MouseEnterElement) to get the element, which the mouse currently hovers over.
Maybe it helps.
A:
There's a .MousePosition property which returns System.Drawing.Point and "Gets the position of the mouse cursor in screen coordinates" but I'm using an older version of their UltraWinGrid (2003).
They have a free trial download, so you could see if they've added it to their latest and greatest :o)
A:
If you had a MouseEventHandler for the UltraGrid then you can do the following:
UltraGrid grid = (UltraGrid)sender;
UIElement element = grid.DisplayLayout.UIElement.ElementFromPoint(new Point(e.X, e.Y));
You can then cast the element depending on its expected type using element.GetContext():
UltraGridCell cell = (UltraGridCell)element.GetContext(typeof(UltraGridCell));
|
DataGridView.HitTestInfo equivalent in Infragistics.Win.UltraWinGrid.UltraGrid?
|
Does anyone know if the Infragistics UltraGrid control provides functionality similar to that of DataGridView.HitTestInfo?
|
[
"Check this out.\nThey don't convert the coordinates, but they use a special Infragistics grid event (MouseEnterElement) to get the element, which the mouse currently hovers over.\nMaybe it helps.\n",
"There's a .MousePosition property which returns System.Drawing.Point and \"Gets the position of the mouse cursor in screen coordinates\" but I'm using an older version of their UltraWinGrid (2003).\nThey have a free trial download, so you could see if they've added it to their latest and greatest :o)\n",
"If you had a MouseEventHandler for the UltraGrid then you can do the following:\nUltraGrid grid = (UltraGrid)sender;\n\nUIElement element = grid.DisplayLayout.UIElement.ElementFromPoint(new Point(e.X, e.Y));\n\nYou can then cast the element depending on its expected type using element.GetContext():\n UltraGridCell cell = (UltraGridCell)element.GetContext(typeof(UltraGridCell));\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"infragistics",
"ultrawingrid"
] |
stackoverflow_0000071838_.net_c#_infragistics_ultrawingrid.txt
|
Q:
How can I check for a file size and add that result in an Excel spreadsheet in Perl?
Currently I'm monitoring a particular file with a simple shell one-liner:
filesize=$(ls -lah somefile | awk '{print $5}')
I'm aware that Perl has some nice modules to deal with Excel files so the idea is to, let's say, run that check daily, perhaps with cron, and write the result on a spreadsheet for further statistical use.
A:
You can use the -s operator to obtain the size of a file and the Spreadsheet::ParseExcel and Spreadsheet::WriteExcel modules to produce an updated spreadsheet with the information. Spreadsheet::ParseExcel::SaveParser lets you easily combine the two, in case you want to update an existing file with new information. If you are on Windows, you may want to automate Excel itself instead, probably with the aid of Win32::OLE.
A:
You can check the size of the file using the -s operator.
use strict;
use warnings;
use File::Slurp qw(read_file write_file);
use Spreadsheet::ParseExcel;
use Spreadsheet::ParseExcel::SaveParser;
use Spreadsheet::WriteExcel;
my $file = 'path_to_file';
my $size_file = 'path_to_file_keeping_the_size';
my $excel_file = 'path_to_excel_file.xls';
my $current_size = -s $file;
my $old_size = 0;
if (-e $size_file) {
    $old_size = read_file($size_file);
}
if ($old_size < $current_size) {
    if (-e $excel_file) {
        my $parser = Spreadsheet::ParseExcel::SaveParser->new;
        my $excel = $parser->Parse($excel_file);
        my $row = 1;
        $row++ while $excel->{Worksheet}[0]->{Cells}[$row][0];
        $excel->AddCell(0, $row, 0, scalar(localtime));
        $excel->AddCell(0, $row, 1, $current_size);
        my $workbook = $excel->SaveAs($excel_file);
        $workbook->close;
    } else {
        my $workbook = Spreadsheet::WriteExcel->new($excel_file);
        my $worksheet = $workbook->add_worksheet();
        $worksheet->write(0, 0, 'Date');
        $worksheet->write(0, 1, 'Size');
        $worksheet->write(1, 0, scalar(localtime));
        $worksheet->write(1, 1, $current_size);
        $workbook->close;
    }
}
write_file($size_file, $current_size);
A simple way to write Excel files would be using
Spreadsheet::Write.
But if you need to update an existing Excel file you should look into
Spreadsheet::ParseExcel.
A:
You can also skip the hassle of writing .xls format files and use a more generic (but sufficiently Excel-friendly) format such as CSV:
#!/bin/bash
date=`date +%Y/%m/%d:%H:%M:%S`
size=$(ls -lah somefile | awk '{print $5}')
echo "$date,$size"
Then, in your crontab:
0 0 * * * /path/to/script.sh >/data/sizelog.csv
Then you import that .csv file into Excel just like any other spreadsheet.
A:
Perl also has the very nice (and very fast) Text::CSV_XS which allows you to easily make Excel-friendly CSV files, which may be a better solution than creating proper XLS files.
For example (over-commented for instructional value):
#!/usr/bin/perl
package main;
use strict; use warnings; # always!
use Text::CSV_XS;
use IO::File;
# set up the CSV file
my $csv = Text::CSV_XS->new( {eol=>"\r\n"} );
my $io = IO::File->new( 'report.csv', '>')
or die "Cannot create report.csv: $!\n";
# for each file specified on command line
for my $file (@ARGV) {
unless ( -f $file ) {
# file doesn't exist
warn "$file doesn't exist, skipping\n";
next;
}
# get its size
my $size = -s $file;
# write the filename and size to a row in CSV
$csv->print( $io, [ $file, $size ] );
}
$io->close; # make sure CSV file is flushed and closed
A:
The module you should be using is Spreadsheet::WriteExcel.
|
How can I check for a file size and add that result in an Excel spreadsheet in Perl?
|
Currently I'm monitoring a particular file with a simple shell one-liner:
filesize=$(ls -lah somefile | awk '{print $5}')
I'm aware that Perl has some nice modules to deal with Excel files so the idea is to, let's say, run that check daily, perhaps with cron, and write the result on a spreadsheet for further statistical use.
|
[
"You can use the -s operator to obtain the size of a file and the Spreadsheet::ParseExcel and Spreadsheet::WriteExcel modules to produce an updated spreadsheet with the information. Spreadsheet::ParseExcel::SaveParser lets you easily combine the two, in case you want to update an existing file with new information. If you are on Windows, you may want to automate Excel itself instead, probably with the aid of Win32::OLE.\n",
"You can check the size of the file using the -s operator.\n\nuse strict;\nuse warnings;\n\nuse File::Slurp qw(read_file write_file);\nuse Spreadsheet::ParseExcel;\nuse Spreadsheet::ParseExcel::SaveParser;\nuse Spreadsheet::WriteExcel;\n\nmy $file = 'path_to_file';\nmy $size_file = 'path_to_file_keeping_the_size';\nmy $excel_file = 'path_to_excel_file.xls';\n\nmy $current_size = -s $file;\nmy $old_size = 0;\nif (-e $size_file) {\n $old_size = read_file($size_file);\n}\n\nif ($old_size new;\n my $excel = $parser->Parse($excel_file);\n my $row = 1;\n $row++ while $excel->{Worksheet}[0]->{Cells}[$row][0];\n $excel->AddCell(0, $row, 0, scalar(localtime));\n $excel->AddCell(0, $row, 1, $current_size);\n\n my $workbook = $excel->SaveAs($excel_file);\n $workbook->close;\n\n } else {\n my $workbook = Spreadsheet::WriteExcel->new($excel_file);\n my $worksheet = $workbook->add_worksheet();\n $worksheet->write(0, 0, 'Date');\n $worksheet->write(0, 1, 'Size');\n\n $worksheet->write(1, 0, scalar(localtime));\n $worksheet->write(1, 1, $current_size);\n $workbook->close;\n }\n}\n\nwrite_file($size_file, $current_size);\n\nA simple way to write Excel files would be using\nSpreadsheet::Write.\nbut if you need to update an existing Excel file you should look into\nSpreadsheet::ParseExcel.\n",
"You can also skip the hassle of writing .xls format files and use a more generic (but sufficiently Excel-friendly) format such as CSV:\n#!/bin/bash\ndate=`date +%Y/%m/%d:%H:%M:%S`\nsize=$(ls -lah somefile | awk '{print $5}')\necho \"$date,$size\"\n\nThen, in your crontab:\n0 0 * * * /path/to/script.sh >/data/sizelog.csv\n\nThen you import that .csv file into Excel just like any other spreadsheet.\n",
"Perl also has the very nice (and very fast) Text::CSV_XS which allows you to easily make Excel-friendly CSV files, which may be a better solution than creating proper XLS files.\nFor example (over-commented for instructional value):\n#!/usr/bin/perl\npackage main;\nuse strict; use warnings; # always!\n\nuse Text::CSV_XS;\nuse IO::File;\n\n# set up the CSV file\nmy $csv = Text::CSV_XS->new( {eol=>\"\\r\\n\"} );\nmy $io = IO::File->new( 'report.csv', '>')\n or die \"Cannot create report.csv: $!\\n\";\n\n# for each file specified on command line\nfor my $file (@ARGV) {\n unless ( -f $file ) {\n # file doesn't exist\n warn \"$file doesn't exist, skipping\\n\";\n next;\n }\n\n # get its size\n my $size = -s $file;\n\n # write the filename and size to a row in CSV\n $csv->print( $io, [ $file, $size ] );\n}\n\n$io->close; # make sure CSV file is flushed and closed\n\n",
"The module you should be using is Spreadsheet::WriteExcel.\n"
] |
[
8,
7,
4,
4,
2
] |
[] |
[] |
[
"ksh",
"perl",
"shell",
"unix"
] |
stackoverflow_0000071643_ksh_perl_shell_unix.txt
|
Q:
Change app icon in Visual Studio 2005?
I'd like to use a different icon for the demo version of my game, and I'm building the demo with a different build config than I do for the full version, using a preprocessor define to lock out some content, use different graphics, etc. Is there a way that I can make Visual Studio use a different icon for the app icon in the demo config but continue to use the regular icon for the full version's config?
A:
According to this page you may use preprocessor directives in your *.rc file. You should write something like this
#ifdef _DEMO_VERSION_
IDR_MAINFRAME ICON "demo.ico"
#else
IDR_MAINFRAME ICON "full.ico"
#endif
A:
What I would do is setup a pre-build event (Project properties -> Configuration Properties -> Build Events -> Pre-Build Event). The pre-build event is a command line. I would use this to copy the appropriate icon file to the build icon.
For example, let's say your build icon is 'app.ico'. I would make my fullicon 'app_full.ico' and my demo icon 'app_demo.ico'. Then I would set my pre-build events as follows:
Full mode pre-build event:
del app.ico | copy app_full.ico app.ico
Demo mode pre-build event:
del app.ico | copy app_demo.ico app.ico
I hope that helps!
A:
This will get you halfway there: http://www.codeproject.com/KB/dotnet/embedmultipleiconsdotnet.aspx
Then you need to find the Win32 call which will set the displayed icon from the list of embedded icons.
A:
I don't know of a way in Visual Studio, because the application settings are bound to the whole project. But a simple way is to use a PreBuild event and copy the app.demo.ico to app.ico or the app.release.ico to app.ico depending on the value of the key $(ConfigurationName), and refer to the app.ico in your project directory.
|
Change app icon in Visual Studio 2005?
|
I'd like to use a different icon for the demo version of my game, and I'm building the demo with a different build config than I do for the full version, using a preprocessor define to lock out some content, use different graphics, etc. Is there a way that I can make Visual Studio use a different icon for the app icon in the demo config but continue to use the regular icon for the full version's config?
|
[
"According to this page you may use preprocessor directives in your *.rc file. You should write something like this\n#ifdef _DEMO_VERSION_\nIDR_MAINFRAME ICON \"demo.ico\"\n#else\nIDR_MAINFRAME ICON \"full.ico\"\n#endif\n\n",
"What I would do is setup a pre-build event (Project properties -> Configuration Properties -> Build Events -> Pre-Build Event). The pre-build event is a command line. I would use this to copy the appropriate icon file to the build icon. \nFor example, let's say your build icon is 'app.ico'. I would make my fullicon 'app_full.ico' and my demo icon 'app_demo.ico'. Then I would set my pre-build events as follows:\nFull mode pre-build event:\ndel app.ico | copy app_full.ico app.ico\n\nDemo mode pre-build event:\ndel app.ico | copy app_demo.ico app.ico\n\nI hope that helps!\n",
"This will get you halfway there: http://www.codeproject.com/KB/dotnet/embedmultipleiconsdotnet.aspx\nThen you need to find the Win32 call which will set the displayed icon from the list of embedded icons.\n",
"I don't know a way in visual studio, because the application settings are bound to the hole project. But a simple way is to use a PreBuild event and copy the app.demo.ico to app.ico or the app.release.ico to app.ico demanding on the value of the key $(ConfigurationName) and refer to the app.ico in your project directory.\n"
] |
[
8,
2,
0,
0
] |
[] |
[] |
[
"c++",
"icons",
"visual_c++_2005",
"visual_studio_2005"
] |
stackoverflow_0000072789_c++_icons_visual_c++_2005_visual_studio_2005.txt
|
Q:
Getting The XML Data Inside Custom XPath function
Is there a way to get the current xml data when we make our own custom XPath function (see here).
I know you have access to an XPathContext but is this enough?
Example:
Our XML:
<foo>
<bar>smang</bar>
<fizz>buzz</fizz>
</foo>
Our XSL:
<xsl:template match="/">
<xsl:value-of select="ourFunction()" />
</xsl:template>
How do we get the entire XML tree?
Edit: To clarify: I'm creating a custom function that ends up executing static Java code (it's a Saxon feature). So, in this Java code, I wish to be able to get elements from the XML tree, such as bar and fizz, and their CDATA, such as smang and buzz.
A:
What about selecting the relevant data from the current node into an XSL parameter, and passing that parameter to the function? Like:
<xsl:value-of select="ourFunction($data)" />
A:
Try changing your XSL so you call 'ourFunction(/)'. That should pass the root node to the function. You could also try . or ..
You'll presumably need to change the signature of the implementing function, I'll let someone else help with that.
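In template form, the root-node variant looks like this (a sketch built from the sample XSL above):
<xsl:template match="/">
  <xsl:value-of select="ourFunction(/)" />
</xsl:template>
Inside the static Java implementation you can then walk the passed-in node to reach bar and fizz and their text content.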
|
Getting The XML Data Inside Custom XPath function
|
Is there a way to get the current xml data when we make our own custom XPath function (see here).
I know you have access to an XPathContext but is this enough?
Example:
Our XML:
<foo>
<bar>smang</bar>
<fizz>buzz</fizz>
</foo>
Our XSL:
<xsl:template match="/">
<xsl:value-of select="ourFunction()" />
</xsl:template>
How do we get the entire XML tree?
Edit: To clarify: I'm creating a custom function that ends up executing static Java code (it's a Saxon feature). So, in this Java code, I wish to be able to get elements from the XML tree, such as bar and fizz, and their CDATA, such as smang and buzz.
|
[
"What about select the current node selecting the relevant data from the current node into an XSL parameter, and passing that parameter to the function? Like:\n<xsl:value-of select=\"ourFunction($data)\" />\n\n",
"Try changing your XSL so you call 'ourFunction(/)'. That should pass the root node to the function. You could also try . or ..\nYou'll presumably need to change the signature of the implementing function, I'll let someone else help with that.\n"
] |
[
1,
1
] |
[] |
[] |
[
"java",
"saxon",
"xpath",
"xslt"
] |
stackoverflow_0000069470_java_saxon_xpath_xslt.txt
|
Q:
Can a SVN repository include/link-to an external git repository?
I have a svn repository, R, that depends on a library, l, in another repository.
The goal is that when someone checks out R, they also check out l. We want l to still be in its own repository so that l can be updated without dependence on R.
I don't know much about external svn links, but I believe that when depending on a svn-based library one can link to it externally, 'ext'.
If l is in a git repository, can I do something similar? I'd like to preserve the goal stated above.
A:
svn:externals is the way svn can be made to check out sources from more than one repository into one working copy. But it is only meant for dealing with svn repositories - it doesn't know how to check out a git repository.
You might be able to do it the other way 'round, by including an svn repository inside a git repository, using something like 'git svn'.
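For example, mirroring the svn repository into a git repository looks something like this (a sketch; the URL and layout flag are placeholders for your setup):
git svn clone --stdlayout http://server/svn/R git-R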
A:
I suggest using a script wrapper for svn co.
#!/bin/sh
svn co path://server/R svn-R
git clone path://server/l git-l
Or similar.
|
Can a SVN repository include/link-to an external git repository?
|
I have a svn repository, R, that depends on a library, l, in another repository.
The goal is that when someone checks out R, they also check out l. We want l to still be in its own repository so that l can be updated without dependence on R.
I don't know much about external svn links, but I believe that when depending on a svn-based library one can link to it externally, 'ext'.
If l is in a git repository, can I do something similar? I'd like to preserve the goal stated above.
|
[
"svn:externals is the way svn can be made to check out sources from more than one repository into one working copy. But it is only meant for dealing with svn repositories - it doesn't know how to check out a git repository.\nYou might be able to do it the other way 'round, by including an svn repository inside a git repository, using something like 'git svn'.\n",
"I suggest using a script wrapper for svn co. \n#!/bin/sh\nsvn co path://server/R svn-R\ngit clone path://server/l git-l\n\nOr similar.\n"
] |
[
4,
3
] |
[] |
[] |
[
"git",
"svn"
] |
stackoverflow_0000072723_git_svn.txt
|
Q:
Best Practices for embedding .NET assemblies in SQL Server
What are some important practices to follow when creating a .NET assembly that is going to be embedded to SQL Server 2005?
I am brand new to this, and I've found that there are significant method attributes like:
[SqlFunction(FillRowMethodName = "FillRow", TableDefinition = "letter nchar(1)")]
I'm also looking for common pitfalls to avoid, etc.
A:
Some that I remember:
Keep its usage to a minimum; only use it when T-SQL proves too complex.
Avoid pointers/cursors at all costs because a for loop is so easily abusable in CLR context.
Use only the SQL Server native data types, unless something else is totally necessary.
Can't remember where I've found the information, but those are some that I do remember.
Basically, only use it when declarative T-SQL is too complex or is impossible to do (such as registry editing etc.).
A:
Single tip regarding assembly deployment:
Keep functionality isolated across small assemblies. Try not to build a dependency chain, because replacing a base assembly means you need to remove the dependent assemblies first, before you can update the base assembly.
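A sketch of why that dependency chain hurts (T-SQL; the assembly names and paths are placeholders):
-- To update BaseLib while DependentLib references it, you must drop top-down:
DROP ASSEMBLY DependentLib;
DROP ASSEMBLY BaseLib;
CREATE ASSEMBLY BaseLib FROM 'C:\libs\BaseLib.dll';
CREATE ASSEMBLY DependentLib FROM 'C:\libs\DependentLib.dll';
With small, isolated assemblies the drop/create pair stays short.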
|
Best Practices for embedding .NET assemblies in SQL Server
|
What are some important practices to follow when creating a .NET assembly that is going to be embedded to SQL Server 2005?
I am brand new to this, and I've found that there are significant method attributes like:
[SqlFunction(FillRowMethodName = "FillRow", TableDefinition = "letter nchar(1)")]
I'm also looking for common pitfalls to avoid, etc.
|
[
"Some that I remember:\n\nKeep its usage to a minimum, only use it when T-SQL proved too complex.\nAvoid pointers/cursors at all costs because a for loop is so easily abusable in CLR context.\nOnly use the SQL-Server native data types unless totally necessary.\n\nCan't remember where I've found the information, but those are some that I do remember.\nBasically, only use it when declarative T-SQL is too complex or is impossible to do (such as registry editing etc.).\n",
"Single tip regarding assembly deployment:\nKeep functionality isolated across small assemblies. Try not to build a dependency chain, because replacing a base assembly means you need to remove the dependent assemblies first, before you can update the base assembly.\n"
] |
[
2,
1
] |
[
"I would strongly advise against putting .net assemblies in your database server, think n-tier applications. Persistence <- Business Logic <-Presentation Logic <- client\nKeep your Logic in your Business Logic layer. \nThe only reason I can think of to put .net in your database would to add a new complex data type, I would strongly that this be a dumb class that only holds data and does no processing on it.\nJust because you can does not mean you should. \nSorry for not directly answering your question.\n"
] |
[
-1
] |
[
".net",
"assemblies",
"sql_server_2005"
] |
stackoverflow_0000072014_.net_assemblies_sql_server_2005.txt
|
Q:
Troubleshooting a NullReference exception in a service
I have a windows service that runs various system monitoring operations. However, when running SNMP related checks, I always get a NullReference exception.
The code runs fine when run through the user interface (under my username and password), but it always errors when running as the service.
I've tried running the service as different user accounts (including mine), with no luck. I've tried replacing the SNMP monitoring code with calling the PowerShell cmdlet get-snmp (from the /n NetCmdlets), but that yields the same error.
The application I'm working with is PolyMon.
Any ideas?
A:
You can attach a debugger to the running process before triggering the exception. This should give you a better idea what's up with the application.
A:
Some ways to debug:
Is there any additional information in the Windows events log?
I believe you should be able to listen to some kind of global-exception event like Application_Exception in windows services. I can't remember the exact name but you can at least dump a stack trace from there.
You should be able to start debugging the project in service mode.
Some code snippets/stack trace/information will definitely help.
A:
A couple of things we've seen - more about differences between interactive vs services, but might help...
One thing we've seen that does not seem relevant is the difference with what is on the user vs system path.
Another thing we've seen relates to temporary files - the service we had was creating lots in the windows\temp directory - we tracked this down when it had created something like 65000 of these files and thus hit the limit of what a directory can hold...
Regards,
Chris
A:
I have tackled these kinds of issues before; if you haven't already found the answer, I suggest the following:
Enable tracing/logging in all third party apps and libraries you are using such that the errors are logged to files instead of stdout or stderr. Often times, you will find a clue from these.
Your Windows Service may be relying on some Windows networking setup to be in place before startup. This can be due to environment (PATH, as others have suggested) or due to 'dependencies' on other services.
Jay.........
|
Troubleshooting a NullReference exception in a service
|
I have a windows service that runs various system monitoring operations. However, when running SNMP related checks, I always get a NullReference exception.
The code runs fine when run through the user interface (under my username and password), but it always errors when running as the service.
I've tried running the service as different user accounts (including mine), with no luck. I've tried replacing the SNMP monitoring code with calling the PowerShell cmdlet get-snmp (from the /n NetCmdlets), but that yields the same error.
The application I'm working with is PolyMon.
Any ideas?
|
[
"You can attach a debugger to the running process before triggering the exception. This should give you a better idea what's up with the application.\n",
"Some ways to debug:\n\nIs there any additional information in the Windows events log?\nI believe you should be able to listen to some kind of global-exception event like Application_Exception in windows services. I can't remember the exact name but you can atelast dump stack trace from there.\nYou should be able to start debugging the project in service mode.\n\nSome code snippets/stack trace/information will definitely help.\n",
"A couple of things we've seen - more about differences between interactive vs services, but might help...\nOne thing we've seen that does not seem relevant is the difference with what is on the user vs system path. \nAnother thing we've seen relates to temporary files - the service we had was creating lots in the windows\\temp directory - we tracked this down when it had created something like 65000 of these files and thus hit the limit of what a directory can hold...\nRegards,\nChris\n",
"I have tackled these kind of issues before, if you haven't already found the answer, I suggest the following:\n\nEnable tracing/logging in all third party apps and libraries you are using such that the errors are logged to files instead of stdout or stderr. Often times, you will find a clue from these.\nYour Windows Service may be relying on some Windows networking set up to be in place before startup. This, can be due to environment (PATH, as others have suggested) or due to 'dependencies' on other services. \n\nJay.........\n"
] |
[
2,
2,
1,
1
] |
[] |
[] |
[
".net",
"exception",
"powershell",
"service"
] |
stackoverflow_0000048574_.net_exception_powershell_service.txt
|
Q:
Should I use a state machine or a sequence workflow in WF?
I have a repeatable business process that I execute every week as part of my configuration management responsibilities. The process does not change: I download change details into Excel, open the spreadsheet and copy out details based on a macro, create a Word document from an agenda template, update the agenda with the Excel data, create PDFs from the Word document, and email them out.
This process is very easily represented in a sequence workflow and that's how I have it so far, with COM automation to handle the Excel and Word pieces automatically. The wrench in the gears is that there is a human step between "create agenda" and "send it out," wherein I review the change details and formulate questions about them, which are added to the agenda. I currently have a Suspend activity to suspend the workflow while I manually do this piece of the process.
My question is, should I rewrite my workflow to make it a state machine to follow a best practice for human interaction in a business process, or is the Suspend activity a reasonable solution?
A:
No, I don't think that you have to use a state machine for this workflow. But, I propose to change the Suspend activity because:
The SuspendActivity activity
temporarily stops the execution of the
current workflow. Typically, you use
the SuspendActivity activity to
reflect an error condition that
requires attention by an
administrator.
When a workflow
instance is suspended, an error is
logged. You can specify a message
string to accompany the error to help
the administrator diagnose the problem
with the SuspendActivity Error
property. A suspended workflow
instance can still receive messages
that are queued up until the workflow
is restarted. All the state
information for the workflow instance
is saved and is reinstated when the
instance is resumed (using Resume).
Source: MSDN
The typical way for adding a human task in a workflow (either sequence or state machine) is to define an External Data Exchange interface and use a HandleExternalEvent activity (and possibly a CallExternalMethod activity). For more details, please see the following articles:
Building State Machines with Windows Workflow Foundation
Simple Human Workflow with Windows Workflow Foundation
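For a concrete flavor of that pattern, here is a minimal sketch of the local service interface a HandleExternalEvent activity could bind to for the human review step (the interface and event names are invented for illustration):
[ExternalDataExchange]
public interface IAgendaReviewService
{
    // Raised by the host application once the reviewer has added questions.
    event EventHandler<ExternalDataEventArgs> ReviewCompleted;
}
The workflow idles on the HandleExternalEvent activity until the host raises ReviewCompleted, then continues to the send-out step.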
A:
Update: Panos makes a good point about Suspend Activity. I agree it has a different purpose in the workflow automaton.
If you feel you are worrying more about workflow transitioning between various states, then state machine workflow is ideal. Otherwise, sequence is just fine.
The main problem you should be trying to solve is that the workflow should not tie up a thread while waiting for the human interaction (thread agility). If the workflow is idled and persisted during that time (like using SqlWorkflowPersistenceService), it should not be a problem.
|
Should I use a state machine or a sequence workflow in WF?
|
I have a repeatable business process that I execute every week as part of my configuration management responsibilities. The process does not change: I download change details into Excel, open the spreadsheet and copy out details based on a macro, create a Word document from an agenda template, update the agenda with the Excel data, create PDFs from the Word document, and email them out.
This process is very easily represented in a sequence workflow and that's how I have it so far, with COM automation to handle the Excel and Word pieces automatically. The wrench in the gears is that there is a human step between "create agenda" and "send it out," wherein I review the change details and formulate questions about them, which are added to the agenda. I currently have a Suspend activity to suspend the workflow while I manually do this piece of the process.
My question is, should I rewrite my workflow to make it a state machine to follow a best practice for human interaction in a business process, or is the Suspend activity a reasonable solution?
|
[
"No, I don't think that you have to use a state machine for this workflow. But, I propose to change the Suspend activity because:\n\nThe SuspendActivity activity\n temporarily stops the execution of the\n current workflow. Typically, you use\n the SuspendActivity activity to\n reflect an error condition that\n requires attention by an\n administrator. \nWhen a workflow\n instance is suspended, an error is\n logged. You can specify a message\n string to accompany the error to help\n the administrator diagnose the problem\n with the SuspendActivity Error\n property. A suspended workflow\n instance can still receive messages\n that are queued up until the workflow\n is restarted. All the state\n information for the workflow instance\n is saved and is reinstated when the\n instance is resumed (using Resume).\nSource: MSDN \n\nThe typical way for adding a human task in a workflow (either sequence or state machine) is to define an External Data Exchange interface and use a HandleExternalEvent activity (and possibly a CallExternalMethod activity). For more details, please see the following articles:\n\nBuilding State Machines with Windows Workflow Foundation\nSimple Human Workflow with Windows Workflow Foundation\n\n",
"Update: Panos makes a good point about Suspend Activity. I agree it has a different purpose in the workflow automaton.\nIf you feel you are worrying more about workflow transitioning between various states, then state machine workflow is ideal. Otherwise, sequence is just fine. \nThe main problem you should be trying to solve is that the workflow should not tie up a thread while waiting for the human interaction (thread agility). If the workflow is idled and persisted during that time (like using SqlWorkflowPersistenceService), it should not be a problem.\n"
] |
[
2,
2
] |
[] |
[] |
[
".net_3.5",
"business_process_management",
"com",
"interop",
"workflow"
] |
stackoverflow_0000072667_.net_3.5_business_process_management_com_interop_workflow.txt
|
Q:
MySQL UTF/Unicode migration tips
Does anyone have any tips or gotcha moments to look out for when trying to migrate MySQL tables from the default case-insensitive Swedish or ASCII charsets to UTF-8? Some of the projects that I'm involved in are striving for better internationalization and the database is going to be a significant part of this change.
Before we look to alter the database, we are going to convert each site to use UTF-8 character encoding (from least critical to most) to help ensure all input/output is using the same character set.
Thanks for any help
A:
Some hints:
Your CHAR and VARCHAR columns will use up to 3 times more disk space. (You probably won't get much disk space growth for Swedish words.)
Use SET NAMES utf8 before reading or writing to the database. If you don't do this then you will get partially garbled characters.
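For the table conversion itself, the usual statement looks like this (a sketch; back up first, and the table name is a placeholder):
ALTER TABLE my_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;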
A:
I am going to be going over the following sites/articles to help find an answer.
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) - Joel on Software
UTF-8 And Unicode FAQ
Hanselminutes episode "Sorting out Internationalization with Michael Kaplan"
And I also just found a very on topic post by Derek Sivers @ O'Reilly ONLamp Blog as I was writing this out. Turning MySQL data in latin1 to utf8 utf-8
A:
Beware index length limitations. If a table is structured, say:
a varchar(255)
b varchar(255)
key ('a', 'b')
You're going to go past the 1000 byte limit on key lengths. 255+255 is okay, but 255*3 + 255*3 isn't going to work.
A:
Your CHAR and VARCHAR columns will use up to 3 times more disk space.
Only if they're stuffed full of latin-1 with ordinals > 128. Otherwise, the increased space use of UTF-8 is minimal.
A:
The collations are not always favorable. You'll get umlauts collating to non-umlauted versions, which is not always correct. Might want to go with utf8_bin, but then everything is case sensitive as well.
|
MySQL UTF/Unicode migration tips
|
Does anyone have any tips or gotcha moments to look out for when trying to migrate MySQL tables from the default case-insensitive Swedish or ASCII charsets to UTF-8? Some of the projects that I'm involved in are striving for better internationalization and the database is going to be a significant part of this change.
Before we look to alter the database, we are going to convert each site to use UTF-8 character encoding (from least critical to most) to help ensure all input/output is using the same character set.
Thanks for any help
|
[
"Some hints:\n\nYour CHAR and VARCHAR columns will use up to 3 times more disk space. (You probably won't get much disk space grow for Swedish words.)\nUse SET NAMES utf8 before reading or writing to the database. If you don't this then you will get partially garbled characters.\n\n",
"I am going to be going over the following sites/articles to help find an answer.\nThe Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) - Joel on Software\nUTF-8 And Unicode FAQ\nHanselminutes episode \"Sorting out Internationalization with Michael Kaplan\"\nAnd I also just found a very on topic post by Derek Sivers @ O'Reilly ONLamp Blog as I was writing this out. Turning MySQL data in latin1 to utf8 utf-8\n",
"Beware index length limitations. If a table is structured, say:\na varchar(255)\nb varchar(255)\nkey ('a', 'b')\nYou're going to go past the 1000 byte limit on key lengths. 255+255 is okay, but 255*3 + 255*3 isn't going to work.\n",
"\nYour CHAR and VARCHAR columns will use up to 3 times more disk space.\n\nOnly if they're stuffed full of latin-1 with ordinals > 128. Otherwise, the increased space use of UTF-8 is minimal.\n",
"The collations are not always favorable. You'll get umlats collating to non umlatted versions which is not always correct. Might want to go w/ utf8_bin, but then everything is case sensitive as well. \n"
] |
[
2,
1,
1,
0,
0
] |
[] |
[] |
[
"internationalization",
"mysql",
"unicode",
"utf_8"
] |
stackoverflow_0000047005_internationalization_mysql_unicode_utf_8.txt
|
Q:
mod_python/MySQL error on INSERT with a lot of data: "OperationalError: (2006, 'MySQL server has gone away')"
When doing an INSERT with a lot of data, ie:
INSERT INTO table (mediumtext_field) VALUES ('...lots of text here: about 2MB worth...')
MySQL returns
"OperationalError: (2006, 'MySQL server has gone away')"
This is happening within a minute of starting the script, so it is not a timeout issue. Also, mediumtext_field should be able to hold ~16MB of data, so that shouldn't be a problem.
Any ideas what is causing the error or how to work around it?
Some relevant libraries being used: mod_python 3.3.1, MySQL 5.0.51 (on Windows XP SP3, via xampp, details below)
ApacheFriends XAMPP (basic package) version 1.6.5
Apache 2.2.6
MySQL 5.0.51
phpMyAdmin 2.11.3
A:
Check the max_allowed_packet setting in your my.cnf file. This determines the largest amount of data you can send to your MySQL server in a single statement; exceeding this value results in that error.
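For example (a sketch; 16M is an assumption sized well above the ~2MB inserts described, so pick a limit that fits your data):
# my.cnf, under the [mysqld] section
max_allowed_packet = 16M
Restart MySQL afterwards, or use SET GLOBAL max_allowed_packet = 16777216; where runtime changes are allowed.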
|
mod_python/MySQL error on INSERT with a lot of data: "OperationalError: (2006, 'MySQL server has gone away')"
|
When doing an INSERT with a lot of data, ie:
INSERT INTO table (mediumtext_field) VALUES ('...lots of text here: about 2MB worth...')
MySQL returns
"OperationalError: (2006, 'MySQL server has gone away')"
This is happening within a minute of starting the script, so it is not a timeout issue. Also, mediumtext_field should be able to hold ~16MB of data, so that shouldn't be a problem.
Any ideas what is causing the error or how to work around it?
Some relevant libraries being used: mod_python 3.3.1, MySQL 5.0.51 (on Windows XP SP3, via xampp, details below)
ApacheFriends XAMPP (basic package) version 1.6.5
Apache 2.2.6
MySQL 5.0.51
phpMyAdmin 2.11.3
|
[
"check the max_packet setting in your my.cnf file. this determines the largest amount of data you can send to your mysql server in a single statement. exceeding this values results in that error.\n"
] |
[
1
] |
[] |
[] |
[
"mysql",
"mysql_error_2006",
"python",
"xampp"
] |
stackoverflow_0000067180_mysql_mysql_error_2006_python_xampp.txt
|
Q:
How to rewrite or convert C# code in Java code?
I started to write a client-server application using .NET (C#) for both the client and server side.
Unfortunately, my company refuses to pay for a Windows licence on the server box, meaning that I need to rewrite my code in Java or go the Mono way.
Is there any good way to translate C# code into Java? The server application uses no .NET-specific features, only cross-language tools like Spring.net, Hibernate.net and log4net.
Thanks.
A:
I'd suggest building for Mono. You'll run into some gray area, but overall it's great. However, if you want to build for Java, you might check out Grasshopper. It's a commercial product, but it claims to be able to translate CIL (the output of the C# compiler) to Java bytecodes.
A:
Possible solutions aside, direct translations of programs written in one language to a different language is generally considered a Bad Idea™ -- especially if this translation is done in some automated fashion. Even when done by a "real" programmer, translating an application line by line often results in a less than desirable end result because each language has its own idioms, strengths and weaknesses that require things be done in a slightly different way.
As painful as it may be, it's probably in your best interest and those who have to maintain this application to rewrite it in Java if that's what your employer requires.
A:
I only know of the other way around. db4o is developed in Java and the C# version is generated from the Java sources automatically.
A:
There is no good way. My recommendation is to start over in Java, or like you said use Mono.
A:
Although I think the first mistake was choosing an implementation language without ensuring a suitable deployment environment, there's nothing that can be done about that now. I would think the Mono way would be better. Having to rewrite code would only increase the cost of the project, especially if you already have a good amount of code written in C#. I, personally, try to avoid rewriting code whenever possible.
A:
Java and C# are pretty close in syntax and semantics. The real problem is the little differences. They will bite you when you dont expect it.
A:
Grasshopper is really the best solution at this time, if the licensing works for you (the free version has some significant limitations). It's completely based on the Mono class libs (which are actually pretty good), but runs on top of standard Java VMs. That's good, as the Java VMs are generally a bit faster and more stable than Mono, in my experience. It does have more weaknesses than Mono when it comes to Forms/Graphics related APIs, as much of this hasn't been ported to Java from the Mono VM, however.
In the cases where it works, it can be wonderful, though. The performance is sometimes even better than when running the same code on MS's VM on Windows. :)
A:
I would say from a maintenance standpoint, rewrite the code. It's going to bring the initial cost of the project up, but it would be less labor intensive later for whoever is looking at the code. Like previous posters stated, anything automated like this can't do as good a job as a "real" programmer, and doing line-by-line converting won't help much either. You don't want to produce code later on that works but is hell to maintain.
|
How to rewrite or convert C# code in Java code?
|
I started to write a client-server application using .NET (C#) for both the client and server side.
Unfortunately, my company refuses to pay for a Windows licence on the server box, meaning that I need to rewrite my code in Java or go the Mono way.
Is there any good way to translate C# code into Java? The server application uses no .NET-specific features, only cross-language tools like Spring.net, Hibernate.net and log4net.
Thanks.
|
[
"I'd suggest building for Mono. You'll run into some gray area, but overall it's great. However, if you want to build for Java, you might check out Grasshopper. It's a commercial product, but it claims to be able to translate CIL (the output of the C# compiler) to Java bytecodes.\n",
"Possible solutions aside, direct translations of programs written in one language to a different language is generally considered a Bad Idea™ -- especially if this translation is done in some automated fashion. Even when done by a \"real\" programmer, translating an application line by line often results in a less than desirable end result because each language has its own idioms, strengths and weaknesses that require things be done in a slightly different way.\nAs painful as it may be, it's probably in your best interest and those who have to maintain this application to rewrite it in Java if that's what your employer requires.\n",
"I only know the other way. Dbo4 is developed in java and the c# version is generated from the java sources automaticaly.\n",
"There is no good way. My recommendation is to start over in Java, or like you said use Mono.\n",
"Although I think the first mistake was choosing an implementation language without ensuring a suitable deployment environment, there's nothing that can be done about that now. I would think the Mono way would be better. Having to rewrite code would only increase the cost of the project, especially if you already have a good amount of code written in C#. I, personally, try to avoid rewriting code whenever possible.\n",
"Java and C# are pretty close in syntax and semantics. The real problem is the little differences. They will bite you when you dont expect it.\n",
"Grasshopper is really the best solution at this time, if the licensing works for you (the free version has some significant limitations). Its completely based on the Mono class libs (which are actually pretty good), but runs on top of standard Java VMs. Thats good as the Java VMs are generally a bit faster and more stable than Mono, in my experience. It does have more weaknesses than Mono when it comes to Forms/Graphics related APIs, as much of this hasn't been ported to Java from the Mono VM, however.\nIn the cases were it works, it can be wonderful, though. The performance is sometimes even better than when running the same code on MS's VM on Windows. :)\n",
"I would say from a maintance stand point rewrite the code. It's going to bring the initial cost of the projet up but would be less labor intensive later for whoever is looking at the code. Like previous posters stated anything automated like this can't do as good as a job as a \"real\" programmer and doing line by line converting won't help much either. You don't want to produce code later on that works but is hell to maintain.\n"
] |
[
6,
4,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"interop",
"java",
"mono"
] |
stackoverflow_0000065058_.net_c#_interop_java_mono.txt
|
Q:
Is it possible to advance an enumerator and get its value in a lambda?
If I have an IEnumerator variable, is it possible to have a lambda function that takes it, advances it with MoveNext() and returns the Current value every single time it's called?
A:
e => e.MoveNext() ? e.Current : null
This will advance the enumerator and return the current value, and return null when the enumeration is complete.
A:
A Lambda expression can contain complex statements, so you can do the following:
Func<IEnumerator, object> f = ie => { ie.MoveNext(); return ie.Current; };
A:
Is this what you are looking for?
List<string> strings = new List<string>()
{
"Hello", "I", "am", "a", "list", "of", "strings."
};
IEnumerator<string> e = strings.GetEnumerator();
Func<string> f = () => e.MoveNext() ? e.Current : null;
for (; ; )
{
string str = f();
if (str == null)
break;
Console.Write(str + " ");
}
The point of an IEnumerator is that you already get syntactic sugar to deal with it:
foreach (string str in strings)
Console.Write(str + " ");
Even handling the enumerator directly looks cleaner in this case:
while (e.MoveNext())
Console.Write(e.Current + " ");
A:
Extending on Abe's solution, you can also use closures to hold a reference to the enumerator:
var iter = ((IEnumerable<char>)"hello").GetEnumerator();
//with closure
{
Func<object> f =
() =>
{
iter.MoveNext();
return iter.Current;
};
Console.WriteLine(f());
Console.WriteLine(f());
}
//without closure
{
Func<IEnumerator, object> f =
ie =>
{
ie.MoveNext();
return ie.Current;
};
Console.WriteLine(f(iter));
Console.WriteLine(f(iter));
}
|
Is it possible to advance an enumerator and get its value in a lambda?
|
If I have an IEnumerator variable, is it possible to have a lambda function that takes it, advances it with MoveNext() and returns the Current value every single time it's called?
|
[
"e => e.MoveNext() ? e.Current : null\n\nThis will advance the enumerator and return the current value, and return null when the enumeration is complete.\n",
"A Lambda expression can contain complex statements, so you can do the following:\nFunc<IEnumerator, object> f = ie => { ie.MoveNext(); return ie.Current; };\n\n",
"Is this what you are looking for?\nList<string> strings = new List<string>()\n{\n \"Hello\", \"I\", \"am\", \"a\", \"list\", \"of\", \"strings.\"\n};\nIEnumerator<string> e = strings.GetEnumerator();\nFunc<string> f = () => e.MoveNext() ? e.Current : null;\nfor (; ; )\n{\n string str = f();\n if (str == null)\n break;\n\n Console.Write(str + \" \");\n}\n\nThe point of an IEnumerator is that you already get syntactic sugar to deal with it:\nforeach (string str in strings)\n Console.Write(str + \" \");\n\nEven handling the enumerator directly looks cleaner in this case:\nwhile (e.MoveNext())\n Console.Write(e.Current + \" \");\n\n",
"Extending on Abe's solution, you can also use closures to hold a reference to the enumerator:\nvar iter = ((IEnumerable<char>)\"hello\").GetEnumerator();\n\n//with closure\n{\n Func<object> f =\n () =>\n {\n iter.MoveNext();\n return iter.Current;\n };\n Console.WriteLine(f());\n Console.WriteLine(f());\n}\n\n//without closure\n{\n Func<IEnumerator, object> f =\n ie =>\n {\n ie.MoveNext();\n return ie.Current;\n };\n Console.WriteLine(f(iter));\n Console.WriteLine(f(iter));\n}\n\n"
] |
[
4,
1,
0,
0
] |
[] |
[] |
[
"c#",
"lambda"
] |
stackoverflow_0000072913_c#_lambda.txt
|
Q:
One file doesn't recognize other file's class in C++
I have my own class inside the file "Particles.h" and the class's implementation is inside "Particles.cpp"
I want the file "Load.h" to recognize my classes inside there, so I've added the line
#include "Particles.h"
and the file doesn't recognize it and in the past everything was OK (I haven't made any changes inside that class).
What should I do?
A:
It sounds like your include path - the list of directories that the compiler scans in order to locate files that you #include - is set incorrectly. Which compiler are you using?
A:
Well, if you listed your error codes, it might help. Off the top of my head, do you have something in Particles.h to make sure that the file is only included once? There are two methods of doing this. The first is to use #pragma once, but I think that might be Microsoft specific. The second is to use a #define.
Example:
#ifndef PARTICLES_H
#define PARTICLES_H
class CParticleWrapper
{
...
};
#endif
Also, unless you're deriving from a class in Particles.h or using an instance of a class instead of a pointer, you can use a forward declaration of the class and skip including the header file in a header file, which will save you compile time.
#ifndef LOAD_H
#define LOAD_H
class CParticleWrapper;
class CLoader
{
CParticleWrapper * m_pParticle;
public:
CLoader(CParticleWrapper * pParticle);
...
};
#endif
Then, in the Load.cpp, you would include the particle.h file.
A:
Make sure the file "Particles.cpp" also includes "Particles.h" to start with, that the files are in the same folder, and that they are all part of the same project. It will also help if you share the error message that you are getting from your compiler.
A:
Dev C++, it uses GCC.
The line is:
Stone *stone[48];
and it says: "expected constructor, destructor, or type conversion before '*' token ".
A:
It sounds like you need to include the definition of the Stone class, but it would be impossible to say without more details. Can you narrow down the error by removing unrelated code and post that?
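For instance, a minimal sketch of the likely fix (the header and class names here are assumptions based on the snippet above):
// In the file that declares the array, include the header that defines Stone:
#include "Stone.h"      // hypothetical header containing: class Stone { ... };

Stone *stone[48];       // fine once the compiler has seen a declaration of Stone

// Since these are only pointers, a forward declaration would also be enough:
// class Stone;
// Stone *stone[48];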
|
One file doesn't recognize other file's class in C++
|
I have my own class inside the file "Particles.h" and the class's implementation is inside "Particles.cpp"
I want the file "Load.h" to recognize my classes there, so I've added the line
#include "Particles.h"
but the file still doesn't recognize them, even though everything worked in the past (I haven't made any changes inside that class).
What should I do?
|
[
"It sounds like your include path - the list of directories that the compiler scans in order to locate files that you #include - is set incorrectly. Which compiler are you using?\n",
"Well, if you listed your error codes, it might help. Off the top of my head, do you have something in Particles.h to make sure that the file is only included once? There are two methods of doing this. The first is to use #pragma once, but I think that might be Microsoft specific. The second is to use a #define.\nExample:\n#ifndef PARTICLES_H \n#define PARTICLES_H\n\nclass CParticleWrapper\n{\n...\n};\n\n#endif\n\nAlso, unless you're deriving from a class in Particles.h or using an instance of a class instead of a pointer, you can use a forward declaration of the class and skip including the header file in a header file, which will save you compile time.\n#ifndef LOAD_H\n#define LOAD_H\n\nclass CParticleWrapper;\n\nclass CLoader\n{\n CParticleWrapper * m_pParticle;\n\npublic:\n\n CLoader(CParticleWrapper * pParticle);\n ...\n}; \n\n#endif\n\nThen, in the Load.cpp, you would include the particle.h file.\n",
"make sure the file \"Particles.cpp\" has also included \"Particles.h\" to start with and the files are in the same folder and they are all part of the same project. it will help if you also share the error message that you are getting from your compiler.\n",
"Dev C++,It uses GCC,\nThe line is:\nStone *stone[48];\n\nand it says: \"expected constructor, destructor, or type conversion before '*' token \".\n",
"It sounds like you need to include the definition of the Stone class, but it would be impossible to say without more details. Can you narrow down the error by removing unrelated code and post that?\n"
] |
[
2,
1,
0,
0,
0
] |
[] |
[] |
[
"c++",
"class",
"header"
] |
stackoverflow_0000071959_c++_class_header.txt
|
Q:
Entities and Value Objects in Web Applications
We have a simple domain model: Contact, TelephoneNumber and ContactRepository. Contact is an entity with an identity field. TelephoneNumber is a typical value object: it has no identity and can't be loaded separately from a Contact instance.
On the other side we have a web application for manipulating the contacts. The first page is "ContactList"; the next page is "Contact/C0001", which shows the contact details and the list of telephone numbers.
We have to implement a telephone number edit form. The first idea is to add a page navigable like 'TelephoneNumber/T0001'.
But TelephoneNumber is a value object class, so its instances can't be identified this way.
What is the best practice for resolving this issue? How can we identify non-identifiable objects in stateless applications?
A:
Does the value object's state identify that particular instance? If not, you could just pass back the old value and the new value when the edit form is submitted, then update any objects with the old state to the new state.
I would rather have a page like Contact/C0001/TelephoneNumber, and use both the contact id and the value object's class to identify the instance you want to change.
Unless I've completely misunderstood what you're asking.
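A rough sketch of that old-value/new-value approach, with deliberately simplified, hypothetical types (not from the question):
using System.Collections.Generic;

// Hypothetical, simplified domain model - just enough to show the idea.
public class Contact
{
    public string Id;
    public List<string> TelephoneNumbers = new List<string>();
}

public static class ContactEditor
{
    // The edit form posts back both values; the old value locates the entry.
    public static void UpdateTelephoneNumber(Contact contact,
                                             string oldNumber,
                                             string newNumber)
    {
        int index = contact.TelephoneNumbers.IndexOf(oldNumber);
        if (index >= 0)
            contact.TelephoneNumbers[index] = newNumber;
    }
}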
A:
I would make the TelephoneNumber just contain a bunch of numbers (maybe make it plural), and refer to it this way: Contact/C0001/TelephoneNumber(s)
A:
In practice I always find it easier to give the telephone number an identity, even if it isn't strictly necessary in design terms.
If it is a strict value object which cannot exist outside the context of the Contact, that indicates that a good user interface may call for the telephone number to be edited within the contact page rather than on its own page.
However I think Marc Gear's solution is a good one if you decide against either of those two approaches.
A:
Despite what many people would like you believe, you can't be 100% pure.
Your value objects need some kind of Identity field. Sometimes it will be something unique for an object like a phone number, sometimes it will have to be something artificial, like TelephoneNumber.Id.
The sooner you accept this, the better for you :-)
|
Entities and Value Objects in Web Applications
|
We have a simple domain model: Contact, TelephoneNumber and ContactRepository. Contact is an entity with an identity field. TelephoneNumber is a typical value object: it has no identity and can't be loaded separately from a Contact instance.
On the other side we have a web application for manipulating the contacts. The first page is "ContactList"; the next page is "Contact/C0001", which shows the contact details and the list of telephone numbers.
We have to implement a telephone number edit form. The first idea is to add a page navigable like 'TelephoneNumber/T0001'.
But TelephoneNumber is a value object class, so its instances can't be identified this way.
What is the best practice for resolving this issue? How can we identify non-identifiable objects in stateless applications?
|
[
"Does the value objects state identify that particular instance? If not you could just pass back the old value and the new value when the edit form is submitted, then update any objects with the old state to the new state. \nI would rather have a page like Contact/C0001/ThelephoneNumber, and use both the contact id and the value objects class to identify the instance you want to change.\nUnless I've completely misunderstood what you're asking.\n",
"I would make the TelephoneNumber just contain a bunch of numbers (maybe make it plural), and refer to it this way: Contact/C0001/TelephoneNumber(s)\n",
"In practice I always find it easier to give the telephone number an identity, even if it isn't strictly necessary in design terms.\nIf it is a strict value object which cannot exist outside the context of the Contact, that indicates that a good user interface may call for the telephone number to be edited within the contact page rather than on its own page.\nHowever I think Marc Gear's solution is a good one if you decide against either of those two approaches.\n",
"Despite what many people would like you believe, you can't be 100% pure.\nYour value objects need some kind of Identity field. Sometimes it will be something unique for an object like a phone number, sometimes it will have to be something artificial, like TelephoneNumber.Id.\nThe sooner you accept this, the better for you :-)\n"
] |
[
2,
0,
0,
0
] |
[] |
[] |
[
"architecture",
"domain_driven_design",
"web_applications"
] |
stackoverflow_0000072218_architecture_domain_driven_design_web_applications.txt
|
Q:
How do you pass an authenticated session between app domains
Let's say that you have websites www.xyz.com and www.abc.com.
Let's say that a user goes to www.abc.com and they get authenticated through the normal ASP .NET membership provider.
Then, from that site, they get sent to (redirection, linked, whatever works) site www.xyz.com, and the intent of site www.abc.com was to pass that user to the other site as the status of isAuthenticated, so that the site www.xyz.com does not ask for the credentials of said user again.
What would be needed for this to work? I have some constraints on this though, the user databases are completely separate, it is not internal to an organization, in all regards, it is like passing from stackoverflow.com to google as authenticated, it is that separate in nature. A link to a relevant article will suffice.
A:
Try using FormAuthentication by setting the web.config authentication section like so:
<authentication mode="Forms">
<forms name=".ASPXAUTH" requireSSL="true"
protection="All"
enableCrossAppRedirects="true" />
</authentication>
Generate a machine key. Example: Easiest way to generate MachineKey – Tips and tricks: ASP.NET, IIS ...
When posting to the other application the authentication ticket is passed as a hidden field. While reading the post from the first app, the second app will read the encrypted ticket and authenticate the user. Here's an example of the page that passes that posts the field:
.aspx:
<form id="form1" runat="server">
<div>
<p><asp:Button ID="btnTransfer" runat="server" Text="Go" PostBackUrl="http://otherapp/" /></p>
<input id="hdnStreetCred" runat="server" type="hidden" />
</div>
</form>
code-behind:
protected void Page_Load(object sender, EventArgs e)
{
FormsIdentity cIdentity = Page.User.Identity as FormsIdentity;
if (cIdentity != null)
{
this.hdnStreetCred.ID = FormsAuthentication.FormsCookieName;
this.hdnStreetCred.Value = FormsAuthentication.Encrypt(((FormsIdentity)User.Identity).Ticket);
}
}
Also see the cross app form authentication section in Chapter 5 of this book from Wrox. It recommends answers like the ones above in addition to providing a homebrew SSO solution.
A:
If you are using the built in membership system you can do cross sub-domain authentication with forms auth by using some like this in each web.config.
<authentication mode="Forms">
<forms name=".ASPXAUTH" loginUrl="~/Login.aspx" path="/"
protection="All"
domain="datasharp.co.uk"
enableCrossAppRedirects="true" />
</authentication>
Make sure that name, path, protection and domain are the same in all web.configs. If the sites are on different machines you will also need to ensure that the machineKey and validation and encryption keys are the same.
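For reference, a sketch of the shared machineKey element (values elided here - the point is that every site must carry the same generated keys):
<machineKey
    validationKey="...identical generated value on every site..."
    decryptionKey="...identical generated value on every site..."
    validation="SHA1" />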
A:
If you store user sessions in the database, you could simply check for the existence of the Guid in the session table; if it exists, then the user already authenticated on the other domain. For this to work, you would have to include the session guid in the URL when you redirect the user over to the other website.
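A sketch of what the receiving site might do (SessionStore is a hypothetical data-access helper, and note that passing a session id in the query string has obvious security implications):
// On www.xyz.com, after a redirect like /landing.aspx?session=<guid>:
string raw = Request.QueryString["session"];
if (raw != null)
{
    Guid sessionId = new Guid(raw);              // throws on malformed input
    if (SessionStore.Exists(sessionId))          // hypothetical lookup
    {
        // The user already authenticated on the other domain.
        FormsAuthentication.SetAuthCookie(SessionStore.UserNameFor(sessionId), false);
    }
}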
A:
Not sure what you'd use for .NET but ordinarily I'd use memcached in a LAMP stack.
A:
The resolution depends on the type of application and the environment in which it is running. E.g. on an intranet with an NT Domain you can use NTLM to pass Windows credentials directly to servers in the intranet perimeter without any need to duplicate sessions.
The approach for doing this is generally called single sign-on (see Wikipedia).
A:
There are multiple approaches to this problem, which is described as "Cross-domain Single Sign On". The wikipedia article pointed to by Matej is particularly helpful if you're looking for an open source solution - however - in a windows environment I believe you're best off with one of 2 approaches:
Buy a commercial SSO product (like SiteMinder or PingIdentity)
Use Microsoft's cross-domain SSO solution, called ADFS - Active Directory Federation Services. (federation is the term for coordinating the behavior of multiple domains)
I have used SiteMinder and it works well, but it's expensive. If you're in an all-Microsoft environment I think ADFS is your best bet. Start with this ADFS whitepaper.
A:
I would use something like CAS:
http://www.ja-sig.org/products/cas/
This is a solved problem and I wouldn't recommend rolling your own.
|
How do you pass an authenticated session between app domains
|
Let's say that you have websites www.xyz.com and www.abc.com.
Let's say that a user goes to www.abc.com and they get authenticated through the normal ASP .NET membership provider.
Then, from that site, they get sent to (redirection, linked, whatever works) site www.xyz.com, and the intent of site www.abc.com was to pass that user to the other site as the status of isAuthenticated, so that the site www.xyz.com does not ask for the credentials of said user again.
What would be needed for this to work? I have some constraints on this though, the user databases are completely separate, it is not internal to an organization, in all regards, it is like passing from stackoverflow.com to google as authenticated, it is that separate in nature. A link to a relevant article will suffice.
|
[
"Try using FormAuthentication by setting the web.config authentication section like so:\n<authentication mode=\"Forms\">\n <forms name=\".ASPXAUTH\" requireSSL=\"true\" \n protection=\"All\" \n enableCrossAppRedirects=\"true\" />\n</authentication>\n\nGenerate a machine key. Example: Easiest way to generate MachineKey – Tips and tricks: ASP.NET, IIS ...\nWhen posting to the other application the authentication ticket is passed as a hidden field. While reading the post from the first app, the second app will read the encrypted ticket and authenticate the user. Here's an example of the page that passes that posts the field:\n.aspx:\n<form id=\"form1\" runat=\"server\">\n <div>\n <p><asp:Button ID=\"btnTransfer\" runat=\"server\" Text=\"Go\" PostBackUrl=\"http://otherapp/\" /></p>\n <input id=\"hdnStreetCred\" runat=\"server\" type=\"hidden\" />\n </div>\n</form>\n\ncode-behind:\nprotected void Page_Load(object sender, EventArgs e)\n{\n FormsIdentity cIdentity = Page.User.Identity as FormsIdentity;\n if (cIdentity != null)\n {\n this.hdnStreetCred.ID = FormsAuthentication.FormsCookieName;\n this.hdnStreetCred.Value = FormsAuthentication.Encrypt(((FormsIdentity)User.Identity).Ticket);\n }\n}\n\nAlso see the cross app form authentication section in Chapter 5 of this book from Wrox. It recommends answers like the ones above in addition to providing a homebrew SSO solution. \n",
"If you are using the built in membership system you can do cross sub-domain authentication with forms auth by using some like this in each web.config.\n<authentication mode=\"Forms\">\n <forms name=\".ASPXAUTH\" loginUrl=\"~/Login.aspx\" path=\"/\" \n protection=\"All\" \n domain=\"datasharp.co.uk\" \n enableCrossAppRedirects=\"true\" />\n\n</authentication>\n\nMake sure that name, path, protection and domain are the same in all web.configs. If the sites are on different machines you will also need to ensure that the machineKey and validation and encryption keys are the same.\n",
"If you store user sessions in the database, you could simply check the existance of the Guid in the session table, if it exists, then the user already authenticated on the other domain. For this to work, you would have to included the session guid in the URL when you redirect the user over to the other website.\n",
"Not sure what you'd use for .NET but ordinarily I'd use memcached in a LAMP stack.\n",
"The resolution depends on the type of application and environment in which it is running. E.g. on intranet with NT Domain you can use NTLM to pass windows credentials directly to servers in intranet perimeter without any need to duplicate sessions.\nThe approach how to do this is generally named single sign-on (see Wikipedia). \n",
"There are multiple approaches to this problem, which is described as \"Cross-domain Single Sign On\". The wikipedia article pointed to by Matej is particularly helpful if you're looking for an open source solution - however - in a windows environment I belive you're best off with one of 2 approaches:\n\nBuy a commercial SSO product (like SiteMinder or PingIdentity)\nUse MicroSoft's cross-domain SSO solution, called ADFS - Active Direcctory Federation Services. (federation is the term for coordinating the behavior of multiple domains)\n\nI have used SiteMinder and it works well, but it's expensive. If you're in an all MicroSoft environment I think ADFS is your best bet. Start with this ADFS whitepaper.\n",
"I would user something like CAS:\n[1]: http://www.ja-sig.org/products/cas/ CAS\nThis is a solved problem and wouldn't recommend rolling your own. \n"
] |
[
5,
2,
1,
0,
0,
0,
0
] |
[
"Alternatively if you want to roll your own and the sites in question are not on the same servers or don't have access to a shared database (in which case see the above responses) then you could place a web beacon on each of the sites which would refer back to the other site. \nPlace a single pixel image (web beacon) on site A which would call site B passing through the users ID (encrypted & time stamped). This would then create a new user session on site B for the user which would be set as logged in. Then when the user visited site B they would already be logged in.\nTo minimise calls you could only place the web beacon on the home page and or log in confirmation pages. I've used this successfully in the past to pass information between partner sites. \n"
] |
[
-1
] |
[
"asp.net",
"asp.net_membership",
"c#",
"membership"
] |
stackoverflow_0000072125_asp.net_asp.net_membership_c#_membership.txt
|
Q:
What is the best way of preventing memory leaks in a yacc-based parser?
Yacc does not permit objects to be passed around. Because the %union can only contain POD types, complex objects must be new'd and passed around by pointer. If a syntax error occurs, the yacc parser just stops running, and references to all of those created objects are lost.
The only solution I've come up with is to have all new'd objects inherit a particular base class and be added to a container when allocated; if there is an error, everything in that container can be deleted.
Does anyone know of any better yacc tricks to solve this problem?
Please don't tell me to choose a different parser.
A:
I love Yacc, but the discriminating union stack does present a challenge.
I don't know whether you are using C or C++. I've modified Yacc to generate C++ for my own purposes, but this solution can be adapted to C.
My preferred solution is to pass an interface to the owner down the parse tree, rather than constructed objects up the stack. Do this by creating your own stack outside of Yacc's. Before you invoke a non-terminal that allocates an object, push the owner of that object to this stack.
For example:
class IExpressionOwner
{
public:
virtual ExpressionAdd *newExpressionAdd() = 0;
virtual ExpressionSubtract *newExpressionSubtract() = 0;
virtual ExpressionMultiply *newExpressionMultiply() = 0;
virtual ExpressionDivide *newExpressionDivide() = 0;
};
class ExpressionAdd : public Expression, public IExpressionOwner
{
private:
std::auto_ptr<Expression> left;
std::auto_ptr<Expression> right;
public:
ExpressionAdd *newExpressionAdd()
{
ExpressionAdd *newExpression = new ExpressionAdd();
std::auto_ptr<Expression> autoPtr(newExpression);
if (left.get() == NULL)
left = autoPtr;
else
right = autoPtr;
return newExpression;
}
...
};
class Parser
{
private:
std::stack<IExpressionOwner *> expressionOwner;
...
};
Everything that wants an expression has to implement the IExpressionOwner interface and push itself to the stack before invoking the expression non-terminal. It's a lot of extra code, but it controls object lifetime.
Update
The expression example is a bad one, since you don't know the operation until after you've reduced the left operand. Still, this technique works in many cases, and requires just a little tweaking for expressions.
A:
If it suits your project, consider using the Boehm Garbage collector. That way you can freely allocate new objects and let the collector handle the deletes. Of course there are tradeoffs involved in using a garbage collector. You would have to weigh the costs and benefits.
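For instance, a minimal sketch of parse-tree nodes allocated through the collector (assumes libgc and its C++ interface, gc_cpp.h):
#include <gc_cpp.h>

struct Node : public gc {           // gc-derived objects are collected automatically
    int kind;
    Node *left, *right;
    Node(int k, Node *l, Node *r) : kind(k), left(l), right(r) {}
};

// In a yacc action you can then allocate freely and never delete:
//   expr: expr '+' expr   { $$ = new Node(ADD, $1, $3); }
// If a syntax error abandons the nodes, the collector reclaims them.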
|
What is the best way of preventing memory leaks in a yacc-based parser?
|
Yacc does not permit objects to be passed around. Because the %union can only contain POD types, complex objects must be new'd and passed around by pointer. If a syntax error occurs, the yacc parser just stops running, and references to all of those created objects are lost.
The only solution I've come up with is to have all new'd objects inherit a particular base class and be added to a container when allocated; if there is an error, everything in that container can be deleted.
Does anyone know of any better yacc tricks to solve this problem?
Please don't tell me to choose a different parser.
|
[
"I love Yacc, but the discriminating union stack does present a challenge.\nI don't know whether you are using C or C++. I've modified Yacc to generate C++ for my own purposes, but this solution can be adapted to C.\nMy preferred solution is to pass an interface to the owner down the parse tree, rather than constructed objects up the stack. Do this by creating your own stack outside of Yacc's. Before you invoke a non-terminal that allocates an object, push the owner of that object to this stack.\nFor example:\nclass IExpressionOwner\n{\npublic:\n virtual ExpressionAdd *newExpressionAdd() = 0;\n virtual ExpressionSubstract *newExpressionSubtract() = 0;\n virtual ExpressionMultiply *newExpressionMultiply() = 0;\n virtual ExpressionDivide *newExpressionDivide() = 0;\n};\n\nclass ExpressionAdd : public Expression, public IExpressionOwner\n{\nprivate:\n std::auto_ptr<Expression> left;\n std::auto_ptr<Expression> right;\n\npublic:\n ExpressionAdd *newExpressionAdd()\n {\n ExpressionAdd *newExpression = new ExpressionAdd();\n std::auto_ptr<Expression> autoPtr(newExpression);\n if (left.get() == NULL)\n left = autoPtr;\n else\n right = autoPtr;\n return newExpression;\n }\n\n ...\n};\n\nclass Parser\n{\nprivate:\n std::stack<IExpressionOwner *> expressionOwner;\n\n ...\n};\n\nEverything that wants an expression has to implement the IExpressionOwner interface and push itself to the stack before invoking the expression non-terminal. It's a lot of extra code, but it controls object lifetime.\nUpdate\nThe expression example is a bad one, since you don't know the operation until after you've reduced the left operand. Still, this technique works in many cases, and requires just a little tweaking for expressions.\n",
"If it suits your project, consider using the Boehm Garbage collector. That way you can freely allocate new objects and let the collector handle the deletes. Of course there are tradeoffs involved in using a garbage collector. You would have to weigh the costs and benefits.\n"
] |
[
2,
1
] |
[
"Use smart pointers!\nOr, if you're uncomfortable depending on yet another library, you can always use auto_ptr from the C++ standard library.\n",
"Why is using a different parser such a problem? Bison is readily available, and (at least on linux) yacc is usually implemented as bison. You shouldn't need any changes to your grammar to use it (except for adding %destructor to solve your issue).\n"
] |
[
-1,
-1
] |
[
"c++",
"yacc"
] |
stackoverflow_0000064958_c++_yacc.txt
|
Q:
X/Gnome: How to measure the geometry of an open window
Is there a standard X / Gnome program that will display the X,Y width and depth in pixels of a window that I select? Something similar to the way an xterm shows you the width and depth of the window (in lines) as you resize it.
I'm running on Red Hat Enterprise Linux 4.4.
Thanks!
A:
Yes, you're looking for the program 'xwininfo'. Run it in another terminal and then click on the window you want info about and it will give it to you.
Hope this helps!
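As an aside, the same tool pointed at the root window reports the geometry of the whole screen:
$ xwininfo -root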
A:
$ xwininfo
xwininfo: Please select the window about which you
would like information by clicking the
mouse in that window.
xwininfo: Window id: 0x1200007 "xeyes"
Absolute upper-left X: 1130
Absolute upper-left Y: 0
Relative upper-left X: 0
Relative upper-left Y: 0
Width: 150
Height: 100
Depth: 24
Visual Class: TrueColor
Border width: 0
Class: InputOutput
Colormap: 0x20 (installed)
Bit Gravity State: NorthWestGravity
Window Gravity State: NorthWestGravity
Backing Store State: NotUseful
Save Under State: no
Map State: IsViewable
Override Redirect State: no
Corners: +1130+0 -0+0 -0-924 +1130-924
-geometry 150x100-0+0
|
X/Gnome: How to measure the geometry of an open window
|
Is there a standard X / Gnome program that will display the X,Y width and depth in pixels of a window that I select? Something similar to the way an xterm shows you the width and depth of the window (in lines) as you resize it.
I'm running on Red Hat Enterprise Linux 4.4.
Thanks!
|
[
"Yes, you're looking for the program 'xwininfo'. Run it in another terminal and then click on the window you want info about and it will give it to you. \nHope this helps! \n",
"$ xwininfo \n\nxwininfo: Please select the window about which you\n would like information by clicking the\n mouse in that window.\n\nxwininfo: Window id: 0x1200007 \"xeyes\"\n\n Absolute upper-left X: 1130\n Absolute upper-left Y: 0\n Relative upper-left X: 0\n Relative upper-left Y: 0\n Width: 150\n Height: 100\n Depth: 24\n Visual Class: TrueColor\n Border width: 0\n Class: InputOutput\n Colormap: 0x20 (installed)\n Bit Gravity State: NorthWestGravity\n Window Gravity State: NorthWestGravity\n Backing Store State: NotUseful\n Save Under State: no\n Map State: IsViewable\n Override Redirect State: no\n Corners: +1130+0 -0+0 -0-924 +1130-924\n -geometry 150x100-0+0\n\n"
] |
[
23,
7
] |
[] |
[] |
[
"gnome",
"linux",
"x11"
] |
stackoverflow_0000073087_gnome_linux_x11.txt
|
Q:
What is this delegate call doing in this line of code (C#)?
This is from an example accompanying the agsXMPP .Net assembly. I've read up on delegates, but am not sure how that fits in with this line of code (which waits for the logon to occur, and then sends a message). I guess what I'm looking for is an understanding of why delegate(object o) accomplishes this, in the kind of simple terms I can understand.
xmpp.OnLogin += delegate(object o) {
xmpp.Send(new Message(new Jid(JID_RECEIVER),
MessageType.chat,
"Hello, how are you?"));
};
A:
It's exactly the same as
xmpp.OnLogin += MyMethod;
Where MyMethod is
public void MyMethod(object o)
{
xmpp.Send(new Message(new Jid(JID_RECEIVER), MessageType.chat, "Hello, how are you?"));
}
A:
As Abe noted, this code is creating an anonymous function. This:
xmpp.OnLogin += delegate(object o)
{
xmpp.Send(
new Message(new Jid(JID_RECEIVER), MessageType.chat, "Hello, how are you?"));
};
would have been accomplished as follows in older versions of .Net (I've excluded class declarations and such, and just kept the essential elements):
delegate void OnLoginEventHandler(object o);
public void MyLoginEventHandler(object o)
{
xmpp.Send(
new Message(new Jid(JID_RECEIVER), MessageType.chat, "Hello, how are you?"));
}
[...]
xmpp.OnLogin += new OnLoginEventHandler(MyLoginEventHandler);
What you're doing in either case is associating a method of yours to run when the xmpp OnLogin event is fired.
A:
OnLogin on xmpp is probably an event declared like this :
public event LoginEventHandler OnLogin;
where LoginEventHandler is as delegate type probably declared as :
public delegate void LoginEventHandler(Object o);
That means that in order to subscribe to the event, you need to provide a method (or an anonymous method / lambda expression) which match the LoginEventHandler delegate signature.
In your example, you pass an anonymous method using the delegate keyword:
xmpp.OnLogin += delegate(object o)
{
xmpp.Send(new Message(new Jid(JID_RECEIVER),
MessageType.chat,
"Hello, how are you?"));
};
The anonymous method matches the delegate signature expected by the OnLogin event (void return type + one object argument). You could also remove the object o parameter leveraging the contravariance, since it is not used inside the anonymous method body.
xmpp.OnLogin += delegate
{
xmpp.Send(new Message(new Jid(JID_RECEIVER),
MessageType.chat,
"Hello, how are you?"));
};
A:
The delegate(object o){..} tells the compiler to package up whatever is inside the brackets as an object to be executed later, in this case when OnLogin is fired. Without the delegate() statement, the compiler would think you are trying to execute an action in the middle of an assignment statement and give you errors.
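In C# 3.0 the same packaged-up code can also be written with lambda syntax - a sketch assuming the same xmpp object and message as above:
xmpp.OnLogin += o => xmpp.Send(new Message(new Jid(JID_RECEIVER),
                                           MessageType.chat,
                                           "Hello, how are you?"));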
A:
That is creating an anonymous function. This feature was introduced in C# 2.0
A:
It serves as an anonymous method, so you don't need to declare it somewhere else. It's very useful.
What it does in that case is to attach that method to the list of actions that are triggered because of the onLogin event.
A:
Agreed with Abe, this is an anonymous method. An anonymous method is just that -- a method without a name, which can be supplied as a parameter argument.
Obviously the OnLogin object is an Event; using an += operator ensures that the method specified by the anonymous delegate above is executed whenever the OnLogin event is raised.
A:
Basically, the code inside the {} will run when the "OnLogin" event of the xmpp event is fired. Based on the name, I'd guess that event fires at some point during the login process.
The syntax:
delegate(object o) { statements; }
is called an anonymous method. The code in your question would be equivalent to this:
public class MyClass
{
private XMPPObjectType xmpp;
public void Main()
{
xmpp.OnLogin += MyMethod;
}
private void MyMethod(object o)
{
xmpp.Send(new Message(new Jid(JID_RECEIVER), MessageType.chat, "Hello, how are you?"));
}
}
A:
You are subscribing to the OnLogin event in xmpp.
This means that when xmpp fires this event, the code inside the anonymous delegate will fire. It's an elegant way to have callbacks.
In Xmpp, something like this is going on:
// Check to see if we should fire the login event
// ALso check to see if anything is subscribed to OnLogin
// (It will be null otherwise)
if (loggedIn && OnLogin != null)
{
// Anyone subscribed will now receive the event.
OnLogin(this);
}
|
What is this delegate call doing in this line of code (C#)?
|
This is from an example accompanying the agsXMPP .Net assembly. I've read up on delegates, but am not sure how that fits in with this line of code (which waits for the logon to occur, and then sends a message). I guess what I'm looking for is an understanding of why delegate(object o) accomplishes this, in the kind of simple terms I can understand.
xmpp.OnLogin += delegate(object o) {
xmpp.Send(new Message(new Jid(JID_RECEIVER),
MessageType.chat,
"Hello, how are you?"));
};
|
[
"It's exactly the same as\nxmpp.OnLogin += EventHandler(MyMethod);\n\nWhere MyMethod is\npublic void MyMethod(object o) \n{ \n xmpp.Send(new Message(new Jid(JID_RECEIVER), MessageType.chat, \"Hello, how are you?\")); \n}\n\n",
"As Abe noted, this code is creating an anonymous function. This:\n\nxmpp.OnLogin += delegate(object o) \n { \n xmpp.Send(\n new Message(new Jid(JID_RECEIVER), MessageType.chat, \"Hello, how are you?\")); \n };\n\nwould have been accomplished as follows in older versions of .Net (I've excluded class declarations and such, and just kept the essential elements):\n\ndelegate void OnLoginEventHandler(object o);\n\npublic void MyLoginEventHandler(object o)\n{\n xmpp.Send(\n new Message(new Jid(JID_RECEIVER), MessageType.chat, \"Hello, how are you?\")); \n}\n\n[...]\n\nxmpp.OnLogin += new OnLoginEventHandler(MyLoginEventHandler);\n\nWhat you're doing in either case is associating a method of yours to run when the xmpp OnLogin event is fired.\n",
"OnLogin on xmpp is probably an event declared like this :\npublic event LoginEventHandler OnLogin;\n\nwhere LoginEventHandler is as delegate type probably declared as :\npublic delegate void LoginEventHandler(Object o);\n\nThat means that in order to subscribe to the event, you need to provide a method (or an anonymous method / lambda expression) which match the LoginEventHandler delegate signature.\nIn your example, you pass an anonymous method using the delegate keyword:\nxmpp.OnLogin += delegate(object o)\n { \n xmpp.Send(new Message(new Jid(JID_RECEIVER), \n MessageType.chat,\n \"Hello, how are you?\")); \n };\n\nThe anonymous method matches the delegate signature expected by the OnLogin event (void return type + one object argument). You could also remove the object o parameter leveraging the contravariance, since it is not used inside the anonymous method body.\nxmpp.OnLogin += delegate\n { \n xmpp.Send(new Message(new Jid(JID_RECEIVER), \n MessageType.chat,\n \"Hello, how are you?\")); \n };\n\n",
"The delegate(object o){..} tells the compiler to package up whatever is inside the brackets as an object to be executed later, in this case when OnLogin is fired. Without the delegate() statement, the compiler would think you are tying to execute an action in the middle of an assignemnt statement and give you errors.\n",
"That is creating an anonymous function. This feature was introduced in C# 2.0\n",
"It serves as an anonymous method, so you don't need to declare it somewhere else. It's very useful.\nWhat it does in that case is to attach that method to the list of actions that are triggered because of the onLogin event.\n",
"Agreed with Abe, this is an anonymous method. An anonymous method is just that -- a method without a name, which can be supplied as a parameter argument.\nObviously the OnLogin object is an Event; using an += operator ensures that the method specified by the anonymous delegate above is executed whenever the OnLogin event is raised.\n",
"Basically, the code inside the {} will run when the \"OnLogin\" event of the xmpp event is fired. Based on the name, I'd guess that event fires at some point during the login process.\nThe syntax:\ndelegate(object o) { statements; }\n\nis a called an anonymous method. The code in your question would be equivilent to this:\npublic class MyClass\n{\n private XMPPObjectType xmpp;\n public void Main()\n {\n xmpp.OnLogin += MyMethod;\n }\n private void MyMethod(object o)\n {\n xmpp.Send(new Message(new Jid(JID_RECEIVER), MessageType.chat, \"Hello, how are you?\"));\n }\n}\n\n",
"You are subscribing to the OnLogin event in xmpp.\nThis means that when xmpp fires this event, the code inside the anonymous delegate will fire. Its an elegant way to have callbacks.\nIn Xmpp, something like this is going on:\n // Check to see if we should fire the login event\n // ALso check to see if anything is subscribed to OnLogin \n // (It will be null otherwise)\n if (loggedIn && OnLogin != null)\n {\n // Anyone subscribed will now receive the event.\n OnLogin(this);\n }\n\n"
] |
[
4,
2,
2,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c#",
"delegates"
] |
stackoverflow_0000073024_c#_delegates.txt
|
Q:
Is it possible to use .htaccess to send six digit number URLs to a script but handle all other invalid URLs as 404s?
Is it possible to use .htaccess to process all six digit URLs by sending them to a script, but handle every other invalid URL as an error 404?
For example:
http://mywebsite.com/132483
would be sent to:
http://mywebsite.com/scriptname.php?no=132483
but
http://mywebsite.com/132483a or
http://mywebsite.com/asdf
would be handled as a 404 error.
I presently have this working via a custom PHP 404 script but it's kind of kludgy. Seems to me that .htaccess might be a more elegant solution, but I haven't been able to figure out if it's even possible.
A:
In your htaccess file, put the following
RewriteEngine On
RewriteRule ^([0-9]{6})$ /scriptname.php?no=$1 [L]
The first line turns the mod_rewrite engine on. The () brackets put the contents into $1 - successive () would populate $2, $3... and so on. The [0-9]{6} says look for a string precisely 6 characters long containing only characters 0-9.
The [L] at the end makes this the last rule - if it applies, rule processing will stop.
Oh, the ^ and $ mark the start and end of the incoming uri.
Hope that helps!
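Worth noting for the 404 half of the question: URLs that don't match the rule above simply fall through to normal handling, which is a 404 when no such file exists. To force it explicitly, something like this should work (a sketch; the R=404 form assumes a newer Apache - on older versions, rewriting to a known-missing path achieves the same):
RewriteEngine On
RewriteRule ^([0-9]{6})$ /scriptname.php?no=$1 [L]

# Anything else that is not a real file or directory -> 404
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ - [R=404,L]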
A:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteRule ^([0-9]{6})$ scriptname.php?no=$1 [L]
</IfModule>
To preserve the clean URL
http://mywebsite.com/132483
while serving scriptname.php use only [L].
Using [R=301] will redirect you to your scriptname.php?no=xxx
You may find this useful http://www.addedbytes.com/download/mod_rewrite-cheat-sheet-v2/pdf/
A:
Yes, it's possible with mod_rewrite. There are tons of good mod_rewrite tutorials online; a quick Google search should turn up your answer in no time.
Basically what you're going to want to do is ensure that the regular expression you use is just looking for digits and no other characters, and ensure the length is 6. Then you'll redirect to scriptname.php?no= with the number you captured.
Hope this helps!
|
Is it possible to use .htaccess to send six digit number URLs to a script but handle all other invalid URLs as 404s?
|
Is it possible to use .htaccess to process all six digit URLs by sending them to a script, but handle every other invalid URL as an error 404?
For example:
http://mywebsite.com/132483
would be sent to:
http://mywebsite.com/scriptname.php?no=132483
but
http://mywebsite.com/132483a or
http://mywebsite.com/asdf
would be handled as a 404 error.
I presently have this working via a custom PHP 404 script but it's kind of kludgy. Seems to me that .htaccess might be a more elegant solution, but I haven't been able to figure out if it's even possible.
|
[
"In your htaccess file, put the following\nRewriteEngine On\nRewriteRule ^([0-9]{6})$ /scriptname.php?no=$1 [L]\n\nThe first line turns the mod_rewrite engine on. The () brackets put the contents into $1 - successive () would populate $2, $3... and so on. The [0-9]{6} says look for a string precisely 6 characters long containing only characters 0-9.\nThe [L] at the end makes this the last rule - if it applies, rule processing will stop.\nOh, the ^ and $ mark the start and end of the incoming uri.\nHope that helps!\n",
"<IfModule mod_rewrite.c>\n RewriteEngine on\n RewriteRule ^([0-9]{6})$ scriptname.php?no=$1 [L]\n</IfModule>\n\nTo preserve the clean URL \nhttp://mywebsite.com/132483\n\nwhile serving scriptname.php use only [L]. \nUsing [R=301] will redirect you to your scriptname.php?no=xxx\nYou may find this useful http://www.addedbytes.com/download/mod_rewrite-cheat-sheet-v2/pdf/ \n",
"Yes it's possible with mod_rewrite. There are tons of good mod_rewrite tutorials online a quick Google search should turn up your answer in no time. \nBasically what you're going to want to do is ensure that the regular expression you use is just looking for digits and no other characters and to ensure the length is 6. Then you'll redirect to scriptname.?no= with the number you captured. \nHope this helps!\n"
] |
[
11,
4,
0
] |
[] |
[] |
[
".htaccess",
"php",
"redirect"
] |
stackoverflow_0000073123_.htaccess_php_redirect.txt
|
Q:
Generating JavaScript stubs from WSDL
I'm looking for a tool to generate a JavaScript stub from a WSDL.
Although I usually prefer to use REST services with JSON or XML, there are some tools I am currently integrating that works only using SOAP.
I already created a first version of the client in JavaScript but I'm parsing the SOAP envelope by hand and I doubt that my code can survive a service upgrade for example, seeing how complex the SOAP envelope specification is.
So is there any tool to automatically generate fully SOAP compliant stubs for JavaScript from the WSDL so I can be more confident on the future of my client code.
More: The web service I try to use is RPC encoded, not document literal.
A:
Apache CXF has tools that generate JavaScript clients that talk soap.
Actually, any CXF service can have a javascript client autogenerated by doing a get to the URL with ?js appended. (just like ?wsld produces the wsdl) There are command line tools as well, but the dynamic generated stuff is kind of neat.
A:
I had to do this myself in the past and I found this CodeProject article. I changed it up some, but it gave me a good foundation to implement everything I needed. One of the main features it already has is generating the SOAP client based off the WSDL. It also has built in caching of the WSDL for multiple calls.
This article also has a custom implementation of XmlHttpRequest for Ajax calls. This is the part that I didn't use. During that time, I think I was using the Prototype JavaScript library and modified the code in this article to use its Ajax functions instead. I just felt more comfortable using Prototype for the ajax calls, because it was widely used and had been tested on all the browsers.
A:
It would probably be an overkill, but NetBeans has this feature.
|
Generating JavaScript stubs from WSDL
|
I'm looking for a tool to generate a JavaScript stub from a WSDL.
Although I usually prefer to use REST services with JSON or XML, there are some tools I am currently integrating that work only using SOAP.
I already created a first version of the client in JavaScript but I'm parsing the SOAP envelope by hand and I doubt that my code can survive a service upgrade for example, seeing how complex the SOAP envelope specification is.
So is there any tool to automatically generate fully SOAP compliant stubs for JavaScript from the WSDL so I can be more confident on the future of my client code.
More: The web service I try to use is RPC encoded, not document literal.
|
[
"Apache CXF has tools that generate JavaScript clients that talk soap.\nActually, any CXF service can have a javascript client autogenerated by doing a get to the URL with ?js appended. (just like ?wsld produces the wsdl) There are command line tools as well, but the dynamic generated stuff is kind of neat.\n",
"I had to do this myself in the past and I found this CodeProject article. I changed it up some, but it gave me a good foundation to implement everything I needed. One of the main features it already has is generating the SOAP client based off the WSDL. It also has built in caching of the WSDL for multiple calls.\nThis article also has a custom implementation of XmlHttpRequest for Ajax calls. This is the part that I didn't use. During that time, I think I was using Prototype javascript library and modified the code in this article to use it's Ajax functions instead. I just felt more comfortable using Prototype for the ajax calls, because it was widely used and had been tested on all the browsers.\n",
"It would probably be an overkill, but NetBeans has this feature.\n"
] |
[
11,
8,
2
] |
[] |
[] |
[
"javascript",
"soap",
"wsdl"
] |
stackoverflow_0000041446_javascript_soap_wsdl.txt
|
Q:
To what use is multiple indirection in C++?
Under what circumstances might you want to use multiple indirection (that is, a chain of pointers as in Foo **) in C++?
A:
Most common usage as @aku pointed out is to allow a change to a pointer parameter to be visible after the function returns.
#include <iostream>
using namespace std;
struct Foo {
int a;
};
void CreateFoo(Foo** p) {
*p = new Foo();
(*p)->a = 12;
}
int main(int argc, char* argv[])
{
Foo* p = NULL;
CreateFoo(&p);
cout << p->a << endl;
delete p;
return 0;
}
This will print
12
But there are several other useful usages as in the following example to iterate an array of strings and print them to the standard output.
#include <iostream>
using namespace std;
int main(int argc, char* argv[])
{
const char* words[] = { "first", "second", NULL };
for (const char** p = words; *p != NULL; ++p) {
cout << *p << endl;
}
return 0;
}
A:
IMO the most common usage is to pass a reference to a pointer variable
void test(int ** var)
{
...
}
int *foo = ...
test(&foo);
You can create a multidimensional jagged array using double pointers:
int ** array = new int*[2];
array[0] = new int[2];
array[1] = new int[3];
A:
One common scenario is where you need to pass a null pointer to a function, have it initialized within that function, and used outside the function. Without multiple indirection, the calling function would never have access to the initialized object.
Consider the following function:
initialize(foo* my_foo)
{
my_foo = new Foo();
}
Any function that calls 'initialize(foo*)' will not have access to the initialized instance of Foo, because the pointer that's passed to this function is a copy. (The pointer itself is passed by value, like any other parameter.)
However, if the function was defined like this:
initialize(foo** my_foo)
{
*my_foo = new Foo();
}
...and it was called like this...
Foo* my_foo;
initialize(&my_foo);
...then the caller would have access to the initialized instance, via 'my_foo' - because it's the address of the pointer that was passed to 'initialize'.
Of course, in my simplified example, the 'initialize' function could simply return the newly created instance via the return keyword, but that does not always suit - maybe the function needs to return something else.
A:
If you pass a pointer in as output parameter, you might want to pass it as Foo** and set its value as *ppFoo = pSomeOtherFoo.
And from the algorithms-and-data-structures department, you can use that double indirection to update pointers, which can be faster than for instance swapping actual objects.
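For instance, a sketch of that pointer-swapping idea (BigThing is a made-up placeholder type):
struct BigThing { char payload[4096]; };

void swap_by_pointer(BigThing **a, BigThing **b)
{
    BigThing *tmp = *a;  // three pointer assignments,
    *a = *b;             // no copying of the 4 KB payloads
    *b = tmp;
}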
A:
A simple example would be using int** foo_mat as a 2d array of integers.
Or you may also use pointers to pointers - let's say that you have a pointer void* foo and two different objects that keep a reference to it through members void** foo_pointer1 and void** foo_pointer2. By holding a pointer to the pointer, each object can check whether *foo_pointer1 == NULL, which indicates that foo itself is NULL. If foo_pointer1 were a regular pointer (a copy of foo), it would not reflect foo later being set to NULL.
I hope that my explanation wasn't too messy :)
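A tiny sketch of that observation:
int some_object = 0;
void *foo = &some_object;
void **watcher1 = &foo;   // two observers of the same pointer variable
void **watcher2 = &foo;

foo = NULL;               // both *watcher1 and *watcher2 now read NULL;
                          // plain copies of foo would still hold the old address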
A:
Usually when you pass a pointer to a function as a return value:
ErrorCode AllocateObject (void **object);
where the function returns a success/failure error code and fills in the object parameter with a pointer to the new object:
*object = new Object;
This is used a lot in COM programming in Win32.
This is more of a C thing to do, in C++ you can often wrap this type of system into a class to make the code more readable.
A:
Carl: Your example should be:
*p = x;
(You have two stars.) :-)
A:
In C, the idiom is absolutely required. Consider the problem in which you want a function to add a string (pure C, so a char *) to an array of pointers to char *. The function prototype requires three levels of indirection:
int AddStringToList(unsigned int *count_ptr, char ***list_ptr, const char *string_to_add);
We call it as follows:
unsigned int the_count = 0;
char **the_list = NULL;
AddStringToList(&the_count, &the_list, "The string I'm adding");
In C++ we have the option of using references instead, which would yield a different signature. But we still need the two levels of indirection you asked about in your original question:
int AddStringToList(unsigned int &count_ptr, char **&list_ptr, const char *string_to_add);
|
To what use is multiple indirection in C++?
|
Under what circumstances might you want to use multiple indirection (that is, a chain of pointers as in Foo **) in C++?
|
[
"Most common usage as @aku pointed out is to allow a change to a pointer parameter to be visible after the function returns.\n#include <iostream>\n\nusing namespace std;\n\nstruct Foo {\n int a;\n};\n\nvoid CreateFoo(Foo** p) {\n *p = new Foo();\n (*p)->a = 12;\n}\n\nint main(int argc, char* argv[])\n{\n Foo* p = NULL;\n CreateFoo(&p);\n cout << p->a << endl;\n delete p;\n return 0;\n}\n\nThis will print\n12\n\nBut there are several other useful usages as in the following example to iterate an array of strings and print them to the standard output.\n#include <iostream>\n\nusing namespace std;\n\nint main(int argc, char* argv[])\n{\n const char* words[] = { \"first\", \"second\", NULL };\n for (const char** p = words; *p != NULL; ++p) {\n cout << *p << endl;\n }\n\n return 0;\n}\n\n",
"IMO most common usage is to pass reference to pointer variable\nvoid test(int ** var)\n{\n ...\n}\n\nint *foo = ...\ntest(&foo);\n\nYou can create multidimensional jagged array using double pointers: \nint ** array = new *int[2];\narray[0] = new int[2];\narray[1] = new int[3];\n\n",
"One common scenario is where you need to pass a null pointer to a function, and have it initialized within that function, and used outside the function. Without multplie indirection, the calling function would never have access to the initialized object.\nConsider the following function:\ninitialize(foo* my_foo)\n{\n my_foo = new Foo();\n}\n\nAny function that calls 'initialize(foo*)' will not have access to the initialized instance of Foo, beacuse the pointer that's passed to this function is a copy. (The pointer is just an integer after all, and integers are passed by value.)\nHowever, if the function was defined like this:\ninitialize(foo** my_foo)\n{\n *my_foo = new Foo();\n}\n\n...and it was called like this...\nFoo* my_foo;\n\ninitialize(&my_foo);\n\n...then the caller would have access to the initialized instance, via 'my_foo' - because it's the address of the pointer that was passed to 'initialize'. \nOf course, in my simplified example, the 'initialize' function could simply return the newly created instance via the return keyword, but that does not always suit - maybe the function needs to return something else.\n",
"If you pass a pointer in as output parameter, you might want to pass it as Foo** and set its value as *ppFoo = pSomeOtherFoo.\nAnd from the algorithms-and-data-structures department, you can use that double indirection to update pointers, which can be faster than for instance swapping actual objects.\n",
"A simple example would be using int** foo_mat as a 2d array of integers.\nOr you may also use pointers to pointers - lets say that you have a pointer void* foo and you have 2 different objects that have a reference to it with the following members: void** foo_pointer1 and void** foo_pointer2, by having a pointer to a pointer you can actually check whether *foo_pointer1 == NULL which indicates that foo is NULL. You wouldn't be able to check whether foo is NULL if foo_pointer1 was a regular pointer.\nI hope that my explanation wasn't too messy :)\n",
"Usually when you pass a pointer to a function as a return value:\nErrorCode AllocateObject (void **object);\n\nwhere the function returns a success/failure error code and fills in the object parameter with a pointer to the new object:\n*object = new Object;\n\nThis is used a lot in COM programming in Win32.\nThis is more of a C thing to do, in C++ you can often wrap this type of system into a class to make the code more readable.\n",
"Carl: Your example should be:\n*p = x;\n\n(You have two stars.) :-)\n",
"In C, the idiom is absolutely required. Consider the problem in which you want a function to add a string (pure C, so a char *) to an array of pointers to char *. The function prototype requires three levels of indirection:\nint AddStringToList(unsigned int *count_ptr, char ***list_ptr, const char *string_to_add);\n\nWe call it as follows:\nunsigned int the_count = 0;\nchar **the_list = NULL;\n\nAddStringToList(&the_count, &the_list, \"The string I'm adding\");\n\nIn C++ we have the option of using references instead, which would yield a different signature. But we still need the two levels of indirection you asked about in your original question:\nint AddStringToList(unsigned int &count_ptr, char **&list_ptr, const char *string_to_add);\n\n"
] |
[
12,
7,
5,
3,
1,
1,
1,
1
] |
[] |
[] |
[
"c++",
"pointers"
] |
stackoverflow_0000071108_c++_pointers.txt
|
Q:
How to make the taskbar blink my application like Messenger does when a new message arrives?
Is there an API call in .NET or a native DLL that I can use to create behaviour similar to Windows Live Messenger's when a response comes from someone I chat with?
A:
FlashWindowEx is the way to go. See here for MSDN documentation
[DllImport("user32.dll")]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool FlashWindowEx(ref FLASHWINFO pwfi);
[StructLayout(LayoutKind.Sequential)]
public struct FLASHWINFO
{
public UInt32 cbSize;
public IntPtr hwnd;
public UInt32 dwFlags;
public UInt32 uCount;
public UInt32 dwTimeout;
}
public const UInt32 FLASHW_ALL = 3;
Calling the Function:
FLASHWINFO fInfo = new FLASHWINFO();
fInfo.cbSize = Convert.ToUInt32(Marshal.SizeOf(fInfo));
fInfo.hwnd = hWnd;
fInfo.dwFlags = FLASHW_ALL;
fInfo.uCount = UInt32.MaxValue;
fInfo.dwTimeout = 0;
FlashWindowEx(ref fInfo);
This was shamelessly plugged from Pinvoke.net
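To stop the flashing later (say, when the user activates the window), the same call with FLASHW_STOP should do it - an untested sketch reusing the struct above:
public const UInt32 FLASHW_STOP = 0;

fInfo.dwFlags = FLASHW_STOP;
FlashWindowEx(ref fInfo);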
A:
HWND hHandle = FindWindow(NULL,"YourApplicationName");
FLASHWINFO pf;
pf.cbSize = sizeof(FLASHWINFO);
pf.hwnd = hHandle;
pf.dwFlags = FLASHW_TIMER|FLASHW_TRAY; // (or FLASHW_ALL to flash and if it is not minimized)
pf.uCount = 8;
pf.dwTimeout = 75;
FlashWindowEx(&pf);
Stolen from experts-exchange member gtokas.
FlashWindowEx.
A:
From a Raymond Chen blog entry:
How do I flash my window caption and taskbar button manually?
How do I flash my window caption and
taskbar button manually? Commenter
Jonathan Scheepers wonders about those
programs that flash their taskbar
button indefinitely, overriding the
default flash count set by
SystemParametersInfo(SPI_SETFOREGROUNDFLASHCOUNT).
The FlashWindowEx function and its
simpler precursor FlashWindow let a
program flash its window caption and
taskbar button manually. The window
manager flashes the caption
automatically (and Explorer follows
the caption by flashing the taskbar
button) if a program calls
SetForegroundWindow when it doesn't
have permission to take foreground,
and it is that automatic flashing that
the SPI_SETFOREGROUNDFLASHCOUNT
setting controls.
For illustration purposes, I'll
demonstrate flashing the caption
manually. This is generally speaking
not recommended, but since you asked,
I'll show you how. And then promise
you won't do it.
Start with the scratch program and
make this simple change:
void
OnSize(HWND hwnd, UINT state, int cx, int cy)
{
if (state == SIZE_MINIMIZED) {
FLASHWINFO fwi = { sizeof(fwi), hwnd,
FLASHW_TIMERNOFG | FLASHW_ALL };
FlashWindowEx(&fwi);
}
}
Compile and run this program, then
minimize it. When you do, its taskbar
button flashes indefinitely until you
click on it. The program responds to
being minimized by calling the
FlashWindowEx function asking for
everything possible (currently the
caption and taskbar button) to be
flashed until the window comes to the
foreground.
Other members of the FLASHWINFO
structure let you customize the
flashing behavior further, such as
controlling the flash frequency and
the number of flashes. And if you
really want to take control, you can
use FLASHW_ALL and FLASHW_STOP to turn
your caption and taskbar button on and
off exactly the way you want it. (Who
knows, maybe you want to send a
message in Morse code.)
Published Monday, May 12, 2008 7:00 AM
by oldnewthing Filed under: Code
A:
The FlashWindowEx Win32 API is the call used to do this. The documentation for it is at:
http://msdn.microsoft.com/en-us/library/ms679347(VS.85).aspx
A:
I believe you're looking for SetForegroundWindow.
|
How to make the taskbar blink my application like Messenger does when a new message arrives?
|
Is there an API call in .NET or a native DLL that I can use to create behaviour similar to Windows Live Messenger's when a response comes from someone I chat with?
|
[
"FlashWindowEx is the way to go. See here for MSDN documentation\n[DllImport(\"user32.dll\")]\n[return: MarshalAs(UnmanagedType.Bool)]\nstatic extern bool FlashWindowEx(ref FLASHWINFO pwfi);\n\n[StructLayout(LayoutKind.Sequential)]\npublic struct FLASHWINFO\n{\n public UInt32 cbSize;\n public IntPtr hwnd;\n public UInt32 dwFlags;\n public UInt32 uCount;\n public UInt32 dwTimeout;\n}\n\npublic const UInt32 FLASHW_ALL = 3; \n\nCalling the Function:\nFLASHWINFO fInfo = new FLASHWINFO();\n\nfInfo.cbSize = Convert.ToUInt32(Marshal.SizeOf(fInfo));\nfInfo.hwnd = hWnd;\nfInfo.dwFlags = FLASHW_ALL;\nfInfo.uCount = UInt32.MaxValue;\nfInfo.dwTimeout = 0;\n\nFlashWindowEx(ref fInfo);\n\nThis was shamelessly plugged from Pinvoke.net\n",
"HWND hHandle = FindWindow(NULL,\"YourApplicationName\");\nFLASHWINFO pf;\npf.cbSize = sizeof(FLASHWINFO);\npf.hwnd = hHandle;\npf.dwFlags = FLASHW_TIMER|FLASHW_TRAY; // (or FLASHW_ALL to flash and if it is not minimized)\npf.uCount = 8;\npf.dwTimeout = 75;\n\nFlashWindowEx(&pf);\n\nStolen from experts-exchange member gtokas.\nFlashWindowEx.\n",
"From a Raymond Chen blog entry: \n\nHow do I flash my window caption and taskbar button manually?\nHow do I flash my window caption and\n taskbar button manually? Commenter\n Jonathan Scheepers wonders about those\n programs that flash their taskbar\n button indefinitely, overriding the\n default flash count set by\n SysteParametersInfo(SPI_SETFOREGROUNDFLASHCOUNT).\nThe FlashWindowEx function and its\n simpler precursor FlashWindow let a\n program flash its window caption and\n taskbar button manually. The window\n manager flashes the caption\n automatically (and Explorer follows\n the caption by flashing the taskbar\n button) if a program calls\n SetForegroundWindow when it doesn't\n have permission to take foreground,\n and it is that automatic flashing that\n the SPI_SETFOREGROUNDFLASHCOUNT\n setting controls.\nFor illustration purposes, I'll\n demonstrate flashing the caption\n manually. This is generally speaking\n not recommended, but since you asked,\n I'll show you how. And then promise\n you won't do it.\nStart with the scratch program and\n make this simple change:\nvoid\nOnSize(HWND hwnd, UINT state, int cx, int cy)\n{\n if (state == SIZE_MINIMIZED) {\n FLASHWINFO fwi = { sizeof(fwi), hwnd,\n FLASHW_TIMERNOFG | FLASHW_ALL };\n FlashWindowEx(&fwi);\n }\n}\n\nCompile and run this program, then\n minimize it. When you do, its taskbar\n button flashes indefinitely until you\n click on it. The program responds to\n being minimzed by calling the\n FlashWindowEx function asking for\n everything possible (currently the\n caption and taskbar button) to be\n flashed until the window comes to the\n foreground.\nOther members of the FLASHWINFO\n structure let you customize the\n flashing behavior further, such as\n controlling the flash frequency and\n the number of flashes. and if you\n really want to take control, you can\n use FLASHW_ALL and FLASHW_STOP to turn\n your caption and taskbar button on and\n off exactly the way you want it. (Who\n knows, maybe you want to send a\n message in Morse code.)\nPublished Monday, May 12, 2008 7:00 AM\n by oldnewthing Filed under: Code\n\n",
"The FlashWindowEx Win32 API is the call used to do this. The documentation for it is at:\nhttp://msdn.microsoft.com/en-us/library/ms679347(VS.85).aspx\n",
"I believe you're looking for SetForegroundWindow.\n"
] |
[
23,
4,
3,
2,
0
] |
[] |
[] |
[
"winapi",
"windows"
] |
stackoverflow_0000073162_winapi_windows.txt
|
Q:
"Could not find file" when using Isolated Storage
I save stuff in an Isolated Storage file (using class IsolatedStorageFile). It works well, and I can retrieve the saved values when calling the saving and retrieving methods in my DAL layer from my GUI layer. However, when I try to retrieve the same settings from another assembly in the same project, it gives me a FileNotFoundException. What am I doing wrong? This is the general concept:
public void Save(int number)
{
IsolatedStorageFile storage = IsolatedStorageFile.GetMachineStoreForAssembly();
IsolatedStorageFileStream fileStream =
new IsolatedStorageFileStream(filename, FileMode.OpenOrCreate, storage);
StreamWriter writer = new StreamWriter(fileStream);
writer.WriteLine(number);
writer.Close();
}
public int Retrieve()
{
IsolatedStorageFile storage = IsolatedStorageFile.GetMachineStoreForAssembly();
IsolatedStorageFileStream fileStream = new IsolatedStorageFileStream(filename, FileMode.Open, storage);
StreamReader reader = new StreamReader(fileStream);
int number;
try
{
string line = reader.ReadLine();
number = int.Parse(line);
}
finally
{
reader.Close();
}
return number;
}
I've tried using all the GetMachineStoreFor* scopes.
EDIT: Since I need several assemblies to access the files, it doesn't seem possible to do with isolated storage, unless it's a ClickOnce application.
A:
When you instantiated the IsolatedStorageFile, did you scope it to IsolatedStorageScope.Machine?
Ok now that you have illustrated your code style and I have gone back to retesting the behaviour of the methods, here is the explanation:
GetMachineStoreForAssembly() - scoped to the machine and the assembly identity. Different assemblies in the same application would have their own isolated storage.
GetMachineStoreForDomain() - a misnomer in my opinion. scoped to the machine and the domain identity on top of the assembly identity. There should have been an option for just AppDomain alone.
GetMachineStoreForApplication() - this is the one you are looking for. I have tested it and different assemblies can pick up the values written in another assembly. The only catch is, the application identity must be verifiable. When running locally, it cannot be properly determined and it will end up with exception "Unable to determine application identity of the caller". It can be verified by deploying the application via Click Once. Only then can this method apply and achieve its desired effect of shared isolated storage.
A:
When you are saving, you are calling GetMachineStoreForDomain, but when you are retrieving, you are calling GetMachineStoreForAssembly.
GetMachineStoreForAssembly is scoped to the assembly that the code is executing in, while GetMachineStoreForDomain is scoped to the currently running AppDomain and the assembly where the code is executing. Just change these calls to GetMachineStoreForApplication, and it should work.
The documentation for IsolatedStorageFile can be found at http://msdn.microsoft.com/en-us/library/system.io.isolatedstorage.isolatedstoragefile_members.aspx
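For illustration, here's the Save method from the question with that one change applied (a minimal sketch; Retrieve changes the same way):
public void Save(int number)
{
    // Application scope is shared by every assembly in the (ClickOnce) application
    IsolatedStorageFile storage = IsolatedStorageFile.GetMachineStoreForApplication();
    IsolatedStorageFileStream fileStream =
        new IsolatedStorageFileStream(filename, FileMode.OpenOrCreate, storage);
    StreamWriter writer = new StreamWriter(fileStream);
    writer.WriteLine(number);
    writer.Close();
}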
|
"Could not find file" when using Isolated Storage
|
I save stuff in an Isolated Storage file (using class IsolatedStorageFile). It works well, and I can retrieve the saved values when calling the saving and retrieving methods in my DAL layer from my GUI layer. However, when I try to retrieve the same settings from another assembly in the same project, it gives me a FileNotFoundException. What am I doing wrong? This is the general concept:
public void Save(int number)
{
IsolatedStorageFile storage = IsolatedStorageFile.GetMachineStoreForAssembly();
IsolatedStorageFileStream fileStream =
new IsolatedStorageFileStream(filename, FileMode.OpenOrCreate, storage);
StreamWriter writer = new StreamWriter(fileStream);
writer.WriteLine(number);
writer.Close();
}
public int Retrieve()
{
IsolatedStorageFile storage = IsolatedStorageFile.GetMachineStoreForAssembly();
IsolatedStorageFileStream fileStream = new IsolatedStorageFileStream(filename, FileMode.Open, storage);
StreamReader reader = new StreamReader(fileStream);
int number;
try
{
string line = reader.ReadLine();
number = int.Parse(line);
}
finally
{
reader.Close();
}
return number;
}
I've tried using all the GetMachineStoreFor* scopes.
EDIT: Since I need several assemblies to access the files, it doesn't seem possible to do with isolated storage, unless it's a ClickOnce application.
|
[
"When you instantiated the IsolatedStorageFile, did you scope it to IsolatedStorageScope.Machine?\nOk now that you have illustrated your code style and I have gone back to retesting the behaviour of the methods, here is the explanation:\n\nGetMachineStoreForAssembly() - scoped to the machine and the assembly identity. Different assemblies in the same application would have their own isolated storage.\nGetMachineStoreForDomain() - a misnomer in my opinion. scoped to the machine and the domain identity on top of the assembly identity. There should have been an option for just AppDomain alone.\nGetMachineStoreForApplication() - this is the one you are looking for. I have tested it and different assemblies can pick up the values written in another assembly. The only catch is, the application identity must be verifiable. When running locally, it cannot be properly determined and it will end up with exception \"Unable to determine application identity of the caller\". It can be verified by deploying the application via Click Once. Only then can this method apply and achieve its desired effect of shared isolated storage.\n\n",
"When you are saving, you are calling GetMachineStoreForDomain, but when you are retrieving, you are calling GetMachineStoreForAssembly.\nGetMachineStoreForAssembly is scoped to the assembly that the code is executing in, while the GetMachineStoreForDomain is scoped to the currently running AppDomain and the assembly where the code is executing. Just change your these calls to GetMachineStoreForApplication, and it should work.\nThe documentation for IsolatedStorageFile can be found at http://msdn.microsoft.com/en-us/library/system.io.isolatedstorage.isolatedstoragefile_members.aspx\n"
] |
[
4,
1
] |
[] |
[] |
[
".net",
"c#"
] |
stackoverflow_0000072626_.net_c#.txt
|
Q:
How do we create an installer that doesn't require administrator permissions?
When creating a setup/MSI with Visual Studio is it possible to make a setup for a simple application that doesn't require administrator permissions to install? If it's not possible under Windows XP, is it possible under Vista?
For example a simple image manipulation application that allows you to paste photos on top of backgrounds. I believe installing to the Program Files folder requires administrator permissions? Can we install in the \AppData folder instead?
The objective is to create an application which will install for users who are not members of the administrators group on the local machine and will not show the UAC prompt on Vista.
I believe a limitation of this method would be that if it installs under the app data folder for the current user, other users couldn't run it.
Update:
Can you package a ClickOnce install in a normal setup.exe type installer? You may ask why we want this - the reason is we have an installer that does a prereq check and installs anything required (such as .NET) and then downloads and executes the MSI. We would like to display a normal installer start screen too even if that's the only thing displayed. We don't mind if the app can only be seen by one user (the user it's installed for).
A:
ClickOnce is a good solution to this problem. If you go to Project Properties > Publish, you can setup settings for this. In particular, "Install Mode and Settings" is good to look at:
The application is available online only -- this is effectively a "run once" application
The application is available offline as well (launchable from Start Menu) -- this installs the app on the PC
You don't actually have to use the ClickOnce web deployment stuff. If you do a Build > Publish, and then zip up the contents of the publish\ folder, you can effectively distribute that as an installer. To make it even smoother, create a self-extracting archive from the folder that automatically runs the setup.exe file.
Even if you install this way, if you opt to use it, the online update will still work for the application. All you have to do is put the ClickOnce files online, and put the URL in the project's Publish properties page.
A:
Vista is more restrictive about this kind of thing, so if you can't do it for XP you can bet Vista won't let you either.
You are right that installing to the program files folder using windows installer requires administrative permissions. In fact, all write access to that folder requires admin permissions, which is why you should no longer store your data in the same folder as your executable.
Fortunately, if you're using .Net you can use ClickOnce deployment instead of an msi, which should allow you to install to a folder in each user's profile without requiring admin permissions.
A:
The only way that I know of to do this is to build a ClickOnce application in .NET 2.0+
If the user of your application has the correct prerequisites installed then the application can just be "launched".
Check out:
Microsoft Family.Show
A:
If UAC is enabled, you can't write to Program Files. Installing to \AppData will indeed only install the program for one user.
However, you must note that any configuration changes that touch the registry probably require administrator privileges (I'd have to double check on that). Off the top of my head, modifications to the desktop background are ultimately stored in HKEY_CURRENT_USER.
|
How do we create an installer that doesn't require administrator permissions?
|
When creating a setup/MSI with Visual Studio is it possible to make a setup for a simple application that doesn't require administrator permissions to install? If it's not possible under Windows XP, is it possible under Vista?
For example a simple image manipulation application that allows you to paste photos on top of backgrounds. I believe installing to the Program Files folder requires administrator permissions? Can we install in the \AppData folder instead?
The objective is to create an application which will install for users who are not members of the administrators group on the local machine and will not show the UAC prompt on Vista.
I believe a limitation of this method would be that if it installs under the app data folder for the current user, other users couldn't run it.
Update:
Can you package a ClickOnce install in a normal setup.exe type installer? You may ask why we want this - the reason is we have an installer that does a prereq check and installs anything required (such as .NET) and then downloads and executes the MSI. We would like to display a normal installer start screen too even if that's the only thing displayed. We don't mind if the app can only be seen by one user (the user it's installed for).
|
[
"ClickOnce is a good solution to this problem. If you go to Project Properties > Publish, you can setup settings for this. In particular, \"Install Mode and Settings\" is good to look at: \n\nThe application is available online only -- this is effectively a \"run once\" application\nThe application is avaiable offline as well (launchable from Start Menu) -- this installs the app on the PC\n\nYou don't actually have to use the ClickOnce web deployment stuff. If you do a Build > Publish, and then zip up the contents of the publish\\ folder, you can effectively distribute that as an installer. To make it even smoother, create a self-extracting archive from the folder that automatically runs the setup.exe file. \nEven if you install this way, if you opt to use it, the online update will still work for the application. All you have to do is put the ClickOnce files online, and put the URL in the project's Publish properties page.\n",
"Vista is more restrictive about this kind of thing, so if you can't do it for XP you can bet Vista won't let you either.\nYou are right that installing to the program files folder using windows installer requires administrative permissions. In fact, all write access to that folder requires admin permsissions, which is why you should no longer store your data in the same folder as your executable.\nFortunately, if you're using .Net you can use ClickOnce deployment instead of an msi, which should allow you to install to a folder in each user's profile without requiring admin permissions.\n",
"The only way that I know of to do this is to build a ClickOnce application in .NET 2.0+\nIf the user of your application has the correct pre-requsits installed then the application can just be \"launched\".\nCheck out: \n\nMicrosoft Family.Show\n\n",
"IF UAC is enabled, you couldn't write to Program Files. Installing to \\AppData will indeed only install the program for one user.\nHowever, you must note that any configuration changes that require changes to the registry probably(I'd have to double check on that) administrator privilege. Off the top of my head modifications to the desktop background are ultimately stored in HKEY_CURRENT_USER.\n"
] |
[
4,
0,
0,
0
] |
[] |
[] |
[
"administrator",
"installation",
"windows_vista",
"windows_xp"
] |
stackoverflow_0000073305_administrator_installation_windows_vista_windows_xp.txt
|
Q:
Best way for a Swing GUI to communicate with domain logic?
I have some domain logic implemented in a number of POJOs. I want to write a Swing user interface to allow the user to initiate and see the results of various domain actions.
What's the best pattern/framework/library for communications between the UI and the domain? This boils down into:
the UI being able to convert a user gesture into a domain action
the domain being able to send state/result information back to the UI for display purposes
I'm aware of MVC as a broad concept and have fiddled with the Observer pattern (whose Java implementation has some drawbacks if I understand correctly), but I'm wondering if there's an accepted best practise for this problem?
A:
Definitely MVC - something like this example which clearly splits things out. The problem with the Swing examples is that they seem to show the MVC all working within the swing stuff, which does not seem right to me
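To make that concrete, here's a minimal sketch (all the class names here are invented) where the domain object stays Swing-free and the view wires user gestures to domain actions through a plain listener:
// Domain side: a plain POJO plus a tiny listener interface -- no Swing imports.
interface CounterListener {
    void counterChanged(int newValue);
}

class Counter {
    private int value;
    private final java.util.List<CounterListener> listeners =
            new java.util.ArrayList<CounterListener>();

    void addListener(CounterListener l) { listeners.add(l); }

    void increment() {                        // the "domain action"
        value++;
        for (CounterListener l : listeners) {
            l.counterChanged(value);          // push state back for display
        }
    }
}

// UI side: turns a button press into the domain action and renders the result.
class CounterPanel extends javax.swing.JPanel {
    CounterPanel(final Counter counter) {
        final javax.swing.JLabel label = new javax.swing.JLabel("0");
        javax.swing.JButton button = new javax.swing.JButton("Increment");
        button.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent e) {
                counter.increment();          // gesture -> domain
            }
        });
        counter.addListener(new CounterListener() {
            public void counterChanged(int newValue) {
                label.setText(String.valueOf(newValue));  // domain -> view
            }
        });
        add(button);
        add(label);
    }
}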
A:
MVC is fantastic for an individual widget, however it gets a little unruly when you have pages and forms with lots of widgets.
One thing that might be worth looking into (and I'm not endorsing it, I haven't actually used it, just implemented something very similar for myself) is the Beans Binding Framework (JSR295)
A:
I have used the Observer pattern (using AspectJ magic) in the past with some success, but found that unless you were careful it quickly became a cluster.. uhh.. flick?
It quickly became hard to manage and most importantly extremely hard to debug.
Edit:
To expand slightly on my answer, we were using SWT, not Swing, so YMMV. We basically used AspectJ to hook up the transference of data from the UI components to the model objects. These model objects were dumb POJOs.
Actual business logic was done by 'watching' the model objects with AspectJ and firing off the required event if they changed. So if you changed a value in a textbox AspectJ would fire and copy that value into a POJO. If that field in the POJO had an event on it for business logic that would then fire. If that logic modified any POJOs (and it could) AspectJ would notice and copy the value from the POJO into the UI component.
|
Best way for a Swing GUI to communicate with domain logic?
|
I have some domain logic implemented in a number of POJOs. I want to write a Swing user interface to allow the user to initiate and see the results of various domain actions.
What's the best pattern/framework/library for communications between the UI and the domain? This boils down into:
the UI being able to convert a user gesture into a domain action
the domain being able to send state/result information back to the UI for display purposes
I'm aware of MVC as a broad concept and have fiddled with the Observer pattern (whose Java implementation has some drawbacks if I understand correctly), but I'm wondering if there's an accepted best practise for this problem?
|
[
"Definitely MVC - something like this example which clearly splits things out. The problem with the Swing examples is that they seem to show the MVC all working within the swing stuff, which does not seem right to me\n",
"MVC is fantastic for an individual widget, however it gets a little unruly when you have pages and forms with lots of widgets.\nOne thing that might be worth looking into (and I'm not endorsing it, I haven't actually used it, just implemented something very similar for myself) is the Beans Binding Framework (JSR295)\n",
"I have used the Observer pattern (using AspectJ magic) in the past with some success, but found that unless you were careful it quickly became a cluster.. uhh.. flick?\nIt quickly became hard to manage and most importantly extremely hard to debug.\nEdit:\nTo expand slightly on my answer, we were using SWT, not Swing, so YMMV. We basically used AspectJ to hook up the transference of data from the UI components to the model objects. These model objects were dumb POJOs.\nActual business logic was done by 'watching' the model objects with AspectJ and firing off the required event if they changed. So if you changed a value in a textbox AspectJ would fire and copy that value into a POJO. If that field in the POJO had an event on it for business logic that would then fire. If that logic modified any POJOs (and it could) AspectJ would notice and copy the value from the POJO into the UI component.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"java",
"swing",
"user_interface"
] |
stackoverflow_0000069927_java_swing_user_interface.txt
|
Q:
How do you get a reference to the enclosing class from an anonymous inner class in Java?
I'm currently creating an explicit reference to this in the outer class so that I have a name to refer to in the anonymous inner class. Is there a better way to do this?
A:
I just found this recently. Use OuterClassName.this.
class Outer {
void foo() {
new Thread() {
public void run() {
Outer.this.bar();
}
}.start();
}
void bar() {
System.out.println("BAR!");
}
}
Updated: If you just want the object itself (instead of invoking members), then Outer.this is the way to go.
A:
Use EnclosingClass.this
A:
You can still use Outer.class to get the class of the outer class object (which will return the same Class object as Outer.this.getClass() but is more efficient)
If you want to access statics in the enclosing class, you can use Outer.name where name is the static field or method.
|
How do you get a reference to the enclosing class from an anonymous inner class in Java?
|
I'm currently creating an explicit reference to this in the outer class so that I have a name to refer to in the anonymous inner class. Is there a better way to do this?
|
[
"I just found this recently. Use OuterClassName.this.\nclass Outer {\n void foo() {\n new Thread() {\n public void run() {\n Outer.this.bar();\n }\n }.start();\n }\n void bar() {\n System.out.println(\"BAR!\");\n }\n}\n\nUpdated If you just want the object itself (instead of invoking members), then Outer.this is the way to go.\n",
"Use EnclosingClass.this\n",
"You can still use Outer.class to get the class of the outer class object (which will return the same Class object as Outer.this.getClass() but is more efficient)\nIf you want to access statics in the enclosing class, you can use Outer.name where name is the static field or method.\n"
] |
[
93,
19,
1
] |
[] |
[] |
[
"java",
"oop"
] |
stackoverflow_0000031201_java_oop.txt
|
Q:
dll/runner that returns TAP output for an NUnit test suite?
Anyone know if there's a dll/runner anywhere that returns TAP output from an NUnit test suite?
A:
Seems unlikely to me, since there is an impedance mismatch. TAP has no concept for what NUnit calls a test, and what TAP calls a test usually corresponds to an NUnit assertion, but not precisely. So I’m not sure how the thing you’re looking for would work at all. (But maybe a heuristic could work well enough.)
A:
At the very least, a simple pass/fail for each TestFixture run would allow the output to be sucked into other TAP results for aggregating results/reports. Maybe it's as simple as an XSLT to transform the XML report into TAP.
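For what it's worth, a rough, untested sketch of such a transform, assuming the NUnit 2.x result format where each test-case element carries name and success attributes (skipped tests would still need a TAP "# SKIP" directive on top of this):
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <!-- the TAP plan line: 1..N -->
    <xsl:text>1..</xsl:text>
    <xsl:value-of select="count(//test-case)"/>
    <xsl:text>&#10;</xsl:text>
    <xsl:for-each select="//test-case">
      <xsl:if test="@success != 'True'">not </xsl:if>
      <xsl:text>ok </xsl:text>
      <xsl:value-of select="position()"/>
      <xsl:text> - </xsl:text>
      <xsl:value-of select="@name"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>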
|
dll/runner that returns TAP output for an NUnit test suite?
|
Anyone know if there's a dll/runner anywhere that returns TAP output from an NUnit test suite?
|
[
"Seems unlikely to me, since there is an impedance mismatch. TAP has no concept for what NUnit calls a test, and what TAP calls a test usually corresponds to an NUnit assertion, but not precisely. So I’m not sure how the thing you’re looking for would work at all. (But maybe a heuristic could work well enough.)\n",
"At the very least, a simple pass/fail for each TestFixture run would allow the output to be sucked into other TAP results for aggregating results/reports. Maybe it's as simple as a xslt to transform the xml report into TAP\n"
] |
[
1,
0
] |
[] |
[] |
[
"nunit",
"tap",
"testing"
] |
stackoverflow_0000071848_nunit_tap_testing.txt
|
Q:
Using the docstring from one method to automatically overwrite that of another method
The problem: I have a class which contains a template method execute which calls another method _execute. Subclasses are supposed to overwrite _execute to implement some specific functionality. This functionality should be documented in the docstring of _execute.
Advanced users can create their own subclasses to extend the library. However, another user dealing with such a subclass should only use execute, so he won't see the correct docstring if he uses help(execute).
Therefore it would be nice to modify the base class in such a way that in a subclass the docstring of execute is automatically replaced with that of _execute. Any ideas how this might be done?
I was thinking of metaclasses to do this, to make this completely transparent to the user.
A:
Well, if you don't mind copying the original method in the subclass, you can use the following technique.
import new
def copyfunc(func):
return new.function(func.func_code, func.func_globals, func.func_name,
func.func_defaults, func.func_closure)
class Metaclass(type):
def __new__(meta, name, bases, attrs):
for key in attrs.keys():
if key[0] == '_':
skey = key[1:]
for base in bases:
original = getattr(base, skey, None)
if original is not None:
copy = copyfunc(original)
copy.__doc__ = attrs[key].__doc__
attrs[skey] = copy
break
return type.__new__(meta, name, bases, attrs)
class Class(object):
__metaclass__ = Metaclass
def execute(self):
'''original doc-string'''
return self._execute()
class Subclass(Class):
def _execute(self):
'''sub-class doc-string'''
pass
A:
Is there a reason you can't override the base class's execute function directly?
class Base(object):
def execute(self):
...
class Derived(Base):
def execute(self):
"""Docstring for derived class"""
Base.execute(self)
...stuff specific to Derived...
If you don't want to do the above:
Method objects don't support writing to the __doc__ attribute, so you have to change __doc__ in the actual function object. Since you don't want to override the one in the base class, you'd have to give each subclass its own copy of execute:
class Derived(Base):
def execute(self):
return Base.execute(self)
    def _execute(self):
"""Docstring for subclass"""
...
execute.__doc__= _execute.__doc__
but this is similar to a roundabout way of redefining execute...
A:
Look at the functools.wraps() decorator; it does all of this, but I don't know offhand if you can get it to run in the right context
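For example, reusing the class names from the other answers (a sketch, not a drop-in fix for the metaclass approach):
import functools

class Base(object):
    def execute(self):
        '''original doc-string'''
        return self._execute()

class Sub(Base):
    def _execute(self):
        '''sub-class doc-string'''
        pass

# wraps() copies __doc__ (and __name__ etc.) from _execute onto the wrapper
Sub.execute = functools.wraps(Sub._execute)(lambda self: Base.execute(self))

print Sub.execute.__doc__   # prints "sub-class doc-string"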
A:
Well the doc-string is stored in __doc__ so it wouldn't be too hard to re-assign it based on the doc-string of _execute after the fact.
Basically:
class MyClass(object):
def execute(self):
'''original doc-string'''
self._execute()
class SubClass(MyClass):
def _execute(self):
'''sub-class doc-string'''
pass
# re-assign doc-string of execute
def execute(self,*args,**kw):
        return MyClass.execute(self, *args, **kw)
execute.__doc__=_execute.__doc__
Execute has to be re-declared so that the doc string gets attached to the version of execute for the SubClass and not for MyClass (which would otherwise interfere with other sub-classes).
That's not a very tidy way of doing it, but from the POV of the user of a library it should give the desired result. You could then wrap this up in a meta-class to make it easier for people who are sub-classing.
A:
I agree that the simplest, most Pythonic way of approaching this is to simply redefine execute in your subclasses and have it call the execute method of the base class:
class Sub(Base):
def execute(self):
"""New docstring goes here"""
return Base.execute(self)
This is very little code to accomplish what you want; the only downside is that you must repeat this code in every subclass that extends Base. However, this is a small price to pay for the behavior you want.
If you want a sloppy and verbose way of making sure that the docstring for execute is dynamically generated, you can use the descriptor protocol, which would be significantly less code than the other proposals here. This is annoying because you can't just set a descriptor on an existing function, which means that execute must be written as a separate class with a __call__ method.
Here's the code to do this, but keep in mind that my above example is much simpler and more Pythonic:
class Executor(object):
__doc__ = property(lambda self: self.inst._execute.__doc__)
def __call__(self):
return self.inst._execute()
class Base(object):
execute = Executor()
class Sub(Base):
def __init__(self):
self.execute.inst = self
def _execute(self):
"""Actually does something!"""
return "Hello World!"
spam = Sub()
print spam.execute.__doc__ # prints "Actually does something!"
help(spam) # the execute method says "Actually does something!"
|
Using the docstring from one method to automatically overwrite that of another method
|
The problem: I have a class which contains a template method execute which calls another method _execute. Subclasses are supposed to overwrite _execute to implement some specific functionality. This functionality should be documented in the docstring of _execute.
Advanced users can create their own subclasses to extend the library. However, another user dealing with such a subclass should only use execute, so he won't see the correct docstring if he uses help(execute).
Therefore it would be nice to modify the base class in such a way that in a subclass the docstring of execute is automatically replaced with that of _execute. Any ideas how this might be done?
I was thinking of metaclasses to do this, to make this completely transparent to the user.
|
[
"Well, if you don't mind copying the original method in the subclass, you can use the following technique.\nimport new\n\ndef copyfunc(func):\n return new.function(func.func_code, func.func_globals, func.func_name,\n func.func_defaults, func.func_closure)\n\nclass Metaclass(type):\n def __new__(meta, name, bases, attrs):\n for key in attrs.keys():\n if key[0] == '_':\n skey = key[1:]\n for base in bases:\n original = getattr(base, skey, None)\n if original is not None:\n copy = copyfunc(original)\n copy.__doc__ = attrs[key].__doc__\n attrs[skey] = copy\n break\n return type.__new__(meta, name, bases, attrs)\n\nclass Class(object):\n __metaclass__ = Metaclass\n def execute(self):\n '''original doc-string'''\n return self._execute()\n\nclass Subclass(Class):\n def _execute(self):\n '''sub-class doc-string'''\n pass\n\n",
"Is there a reason you can't override the base class's execute function directly?\nclass Base(object):\n def execute(self):\n ...\n\nclass Derived(Base):\n def execute(self):\n \"\"\"Docstring for derived class\"\"\"\n Base.execute(self)\n ...stuff specific to Derived...\n\nIf you don't want to do the above:\nMethod objects don't support writing to the __doc__ attribute, so you have to change __doc__ in the actual function object. Since you don't want to override the one in the base class, you'd have to give each subclass its own copy of execute:\nclass Derived(Base):\n def execute(self):\n return Base.execute(self)\n\n class _execute(self):\n \"\"\"Docstring for subclass\"\"\"\n ...\n\n execute.__doc__= _execute.__doc__\n\nbut this is similar to a roundabout way of redefining execute...\n",
"Look at the functools.wraps() decorator; it does all of this, but I don't know offhand if you can get it to run in the right context\n",
"Well the doc-string is stored in __doc__ so it wouldn't be too hard to re-assign it based on the doc-string of _execute after the fact.\nBasically:\n\n\nclass MyClass(object):\n def execute(self):\n '''original doc-string'''\n self._execute()\n\nclass SubClass(MyClass):\n def _execute(self):\n '''sub-class doc-string'''\n pass\n\n # re-assign doc-string of execute\n def execute(self,*args,**kw):\n return MyClass.execute(*args,**kw)\n execute.__doc__=_execute.__doc__\n\n\n\nExecute has to be re-declared to that the doc string gets attached to the version of execute for the SubClass and not for MyClass (which would otherwise interfere with other sub-classes).\nThat's not a very tidy way of doing it, but from the POV of the user of a library it should give the desired result. You could then wrap this up in a meta-class to make it easier for people who are sub-classing.\n",
"I agree that the simplest, most Pythonic way of approaching this is to simply redefine execute in your subclasses and have it call the execute method of the base class:\nclass Sub(Base):\n def execute(self):\n \"\"\"New docstring goes here\"\"\"\n return Base.execute(self)\n\nThis is very little code to accomplish what you want; the only downside is that you must repeat this code in every subclass that extends Base. However, this is a small price to pay for the behavior you want.\nIf you want a sloppy and verbose way of making sure that the docstring for execute is dynamically generated, you can use the descriptor protocol, which would be significantly less code than the other proposals here. This is annoying because you can't just set a descriptor on an existing function, which means that execute must be written as a separate class with a __call__ method.\nHere's the code to do this, but keep in mind that my above example is much simpler and more Pythonic:\nclass Executor(object):\n __doc__ = property(lambda self: self.inst._execute.__doc__)\n\n def __call__(self):\n return self.inst._execute()\n\nclass Base(object):\n execute = Executor()\n\nclass Sub(Base):\n def __init__(self):\n self.execute.inst = self\n\n def _execute(self):\n \"\"\"Actually does something!\"\"\"\n return \"Hello World!\"\n\nspam = Sub()\nprint spam.execute.__doc__ # prints \"Actually does something!\"\nhelp(spam) # the execute method says \"Actually does something!\"\n\n"
] |
[
4,
2,
1,
0,
0
] |
[] |
[] |
[
"metaclass",
"python"
] |
stackoverflow_0000071817_metaclass_python.txt
|
Q:
What's the best way to handle long running process in an ASP.Net application?
In my web application there is a process that queries data from all over the web, filters it, and saves it to the database. As you can imagine this process takes some time. My current solution is to increase the page timeout and give an AJAX progress bar to the user while it loads. This is a problem for two reasons - 1) it still takes too long and the user must wait 2) it sometimes still times out.
I've dabbled in threading the process and have read I should async post it to a web service ("Fire and forget").
Some references I've read:
- MSDN
- Fire and Forget
So my question is - what is the best method?
UPDATE: After the user inputs their data I would like to redirect them to the results page that incrementally updates as the process is running in the background.
A:
To avoid excessive architecture astronomy, I often use a hidden iframe to call the long running process and stream back progress information. Coupled with something like jsProgressBarHandler, you can pretty easily create great out-of-band progress indication for longer tasks where a generic progress animation doesn't cut it.
In your specific situation, you may want to use one LongRunningProcess.aspx call per task, to avoid those page timeouts.
For example, call LongRunningProcess.aspx?taskID=1 to kick it off and then at the end of that task, emit a
document.location = "LongRunningProcess.aspx?taskID=2".
Ad nauseum.
A:
We had a similar issue and solved it by starting the work via an asynchronous web service call (which meant that the user did not have to wait for the work to finish). The web service then started a SQL Job which performed the work and periodically updated a table with the status of the work. We provided a UI which allowed the user to query the table.
A:
I ran into this exact problem at my last job. The best way I found was to fire off an asynchronous process, and notify the user when it's done (email or something else). Making them wait that long is going to be problematic because of timeouts and wasted productivity for them. Having them wait for a progress bar can give them a false sense of security that they can cancel the process when they close the browser, which may not be the case depending on how you set up the system.
A:
How are you querying the remote data?
How often does it change?
Are the results something that could be cached for a period of time?
How long a period of time are we actually talking about here?
The 'best method' is likely to depend in some way on the answers to these questions...
A:
You can create another thread and store a reference to the thread in the session or application state, depending on whether the thread can run only once per website, or once per user session.
You can then redirect the user to a page where he can monitor the threads progress. You can set the page to refresh automatically, or display a refresh button to the user.
Upon completion of the thread, you can send an email to the user.
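For what it's worth, a minimal sketch of that approach (the page, button and key names are made up; note the worker thread has no HttpContext, so the session object is captured up front -- reasonable only with in-proc session state):
protected void StartButton_Click(object sender, EventArgs e)
{
    var session = Session;                            // capture for the worker thread
    var worker = new System.Threading.Thread(() =>
    {
        // ... query, filter and save the data here ...
        session["JobStatus"] = "Done";
    });
    worker.IsBackground = true;
    session["JobStatus"] = "Running";
    worker.Start();
    Response.Redirect("JobProgress.aspx");            // a page that reads Session["JobStatus"]
}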
A:
My solution to this has been an out-of-band service that does these queries and caches the results in the db.
When the person asks for something the first time, they get a bit of a wait, and then it shows up; but if they refresh, it's immediate, and because it's in the db, it's now part of the hourly update for the next 24 hours from the last request.
A:
Add the job, with its relevant parameters, to a job queue table. Then, write a windows service that will pick up these jobs and process them, save the results to an appropriate location, and email the requester with a link to the results. It is also a nice touch to give some sort of a UI so the user can check the status of their job(s).
This way is much better than launching a separate thread or increasing the timeout, especially if your application is larger and needs to scale, as you can simply add multiple servers to process jobs if necessary.
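For illustration, a rough sketch of the polling loop such a service might run (the Job type, the helper methods and the table semantics behind them are all invented here):
private void PollForJobs()
{
    while (!stopRequested)                        // flag set by the service's OnStop
    {
        Job job = FetchNextPendingJob();          // hypothetical: claims the oldest 'Pending' row
        if (job == null)
        {
            System.Threading.Thread.Sleep(5000);  // nothing queued; wait and poll again
            continue;
        }
        try
        {
            RunJob(job);                          // the actual long-running work
            MarkJobComplete(job);                 // hypothetical: store status + result location
            EmailRequester(job);                  // link the user to the results
        }
        catch (Exception ex)
        {
            MarkJobFailed(job, ex.Message);       // hypothetical: record the failure
        }
    }
}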
|
What's the best way to handle long running process in an ASP.Net application?
|
In my web application there is a process that queries data from all over the web, filters it, and saves it to the database. As you can imagine this process takes some time. My current solution is to increase the page timeout and give an AJAX progress bar to the user while it loads. This is a problem for two reasons - 1) it still takes too long and the user must wait 2) it sometimes still times out.
I've dabbled in threading the process and have read I should async post it to a web service ("Fire and forget").
Some references I've read:
- MSDN
- Fire and Forget
So my question is - what is the best method?
UPDATE: After the user inputs their data I would like to redirect them to the results page that incrementally updates as the process is running in the background.
|
[
"To avoid excessive architecture astronomy, I often use a hidden iframe to call the long running process and stream back progress information. Coupled with something like jsProgressBarHandler, you can pretty easily create great out-of-band progress indication for longer tasks where a generic progress animation doesn't cut it.\nIn your specific situation, you may want to use one LongRunningProcess.aspx call per task, to avoid those page timeouts. \nFor example, call LongRunningProcess.aspx?taskID=1 to kick it off and then at the end of that task, emit a \ndocument.location = \"LongRunningProcess.aspx?taskID=2\". \n\nAd nauseum.\n",
"We had a similar issue and solved it by starting the work via an asychronous web service call (which meant that the user did not have to wait for the work to finish). The web service then started a SQL Job which performed the work and periodically updated a table with the status of the work. We provided a UI which allowed the user to query the table.\n",
"I ran into this exact problem at my last job. The best way I found was to fire off an asychronous process, and notify the user when it's done (email or something else). Making them wait that long is going to be problematic because of timeouts and wasted productivity for them. Having them wait for a progress bar can give them false sense of security that they can cancel the process when they close the browser which may not be the case depending on how you set up the system. \n",
"\nHow are you querying the remote data?\nHow often does it change?\nAre the results something that could be cached for a period of time?\nHow long a period of time are we actually talking about here?\n\nThe 'best method' is likely to depend in some way on the answers to these questions...\n",
"You can create another thread and store a reference to the thread in the session or application state, depending on wether the thread can run only once per website, or once per user session.\nYou can then redirect the user to a page where he can monitor the threads progress. You can set the page to refresh automatically, or display a refresh button to the user.\nUpon completion of the thread, you can send an email to the user.\n",
"My solution to this, has been an out of band service that does these and caches them in db.\nWhen the person asks for something the first time, they get a bit of a wait, and then it shows up but if they refresh, its immediate, and then, because its int he db, its now part of the hourly update for the next 24 hours from the last request.\n",
"Add the job, with its relevant parameters, to a job queue table. Then, write a windows service that will pick up these jobs and process them, save the results to an appropriate location, and email the requester with a link to the results. It is also a nice touch to give some sort of a UI so the user can check the status of their job(s).\nThis way is much better than launching a seperate thread or increasing the timeout, especially if your application is larger and needs to scale, as you can simply add multiple servers to process jobs if necessary.\n"
] |
[
7,
3,
1,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"asp.net",
"asp.net_ajax",
"c#"
] |
stackoverflow_0000073039_.net_asp.net_asp.net_ajax_c#.txt
|
Q:
Making a game in C++ using parallel processing
I wanted to "emulate" a popular flash game, Chrontron, in C++ and needed some help getting started. (NOTE: Not for release, just practicing for myself)
Basics:
Player has a time machine. On each iteration of using the time machine, a parallel state
is created, co-existing with a previous state. One of the states must complete all the
objectives of the level before ending the stage. In addition, all the stages must be able
to end the stage normally, without causing a state paradox (wherein they should have
been able to finish the stage normally but, due to the interactions of another state,
were not).
So, that sort of explains how the game works. You should play it a bit to really
understand what my problem is.
I'm thinking a good way to solve this would be to use linked lists to store each state,
which will probably either be a hash map, based on time, or a linked list that iterates
based on time. I'm still unsure.
ACTUAL QUESTION:
Now that I have some rough specs, I need some help deciding on which data structures to use for this, and why. Also, I want to know what Graphics API/Layer I should use to do this: SDL, OpenGL, or DirectX (my current choice is SDL). And how would I go about implementing parallel states? With parallel threads?
EDIT (To clarify more):
OS -- Windows (since this is a hobby project, may do this in Linux later)
Graphics -- 2D
Language -- C++ (must be C++ -- this is practice for a course next semester)
Q-Unanswered: SDL : OpenGL : Direct X
Q-Answered: Avoid Parallel Processing
Q-Answered: Use STL to implement time-step actions.
So far from what people have said, I should:
1. Use STL to store actions.
2. Iterate through actions based on time-step.
3. Forget parallel processing -- period. (But I'd still like some pointers as to how it
could be used and in what cases it should be used, since this is for practice).
Appending to the question, I've mostly used C#, PHP, and Java before so I wouldn't describe myself as a hotshot programmer. What C++ specific knowledge would help make this project easier for me? (ie. Vectors?)
A:
What you should do is first to read and understand the "fixed time-step" game loop (Here's a good explanation: http://www.gaffer.org/game-physics/fix-your-timestep).
Then what you do is to keep a list of list of pairs of frame counter and action. STL example:
std::list<std::list<std::pair<unsigned long, Action> > > state;
Or maybe a vector of lists of pairs. To create the state, for every action (player interaction) you store the frame number and what action is performed, most likely you'd get the best results if action simply was "key <X> pressed" or "key <X> released":
state.back().push_back(std::make_pair(currentFrame, VK_LEFT | KEY_PRESSED));
To play back the previous states, you'd have to reset the frame counter every time the player activates the time machine and then iterate through the state list for each previous state and see if any matches the current frame. If there is, perform the action for that state.
To optimize you could keep a list of iterators to where you are in each previous state-list. Here's some pseudo-code for that:
typedef std::list<std::pair<unsigned long, Action> > StateList;
std::list<StateList::iterator> stateIteratorList;
//
foreach(it in stateIteratorList)
{
    if((*it)->first == currentFrame)
    {
        performAction((*it)->second);
        ++(*it);
    }
}
I hope you get the idea...
Separate threads would simply complicate the matter greatly, this way you get the same result every time, which you cannot guarantee by using separate threads (can't really see how that would be implemented) or a non-fixed time-step game loop.
When it comes to graphics API, I'd go with SDL as it's probably the easiest thing to get you started. You can always use OpenGL from SDL later on if you want to go 3D.
A:
This sounds very similar to Braid. You really don't want parallel processing for this - parallel programming is hard, and for something like this, performance should not be an issue.
Since the game state vector will grow very quickly (probably on the order of several kilobytes per second, depending on the frame rate and how much data you store), you don't want a linked list, which has a lot of overhead in terms of space (and can introduce big performance penalties due to cache misses if it is laid out poorly). For each parallel timeline, you want a vector data structure. You can store each parallel timeline in a linked list. Each timeline knows at what time it began.
To run the game, you iterate through all active timelines and perform one frame's worth of actions from each of them in lockstep. No need for parallel processing.
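A sketch of that layout (assuming an Action type like the one in the first answer):
#include <list>
#include <vector>

struct Timeline {
    unsigned long startFrame;       // frame at which this incarnation began
    std::vector<Action> actions;    // index i = what happened at frame startFrame + i
};

std::list<Timeline> timelines;      // one entry per use of the time machine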
A:
I have played this game before. I don't necessarily think parallel processing is the way to go. You have shared objects in the game (levers, boxes, elevators, etc) that will need to be shared between processes, possibly with every delta, thereby reducing the effectiveness of the parallelism.
I would personally just keep a list of actions, then for each subsequent iteration start interleaving them together. For example, if the list is in the format of <[iteration.action]> then the 3rd time thru would execute actions 1.1, 2.1, 3.1, 1.2, 2.2, 3.2, etc.
A:
After briefly glossing over the description, I think you have the right idea, I would have a state object that holds the state data, and place this into a linked list...I don't think you need parallel threads...
as far as the graphics API goes, I have only used OpenGL, and can say that it is pretty powerful and has a good C/C++ API. OpenGL would also be more cross-platform, as you can use the Mesa library on *nix computers.
A:
A very interesting game idea. I think you are right that parallel computing would be beneficial to this design, but no more than any other high-resource program.
The question is a bit ambiguous. I see that you are going to write this in C++ but what OS are you coding it for? Do you intend on it being cross-platform, and what kind of graphics would you like, ie 3D, 2D, high end, web based?
So basically we need a lot more information.
A:
Parallel processing isn't the answer. You should simply "record" the players actions then play them back for the "previous actions"
So you create a vector (singly linked list) of vectors that holds the actions. Simply store the frame number that the action was taken (or the delta) and complete that action on the "dummy bot" that represents the player during that particular instance. You simply loop through the states and trigger them one after another.
You get a side effect of easily "breaking" the game when a state paradox happens simply because the next action fails.
A:
Unless you're desperate to use C++ for your own education, you should definitely look at XNA for your game & graphics framework (it uses C#). It's completely free, it does a lot of things for you, and soon you'll be able to sell your game on Xbox Live.
To answer your main question, nothing that you can already do in Flash would ever need to use more than one thread. Just store a list of positions in an array and loop through with a different offset for each robot.
|
Making a game in C++ using parallel processing
|
I wanted to "emulate" a popular flash game, Chrontron, in C++ and needed some help getting started. (NOTE: Not for release, just practicing for myself)
Basics:
Player has a time machine. On each iteration of using the time machine, a parallel state
is created, co-existing with a previous state. One of the states must complete all the
objectives of the level before ending the stage. In addition, all the stages must be able
to end the stage normally, without causing a state paradox (wherein they should have
been able to finish the stage normally but, due to the interactions of another state,
were not).
So, that sort of explains how the game works. You should play it a bit to really
understand what my problem is.
I'm thinking a good way to solve this would be to use linked lists to store each state,
which will probably either be a hash map, based on time, or a linked list that iterates
based on time. I'm still unsure.
ACTUAL QUESTION:
Now that I have some rough specs, I need some help deciding on which data structures to use for this, and why. Also, I want to know what Graphics API/Layer I should use to do this: SDL, OpenGL, or DirectX (my current choice is SDL). And how would I go about implementing parallel states? With parallel threads?
EDIT (To clarify more):
OS -- Windows (since this is a hobby project, may do this in Linux later)
Graphics -- 2D
Language -- C++ (must be C++ -- this is practice for a course next semester)
Q-Unanswered: SDL : OpenGL : Direct X
Q-Answered: Avoid Parallel Processing
Q-Answered: Use STL to implement time-step actions.
So far from what people have said, I should:
1. Use STL to store actions.
2. Iterate through actions based on time-step.
3. Forget parallel processing -- period. (But I'd still like some pointers as to how it
could be used and in what cases it should be used, since this is for practice).
Appending to the question, I've mostly used C#, PHP, and Java before so I wouldn't describe myself as a hotshot programmer. What C++ specific knowledge would help make this project easier for me? (ie. Vectors?)
|
[
"What you should do is first to read and understand the \"fixed time-step\" game loop (Here's a good explanation: http://www.gaffer.org/game-physics/fix-your-timestep).\nThen what you do is to keep a list of list of pairs of frame counter and action. STL example:\nstd::list<std::list<std::pair<unsigned long, Action> > > state;\n\nOr maybe a vector of lists of pairs. To create the state, for every action (player interaction) you store the frame number and what action is performed, most likely you'd get the best results if action simply was \"key <X> pressed\" or \"key <X> released\":\nstate.back().push_back(std::make_pair(currentFrame, VK_LEFT | KEY_PRESSED));\n\nTo play back the previous states, you'd have to reset the frame counter every time the player activates the time machine and then iterate through the state list for each previous state and see if any matches the current frame. If there is, perform the action for that state.\nTo optimize you could keep a list of iterators to where you are in each previous state-list. Here's some pseudo-code for that:\ntypedef std::list<std::pair<unsigned long, Action> > StateList;\nstd::list<StateList::iterator> stateIteratorList;\n//\nforeach(it in stateIteratorList)\n{\n if(it->first == currentFrame)\n {\n performAction(it->second);\n ++it;\n }\n}\n\nI hope you get the idea...\nSeparate threads would simply complicate the matter greatly, this way you get the same result every time, which you cannot guarantee by using separate threads (can't really see how that would be implemented) or a non-fixed time-step game loop.\nWhen it comes to graphics API, I'd go with SDL as it's probably the easiest thing to get you started. You can always use OpenGL from SDL later on if you want to go 3D.\n",
"This sounds very similar to Braid. You really don't want parallel processing for this - parallel programming is hard, and for something like this, performance should not be an issue.\nSince the game state vector will grow very quickly (probably on the order of several kilobytes per second, depending on the frame rate and how much data you store), you don't want a linked list, which has a lot of overhead in terms of space (and can introduce big performance penalties due to cache misses if it is laid out poorly). For each parallel timeline, you want a vector data structure. You can store each parallel timeline in a linked list. Each timeline knows at what time it began.\nTo run the game, you iterate through all active timelines and perform one frame's worth of actions from each of them in lockstep. No need for parallel processing.\n",
"I have played this game before. I don't necessarily think parallel processing is the way to go. You have shared objects in the game (levers, boxes, elevators, etc) that will need to be shared between processes, possibly with every delta, thereby reducing the effectiveness of the parallelism.\nI would personally just keep a list of actions, then for each subsequent iteration start interleaving them together. For example, if the list is in the format of <[iteration.action]> then the 3rd time thru would execute actions 1.1, 2.1, 3.1, 1.2, 2.2, 3.3, etc.\n",
"After briefly glossing over the description, I think you have the right idea, I would have a state object that holds the state data, and place this into a linked list...I don't think you need parallel threads...\nas far as the graphics API, I have only used opengl, and can say that it is pretty powerful and has a good C / C++ API, opengl would also be more cross platform as you can use the messa library on *Nix computers. \n",
"A very interesting game idea. I think you are right that parrellel computing would be benefical to this design, but no more then any other high resource program. \nThe question is a bit ambigous. I see that you are going to write this in C++ but what OS are you coding it for? Do you intend on it being cross platform and what kind of graphics would you like, ie 3D, 2D, high end, web based.\nSo basically we need a lot more information.\n",
"Parallel processing isn't the answer. You should simply \"record\" the players actions then play them back for the \"previous actions\"\nSo you create a vector (singly linked list) of vectors that holds the actions. Simply store the frame number that the action was taken (or the delta) and complete that action on the \"dummy bot\" that represents the player during that particular instance. You simply loop through the states and trigger them one after another.\nYou get a side effect of easily \"breaking\" the game when a state paradox happens simply because the next action fails.\n",
"Unless you're desperate to use C++ for your own education, you should definitely look at XNA for your game & graphics framework (it uses C#). It's completely free, it does a lot of things for you, and soon you'll be able to sell your game on Xbox Live.\nTo answer your main question, nothing that you can already do in Flash would ever need to use more than one thread. Just store a list of positions in an array and loop through with a different offset for each robot.\n"
] |
[
6,
5,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"c++",
"directx",
"graphics",
"opengl",
"sdl"
] |
stackoverflow_0000073117_c++_directx_graphics_opengl_sdl.txt
|
Q:
IList to IQueryable
I have an IList and I'd like to wrap it into an IQueryable.
Is this possible?
A:
List<int> list = new List<int>() { 1, 2, 3, 4, };
IQueryable<int> query = list.AsQueryable();
If you don't see the AsQueryable() method, add a using statement for System.Linq.
A:
Use the AsQueryable<T>() extension method.
|
IList to IQueryable
|
I have an IList and I'd like to wrap it into an IQueryable.
Is this possible?
|
[
"List<int> list = new List<int>() { 1, 2, 3, 4, };\nIQueryable<int> query = list.AsQueryable();\n\nIf you don't see the AsQueryable() method, add a using statement for System.Linq.\n",
"Use the AsQueryable<T>() extension method.\n"
] |
[
122,
12
] |
[] |
[] |
[
"iqueryable",
"linq_to_objects"
] |
stackoverflow_0000073542_iqueryable_linq_to_objects.txt
|
Q:
How can I transfer my domains from my existing registrar/hosting service to something like GoDaddy?
Will I have to pay again? I have about 9 months left before renewal but my current provider doesn't offer many options / control panels.
Update: thanks for everyone's help - I've finally completed this now.
I had to:
Ask my old registrar to "Unlock" the domain
Ask my old registrar to set the admin email address of the domain to my email
Ask my old registrar for the "authcode"
For the rest I just followed GoDaddy's instructions
What a pain in the a**
A:
This is how it works:
Let's say you have 9 more months before your current domain expires
you transfer the domain to GoDaddy (or to any other decent Registrar)
you will be charged the price (little more or equal) to the price of booking a new domain
BUT, you will have the domain for 9 months + one year (or the no. of years you paid godaddy for)
So, you choose to transfer the domain and pay USD9.99 (for a year), you will have the domains for 1 year + 9 months
A:
I did this when I had to switch hosts from awful, unreliable Fuitadnet. They managed the domain for me so I emailed them that I wanted to transfer my domain. (I transferred to GoDaddy.)
I don't remember all of the details, but I seem to recall it was a multiple-handshake process. First, they had to get my current registrar to release the domain; this involved having an email sent to me so I could confirm I actually wanted to release the domain. Then, I got a confirmation code that I sent to the new registrar, who did something or the other and came back with a new confirmation code. Once I entered the final confirmation code, the domain belonged to the new registrar. It took a few days and for some reason my first set of codes didn't work, but I found GoDaddy was pretty good at explaining what was going on.
I did have to pay a transfer fee, but the registration retained its length. I opted to renew it because there was a discount at the time.
If you contact your current host/registrar and they should be able to help you out; this was one of the few times I actually got good service out of fuitadnet.
A:
You must have the domain unlocked with your current registrar and make sure that your contact information is up to date.
You can then have the new registrar submit a transfer request. This will result in you being sent a notification (assuming your contact information is accurate).
You will have to follow the directions in that transfer request email.
The domain may take up to a few days to fully transfer to the new registrar.
When you transfer a domain, you are effectively extending the registration for another year so you will be charged the standard transfer/registration fee.
If you have any questions, you can always contact the company you would like to become the new registrar. I am sure they would be able to walk you through their process exactly.
A:
Just so you know, GoDaddy as a company has a somewhat dubious reputation. I personally have never had any problems with them but I have only a few low-profile sites and have never done anything even remotely complicated with the DNS.
A:
There is a domain transfer procedure. It's kind of complex, since it's intended to keep people from stealing domains by transferring them to another registrar (like happened to sex.com back in the 90's). GoDaddy does a good job of talking you through it (I've transferred a domain to them in the past). Of course, you're going to have to pay them to register the domain for you (though they occasionally offer discounts on domain transfers).
A:
You probably will have to pay. Check with your current registrar and with your target registrar to see what needs to be done with them and what the costs are.
It is different for every registrar, even though the actual process is the same.
A:
They charge you to transfer (like $6.99?), but GoDaddy will then renew it for a year. You usually need to contact your current host and have them release it for transfer, then follow GoDaddy's procedure for transferring a new domain in.
A:
You may need to pay, but when I switched from Register.com to goDaddy.com I paid a very small amount to transfer (like $10) and also renewed the domain for another 2 years. (This turned out to be much cheaper than renewing with Register.com)
A:
Yeah!! I was charged by my new registrar when I moved. Also remember you should have the secret key (transfer code) before you start your transfer process.
A:
Depends on the host also. If you go with a company like BlueHost, for example, they'll give you free domain transfers from the losing registrar. They will, however, ask if you want to renew your domain with them, which will cost you.
|
How can I transfer my domains from my existing registrar/hosting service to something like GoDaddy?
|
Will I have to pay again? I have about 9 months left before renewal but my current provider doesn't offer many options / control panels.
Update: thanks for everyone's help - I've finally completed this now.
I had to:
Ask my old registrar to "Unlock" the domain
Ask my old registrar to set the admin email address of the domain to my email
Ask my old registrar for the "authcode"
For the rest I just followed GoDaddy's instructions
What a pain in the a**
|
[
"This is how it works\nLets say you have 9 more months for your current domain to expire\nyou transfer the domain to GoDaddy (or to any other decent Registrar)\nyou will be charged the price (little more or equal) to the price of booking a new domain\nBUT, you will have the domain for 9 months + one year (or the no. of years you paid godaddy for)\nSo, you choose to transfer the domain and pay USD9.99 (for a year), you will have the domains for 1 year + 9 months\n",
"I did this when I had to switch hosts from awful, unreliable Fuitadnet. They managed the domain for me so I emailed them that I wanted to transfer my domain. (I transferred to GoDaddy.)\nI don't remember all of the details, but I seem to recall it was a multiple-handshake process. First, they had to get my current registrar to release the domain; this involved having an email sent to me so I could confirm I actually wanted to release the domain. Then, I got a confirmation code that I sent to the new registrar, who did something or the other and came back with a new confirmation code. Once I entered the final confirmation code, the domain belonged to the new registrar. It took a few days and for some reason my first set of codes didn't work, but I found GoDaddy was pretty good at explaining what was going on.\nI did have to pay a transfer fee, but the registration retained its length. I opted to renew it because there was a discount at the time.\nIf you contact your current host/registrar and they should be able to help you out; this was one of the few times I actually got good service out of fuitadnet.\n",
"You must have the domain unlocked with your current registrar and make sure that your contact information is up to date.\nYou can then have the new registrar submit a transfer request. This will result in you being sent a notification (assuming your contact information is accurate).\nYou will have to follow the directions in that transfer request email.\nThe domain may take up to a few days to fully transfer to the new registrar.\nWhen you transfer a domain, you are effectively extending the registration for another year so you will be charged the standard transfer/registration fee.\nIf you have any questions, you can always contact the company you would like to become the new registrar. I am sure they would be able to walk you through their process exactly.\n",
"Just so you know, GoDaddy as a company has a somewhat dubious reputation. I personally have never had any problems with them but I have only a few low-profile sites and have never done anything even remotely complicated with the DNS.\n",
"There is a domain transfer procedure. It's kind of complex, since it's intended to keep people from stealing domains by transferring them to another registrar (like happened to sex.com back in the 90's). GoDaddy does a good job of talking you through it (I've transferred a domain to them in the past). Of course, you're going to have to pay them to register the domain for you (though they occasionally offer discounts on domain transfers).\n",
"You probably will have to pay. If you check with your current registrar and with your target registar and see what needs to be done with them and what the costs are.\nIt is diferent for every registar, even though the actual process is the same.\n",
"they charge you to transfer (like 6.99?), but godaddy will then renew it for a year. You usually need to contact your current hosting and have them release it for transfer, then follow godaddy's procedure for transferring a new domain in.\n",
"You may need to pay, but when I switched from Register.com to goDaddy.com I paid a very small amount to transfer (like $10) and also renewed the domain for another 2 years. (This turned out to be much cheaper than renewing with Register.com) \n",
"Yeah!! I was charged by my new registrar when I moved. Also remember you should have the secret key (transfer code) before you start your transfer process.\n",
"Depends on the host also. If you go with a company like BlueHost, for example, they'll give you free domain transfers from the losing registrar. I think, in fact, they may give you free transfers. They will and ask if you want to renew your domain with them which will cost you, though.\n"
] |
[
2,
1,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"dns"
] |
stackoverflow_0000073432_dns.txt
|
Q:
Not able to delete directory
I am having frequent problems with my web hosting (it's shared).
I am not able to delete or change permissions for a particular directory. The response is:
Cannot delete. Directory may not be empty
I checked the permissions and they look OK. There are hundreds of files in this folder which I don't want.
I contacted my support and they solved it saying it was permission issue. But it reappeared. Any suggestions?
The server is Linux.
A:
You can't rmdir a directory with files in it. You must first rm all files and subdirectories. Many times, the easiest solution is:
$ rm -rf old_directory
It's entirely possible that some of the files or subdirectories have permission limitations that might prevent them from being removed. Occasionally, this can be solved with:
$ chmod -R +w old_directory
But I suspect that's what your support people did earlier.
A:
This could also be because your FTP client might not be showing the hidden files (like cache files, or any hidden files that your application might create), while the hidden files are preventing you from deleting the directory. (Though, in your case, I am not sure if this is the cause... it could be a permission issue with your hosting provider: the webserver running as another user (like apache or www) combined with your directories having global write perms.)
A:
I assume that's a response from an FTP server?
Usually, a message from an FTP server really means it. If it says the directory is not empty, there might be certain files you cannot see that exist in the directory, which may be due to one of the following:
Your PHP/JSP/ASP/whatever scripts may run under a different user account thus creating files which you may not be able to see/delete
Is your hosting's web interface run under your FTP account? There might be conflicting permissions there if you manage some files from the web interface and then later via FTP.
Hosting server/operating system files created unintentionally e.g. from the hosting's web interface
If it comes from a script, write a one-time throw-away script that deletes the files and the directory, then upload and execute it.
And just to be sure: some FTP servers don't support direct directory deletion; you need to delete all the files first. Is that the case?
|
Not able to delete directory
|
I am having frequent problems with my web hosting (it's shared).
I am not able to delete or change permissions for a particular directory. The response is:
Cannot delete. Directory may not be empty
I checked the permissions and they look OK. There are hundreds of files in this folder which I don't want.
I contacted my support and they solved it saying it was permission issue. But it reappeared. Any suggestions?
The server is Linux.
|
[
"You can't rmdir a directory with files in it. You must first rm all files and subdirectories. Many times, the easiest solution is:\n$ rm -rf old_directory\n\nIt's entirely possible that some of the files or subdirectories have permission limitations that might prevent them from being removed. Occasionally, this can be solved with:\n$ chmod -R +w old_directory\n\nBut I suspect that's what your support people did earlier.\n",
"This could also be because your FTP client might not be showing the hidden files (like cache, or any hiddn files that your application might create), while the hidden files are preventing you from deleting the directory. (though, in your case, I am not sure if this is the cause .. .it could be permission issue with your hosting provider.. Webserver running as another user (like apache or www) combined with your directories having global write perms).\n",
"I assume that's a response from an FTP server?\nUsually, a message from an FTP server really means it. If it says the directory is not empty, there might be certain files you cannot see that exists in the directory which maybe one of:\n\nYour PHP/JSP/ASP/whatever scripts may run under a different user account thus creating files which you may not be able to see/delete\nIs your hosting's web interface run under your FTP account? There might be conflicting permissions there if you manage some files from the web interface and then later via FTP.\nHosting server/operating system files created unintentionally e.g. from the hosting's web interface\n\nIf it comes from a script, write a one-time throw-away script that delete the files and that directory and then uploads and executes it.\nAnd just to be sure, some FTP server doesn't support direct directory deletion, you need all the files first, is that the case?\n"
] |
[
4,
0,
0
] |
[] |
[] |
[
"file_permissions",
"linux",
"web_hosting"
] |
stackoverflow_0000073474_file_permissions_linux_web_hosting.txt
|
Q:
Jabber Openfire server v3.6.0a+ - how do I use Hybrid authentication?
I'm setting up a Jabber server for my website. I've already got some user accounts in place in the openfire database, and working IMs between them.
I'm now looking to add (some) of the users from my main database (members table, with login, password [plain text], and allowed_to_IM [0 or 1] fields) to allow them to communicate between themselves. Hybrid authentication is a new feature in v3.6.0a, however, and there's little documentation on what configuration is required in the openfire.xml file for the database connectivity (to a second database), and what else may go in the properties (which have also taken much of the config's info out of the XML file).
My question is: does anyone have a complete example that checks multiple databases? All the examples I've seen seem to be just fragments.
A:
I have it using ldap and mysql and if it helps you my setting from openfire.xml are:
<connectionProvider>
<className>org.jivesoftware.database.DefaultConnectionProvider</className>
</connectionProvider>
<database>
<defaultProvider>
<driver>com.mysql.jdbc.Driver</driver>
<serverURL>jdbc:mysql://127.0.0.1:3306/openfire</serverURL>
<username>username</username>
<password>pass</password>
<minConnections>5</minConnections>
<maxConnections>15</maxConnections>
<connectionTimeout>1.0</connectionTimeout>
</defaultProvider>
</database>
<ldap>
ldapsetting removed
</ldap>
<hybridAuthProvider>
<primaryProvider>
<className>org.jivesoftware.openfire.auth.DefaultAuthProvider</className>
</primaryProvider>
<secondaryProvider>
<className>org.jivesoftware.openfire.ldap.LdapAuthProvider</className>
</secondaryProvider>
</hybridAuthProvider>
<provider>
<auth>
<className>org.jivesoftware.openfire.auth.HybridAuthProvider</className>
</auth>
<vcard>
<className>org.jivesoftware.openfire.auth.DefaultAuthProvider</className>
</vcard>
<user>
<className>org.jivesoftware.openfire.ldap.LdapUserProvider</className>
</user>
<auth>
<className>org.jivesoftware.openfire.ldap.LdapAuthProvider</className>
</auth>
<group>
<className>org.jivesoftware.openfire.ldap.LdapGroupProvider</className>
</group>
</provider>
|
Jabber Openfire server v3.6.0a+ - how do I use Hybrid authentication?
|
I'm setting up a Jabber server for my website. I've already got some user accounts in place in the openfire database, and working IMs between them.
I'm now looking to add (some) of the users from my main database (members table, with login, password [plain text], and allowed_to_IM [0 or 1] fields) to allow them to communicate between themselves. Hybrid authentication is a new feature in v3.6.0a, however, and there's little documentation on what configuration is required in the openfire.xml file for the database connectivity (to a second database), and what else may go in the properties (which have also taken much of the config's info out of the XML file).
My question is: does anyone have a complete example that checks multiple databases? All the examples I've seen seem to be just fragments.
|
[
"I have it using ldap and mysql and if it helps you my setting from openfire.xml are:\n <connectionProvider>\n <className>org.jivesoftware.database.DefaultConnectionProvider</className>\n </connectionProvider>\n <database>\n <defaultProvider>\n <driver>com.mysql.jdbc.Driver</driver>\n <serverURL>jdbc:mysql://127.0.0.1:3306/openfire</serverURL>\n <username>username</username>\n <password>pass</password>\n <minConnections>5</minConnections>\n <maxConnections>15</maxConnections>\n <connectionTimeout>1.0</connectionTimeout>\n </defaultProvider>\n </database>\n <ldap>\n ldapsetting removed\n </ldap>\n <hybridAuthProvider>\n <primaryProvider>\n <className>org.jivesoftware.openfire.auth.DefaultAuthProvider</className>\n </primaryProvider>\n <secondaryProvider>\n <className>org.jivesoftware.openfire.ldap.LdapAuthProvider</className>\n </secondaryProvider>\n </hybridAuthProvider>\n <provider>\n <auth>\n <className>org.jivesoftware.openfire.auth.HybridAuthProvider</className>\n </auth>\n <vcard>\n <className>org.jivesoftware.openfire.auth.DefaultAuthProvider</className>\n </vcard>\n <user>\n <className>org.jivesoftware.openfire.ldap.LdapUserProvider</className>\n </user>\n <auth>\n <className>org.jivesoftware.openfire.ldap.LdapAuthProvider</className>\n </auth>\n <group>\n <className>org.jivesoftware.openfire.ldap.LdapGroupProvider</className>\n </group>\n </provider>\n\n"
] |
[
3
] |
[] |
[] |
[
"configuration",
"openfire",
"xmpp"
] |
stackoverflow_0000064364_configuration_openfire_xmpp.txt
|
Q:
BOM not expected in CF but sent by IIS/SharePoint
I'm trying to consume a SharePoint webservice from ColdFusion via cfinvoke ('cause I don't want to deal with (read: parse) the SOAP response itself).
The SOAP response includes a byte-order-mark character (BOM), which produces the following exception in CF:
"Cannot perform web service invocation GetList.
The fault returned when invoking the web service operation is:
'AxisFault
faultCode: {http://www.w3.org/2003/05/soap-envelope}Server.userException
faultSubcode:
faultString: org.xml.sax.SAXParseException: Content is not allowed in prolog."
The standard for UTF-8 encoding optionally includes the BOM character (http://unicode.org/faq/utf_bom.html#29). Microsoft almost universally includes the BOM character with UTF-8 encoded streams. From what I can tell there’s no way to change that in IIS. The XML parser that JRun (ColdFusion) uses by default doesn’t handle the BOM character for UTF-8 encoded XML streams. So, it appears that the way to fix this is to change the XML parser used by JRun (http://www.bpurcell.org/blog/index.cfm?mode=entry&entry=942).
Adobe says that it doesn't handle the BOM character (see comments from anonymous and halL on May 2nd and 5th).
http://livedocs.adobe.com/coldfusion/8/htmldocs/Tags_g-h_09.html#comments
A:
I'm going to say that the answer to your question (is it possible?) is no. I don't know that definitively, but the poster who commented just above halL (in the comments on this page) gave a work-around for the problem -- so I assume it is possible to deal with when parsing manually.
You say that you're using CFInvoke because you don't want to deal with the soap response yourself. It looks like you don't have any choice.
A:
As Adam Tuttle said already, the workaround is on the page that you linked to
<!--- Remove BOM from the start of the string, if it exists --->
<cfif Left(responseText, 1) EQ chr(65279)>
<cfset responseText = mid(responseText, 2, len(responseText) - 1)>
</cfif>
A:
It sounds like ColdFusion is using Apache Axis under the covers.
This doesn't apply exactly to your solution, but I've had to deal with this issue once before when consuming a .NET web service with Apache Axis/Java. The only solution I was able to find (since the owner of the web service was unwilling to change anything on his end) was to write a Handler class that Axis would plug into the pipeline which would delete the BOM from the message if it existed.
So perhaps it's possible to configure Axis through ColdFusion? If so you can add additional Handlers to the message handling flow.
|
BOM not expected in CF but sent by IIS/SharePoint
|
I'm trying to consume a SharePoint webservice from ColdFusion via cfinvoke ('cause I don't want to deal with (read: parse) the SOAP response itself).
The SOAP response includes a byte-order-mark character (BOM), which produces the following exception in CF:
"Cannot perform web service invocation GetList.
The fault returned when invoking the web service operation is:
'AxisFault
faultCode: {http://www.w3.org/2003/05/soap-envelope}Server.userException
faultSubcode:
faultString: org.xml.sax.SAXParseException: Content is not allowed in prolog."
The standard for UTF-8 encoding optionally includes the BOM character (http://unicode.org/faq/utf_bom.html#29). Microsoft almost universally includes the BOM character with UTF-8 encoded streams. From what I can tell there’s no way to change that in IIS. The XML parser that JRun (ColdFusion) uses by default doesn’t handle the BOM character for UTF-8 encoded XML streams. So, it appears that the way to fix this is to change the XML parser used by JRun (http://www.bpurcell.org/blog/index.cfm?mode=entry&entry=942).
Adobe says that it doesn't handle the BOM character (see comments from anonymous and halL on May 2nd and 5th).
http://livedocs.adobe.com/coldfusion/8/htmldocs/Tags_g-h_09.html#comments
|
[
"I'm going to say that the answer to your question (is it possible?) is no. I don't know that definitively, but the poster who commented just above halL (in the comments on this page) gave a work-around for the problem -- so I assume it is possible to deal with when parsing manually.\nYou say that you're using CFInvoke because you don't want to deal with the soap response yourself. It looks like you don't have any choice.\n",
"As Adam Tuttle said already, the workaround is on the page that you linked to\n<!--- Remove BOM from the start of the string, if it exists --->\n<cfif Left(responseText, 1) EQ chr(65279)>\n<cfset responseText = mid(xmlText, 2, len(responseText))>\n</cfif>\n\n",
"It sounds like ColdFusion is using Apache Axis under the covers.\nThis doesn't apply exactly to your solution, but I've had to deal with this issue once before when consuming a .NET web service with Apache Axis/Java. The only solution I was able to find (since the owner of the web service was unwilling to change anything on his end) was to write a Handler class that Axis would plug into the pipeline which would delete the BOM from the message if it existed. \nSo perhaps it's possible to configure Axis through ColdFusion? If so you can add additional Handlers to the message handling flow.\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
"byte_order_mark",
"coldfusion",
"iis",
"sharepoint",
"web_services"
] |
stackoverflow_0000056812_byte_order_mark_coldfusion_iis_sharepoint_web_services.txt
|
Q:
What is a good algorithm for deciding whether a passed in amount can be built additively from a set of numbers?
Possible Duplicate:
Algorithm to find which numbers from a list of size n sum to another number
What is a good algorithm for deciding whether a passed in amount can be built additively from a set of numbers? In my case, I am determining whether a certain currency amount (such as $40) can be met by adding up some combination of a set of bills (such as $5, $10 and $20 bills). That is a simple example, but the algorithm needs to work for the generic case where the bill set can differ over time (due to running out of a bill) or due to bill denominations differing by currency. The problem would apply to a foreign exchange teller at an airport.
So $50 can be met with a set of ($20 and $30), but cannot be met with a set of ($20 and $40).
In addition. If the amount cannot be met with the bill denominations available, how do you determine the closest amounts above and below which can be met?
A:
You are looking for the coin change problem:
http://en.wikipedia.org/wiki/Coin_problem
http://www.egr.unlv.edu/~jjtse/CS477/DP%20Coin%20Change.html
http://www.algorithmist.com/index.php/Coin_Change
A:
This seems closely related to the Subset Sum Problem, which is NP-Complete in general.
A:
Start with the largest bills and work down. With each denomination, start with the largest number of those bills and work down. You might need fewer of a large denomination because you need multiple smaller ones to hit a value on the head.
A:
Sum = 100
Bills = (40,30,20,10)
Number of 40's = 100 / 40 = 2
Remainder = 100 % 40 = 20
Number of 30's = 20 / 30 = 0
Remainder = 20 % 30 = 20
Number of 20's = 20 / 20 = 1
Remainder = 20 % 20 = 0
As soon as remainder = 0 you can stop.
If you run out of bills then you can't make it up and need to go to the second part, which is how close you can get. This is an optimization problem; the dynamic-programming table sketched below answers it directly.
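Since the greedy approach above can fail for some denomination sets (for example, making 60 from bills of 40 and 30: greedy takes the 40 and then gets stuck, although 30 + 30 works), a small dynamic-programming table handles the general case. Here is a minimal sketch in C++; the names are hypothetical, and it assumes each denomination can be used any number of times (with limited bill counts you would update the table once per physical bill instead):
#include <cstddef>
#include <vector>

// reachable[v] ends up true exactly when amount v can be built
// by adding up bills from the given denominations.
std::vector<bool> ReachableAmounts(int maxAmount, const std::vector<int>& bills)
{
    std::vector<bool> reachable(maxAmount + 1, false);
    reachable[0] = true; // an amount of zero is always makeable
    for (int value = 1; value <= maxAmount; ++value)
        for (std::size_t i = 0; i < bills.size(); ++i)
            if (bills[i] <= value && reachable[value - bills[i]])
                reachable[value] = true;
    return reachable;
}

Scanning the table downward and upward from the requested amount then gives the closest amounts below and above that can actually be met.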
A:
You know - You asked this exact same question twice now.
What is a good non-recursive algorithm for deciding whether a passed in amount can be built additively from a set of numbers?
|
What is a good algorithm for deciding whether a passed in amount can be built additively from a set of numbers?
|
Possible Duplicate:
Algorithm to find which numbers from a list of size n sum to another number
What is a good algorithm for deciding whether a passed in amount can be built additively from a set of numbers? In my case, I am determining whether a certain currency amount (such as $40) can be met by adding up some combination of a set of bills (such as $5, $10 and $20 bills). That is a simple example, but the algorithm needs to work for the generic case where the bill set can differ over time (due to running out of a bill) or due to bill denominations differing by currency. The problem would apply to a foreign exchange teller at an airport.
So $50 can be met with a set of ($20 and $30), but cannot be met with a set of ($20 and $40).
In addition. If the amount cannot be met with the bill denominations available, how do you determine the closest amounts above and below which can be met?
|
[
"You are looking for the coin change problem:\n\nhttp://en.wikipedia.org/wiki/Coin_problem\nhttp://www.egr.unlv.edu/~jjtse/CS477/DP%20Coin%20Change.html\nhttp://www.algorithmist.com/index.php/Coin_Change\n\n",
"This seems closely related to the Subset Sum Problem, which is NP-Complete in general.\n",
"Start with the largest bills and work down. With each denomination, start with the largest number of those bills and work down. You might need fewer of a large denomination because you need multiple smaller ones to hit a value on the head.\n",
"Sum = 100\nBills = (40,30,20,10)\nNumber of 40's = 100 / 40 = 2\nRemainder = 100 % 40 = 20\nNumber of 30's = 20 / 30 = 0\nRemainder = 20 % 30 = 20\nNumber of 20's = 20 / 20 = 1\nRemainder = 20 % 20 = 0\nAs soon as remainder = 0 you can stop.\nIf you run out of bills then you can't make it up and need to go to the second part which is how close can you get. This is a minimization problem that can be solved with Linear algebra methods (I'm a little rusty on that)\n",
"You know - You asked this exact same question twice now. \nWhat is a good non-recursive algorithm for deciding whether a passed in amount can be built additively from a set of numbers?\n"
] |
[
1,
1,
0,
0,
0
] |
[] |
[] |
[
"algorithm"
] |
stackoverflow_0000067004_algorithm.txt
|
Q:
Embed data in a C++ program
I've got a C++ program that uses SQLite. I want to store the SQL queries in a separate file -- a plain-text file, not a source code file -- but embed that file in the executable file like a resource.
(This has to run on Linux, so I can't store it as an actual resource as far as I know, though that would be perfect if it were for Windows.)
Is there any simple way to do it, or will it effectively require me to write my own resource system for Linux? (Easily possible, but it would take a lot longer.)
A:
You can use objcopy to bind the contents of the file to a symbol your program can use. See, for instance, here for more information.
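Roughly, that looks like the sketch below with a GNU toolchain; the file name and target are assumptions, and objcopy derives the symbol names from the input file name (dots become underscores):
// Hypothetical build step, assuming an x86-64 ELF target:
//   objcopy -I binary -O elf64-x86-64 -B i386:x86-64 queries.sql queries.o
// Link queries.o into the program, then reference the generated symbols:
#include <string>

extern "C" const char _binary_queries_sql_start[];
extern "C" const char _binary_queries_sql_end[];

std::string LoadQueries()
{
    // The two symbols mark the first byte and one-past-the-end of the data.
    return std::string(_binary_queries_sql_start, _binary_queries_sql_end);
}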
A:
Use macros. Technically that file would be a source code file, but it wouldn't look like one.
Example:
//queries.incl - SQL queries
Q(SELECT * FROM Users)
Q(INSERT [a] INTO Accounts)
//source.cpp
#define Q(query) #query,
char * queries[] = {
#include "queries.incl"
};
#undef Q
Later on you could do all sorts of other processing on that same file. Say you'd want both an array and a hash map of the queries: you could redefine Q to do another job and be done with it, as in the sketch below.
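For instance, a second expansion of the same hypothetical queries.incl can count the queries at compile time; each Q(...) line simply expands to +1:
// Second pass over queries.incl: count the entries.
#define Q(query) +1
const int queryCount = 0
#include "queries.incl"
;
#undef Q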
A:
You can always write a small program or script to convert your text file into a header file and run it as part of your build process.
A:
Here's a sample that we used for cross-platform embedding of files.
It's pretty simplistic, but will probably work for you.
You may also need to change how it's handling linefeeds in the escapeLine function.
#include <string>
#include <iostream>
#include <fstream>
#include <cstdio>
using namespace std;
std::string escapeLine( std::string orig )
{
string retme;
for (unsigned int i=0; i<orig.size(); i++)
{
switch (orig[i])
{
case '\\':
retme += "\\\\";
break;
case '"':
retme += "\\\"";
break;
case '\n': // Strip out the final linefeed.
break;
default:
retme += orig[i];
}
}
retme += "\\n"; // Add an escaped linefeed to the escaped string.
return retme;
}
int main( int argc, char ** argv )
{
string filenamein, filenameout;
if ( argc > 1 )
filenamein = argv[ 1 ];
else
{
// Not enough arguments
fprintf( stderr, "Usage: %s <file to convert.mel> [ <output file name.mel> ]\n", argv[0] );
exit( -1 );
}
if ( argc > 2 )
filenameout = argv[ 2 ];
else
{
string new_ending = "_mel.h";
filenameout = filenamein;
std::string::size_type pos;
pos = filenameout.find( ".mel" );
if (pos == std::string::npos)
filenameout += new_ending;
else
filenameout.replace( pos, new_ending.size(), new_ending );
}
printf( "Converting \"%s\" to \"%s\"\n", filenamein.c_str(), filenameout.c_str() );
ifstream filein( filenamein.c_str(), ios::in );
ofstream fileout( filenameout.c_str(), ios::out );
if (!filein.good())
{
fprintf( stderr, "Unable to open input file %s\n", filenamein.c_str() );
exit( -2 );
}
if (!fileout.good())
{
fprintf( stderr, "Unable to open output file %s\n", filenameout.c_str() );
exit( -3 );
}
// Write the file.
fileout << "tempstr = ";
while( filein.good() )
{
string buff;
if ( getline( filein, buff ) )
{
fileout << "\"" << escapeLine( buff ) << "\"" << endl;
}
}
fileout << ";" << endl;
filein.close();
fileout.close();
return 0;
}
A:
It's slightly ugly, but you can always use something like:
const char *query_foo =
#include "query_foo.txt"
;
const char *query_bar =
#include "query_bar.txt"
;
Where query_foo.txt would contain the quoted query text.
A:
I have seen this done by converting the resource file to a C source file with only one char array defined, containing the content of the resource file in a hexadecimal format (to avoid problems with malicious characters). This automatically generated source file is then simply compiled and linked into the project.
It should be pretty easy to implement the converter to dump a C file for each resource file, and also to write some facade functions for accessing the resources.
|
Embed data in a C++ program
|
I've got a C++ program that uses SQLite. I want to store the SQL queries in a separate file -- a plain-text file, not a source code file -- but embed that file in the executable file like a resource.
(This has to run on Linux, so I can't store it as an actual resource as far as I know, though that would be perfect if it were for Windows.)
Is there any simple way to do it, or will it effectively require me to write my own resource system for Linux? (Easily possible, but it would take a lot longer.)
|
[
"You can use objcopy to bind the contents of the file to a symbol your program can use. See, for instance, here for more information.\n",
"Use macros. Technically that file would be source code file but it wouldn't look like this.\nExample:\n//queries.incl - SQL queries\nQ(SELECT * FROM Users)\nQ(INSERT [a] INTO Accounts)\n\n\n//source.cpp\n#define Q(query) #query,\nchar * queries[] = {\n#include \"queries.incl\"\n};\n#undef Q\n\nLater on you could do all sorts of other processing on that file by the same file, say you'd want to have array and a hash map of them, you could redefine Q to do another job and be done with it. \n",
"You can always write a small program or script to convert your text file into a header file and run it as part of your build process.\n",
"Here's a sample that we used for cross-platform embeddeding of files.\nIt's pretty simplistic, but will probably work for you.\nYou may also need to change how it's handling linefeeds in the escapeLine function.\n#include <string>\n#include <iostream>\n#include <fstream>\n#include <cstdio>\n\nusing namespace std;\n\nstd::string escapeLine( std::string orig )\n{\n string retme;\n for (unsigned int i=0; i<orig.size(); i++)\n {\n switch (orig[i])\n {\n case '\\\\':\n retme += \"\\\\\\\\\";\n break;\n case '\"':\n retme += \"\\\\\\\"\";\n break;\n case '\\n': // Strip out the final linefeed.\n break;\n default:\n retme += orig[i];\n }\n }\n retme += \"\\\\n\"; // Add an escaped linefeed to the escaped string.\n return retme;\n}\n\nint main( int argc, char ** argv )\n{\n string filenamein, filenameout;\n\n if ( argc > 1 )\n filenamein = argv[ 1 ];\n else\n {\n // Not enough arguments\n fprintf( stderr, \"Usage: %s <file to convert.mel> [ <output file name.mel> ]\\n\", argv[0] );\n exit( -1 );\n }\n\n if ( argc > 2 )\n filenameout = argv[ 2 ];\n else\n {\n string new_ending = \"_mel.h\";\n filenameout = filenamein;\n std::string::size_type pos;\n pos = filenameout.find( \".mel\" );\n if (pos == std::string::npos)\n filenameout += new_ending;\n else\n filenameout.replace( pos, new_ending.size(), new_ending );\n }\n\n printf( \"Converting \\\"%s\\\" to \\\"%s\\\"\\n\", filenamein.c_str(), filenameout.c_str() );\n\n ifstream filein( filenamein.c_str(), ios::in );\n ofstream fileout( filenameout.c_str(), ios::out );\n\n if (!filein.good())\n {\n fprintf( stderr, \"Unable to open input file %s\\n\", filenamein.c_str() );\n exit( -2 );\n }\n if (!fileout.good())\n {\n fprintf( stderr, \"Unable to open output file %s\\n\", filenameout.c_str() );\n exit( -3 );\n }\n\n // Write the file.\n fileout << \"tempstr = \";\n\n while( filein.good() )\n {\n string buff;\n if ( getline( filein, buff ) )\n {\n fileout << \"\\\"\" << escapeLine( buff ) << \"\\\"\" << endl;\n }\n }\n\n fileout << \";\" << endl;\n\n filein.close();\n fileout.close();\n\n return 0;\n}\n\n",
"It's slightly ugly, but you can always use something like:\nconst char *query_foo =\n#include \"query_foo.txt\"\n\nconst char *query_bar =\n#include \"query_bar.txt\"\n\nWhere query_foo.txt would contain the quoted query text.\n",
"I have seen this to be done by converting the resource file to a C source file with only one char array defined containing the content of resource file in a hexadecimal format (to avoid problems with malicious characters). This automatically generated source file is then simply compiled and linked to the project. \nIt should be pretty easy to implement the convertor to dump C file for each resource file also as to write some facade functions for accessing the resources.\n"
] |
[
25,
4,
3,
2,
1,
0
] |
[] |
[] |
[
"c++",
"linux",
"sqlite"
] |
stackoverflow_0000072616_c++_linux_sqlite.txt
|
Q:
How do you direct traffic to/from a particular site to a specific NIC?
In Windows XP:
How do you direct traffic to/from a particular site to a specific NIC?
For Instance: How do I say, all connections to stackoverflow.com should use my wireless connection, while all other sites will use my ethernet?
A:
I'm not sure if there's an easier way, but one way would be to add a route to the IP(s) of stackoverflow.com that explicitly specifies your wireless connection, using a lower metric (cost) than your default route.
Running nslookup www.stackoverflow.com shows only one IP: 67.199.15.132, so the syntax would be:
route -p add 67.199.15.132 [your wireless gateway] metric [lower metric than default route] IF [wireless interface]
See the route command for more info.
A:
You should be able to do it using the route command: route add (destination) mask (netmask) (gateway) metric 1
A:
Modify your routing table so that specific hosts go through the desired interface.
You need to have a 'default' route, which would either be your ethernet or wireless. But to direct traffic through the other interface, use the command line 'route' command to add a route to the specific IP address you're wanting to redirect.
For example, stackoverflow.com has the IP address 67.199.15.132 (you can find this by using nslookup or pinging it). Issue a route add 67.199.15.132 mask 255.255.255.255 a.b.c.d IF e
where a.b.c.d == the IP address of the router on the other end of your wireless interface, and e is the interface number (a 'route print' command will list each interface and its interface number).
If you add the '-p' flag to the route command the route will be persistent between reboots.
A:
Within XP, I have often found that by adding/modifying static routes, I can typically
accomplish what I need in such cases.
Of course, there are other 'high level' COTS tools/firewalls that might provide you a better interface.
One caveat with modifying routes - VPN tunnels are not too happy about changes in static routes once the VPN is set up, so be sure to set it up at Windows boot-up, after the NICs are initialized, through some scripting.
Static routes- these will work fine, unless you are using a VPN tunnel.
Windows 'route' help
Manipulates network routing tables.
ROUTE [-f] [-p] [command [destination]
[MASK netmask] [gateway] [METRIC metric] [IF interface]
-f Clears the routing tables of all gateway entries. If this is
used in conjunction with one of the commands, the tables are
cleared prior to running the command.
-p When used with the ADD command, makes a route persistent across
boots of the system. By default, routes are not preserved
when the system is restarted. Ignored for all other commands,
which always affect the appropriate persistent routes. This
option is not supported in Windows 95.
command One of these:
PRINT Prints a route
ADD Adds a route
DELETE Deletes a route
CHANGE Modifies an existing route
|
How do you direct traffic to/from a particular site to a specific NIC?
|
In Windows XP:
How do you direct traffic to/from a particular site to a specific NIC?
For Instance: How do I say, all connections to stackoverflow.com should use my wireless connection, while all other sites will use my ethernet?
|
[
"I'm not sure if there's an easier way, but one way would be to add a route to the IP(s) of stackoverflow.com that explicitly specifies your wireless connection, using a lower metric (cost) than your default route.\nRunning nslookup www.stackoverflow.com shows only one IP: 67.199.15.132, so the syntax would be:\nroute -p add 67.199.15.132 [your wireless gateway] metric [lower metric than default route] IF [wireless interface]\nSee the route command for more info.\n",
"you should be able to do it using the route command. Route add (ip address) (netmask) (gateway) metric 1\n",
"Modify your routing table so that specific hosts go through the desired interface. \nYou need to have a 'default' route, which would either be your ethernet or wireless. But to direct traffic through the other interface, use the command line 'route' command to add a route to the specific IP address you're wanting to redirect.\nFor example, stackoverflow.com has the IP address 67.199.15.132 (you can find this by using nslookup or pinging it). Issue a route add 67.199.15.132 mask 255.255.255.255 a.b.c.d IF e\nwhere a.b.c.d == the IP address of the router on the other end of your wireless interface, and e is the interface number (a 'route print' command will list each interface and it's interface number).\nIf you add the '-p' flag to the route command the route will be persistent between reboots.\n",
"Within XP, I have often found that by adding/modifying static routes, I can typically\naccomplish what I need in such cases. \nOf course, there are other 'high level' COTS tools/firewalls that might provide you a better interface.\nOne caveat with modifying routes - VPN tunnels are not too happy about chnages in static routes once the VPN is set up so be sure to set it up at Windows boot up after the NICs are initialized through some scripting.\nStatic routes- these will work fine, unless you are using a VPN tunnel.\nWindows 'route' help\nManipulates network routing tables.\nROUTE [-f] [-p] [command [destination]\n [MASK netmask] [gateway] [METRIC metric] [IF interface]\n-f Clears the routing tables of all gateway entries. If this is\n used in conjunction with one of the commands, the tables are\n cleared prior to running the command.\n -p When used with the ADD command, makes a route persistent across\n boots of the system. By default, routes are not preserved\n when the system is restarted. Ignored for all other commands,\n which always affect the appropriate persistent routes. This\n option is not supported in Windows 95.\n command One of these:\n PRINT Prints a route\n ADD Adds a route\n DELETE Deletes a route\n CHANGE Modifies an existing route\n"
] |
[
3,
1,
1,
1
] |
[] |
[] |
[
"networking"
] |
stackoverflow_0000073518_networking.txt
|
Q:
Why doesn't inheritance work the way I think it should work?
I'm having some inheritance issues as I've got a group of inter-related abstract classes that need to all be overridden together to create a client implementation. Ideally I would like to do something like the following:
abstract class Animal
{
public Leg GetLeg() {...}
}
abstract class Leg { }
class Dog : Animal
{
    public override DogLeg GetLeg() {...}
}
class DogLeg : Leg { }
This would allow anyone using the Dog class to automatically get DogLegs and anyone using the Animal class to get Legs. The problem is that the overridden function has to have the same type as the base class so this will not compile. I don't see why it shouldn't though, since DogLeg is implicitly castable to Leg. I know there are plenty of ways around this, but I'm more curious why this isn't possible/implemented in C#.
EDIT: I modified this somewhat, since I'm actually using properties instead of functions in my code.
EDIT: I changed it back to functions, because the answer only applies to that situation (covariance on the value parameter of a property's set function shouldn't work). Sorry for the fluctuations! I realize it makes a lot of the answers seem irrelevant.
A:
The short answer is that GetLeg is invariant in its return type. The long answer can be found here: Covariance and contravariance
I'd like to add that while inheritance is usually the first abstraction tool that most developers pull out of their toolbox, it is almost always possible to use composition instead. Composition is slightly more work for the API developer, but makes the API more useful for its consumers.
A:
Clearly, you'll need a cast if you're
operating on a broken DogLeg.
A:
Dog should return a Leg not a DogLeg as the return type. The actual class may be a DogLeg, but the point is to decouple so the user of Dog doesn't have to know about DogLegs, they only need to know about Legs.
Change:
class Dog : Animal
{
public override DogLeg GetLeg() {...}
}
to:
class Dog : Animal
{
public override Leg GetLeg() {...}
}
Don't Do this:
if (a is Dog) {
    DogLeg dl = (DogLeg)a.GetLeg();
}
it defeats the purpose of programming to the abstract type.
The reason to hide DogLeg is that the GetLeg function in the abstract class returns an abstract Leg. If you are overriding GetLeg you must return a Leg. That's the point of having a method in an abstract class: to propagate that method to its children. If you want the users of the Dog to know about DogLegs, make a method called GetDogLeg and return a DogLeg.
If you COULD do as the question asker wants to, then every user of Animal would need to know about ALL animals.
A:
It is a perfectly valid desire for the signature of an overriding method to have a return type that is a subtype of the return type in the overridden method (phew). After all, they are run-time type compatible.
But C# does not yet support "covariant return types" in overridden methods (unlike C++ [1998] & Java [2004]).
You'll need to work around and make do for the foreseeable future, as Eric Lippert stated in his blog
[June 19, 2008]:
That kind of variance is called "return type covariance".
we have no plans to implement that kind of variance in C#.
A:
abstract class Animal
{
    public abstract Leg GetLeg();
}
abstract class Leg { }
class Dog : Animal
{
public override Leg GetLeg () { return new DogLeg(); }
}
class DogLeg : Leg { public void Hump() { } }
Do it like this, then you can leverage the abstraction in your client:
Leg myLeg = myDog.GetLeg();
Then if you need to, you can cast it:
if (myLeg is DogLeg) { ((DogLeg)myLeg).Hump(); }
Totally contrived, but the point is so you can do this:
foreach (Animal a in animals)
{
a.GetLeg().SomeMethodThatIsOnAllLegs();
}
While still retaining the ability to have a special Hump method on Doglegs.
A:
You can use generics and interfaces to implement that in C#:
abstract class Leg { }
interface IAnimal { Leg GetLeg(); }
abstract class Animal<TLeg> : IAnimal where TLeg : Leg
{ public abstract TLeg GetLeg();
Leg IAnimal.GetLeg() { return this.GetLeg(); }
}
class Dog : Animal<Dog.DogLeg>
{ public class DogLeg : Leg { }
public override DogLeg GetLeg() { return new DogLeg();}
}
A:
GetLeg() must return Leg to be an override. Your Dog class, however, can still return DogLeg objects, since they are a child class of Leg. Clients can then cast and operate on them as DogLegs.
public class ClientObj{
public void doStuff(){
Animal a=getAnimal();
if(a is Dog){
DogLeg dl = (DogLeg)a.GetLeg();
}
}
}
A:
The concept that is causing you problems is described at http://en.wikipedia.org/wiki/Covariance_and_contravariance_(computer_science)
A:
Not that it is much use, but it is maybe interesting to note that Java does support covariant returns, and so this would work exactly how you hoped. Except obviously that Java doesn't have properties ;)
A:
Perhaps it's easier to see the problem with an example:
Animal dog = new Dog();
dog.SetLeg(new CatLeg());
Now that should compile if your Dog compiled, but we probably don't want such a mutant.
A related issue is should Dog[] be an Animal[], or IList<Dog> an IList<Animal>?
A:
C# has explicit interface implementations to address just this issue:
abstract class Leg { }
class DogLeg : Leg { }
interface IAnimal
{
Leg GetLeg();
}
class Dog : IAnimal
{
    public DogLeg GetLeg() { /* */ }
Leg IAnimal.GetLeg() { return GetLeg(); }
}
If you have a Dog through a reference of type Dog, then calling GetLeg() will return a DogLeg. If you have the same object, but the reference is of type IAnimal, then it will return a Leg.
A:
Right, I understand that I can just cast, but that means the client has to know that Dogs have DogLegs. What I'm wondering is if there are technical reasons why this isn't possible, given that an implicit conversion exists.
A:
@Brian Leahy
Obviously if you are only operating on it as a Leg there is no need or reason to cast. But if there is some DogLeg- or Dog-specific behavior, there are sometimes reasons that the cast is necessary.
A:
@Luke
I think you're perhaps misunderstanding inheritance. Dog.GetLeg() will return a DogLeg object.
public class Dog{
    public Leg GetLeg(){
        DogLeg dl = new DogLeg(base.GetLeg());
        //set DogLeg-specific properties
        return dl;
    }
}
Animal a = getDog();
Leg l = a.GetLeg();
l.kick();
The actual methods called will be Dog.GetLeg() and DogLeg.kick() (I'm assuming a method Leg.kick() exists); therefore, the declared return type being DogLeg is unnecessary, because that is what is returned, even if the return type for Dog.GetLeg() is Leg.
A:
You could also return an interface ILeg that both Leg and DogLeg implement.
A:
The important thing to remember is that you can use a derived type every place you use the base type (you can pass Dog to any method/property/field/variable that expects Animal)
Let's take this function:
public void AddLeg(Animal a)
{
a.Leg = new Leg();
}
A perfectly valid function, now let's call the function like that:
AddLeg(new Dog());
If the property Dog.Leg isn't of type Leg the AddLeg function suddenly contains an error and cannot be compiled.
A:
You can achieve what you want by using a generic with an appropriate constraint, like the following:
abstract class Animal<LegType> where LegType : Leg
{
public abstract LegType GetLeg();
}
abstract class Leg { }
class Dog : Animal<DogLeg>
{
public override DogLeg GetLeg()
{
return new DogLeg();
}
}
class DogLeg : Leg { }
|
Why doesn't inheritance work the way I think it should work?
|
I'm having some inheritance issues as I've got a group of inter-related abstract classes that need to all be overridden together to create a client implementation. Ideally I would like to do something like the following:
abstract class Animal
{
public Leg GetLeg() {...}
}
abstract class Leg { }
class Dog : Animal
{
    public override DogLeg GetLeg() {...}
}
class DogLeg : Leg { }
This would allow anyone using the Dog class to automatically get DogLegs and anyone using the Animal class to get Legs. The problem is that the overridden function has to have the same type as the base class so this will not compile. I don't see why it shouldn't though, since DogLeg is implicitly castable to Leg. I know there are plenty of ways around this, but I'm more curious why this isn't possible/implemented in C#.
EDIT: I modified this somewhat, since I'm actually using properties instead of functions in my code.
EDIT: I changed it back to functions, because the answer only applies to that situation (covariance on the value parameter of a property's set function shouldn't work). Sorry for the fluctuations! I realize it makes a lot of the answers seem irrelevant.
|
[
"The short answer is that GetLeg is invariant in its return type. The long answer can be found here: Covariance and contravariance\nI'd like to add that while inheritance is usually the first abstraction tool that most developers pull out of their toolbox, it is almost always possible to use composition instead. Composition is slightly more work for the API developer, but makes the API more useful for its consumers.\n",
"Clearly, you'll need a cast if you're \noperating on a broken DogLeg. \n",
"Dog should return a Leg not a DogLeg as the return type. The actual class may be a DogLeg, but the point is to decouple so the user of Dog doesn't have to know about DogLegs, they only need to know about Legs.\nChange:\nclass Dog : Animal\n{\n public override DogLeg GetLeg() {...}\n}\n\nto:\nclass Dog : Animal\n{\n public override Leg GetLeg() {...}\n}\n\nDon't Do this:\n if(a instanceof Dog){\n DogLeg dl = (DogLeg)a.GetLeg();\n\nit defeats the purpose of programing to the abstract type.\nThe reason to hide DogLeg is because the GetLeg function in the abstract class returns an Abstract Leg. If you are overriding the GetLeg you must return a Leg. Thats the point of having a method in an abstract class. To propagate that method to it's childern. If you want the users of the Dog to know about DogLegs make a method called GetDogLeg and return a DogLeg.\nIf you COULD do as the question asker wants to, then every user of Animal would need to know about ALL animals.\n",
"It is a perfectly valid desire to have the signature an overriding method have a return type that is a subtype of the return type in the overridden method (phew). After all, they are run-time type compatible.\nBut C# does not yet support \"covariant return types\" in overridden methods (unlike C++ [1998] & Java [2004]).\nYou'll need to work around and make do for the foreseeable future, as Eric Lippert stated in his blog\n[June 19, 2008]:\n\nThat kind of variance is called \"return type covariance\".\nwe have no plans to implement that kind of variance in C#.\n\n",
"abstract class Animal\n{\n public virtual Leg GetLeg ()\n}\n\nabstract class Leg { }\n\nclass Dog : Animal\n{\n public override Leg GetLeg () { return new DogLeg(); }\n}\n\nclass DogLeg : Leg { void Hump(); }\n\nDo it like this, then you can leverage the abstraction in your client:\nLeg myleg = myDog.GetLeg();\n\nThen if you need to, you can cast it:\nif (myleg is DogLeg) { ((DogLeg)myLeg).Hump()); }\n\nTotally contrived, but the point is so you can do this:\nforeach (Animal a in animals)\n{\n a.GetLeg().SomeMethodThatIsOnAllLegs();\n}\n\nWhile still retaining the ability to have a special Hump method on Doglegs.\n",
"You can use generics and interfaces to implement that in C#:\nabstract class Leg { }\n\ninterface IAnimal { Leg GetLeg(); }\n\nabstract class Animal<TLeg> : IAnimal where TLeg : Leg\n { public abstract TLeg GetLeg();\n Leg IAnimal.GetLeg() { return this.GetLeg(); }\n }\n\nclass Dog : Animal<Dog.DogLeg>\n { public class DogLeg : Leg { }\n public override DogLeg GetLeg() { return new DogLeg();}\n } \n\n",
"GetLeg() must return Leg to be an override. Your Dog class however, can still return DogLeg objects since they are a child class of Leg. clients can then cast and operate on them as doglegs.\npublic class ClientObj{\n public void doStuff(){\n Animal a=getAnimal();\n if(a is Dog){\n DogLeg dl = (DogLeg)a.GetLeg();\n }\n }\n}\n\n",
"The concept that is causing you problems is described at http://en.wikipedia.org/wiki/Covariance_and_contravariance_(computer_science)\n",
"Not that it is much use, but it is maybe interesting to note the Java does support covariant returns, and so this would work exactly how you hoped. Except obviously that Java doesn't have properties ;)\n",
"Perhaps it's easier to see the problem with an example:\nAnimal dog = new Dog();\ndog.SetLeg(new CatLeg());\n\nNow that should compile if you're Dog compiled, but we probably don't want such a mutant.\nA related issue is should Dog[] be an Animal[], or IList<Dog> an IList<Animal>?\n",
"C# has explicit interface implementations to address just this issue:\nabstract class Leg { }\nclass DogLeg : Leg { }\n\ninterface IAnimal\n{\n Leg GetLeg();\n}\n\nclass Dog : IAnimal\n{\n public override DogLeg GetLeg() { /* */ }\n\n Leg IAnimal.GetLeg() { return GetLeg(); }\n}\n\nIf you have a Dog through a reference of type Dog, then calling GetLeg() will return a DogLeg. If you have the same object, but the reference is of type IAnimal, then it will return a Leg.\n",
"Right, I understand that I can just cast, but that means the client has to know that Dogs have DogLegs. What I'm wondering is if there are technical reasons why this isn't possible, given that an implicit conversion exists.\n",
"@Brian Leahy\nObviously if you are only operating on it as a Leg there is no need or reason to cast. But if there is some DogLeg or Dog specific behavior, there are sometimes reasons that the cast is neccessary.\n",
"@Luke\nI think your perhaps misunderstanding inheritance. Dog.GetLeg() will return a DogLeg object. \npublic class Dog{\n public Leg GetLeg(){\n DogLeg dl = new DogLeg(super.GetLeg());\n //set dogleg specific properties\n }\n}\n\n\n Animal a = getDog();\n Leg l = a.GetLeg();\n l.kick();\n\nthe actual method called will be Dog.GetLeg(); and DogLeg.Kick() (I'm assuming a method Leg.kick() exists) there for, the declared return type being DogLeg is unneccessary, because that is what returned, even if the return type for Dog.GetLeg() is Leg.\n",
"You could also return the interface ILeg that both Leg and/or DogLeg implement.\n",
"The important thing to remember is that you can use a derived type every place you use the base type (you can pass Dog to any method/property/field/variable that expects Animal)\nLet's take this function:\npublic void AddLeg(Animal a)\n{\n a.Leg = new Leg();\n}\n\nA perfectly valid function, now let's call the function like that:\nAddLeg(new Dog());\n\nIf the property Dog.Leg isn't of type Leg the AddLeg function suddenly contains an error and cannot be compiled.\n",
"You can achieve what you want by using a generic with an appropriate constraint, like the following:\nabstract class Animal<LegType> where LegType : Leg\n{\n public abstract LegType GetLeg();\n}\n\nabstract class Leg { }\n\nclass Dog : Animal<DogLeg>\n{\n public override DogLeg GetLeg()\n {\n return new DogLeg();\n }\n}\n\nclass DogLeg : Leg { }\n\n"
] |
[
16,
12,
6,
4,
3,
3,
2,
2,
2,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c#",
"contravariance",
"covariance",
"inheritance",
"oop"
] |
stackoverflow_0000046933_c#_contravariance_covariance_inheritance_oop.txt
|