Q:
Reporting Services Line Graph: How to better control the smoothed curve
I have a report that I built for a client where I need to plot x 0-100, y 0-100. Let's imagine I have these points:
0, 0
2, 24
50, 70
100, 100
I need to represent these as a smoothed line chart, as the application of it is a dot gain graph for printing presses.
Here's the problem. The line draws fine from 100,100 (top right) down to 2,24. But from 2,24 to 0,0 the line curves out off the left edge of the graph and then back down to 0,0. Imagine it putting a point at -10,10.
I understand this is because of the generic Bézier curve algorithm it is using and the large separation of control points, thus heavily weighting it.
I was wondering however if anyone knows a way I can control it. I have tried adding in averaged points between the existing control points, but it still curves off the graph as if it's still heavily weighted.
The only other answer I can think of is custom drawing a graph or looking into Dundas Charts and using its GDI+ drawing support.
But before I go that route, anyone have any thoughts?
Here's the thing. I know how to draw the curve manually. The problem lies in the fact that there is such a high weighting between 2 and 50. I tried adding points in at the lows and the mids, but it was still bowing off the edge. I will have to go back into the source, change the graph back, and see if I can get a screenshot up.
Right now I just have the graph stop at 2 until I can get this solved.
A:
Image of the behaviour (provided to help you get a better answer): http://img140.imageshack.us/img140/1279/smoothlinebezierxl0.jpg
For those with a theory, you can try this out in Excel as well (not just Reporting Services).
You mentioned adding points in your question, but it seems like adding in interpolated points near the problem area has the desired effect (e.g. { (1,12), (1.5, 18) }). This is a clumsy "solution" at best though.
A:
You could try using a cosine interpolation for the points in-between.
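For what it's worth, here is a minimal sketch of cosine interpolation in C# (this is the standard formula, not anything from the Reporting Services API; the names are illustrative):

static double CosineInterpolate(double y1, double y2, double mu)
{
    // mu runs from 0 (at y1) to 1 (at y2); the cosine term eases in and out,
    // so the interpolated value stays between y1 and y2 and cannot overshoot
    double mu2 = (1 - Math.Cos(mu * Math.PI)) / 2;
    return y1 * (1 - mu2) + y2 * mu2;
}

Generating a handful of points between 0,0 and 2,24 this way and feeding them to the chart as plain (non-smoothed) data would keep the line inside the plot area.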
Q:
URLSCAN question
I have UrlScan installed on my Win2003 server and it is blocking an older ColdFusion script. The log entry has the following:
2008-09-19 00:16:57 66.82.162.13 1416208729 GET /Admin/Uploads/Mountain/Wolf%2520Creek%2520gazeebo.jpg Rejected URL+is+double+escaped URL - -
How do I get UrlScan to allow submissions like this without turning off the double-escaped URL feature?
A:
To quote another post on the subject: "some aspect of your process for submitting URIs is doing some bad encoding" (http://www.usenet-forums.com/archive/index.php/t-39111.html).
I recommend changing the name of the JPG to not have spaces in it as a good practice, then later trying to figure out, with a non-production page, why the %20 is being interpreted not as an encoded space but as a percent sign and two digits.
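For illustration, a hedged C# sketch of how a name like Wolf%2520Creek%2520gazeebo.jpg arises: the spaces were escaped once to %20, and then the whole thing was escaped again, turning each % into %25.

// both calls are standard .NET API; the file name comes from the log entry above
string once = Uri.EscapeDataString("Wolf Creek gazeebo.jpg"); // "Wolf%20Creek%20gazeebo.jpg"
string twice = Uri.EscapeDataString(once);                    // "Wolf%2520Creek%2520gazeebo.jpg"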
A:
"How do I get UrlScan to allow submissions like this without turning off the double-escaped URL feature?"
How do you get it to allow double-escaped URLs without turning off the double-escaped URL feature? I think there's something wrong with what you're trying to do. My question is this: does your HTML source literally show image requests with "%2520" in them? Is that the correct name for your file? If so, you really have only two options: rename the file or turn off the feature disallowing double escapes.
Q:
Finding SQL queries in compiled app
I have just inherited a server application. However, it seems that the only copy of the database is corrupt and the working version is gone. Is it possible to find out what queries the application is running so I can try to rebuild the tables?
Edit: I have some files with no extensions that are named the same as the databases. I don't know if anything can be done with them, but I'm open to ideas.
The accepted answer seems the most likely to succeed; however, I was able to find another backup, so I have not tested it.
A:
Turn on SQL query logging and watch what the application asks for.
A:
If you have access to either a Unix machine, or can install the Cygwin utilities (http://www.cygwin.com/), there is a command called 'strings' which will search through any file type and print out any contiguous sequence of character data (it might just be ASCII). That tool should help you identify the SQL queries embedded in the application.
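For example, something like "strings MyServerApp.exe | grep -i select" (the binary name is assumed) would surface candidate query fragments.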
A:
Look for SQL Profiler, which (depending on which version you have) is normally available from the tools menu in query analyzer (isqlw.exe) or management studio (in later versions).
With SQL profiler you can run a trace on the server which can show you which queries are being requested by the application.
A:
You could run the UNIX command "strings" on the program to see whether it has embedded SQL strings:
http://en.wikipedia.org/wiki/Strings_(Unix)
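A rough C# sketch combining this extraction idea with a keyword filter (the binary name is a placeholder and the patterns are illustrative, not exhaustive):

using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

class SqlStringScanner
{
    static void Main()
    {
        // decode the raw bytes as ASCII so embedded string literals survive
        byte[] bytes = File.ReadAllBytes("MyServerApp.exe");
        string text = Encoding.ASCII.GetString(bytes);

        // each match is a SQL keyword plus up to 200 printable characters after it
        string pattern = @"(SELECT|INSERT\s+INTO|UPDATE|DELETE\s+FROM)[\x20-\x7E]{0,200}";
        foreach (Match m in Regex.Matches(text, pattern, RegexOptions.IgnoreCase))
            Console.WriteLine(m.Value);
    }
}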
A:
You could RegEx the files to search for
"SELECT *"
"UPDATE *"
"DELETE FROM *"
"INSERT INTO *"
Q:
Is there a WMI Redistributable Package?
I've been working on a project that accesses the WMI to get information about the software installed on a user's machine. We've been querying Win32_Product only to find that it doesn't exist in 64-bit versions of Windows because it's an "optional component".
I know there are a lot of really good alternatives to querying the WMI for this information, but I've got a bit of a vested interest in finding out how well this is going to work out.
What I want to know is if there's some kind of redistributable that can be packaged with our software to allow 64-bit users to get the WMI Installer Provider put onto their machines? Right now, they have to install it manually and the installation requires they have their Windows disc handy.
Edit:
You didn't mention for what OS, but the WMI Redistributable Components version 1.0 definitely exists.
As for the operating system: we've been using .NET 3.5, so we need packages that will work on XP64 and 64-bit versions of Windows Vista.
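For context, a minimal sketch of the kind of Win32_Product query under discussion (C#, assuming a reference to System.Management.dll); this is the query that fails when the WMI Installer Provider is absent:

using System;
using System.Management;

class InstalledSoftwareQuery
{
    static void Main()
    {
        // enumerate MSI-installed software through the WMI Installer Provider
        var searcher = new ManagementObjectSearcher(
            "SELECT Name, Version FROM Win32_Product");
        foreach (ManagementObject product in searcher.Get())
            Console.WriteLine("{0} {1}", product["Name"], product["Version"]);
    }
}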
A:
You didn't mention for what OS, but the WMI Redistributable Components version 1.0 definitely exists.
For Windows Server 2003, the WMI SDK and redistributables are part of the Server SDK
I believe that the same is true for the Server 2008 SDK
A:
Wouldn't the normal approach for a Windows component be that the administrators of a set of servers use whatever local software-push technology they have (e.g. SMS) to ensure that the component is installed? This is not that uncommon a requirement for the remote management of servers via WMI.
By the way, the WMI Installer Provider is not provided in the Standard Edition of the server products, but it is in the Enterprise Edition. So, Windows 2003 Server will not have this installed by default, but Windows 2003 Server Enterprise (and DataCenter) will.
This answer does imply that you are putting the burden of installation back on your user base, but for Windows administrators this should not be any issue.
Q:
Multiple Constructors with complex logic
In C#, if you have multiple constructors, you can do something like this:
public MyClass(Guid inputId, string inputName){
// do something
}
public MyClass(Guid inputId): this(inputId, "foo") {}
The idea is of course code reuse. However, what is the best approach when there is a bit of complex logic needed? Say I want this constructor:
public MyClass(MyOtherClass inputObject)
{
Guid inputId = inputObject.ID;
MyThirdClass mc = inputObject.CreateHelper();
string inputText = mc.Text;
mc.Dispose();
// Need to call the main Constructor now with inputId and inputText
}
The caveat here is that I need to create an object that has to be disposed after use. (Clarification: Not immediately, but I have to call Dispose() rather than waiting for Garbage Collection)
However, I did not see a way to just call the base constructor again if I add some code inside my overloaded constructor. Is there a way to call the base constructor from within an overloaded one?
Or is it possible to use
public MyClass(MyOtherClass inputObject): this(inputObject.ID,
inputObject.CreateHelper().Text)
{}
Would this automatically Dispose the generated Object from CreateHelper()?
Edit: Thanks so far. Two problems: I do not control MyOtherClass and I do not have Extension Methods (only .NET 3.0...). I do control my own class though, and since I've just started writing it, I have no problem refactoring the constructors if there is a good approach.
A:
The most common pattern used to solve this problem is to have an Initialize() method that your constructors call, but in the example you just gave, adding a static method that you call like the code below would do the trick.
public MyClass(MyOtherClass inputObject): this(inputObject.ID, GetHelperText(inputObject)) {}
private static string GetHelperText(MyOtherClass o)
{
using (var helper = o.CreateHelper())
return helper.Text;
}
A:
I don't see any reason to believe that creating an object in the constructor will automatically dispose the object. Yes, your object will immediately go out of scope and be available for garbage collection, but that is certainly not the same as being disposed.
There really isn't a great way to do exactly what you want to do, but the whole thing feels like it could benefit from some refactoring. That is usually the case in my own code when I find myself trying to bend over backwards to create a constructor overload.
If you have control over MyOtherClass, why not simplify the access to that text property by adding a getter method that handles the dispose:
public class MyOtherClass
{
//...
public string GetText()
{
using (var h = CreateHelper())
return h.Text;
}
}
if you don't control MyOtherClass you could use an extension method
public static class MyOtherClassExtensions
{
public static string GetText(this MyOtherClass parent)
{
using(var helper = parent.CreateHelper())
{
return helper.Text;
}
}
}
Then, of course, in your constructor you can safely call
public MyClass(MyOtherClass inputObject): this(inputObject.ID, inputObject.GetText()) {}
A:
The object would only be automatically disposed when garbage collection runs. If you want the dispose to run as soon as it went out of scope, you should use a using block:
using (MyThirdClass mc = inputObject.CreateHelper())
{
// do something with mc
}
This is really more of an issue with style and not really central to the question you had.
Q:
Unit-testing COM written in .NET
Is there a way to unit-test COM-visible .NET assemblies from .NET (not via a direct .NET assembly reference)? When I add a reference in my test project to the COM component written in .NET, it complains.
A:
There's always VBUnit. Unit testing VB code (VB classic / VB6) and COM objects is what it does.
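If you would rather stay in .NET for the tests, one hedged possibility is to exercise the component late-bound through its COM registration rather than via an assembly reference (the ProgId and method name here are hypothetical):

using System;
using System.Reflection;

class ComSmokeTest
{
    static void Main()
    {
        // resolve the COM-visible class through the registry, not an assembly reference
        Type comType = Type.GetTypeFromProgID("MyLib.MyComClass");
        object instance = Activator.CreateInstance(comType);

        // invoke a method by name, the way any late-bound COM client would
        object result = comType.InvokeMember("DoWork",
            BindingFlags.InvokeMethod, null, instance, new object[] { 42 });
        Console.WriteLine(result);
    }
}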
Q:
How do you serialize javascript objects with methods using JSON?
I am looking for an enhancement to JSON that will also serialize methods. I have an object that acts as a collection of objects, and would like to serialize the methods of the collection object as well. So far I've located ClassyJSON. Any thoughts?
A:
Try to get away without serializing javascript code. That way lies a world of pain. Debugging will be much easier if code can only come from static files, not from a database. Instead, walk your JSON responses after you receive them and pass the appropriate data to the appropriate object constructors.
If you absolutely must serialize them, calling toString() on a function will return its source.
A:
If you use the WCF framework to develop a RESTful web service, that is very easy to achieve.
Simply create your data structure classes, with your desired collection, using the DataContract and DataMember attributes.
[DataContract]
public class Foo
{
[DataMember]
public string FooName {get;set;}
[DataMember]
public FooItem[] FooItems {get;set;}
}
[DataContract]
public class FooItem
{
[DataMember]
public string Name {get;set;}
}
A:
I don't think serializing methods is ever a good idea. If you intend to run the code server side, you open yourself to attacks. If you want to run it client side, you are better off just using the local methods, possibly referencing the name of the method you are going to use in the serialized objects.
I do believe though that "f = " + function() {} will yield you a string version that you can eval:
var test = "f = " + function() { alert("Hello"); };
eval(test)
And for good JSON handling, I would recommend Prototype, which has great methods for serializing objects to JSON.
Q:
Python signal woes: SIGQUIT handler delays execution if SIGQUIT received during execution of another signal handler?
The following program is very simple: it outputs a single dot every half second. If it receives a SIGQUIT, it proceeds to output ten Qs. If it receives a SIGTSTP (Ctrl-Z), it outputs ten Zs.
If it receives a SIGTSTP while printing Qs, it will print ten Zs after it's done with the ten Qs. This is a good thing.
However, if it receives a SIGQUIT while printing Zs, it fails to print Qs after them. Instead, it prints them only after I manually terminate execution via a KeyboardInterrupt. I want the Qs to be printed immediately after the Zs.
This happens using Python 2.3.
What am I doing wrong?
#!/usr/bin/python
from signal import *
from time import sleep
from sys import stdout
def write(text):
stdout.write(text)
stdout.flush()
def process_quit(signum, frame):
for i in range(10):
write("Q")
sleep(0.5)
def process_tstp(signum, frame):
for i in range(10):
write("Z")
sleep(0.5)
signal(SIGQUIT, process_quit)
signal(SIGTSTP, process_tstp)
while 1:
write('.')
sleep(0.5)
A:
Your larger problem is blocking in signal handlers.
This is usually discouraged since it can lead to strange timing conditions. But it's not quite the cause of your problem since the timing condition you're vulnerable to exists because of your choice of signal handlers.
Anyway, here's how to at least minimize the timing condition by only setting flags in your handlers and leaving the main while loop to do the actual work. The explanation for why your code is behaving strangely is described after the code.
#!/usr/bin/python
from signal import *
from time import sleep
from sys import stdout
print_Qs = 0
print_Zs = 0
def write(text):
stdout.write(text)
stdout.flush()
def process_quit(signum, frame):
global print_Qs
print_Qs = 10
def process_tstp(signum, frame):
global print_Zs
print_Zs = 10
signal(SIGQUIT, process_quit)
signal(SIGTSTP, process_tstp)
while 1:
if print_Zs:
print_Zs -= 1
c = 'Z'
elif print_Qs:
print_Qs -= 1
c = 'Q'
else:
c = '.'
write(c)
sleep(0.5)
Anyway, here's what's going on.
SIGTSTP is more special than SIGQUIT.
SIGTSTP masks the other signals from being delivered while its signal handler is running. When the kernel goes to deliver SIGQUIT and sees that SIGTSTP's handler is still running, it simply saves it for later. Once another signal comes through for delivery, such as SIGINT when you CTRL+C (aka KeyboardInterrupt), the kernel remembers that it never delivered SIGQUIT and delivers it now.
You will notice if you change while 1: to for i in range(60): in the main loop and do your test case again, the program will exit without running the SIGTSTP handler since exit doesn't re-trigger the kernel's signal delivery mechanism.
Good luck!
A:
On Python 2.5.2 on Linux 2.6.24, your code works exactly as you describe your desired results (if a signal is received while still processing a previous signal, the new signal is processed immediately after the first one is finished).
On Python 2.4.4 on Linux 2.6.16, I see the problem behavior you describe.
I don't know whether this is due to a change in Python or in the Linux kernel.
Q:
How to setup Trac to run at / with Lighttpd on a subdomain
I have the following config in my lighttpd.conf:
$HTTP["host"] == "trac.domain.tld" {
server.document-root = "/usr/home/daniels/trac/htdocs/"
fastcgi.server = ( "/trac" =>
( "trac" =>
( "socket" => "/tmp/trac-fastcgi.sock",
"bin-path" => "/usr/home/daniels/trac/cgi-bin/trac.fcgi",
"check-local" => "disable",
"bin-environment" =>
( "TRAC_ENV" => "/usr/home/daniels/trac" )
)
)
)
}
And it runs at trac.domain.tld/trac.
How can I make it run at trac.domain.tld/ so I will have trac.domain.tld/wiki, trac.domain.tld/timeline, etc., instead of trac.domain.tld/trac/wiki, etc.?
A:
Just change "/trac" to "/" in fastcgi.server
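Applied to the configuration from the question, that would look like this (a sketch, not tested):

fastcgi.server = ( "/" =>
  ( "trac" =>
    ( "socket" => "/tmp/trac-fastcgi.sock",
      "bin-path" => "/usr/home/daniels/trac/cgi-bin/trac.fcgi",
      "check-local" => "disable",
      "bin-environment" =>
        ( "TRAC_ENV" => "/usr/home/daniels/trac" )
    )
  )
)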
A:
Look for "For top level setup: ..." here.
Q:
Reading datagridview
I populated a DataGridView from a DataTable. How do I read from the DataGridView while the application is running?
A:
How did you populate it? Is the DataSource something useful like a BindingList?
If it is then something like:
BindingSource bindingSource = this.dataGridView1.DataSource as BindingSource;
//substitute your business object type for T
T entity = bindingSource.Current as T;
would get you the entity bound to the row.
Otherwise there is always dataGridView.Rows[n].Cells[m].Value, but really I'd look at using the objects in the DataSource.
Edit: Ah... a datatable... righto:
var table = dataGridView1.DataSource as DataTable;
foreach(DataRow row in table.Rows)
{
foreach(DataColumn column in table.Columns)
{
Console.WriteLine(row[column]);
}
}
A:
You can iterate through your datagridview and retrieve each cell.
for (int i = 0; i < dataGridView.Rows.Count; i++)
{
    // read (or write) an individual cell by column name
    object value = dataGridView.Rows[i].Cells["columnName"].Value;
}
There is an example here.
A:
namespace WindowsFormsApplication2
{
public partial class Form1 : Form
{
public static DataTable objDataTable = new DataTable("UpdateAddress");
public Form1()
{
InitializeComponent();
}
private void button1_Click(object sender, EventArgs e)
{
Stream myStream = null;
OpenFileDialog openFileDialog1 = new OpenFileDialog();
openFileDialog1.InitialDirectory = "c:\\";
openFileDialog1.Filter = "csv files (*.csv)|*.csv|All files (*.*)|*.*";
openFileDialog1.FilterIndex = 2;
openFileDialog1.RestoreDirectory = true;
if (openFileDialog1.ShowDialog() == DialogResult.OK)
{
try
{
if ((myStream = openFileDialog1.OpenFile()) != null)
{
string fileName = openFileDialog1.FileName;
List<string> dataFile = new List<string>();
dataFile = ReadList(fileName);
foreach (string item in dataFile)
{
string[] temp = item.Split(',');
DataRow objDR = objDataTable.NewRow();
objDR["EmployeeID"] = temp[0].ToString();
objDR["Street"] = temp[1].ToString();
objDR["POBox"] = temp[2].ToString();
objDR["City"] = temp[3].ToString();
objDR["State"] = temp[4].ToString();
objDR["Zip"] = temp[5].ToString();
objDR["Country"] = temp[6].ToString();
objDataTable.Rows.Add(objDR);
}
}
}
catch (Exception ex)
{
MessageBox.Show("Error: Could not read file from disk. Original error: " + ex.Message);
}
}
}
public static List<string> ReadList(string filename)
{
List<string> fileData = new List<string>();
StreamReader sr = new StreamReader(filename);
while (!sr.EndOfStream)
fileData.Add(sr.ReadLine());
return fileData;
}
private void Form1_Load(object sender, EventArgs e)
{
objDataTable.Columns.Add("EmployeeID", typeof(int));
objDataTable.Columns.Add("Street", typeof(string));
objDataTable.Columns.Add("POBox", typeof(string));
objDataTable.Columns.Add("City", typeof(string));
objDataTable.Columns.Add("State", typeof(string));
objDataTable.Columns.Add("Zip", typeof(string));
objDataTable.Columns.Add("Country", typeof(string));
objDataTable.Columns.Add("Status", typeof(string));
dataGridView1.DataSource = objDataTable;
dataGridView1.Refresh();
}
private void button2_Click(object sender, EventArgs e)
{
// Displays a SaveFileDialog so the user can save the backup of AD address before the update
// assigned to Button2.
SaveFileDialog saveFileDialog1 = new SaveFileDialog();
saveFileDialog1.Filter = "BAK Files|*.BAK";
saveFileDialog1.Title = "Save AD Backup";
saveFileDialog1.ShowDialog();
if (saveFileDialog1.FileName != "")
{
TextWriter fileOut = new StreamWriter(saveFileDialog1.FileName);
//This is where I want read from the datagridview the EmployeeID column and use it in my BackupAddress method.
}
    } // end button2_Click
  } // end class Form1
} // end namespace
A:
You might want to take a look at DataTable.WriteXml and its brother, DataTable.ReadXml. No fuss, no muss saving of a DataTable.
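A quick usage sketch of that suggestion (the file name is arbitrary):

// save the table behind the grid; include the schema so it round-trips cleanly
var table = (DataTable)dataGridView1.DataSource;
table.WriteXml("customers.xml", XmlWriteMode.WriteSchema);

// later: load it back
var restored = new DataTable();
restored.ReadXml("customers.xml");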
Q:
Use of the Exception class in c#
Errors that occur deep down in a data access layer, or even higher up (say, within ADO.NET operations), rarely make much sense to an end user. Simply bubbling these errors up to a UI and displaying them will usually achieve nothing except frustration for the end user.
I have recently employed a basic technique for reporting such errors whereby I catch the error and add some user-friendly text, so that the end user at least understands what failed.
To do this I am catching an exception within each specific function (say, for example, a fetch function in a data access layer), then raising a new error with user-friendly text about the function that has failed and the probable cause, while embedding the original exception in the new exception as the "inner exception" of that new exception.
This can then occur at each layer if necessary, each consumer of the lower-level function adding its own context to the error message, so that what reaches the UI is an increasingly user-friendly error message.
Once the error reaches the UI - if necessary - it can then iterate through the nested exceptions in order to display an error message that firstly tells the user which operation failed, but also provides a bit of technical information about what actually went wrong.
e.g.
"The list of customer names you requested could not be displayed."
"Obtaining the list of customers you requested failed due to an error with the database."
"There was an error connecting to the database when retrieving a list of customers."
"Login failed for user xx"
My question is this: Is this horribly inefficient (all those nested exceptions)? I suspect it is not best practice so what should I be doing to achieve the same thing - or should I in fact be trying to achieve something better?
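For reference, a minimal sketch of the wrapping pattern described above (the method and exception types are illustrative, not from my actual code):

// requires using System.Collections.Generic; and using System.Data.SqlClient;
public IList<string> GetCustomerNames()
{
    try
    {
        return FetchCustomerNamesFromDb();   // hypothetical data-access call
    }
    catch (SqlException ex)
    {
        // user-friendly text on the outside, original exception preserved inside
        throw new ApplicationException(
            "Obtaining the list of customers you requested failed due to an error with the database.",
            ex);
    }
}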
A:
It is just slightly horrible.
If you are showing an error to the end user, the user is supposed to be able to act on it. In the "The list of customer names you requested could not be displayed" case, your user will just think "so what?" In all of these cases, just display a "something bad happened" message. You do not even need to catch these exceptions; when something goes bad, let some global handler (like Application_Error) handle it and display a generic message. When you or your user can do something about the error, catch it and do the thing or notify the user.
But you will want to log every error that you do not handle.
By the way, displaying information about the errors that occur can lead to security vulnerabilities. The less attackers know about your system, the less likely they are to find ways to hack it (remember those messages like "Syntax error in sql statement: Select * From Users Where username='a'; drp database;--'..." expected: 'drop' instead of 'drp'? They do not make sites like that anymore).
A:
It is technically costly to throw new exceptions, however I won't make a big debate out of that since "costly" is relative - if you're throwing 100 such exceptions a minute, you will likely not see the cost; if you're throwing 1000 such exceptions a second, you very well may see a performance hit (hence, not really worth discussing here - performance is your call).
I guess I have to ask why this approach is being used. Is it really true that you can add meaningful exception information at every level where an exception might be thrown and, if so, is it also true that the information will be:
Something you actually want to share with your user?
Something your user will be able to interpret, understand and use?
Written in such a way that it will not interfere with later reuse of low-level components, the utility of which might not be known when they were written?
I ask about sharing information with your user because, in your example, your artificial stack starts by informing the user there was a problem authenticating on the database. For a potential hacker, that's a good piece of information that exposes something about what the operation was doing.
As for handing back an entire custom exception stack, I don't think it's something that will be useful to most (honest) users. If I'm having trouble getting a list of customer names, for instance, is it going to help me (as a user) to know there was a problem authenticating with the database? Unless you're using integrated authentication, and each of your users has an account, and the ability to contact a system administrator to find out why their account lacks privileges, probably not.
I would begin by first deciding if there is really a semantic difference between the Framework exception thrown and the exception message you'd like to provide to the user. If there is, then go ahead and use a custom exception at the lowest level ('login failed' in your example). The steps following that, up to the actual presentation of the exception, don't really require any custom exceptions. The exception you're interested in has already been generated (the login has failed) - continuing to wrap that message at every level of the call stack serves no real purpose other than exposing your call stack to your users. For those "middle" steps, assuming any try/catch blocks are in place, a simple 'log and throw' strategy would work fine.
Really, though, this strategy has another potential flaw: it forces upon the developer the responsibility for maintaining the custom exception standard that's been implemented. Since you can't possibly know every permutation of call hierarchy when writing low-level types (their "clients" might not even have been written yet), it seems unlikely that all developers - or even one developer - would remember to wrap and customize any error condition in every code block.
Instead of working from the bottom up, I typically worry about the display of thrown exceptions as late in the process as possible (i.e. as close to the "top" of the call stack as possible). Normally, I don't try to replace any messages in exceptions thrown at low levels of my applications, particularly since the usage of those low-level members tends to get more and more abstract the deeper the call gets. I tend to catch and log exceptions in the business tier and lower, then deal with displaying them in a comprehensible manner in the presentation tier.
Here are a couple of decent articles on exception handling best practices:
http://www.codeproject.com/KB/architecture/exceptionbestpractices.aspx
http://aspalliance.com/1119
Jeez this got wordy...apologies in advance.
A:
Yes, exceptions are expensive so there is a cost involved in catching and rethrowing or throwing a more useful exception.
But wait a minute! If the framework code or the library you're using throws an exception then things are already coming unstuck. Do you have non-functional requirements for how quickly an error message is propagated following an exception? I doubt it. Is it really a big deal? Something unforeseen and 'exceptional' has happened. The main thing is to present sensible, helpful information to the user.
I think you're on the right track with what you're doing.
A:
Of course it's horribly inefficient. But at the point that an exception occurs that is important enough to show to the end user, you should not care about that.
A:
Where I work we have only a few reasons to catch exceptions. We only do it when...
We can do something about it, e.g. we know that this can happen sometimes and we can rectify it in code as it happens (very rare).
We want to know it happens and where (then we just rethrow the exception).
We want to add a friendly message, in which case we wrap the original exception in a new exception derived from ApplicationException, add a friendly message to that, and then let it bubble up unchanged from that point.
In your example we'd probably just display "Logon error occurred." and leave it at that, while logging the real error and providing a way for the user to drill into the exception if they wanted to. (Perhaps a button on the error form.)
We want to suppress the exception completely and keep going. Needless to say we only do this for expected exception types and only when there is no other way to detect the condition that generates the exception.
A:
Generally when you're dealing with exceptions, performance and efficiency are the least of your worries. You should be more worried about doing something to help the user recover from the problem. If there was a problem writing a certain record to the database, either roll the changes back or at least dump the row information so the user doesn't lose it.
|
Use of the Exception class in c#
|
Errors that occur deep down in a data access layer or even higher up, (say within ADO.net operations for example) rarely make much sense to an end user. Simply bubbling these errors up to a UI and displaying them will usually achieve nothing except frustration for an end user.
I have recently employed a basic technique for reporting errors such as this whereby I catch the error and at least add some user friendly text so that at least the end user understands what failed.
To do this I am catching an exception within each specific function (say for example a fetch function in a data access layer), then raising a new error with user friendly text about the function that has failed and probably cause, but then embedding the original exception in the new exception as the "inner exception" of that new exception.
This can then occur at each layer if necessary, each consumer of the lower level function adding it's own context to the error message, so that what reaches the UI is an increasingly user friendly error message.
Once the error reaches the UI - if necessary - it can then iterate through the nested exceptions in order to display an error message that firstly tells the user which operation failed, but also provides a bit of technical information about what actually went wrong.
e.g.
"The list of customer names your
requested could not be displayed."
"Obtaining the list of customers
you requested failed due to an error with the database."
"There was an error connecting to the
database when retrieving a list of
customers"
"Login failed for user xx"
My question is this: Is this horribly inefficient (all those nested exceptions)? I suspect it is not best practice so what should I be doing to achieve the same thing - or should I in fact be trying to achieve something better?
|
[
"It is just slightly horrible.\nIf you are showing an error to the end user, the user is supposed to be able to act about it. In \"The list of customer names your requested could not be displayed.\" case, your user will just think \"so what?\" On all of these cases, just display a \"something bad happened\" message. You do not even need to catch these exceptions, when something goes bad, let some global method (like application_error) handle it and display a generic message. When you or your user can do something about the error, catch it and do the thing or notify the user.\nBut you will want to log every error that you do not handle.\nBy the way, displaying information about the errors occuring may yield to security vulnerabilities. The less the attackers know about your system, the less likely they will find ways to hack it (remember those messages like \"Syntax error in sql statement: Select * From Users Where username='a'; drp database;--'...\" expected: 'drop' instead of 'drp'. They do not make sites like these anymore).\n",
"It is technically costly to throw new exceptions, however I won't make a big debate out of that since \"costly\" is relative - if you're throwing 100 such exceptions a minute, you will likely not see the cost; if you're throwing 1000 such exceptions a second, you very well may see a performance hit (hence, not really worth discussing here - performance is your call). \nI guess I have to ask why this approach is being used. Is it really true that you can add meaningful exception information at every level where an exception might be thrown and, if so, is it also true that the information will be:\n\nSomething you actually want to share with your user?\nSomething your user will be able to interpret, understand and use?\nWritten in such a way that it will not interfere with later reuse of low-level components, the utility of which might not be known when they were written?\n\nI ask about sharing information with your user because, in your example, your artificial stack starts by informing the user there was a problem authenticating on the database. For a potential hacker, that's a good piece of information that exposes something about what the operation was doing.\nAs for handing back an entire custom exception stack, I don't think it's something that will be useful to most (honest) users. If I'm having trouble getting a list of customer names, for instance, is it going to help me (as a user) to know there was a problem authenticating with the database? Unless you're using integrated authentication, and each of your users has an account, and the ability to contact a system administrator to find out why their account lacks privileges, probably not.\nI would begin by first deciding if there is really a semantic difference between the Framework exception thrown and the exception message you'd like to provide to the user. If there is, then go ahead and use a custom exception at the lowest level ('login failed' in your example). The steps following that, up to the actual presentation of the exception, don't really require any custom exceptions. The exception you're interested in has already been generated (the login has failed) - continuing to wrap that message at every level of the call stack serves no real purpose other than exposing your call stack to your users. For those \"middle\" steps, assuming any try/catch blocks are in place, a simple 'log and throw' strategy would work fine.\nReally, though, this strategy has another potential flaw: it forces upon the developer the responsibility for maintaining the custom exception standard that's been implemented. Since you can't possibly know every permutation of call hierarchy when writing low-level types (their \"clients\" might not even have been written yet), it seems unlikely that all developers - or even one developer - would remember to wrap and customize any error condition in every code block.\nInstead of working from the bottom up, I typically worry about the display of thrown exceptions as late in the process as possible (i.e. as close to the \"top\" of the call stack as possible). Normally, I don't try to replace any messages in exceptions thrown at low levels of my applications - particularly since the usage of those low level members tend to get more and more abstract the deeper the call gets. 
I tend to catch and log exceptions in the business tier and lower, then deal with displaying them in a comprehensible manner in the presentation tier.\nHere are a couple of decent articles on exception handling best practices:\nhttp://www.codeproject.com/KB/architecture/exceptionbestpractices.aspx\nhttp://aspalliance.com/1119\nJeez this got wordy...apologies in advance.\n",
"Yes, exceptions are expensive so there is a cost involved in catching and rethrowing or throwing a more useful exception.\nBut wait a minute! If the framework code or the library you're using throws an exception then things are already coming unstuck. Do you have non-functional requirements for how quickly an error message is propagated following an exception? I doubt it. Is it really a big deal? Something unforeseen and 'exceptional' has happened. The main thing is to present sensible, helpful information to the user. \nI think you're on the right track with what you're doing.\n",
"Of course it's horribly inefficient. But at the point that an exception occurs that is important enough to show to the end user, you should not care about that.\n",
"Where I work we have only a few reasons to catch exceptions. We only do it when...\n\nWe can do something about it - e.g We know that this can happen sometimes and we can rectify it in code as it happens (very rare).\nWe want to know it happens and where (then we just rethrow the exception).\nWe want to add a friendly message in which case we wrap the original exception in new excpetion derived from application exception and add a friendly message to that and then let it bubble up unchanged from that point.\n\nIn your example we'd probably just display \"Logon error occurred.\" anbd leave it at that while logging the real error and providing a why for the user to drill into the exception if they wanted too. (Perhaps a button on the error form).\n\nWe want to suppress the exception completely and keep going. Needless to say we only do this for expected exception types and only when there is no other way to detect the condition that generates the exception.\n\n",
"Generally when you're dealing with exceptions, performance and efficiency are the least of your worries. You should be more worried about doing something to help the user recover from the problem. If there was a problem writing a certain record to the database, either roll the changes back or at least dump the row information so the user doesn't lose it.\n"
] |
[
9,
8,
3,
2,
2,
1
] |
[] |
[] |
[
".net",
"exception_handling"
] |
stackoverflow_0000109790_.net_exception_handling.txt
|
Q:
Update a backend database on software update with Java
With which tool / library is it possible to update an existing database structure? When the software is updated, the database also needs to be changed. Because there can be different versions of the software, it should compare the current state of the database with the target state. It should:
add table columns, filling them with default values.
delete table columns
change the data type of columns, for example varchar(30) --> varchar(40)
add / remove indexes
add / alter / delete views
update some data in the tables
...
It should support the DBMS:
MS SQL Server 2000 - 2008
Oracle Server 8 - 11
MySQL
Because our software setup and application run in Java, the tool must also run in Java. What can we use?
Ideally it would scan our development database and save the structure in an XML file. Then we could add some data-modification SQL commands, and it could be run on the customer side as part of the update setup.
A:
Check out Liquibase. A database migrations tool, like dbmigrate, might also be worth a look.
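For a flavour of the approach, here is a minimal sketch of a Liquibase changelog; the table, column, and author names are invented, and the exact namespace/version attributes depend on your Liquibase release:
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
  <changeSet id="1" author="dev">
    <addColumn tableName="customer">
      <column name="status" type="varchar(40)" defaultValue="active"/>
    </addColumn>
    <createIndex tableName="customer" indexName="idx_customer_status">
      <column name="status"/>
    </createIndex>
  </changeSet>
</databaseChangeLog>
Liquibase records which change sets have already run in a tracking table in the target database, so the same changelog can be applied to customer databases that are at different versions.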
A:
Autopatch is what we are using. It works pretty well.
It allows sql patches, data patches, and java patches all applied to your sql database.
|
Update a backend database on software update with Java
|
With which tool / library is it possible to update an existing database structure? When the software is updated, the database also needs to be changed. Because there can be different versions of the software, it should compare the current state of the database with the target state. It should:
add table columns, filling them with default values.
delete table columns
change the data type of columns, for example varchar(30) --> varchar(40)
add / remove indexes
add / alter / delete views
update some data in the tables
...
It should support the DBMS:
MS SQL Server 2000 - 2008
Oracle Server 8 - 11
MySQL
Because our software setup and application run in Java, the tool must also run in Java. What can we use?
Ideally it would scan our development database and save the structure in an XML file. Then we could add some data-modification SQL commands, and it could be run on the customer side as part of the update setup.
|
[
"Check out Liquibase. A database migrations tool, like dbmigrate, might also be worth a lok.\n",
"Autopatch is what we are using. It works pretty well.\nIt allows sql patches, data patches, and java patches all applied to your sql database.\n"
] |
[
2,
1
] |
[] |
[] |
[
"alter_table",
"database",
"java",
"jdbc"
] |
stackoverflow_0000109746_alter_table_database_java_jdbc.txt
|
Q:
Use-cases for reflection
Recently I was talking to a co-worker about C++ and lamented that there was no way to take a string with the name of a class field and extract the field with that name; in other words, it lacks reflection. He gave me a baffled look and asked when anyone would ever need to do such a thing.
Off the top of my head I didn't have a good answer for him, other than "hey, I need to do it right now". So I sat down and came up with a list of some of the things I've actually done with reflection in various languages. Unfortunately, most of my examples come from my web programming in Python, and I was hoping that the people here would have more examples. Here's the list I came up with:
Given a config file with lines like
x = "Hello World!"
y = 5.0
dynamically set the fields of some config object equal to the values in that file. (This was what I wished I could do in C++, but actually couldn't do.)
When sorting a list of objects, sort based on an arbitrary attribute given that attribute's name from a config file or web request.
When writing software that uses a network protocol, reflection lets you call methods based on string values from that protocol. For example, I wrote an IRC bot that would translate
!some_command arg1 arg2
into a method call actions.some_command(arg1, arg2) and print whatever that function returned back to the IRC channel.
When using Python's __getattr__ function (which is sort of like method_missing in Ruby/Smalltalk) I was working with a class with a whole lot of statistics, such as late_total. For every statistic, I wanted to be able to add _percent to get that statistic as a percentage of the total things I was counting (for example, stats.late_total_percent). Reflection made this very easy.
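A rough sketch of that last trick, with made-up numbers:
class Stats(object):
    def __init__(self):
        self.late_total = 12
        self.total = 48

    def __getattr__(self, name):
        # only invoked when normal attribute lookup fails,
        # i.e. for the synthesized *_percent attributes
        if name.endswith('_percent'):
            return 100.0 * getattr(self, name[:-len('_percent')]) / self.total
        raise AttributeError(name)

print Stats().late_total_percent   # prints 25.0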
So can anyone here give any examples from their own programming experiences of times when reflection has been helpful? The next time a co-worker asks me why I'd "ever want to do something like that" I'd like to be more prepared.
A:
I can list the following usages for reflection:
Late binding
Security (introspect code for security reasons)
Code analysis
Dynamic typing (duck typing is not possible without reflection)
Metaprogramming
Some real-world usages of reflection from my personal experience:
Developed plugin system based on reflection
Used aspect-oriented programming model
Performed static code analysis
Used various Dependency Injection frameworks
...
Reflection is a good thing :)
A:
I've used reflection to get current method information for exceptions, logging, etc.
string src = MethodInfo.GetCurrentMethod().ToString();
string msg = "Big Mistake";
Exception newEx = new Exception(msg, ex);
newEx.Source = src;
instead of
string src = "MyMethod";
string msg = "Big MistakeA";
Exception newEx = new Exception(msg, ex);
newEx.Source = src;
It's just easier for copy/paste inheritance and code generation.
A:
I'm in a situation now where I have a stream of XML coming in over the wire and I need to instantiate an Entity object that will populate itself from elements in the stream. It's easier to use reflection to figure out which Entity object can handle which XML element than to write a gigantic, maintenance-nightmare conditional statement. There's clearly a dependency between the XML schema and how I structure and name my objects, but I control both so it's not a big problem.
A:
There are lots of times you want to dynamically instantiate and work with objects where the type isn't known until runtime, for example with OR-mappers or in a plugin architecture. Mocking frameworks use it, and it's also needed if you want to write a logging library and dynamically examine the type and properties of exceptions.
If I think a bit longer I can probably come up with more examples.
A:
I find reflection very useful if the input data (like XML) has a complex structure which is easily mapped to object instances, or if I need some kind of "is a" relationship between the instances.
As reflection is relatively easy in Java, I sometimes use it for simple data (key-value maps) where I have a small fixed set of keys. On the one hand it's simple to determine whether a key is valid (if the class has a setter setKey(String data)); on the other hand I can change the type of the (textual) input data and hide the transformation (e.g. a simple cast to int in getKey()), so the rest of the application can rely on correctly typed data.
If the type of some key-value pair changes for one object (e.g. from int to float), I only have to change it in the data object and its users, but don't have to remember to check the parser too. This might not be a sensible approach if performance is an issue...
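A rough sketch of that setter check (the data class passed in is whatever holds your key-value pairs):

public class KeyValidator {
    // a key is "valid" if the data class exposes a matching String setter;
    // assumes a non-empty key
    public static boolean isValidKey(Class<?> dataClass, String key) {
        String setter = "set" + Character.toUpperCase(key.charAt(0)) + key.substring(1);
        try {
            dataClass.getMethod(setter, String.class);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }
}

For example, isValidKey(SomeData.class, "key") returns true exactly when SomeData declares a public setKey(String) method.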
A:
Writing dispatchers. Twisted uses python's reflective capabilities to dispatch XML-RPC and SOAP calls. RMI uses Java's reflection api for dispatch.
Command line parsing. Building up a config object based on the command line parameters that are passed in.
When writing unit tests, it can be helpful to use reflection, though mostly I've used this to bypass access modifiers (Java).
A:
I've used reflection in C# when there was some internal or private method in the framework or a third party library that I wanted to access.
(Disclaimer: It's not necessarily a best-practice because private and internal methods may be changed in later versions. But it worked for what I needed.)
A:
Well, in statically-typed languages, you'd want to use reflection any time you need to do something "dynamic". It comes in handy for tooling purposes (scanning the members of an object). In Java it's used in JMX and dynamic proxies quite a bit. And there are tons of one-off cases where it's really the only way to go (pretty much anytime you need to do something the compiler won't let you do).
A:
I generally use reflection for debugging. Reflection can more easily and more accurately display the objects within the system than an assortment of print statements. In many languages that have first-class functions, you can even invoke the functions of the object without writing special code.
There is, however, a way to do what you want(ed). Use a hashtable. Store the fields keyed against the field name.
If you really wanted to, you could then create standard Get/Set functions, or create macros that do it on the fly. #define GetX() Get("X") sort of thing.
You could even implement your own imperfect reflection that way.
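A minimal sketch of that hashtable idea (C++, string-only fields for brevity; a real version would store a variant type):

#include <iostream>
#include <map>
#include <string>

// "imperfect reflection": fields are stored and looked up by name at runtime
class Config {
public:
    void Set(const std::string& name, const std::string& value) { fields[name] = value; }
    std::string Get(const std::string& name) const {
        std::map<std::string, std::string>::const_iterator it = fields.find(name);
        return it == fields.end() ? std::string() : it->second;
    }
private:
    std::map<std::string, std::string> fields;
};

int main() {
    Config c;
    c.Set("x", "Hello World!");  // values as parsed from a config file
    c.Set("y", "5.0");
    std::cout << c.Get("x") << std::endl;
    return 0;
}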
For the advanced user, if you can compile the code, it may be possible to enable debug output generation and use that to perform reflection.
|
Use-cases for reflection
|
Recently I was talking to a co-worker about C++ and lamented that there was no way to take a string with the name of a class field and extract the field with that name; in other words, it lacks reflection. He gave me a baffled look and asked when anyone would ever need to do such a thing.
Off the top of my head I didn't have a good answer for him, other than "hey, I need to do it right now". So I sat down and came up with a list of some of the things I've actually done with reflection in various languages. Unfortunately, most of my examples come from my web programming in Python, and I was hoping that the people here would have more examples. Here's the list I came up with:
Given a config file with lines like
x = "Hello World!"
y = 5.0
dynamically set the fields of some config object equal to the values in that file. (This was what I wished I could do in C++, but actually couldn't do.)
When sorting a list of objects, sort based on an arbitrary attribute given that attribute's name from a config file or web request.
When writing software that uses a network protocol, reflection lets you call methods based on string values from that protocol. For example, I wrote an IRC bot that would translate
!some_command arg1 arg2
into a method call actions.some_command(arg1, arg2) and print whatever that function returned back to the IRC channel.
When using Python's __getattr__ function (which is sort of like method_missing in Ruby/Smalltalk) I was working with a class with a whole lot of statistics, such as late_total. For every statistic, I wanted to be able to add _percent to get that statistic as a percentage of the total things I was counting (for example, stats.late_total_percent). Reflection made this very easy.
So can anyone here give any examples from their own programming experiences of times when reflection has been helpful? The next time a co-worker asks me why I'd "ever want to do something like that" I'd like to be more prepared.
|
[
"I can list following usage for reflection:\n\nLate binding\nSecurity (introspect code for security reasons)\nCode analysis\nDynamic typing (duck typing is not possible without reflection)\nMetaprogramming\n\nSome real-world usages of reflection from my personal experience:\n\nDeveloped plugin system based on reflection\nUsed aspect-oriented programming model\nPerformed static code analysis\nUsed various Dependency Injection frameworks\n...\n\nReflection is good thing :)\n",
"I've used reflection to get current method information for exceptions, logging, etc.\nstring src = MethodInfo.GetCurrentMethod().ToString();\nstring msg = \"Big Mistake\";\nException newEx = new Exception(msg, ex);\nnewEx.Source = src;\n\ninstead of \nstring src = \"MyMethod\";\nstring msg = \"Big MistakeA\";\nException newEx = new Exception(msg, ex);\nnewEx.Source = src;\n\nIt's just easier for copy/paste inheritance and code generation.\n",
"I'm in a situation now where I have a stream of XML coming in over the wire and I need to instantiate an Entity object that will populate itself from elements in the stream. It's easier to use reflection to figure out which Entity object can handle which XML element than to write a gigantic, maintenance-nightmare conditional statement. There's clearly a dependency between the XML schema and how I structure and name my objects, but I control both so it's not a big problem. \n",
"There are lot's of times you want to dynamically instantiate and work with objects where the type isn't known until runtime. For example with OR-mappers or in a plugin architecture. Mocking frameworks use it, if you want to write a logging-library and dynamically want to examine type and properties of exceptions.\nIf I think a bit longer I can probably come up with more examples.\n",
"I find reflection very useful if the input data (like xml) has a complex structure which is easily mapped to object-instances or i need some kind of \"is a\" relationship between the instances. \nAs reflection is relatively easy in java, I sometimes use it for simple data (key-value maps) where I have a small fixed set of keys. One one hand it's simple to determine if a key is valid (if the class has a setter setKey(String data)), on the other hand i can change the type of the (textual) input data and hide the transformation (e.g simple cast to int in getKey()), so the rest of the application can rely on correctly typed data.\nIf the type of some key-value-pair changes for one object (e.g. form int to float), i only have to change it in the data-object and its users but don't have to keep in mind to check the parser too. This might not be a sensible approach, if performance is an issue...\n",
"Writing dispatchers. Twisted uses python's reflective capabilities to dispatch XML-RPC and SOAP calls. RMI uses Java's reflection api for dispatch.\nCommand line parsing. Building up a config object based on the command line parameters that are passed in.\nWhen writing unit tests, it can be helpful to use reflection, though mostly I've used this to bypass access modifiers (Java).\n",
"I've used reflection in C# when there was some internal or private method in the framework or a third party library that I wanted to access.\n(Disclaimer: It's not necessarily a best-practice because private and internal methods may be changed in later versions. But it worked for what I needed.)\n",
"Well, in statically-typed languages, you'd want to use reflection any time you need to do something \"dynamic\". It comes in handy for tooling purposes (scanning the members of an object). In Java it's used in JMX and dynamic proxies quite a bit. And there are tons of one-off cases where it's really the only way to go (pretty much anytime you need to do something the compiler won't let you do).\n",
"I generally use reflection for debugging. Reflection can more easily and more accurately display the objects within the system than an assortment of print statements. In many languages that have first-class functions, you can even invoke the functions of the object without writing special code.\nThere is, however, a way to do what you want(ed). Use a hashtable. Store the fields keyed against the field name.\nIf you really wanted to, you could then create standard Get/Set functions, or create macros that do it on the fly. #define GetX() Get(\"X\") sort of thing.\nYou could even implement your own imperfect reflection that way.\nFor the advanced user, if you can compile the code, it may be possible to enable debug output generation and use that to perform reflection.\n"
] |
[
21,
4,
3,
2,
2,
1,
1,
1,
1
] |
[] |
[] |
[
"reflection"
] |
stackoverflow_0000049737_reflection.txt
|
Q:
How are you generating tests from specifications?
I came across a printed article by Bertrand Meyer where he states that tests can be generated from specifications. My development team does nothing like this, but it sounds like a good technique to consider. How are you generating tests from specifications? How would you describe the success you're having in discovering program faults via this method?
A:
This might be a reference to RSpec, which is a really clever way of developing tests as a series of requirements. I'm still getting used to it, but it's been very handy in both defining what I need to do and then ensuring I do it.
A:
@Tim Sullivan: coming from Bertrand Meyer, it can only be related to Eiffel :)
I think he's talking about ESpec. Given the name RSpec from the Ruby folk, I think we can give them the label "heavily inspired".
A:
I would say it depends on your specs. I have yet to work anywhere where the specs were good enough to create full unit tests from specifications - the level of detail just wasn't there. My managers always told us that if we specified to that level they could just ship the specs off to India and get it coded on the cheap ;)
A:
There are all sorts of ways to do it, ranging from what I'd consider an 'art form' (and not necessarily good art) all the way to mathematically derived tests from formal specifications. At the end of the day, your development team needs to decide what they can do based on the schedule they are working with. That being said, being able to test software against specs is a Good Thing.
Only your team can gauge the 'depth' of your tests, and that will probably be a function of how good your specs are. If they say something like, 'the login UI needs to provide a cancel button and a login button, and they need to work', your tests are going to be pretty general. But keep in mind - even very general tests are a Good Thing. Testing is a Good Thing. Too many developers have a bad attitude when it comes to testing, but at the end of the day, you're shipping software which should work, and to me, that means a lot.
The effectiveness your tests have in finding program faults will depend on the detail you put into them. What is especially nice about having test procedures written to specs is that you can test each build to the same level of detail as the previous build (typically referred to as a regression test).
|
How are you generating tests from specifications?
|
I came across a printed article by Bertrand Meyer where he states that tests can be generated from specifications. My development team does nothing like this, but it sounds like a good technique to consider. How are you generating tests from specifications? How would you describe the success you're having in discovering program faults via this method?
|
[
"This might be a reference to RSpec, which is a really clever way of developing tests as a series of requirements. I'm still getting used to it, but it's been very handy in both defining what I need to do and then ensuring I do it.\n",
"@Tim Sullivan from Bertrand Meyer it can only be related to Eiffel :)\nI think he's talking about ESpec. Given the name RSpec from the Ruby Folk, I think we can give them the label \"heavily inspired\".\n",
"I would say it depends on your specs. I have yet to work anywhere where the specs were good enough to create full unit tests from specifications - the level of detail just wasn't there. My managers always told us that if we specified to that level they could just ship the specs off to India and get it coded on the cheap ;)\n",
"There are all sorts of ways to do it, ranging from what I'd consider an 'art form' (and not necessarily good art) all the way to mathematically derived tests from formal specifications. At the end of the day, your development team needs to decided on what they can do based on the schedule they are working with. That being said, being able to test software against specs is a Good Thing.\nOnly your team can gauge the 'depth' of your tests, and that will probably be a function of how good your specs are. If they say something like, 'the login UI needs to provide a cancel button and a login button, and they need to work', your tests are going to be pretty general. But keep in mind - even very general tests are a Good Thing. Testing is a Good Thing. Too many developers have a bad attitude when it comes to testing, but at the end of the day, you're shipping software which should work, and to me, that means a lot.\nThe effectiveness your tests will having in finding program faults will depend on the detail you put into them. What is especially nice about having test procedures written to specs is that you can test each build to the same level of detail as the previous build (typically referred to as a regression test).\n"
] |
[
4,
2,
1,
0
] |
[] |
[] |
[
"automated_tests",
"faults",
"specifications",
"testing"
] |
stackoverflow_0000029100_automated_tests_faults_specifications_testing.txt
|
Q:
How to logically organize recurring tasks?
What's the best way to create recurring tasks?
Should I create some special syntax and parse it, similar to cron jobs on Linux, or should I rather just use a cron job that runs every hour to create more of those recurring tasks with no end?
Keep in mind that you can have endless recurring tasks as well as tasks with an end date.
A:
Quartz is an open source job scheduling system that uses cron expressions to control the periodicity of the job executions.
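For illustration, a minimal sketch in the older Quartz 1.x style (CreateTasksJob is a hypothetical class implementing org.quartz.Job; newer Quartz versions use a builder API instead):

import org.quartz.CronTrigger;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class RecurringTasks {
    public static void main(String[] args) throws Exception {
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        scheduler.start();
        JobDetail job = new JobDetail("createTasks", Scheduler.DEFAULT_GROUP, CreateTasksJob.class);
        // fire at the top of every hour; a trigger end time can model
        // recurring tasks that have an end date
        CronTrigger trigger = new CronTrigger("hourly", Scheduler.DEFAULT_GROUP, "0 0 * * * ?");
        scheduler.scheduleJob(job, trigger);
    }
}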
A:
My approach is always "minimum effort for maximum effect" (or best bang per buck).
If it can be done with cron, why not use cron? I'd consider it wasted effort to re-implement cron just for the fun of it so, unless you really need features that cron doesn't have, stick with it.
|
How to logically organize recurring tasks?
|
What's the best way to create recurring tasks?
Should I create some special syntax and parse it, similar to cron jobs on Linux, or should I rather just use a cron job that runs every hour to create more of those recurring tasks with no end?
Keep in mind that you can have endless recurring tasks as well as tasks with an end date.
|
[
"Quartz is an open source job scheduling system that uses cron expressions to control the periodicity of the job executions.\n",
"My approach is always \"minimum effort for maximum effect\" (or best bang per buck).\nIf it can be done with cron, why not use cron? I'd consider it wasted effort to re-implement cron just for the fun of it so, unless you really need features that cron doesn't have, stick with it.\n"
] |
[
1,
1
] |
[] |
[] |
[
"calendar",
"logic",
"task"
] |
stackoverflow_0000109776_calendar_logic_task.txt
|
Q:
Auto-format structured data (phone, date) using jQuery plugin (or failing that vanilla JavaScript)
I like jQuery and I was wondering if anyone has used a good plugin or (non-jQuery) JavaScript library that allows for auto-formatting of structured fields like phone numbers or dates. I know of the jquery-ui-datepicker plugin, but it's not what I am looking for here. You may type in a phone number as 123, which then becomes (123); additional numbers will be formatted as (123) 456 7890 Ext. 123456. If you press delete, the auto-formatting disappears automatically, and repositioning the cursor, say, after (123) and pressing delete will remove the 3 and make the rest (124) 567 8901 Ext. 23456. The ones that I have played with appear unreliable.
A:
Does the Masked Input plugin do what you need, or is that the one you have already found to be unreliable?
A:
Allan,
I do believe your best bet would be to use regular expressions inside of two separate formatting methods in order to achieve the desired results. This will be rather straightforward for phone numbers, and I'll post a code example if one isn't posted by the time I sit back and have 10 minutes straight to write something up. Perhaps for the date field, you can use something like the jQuery UI Datepicker instead? http://marcgrabanski.com/pages/code/jquery-ui-datepicker
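In the meantime, a rough, unpolished sketch of the phone part (the #phone selector is made up, and it reformats on each keystroke without the cursor-repositioning handling you describe):

// format the digits typed so far as (123) 456 7890 Ext. 12345
function formatPhone(raw) {
  var d = raw.replace(/\D/g, "");
  var out = "";
  if (d.length > 0) out = "(" + d.substring(0, 3) + ")";
  if (d.length > 3) out += " " + d.substring(3, 6);
  if (d.length > 6) out += " " + d.substring(6, 10);
  if (d.length > 10) out += " Ext. " + d.substring(10);
  return out;
}

$("#phone").keyup(function () {
  this.value = formatPhone(this.value);
});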
HTH,
/sf
|
Auto-format structured data (phone, date) using jQuery plugin (or failing that vanilla JavaScript)
|
I like jQuery and I was wondering if anyone has used a good plugin or (non-jQuery) JavaScript library that allows for auto-formatting of structured fields like phone numbers or dates. I know of the jquery-ui-datepicker plugin, but it's not what I am looking for here. You may type in a phone number as 123, which then becomes (123); additional numbers will be formatted as (123) 456 7890 Ext. 123456. If you press delete, the auto-formatting disappears automatically, and repositioning the cursor, say, after (123) and pressing delete will remove the 3 and make the rest (124) 567 8901 Ext. 23456. The ones that I have played with appear unreliable.
|
[
"Does the Masked Input plugin do what you need or that one you have already found to be unreliable?\n",
"Allan,\nI do believe your best bet would be to use regular expressions inside of two separate formatting methods in order to achieve the desired results. This will be rather straight forward for phone numbers and I'll post a code example if one isn't posted by the time I sit back and have 10 minutes straight to write something up. Perhaps for the date field, you can use something like the jQuery UI Datepicker instead? http://marcgrabanski.com/pages/code/jquery-ui-datepicker\nHTH,\n/sf\n"
] |
[
38,
1
] |
[] |
[] |
[
"html",
"javascript",
"jquery",
"user_interface"
] |
stackoverflow_0000109854_html_javascript_jquery_user_interface.txt
|
Q:
Perl JOIN-like behavior in Oracle?
I have two tables, let's call them PERSON and NAME.
PERSON
person_id
dob
NAME
name_id
person_id
name
And let's say that the NAME table has data like:
name_id person_id name
1 1 Joe
2 1 Fred
3 1 Sam
4 2 Jane
5 2 Kim
I need a query (Oracle 10g) that will return
person_id names
1 Joe, Fred, Sam
2 Jane, Kim
Is there a simple way to do this?
Update:
According to the article that figs was kind enough to provide, starting in 9i you can do:
SELECT wmsys.wm_concat(dname) departments FROM dept;
For this example, the answer becomes:
SELECT person_id, wmsys.wm_concat(name) FROM name GROUP BY person_id
A:
The short answer is to use a PL/SQL function. For more details, have a look in this post.
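In outline, such a function just walks the detail rows with a cursor and concatenates them; a sketch against the tables above:

CREATE OR REPLACE FUNCTION concat_names(p_person_id IN NUMBER)
  RETURN VARCHAR2
IS
  l_names VARCHAR2(4000);
BEGIN
  FOR r IN (SELECT name FROM name
             WHERE person_id = p_person_id
             ORDER BY name_id) LOOP
    IF l_names IS NOT NULL THEN
      l_names := l_names || ', ';
    END IF;
    l_names := l_names || r.name;
  END LOOP;
  RETURN l_names;
END;
/

SELECT person_id, concat_names(person_id) AS names
  FROM name
 GROUP BY person_id;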
|
Perl JOIN-like behavior in Oracle?
|
I have two tables, let's call them PERSON and NAME.
PERSON
person_id
dob
NAME
name_id
person_id
name
And let's say that the NAME table has data like:
name_id person_id name
1 1 Joe
2 1 Fred
3 1 Sam
4 2 Jane
5 2 Kim
I need a query (Oracle 10g) that will return
person_id names
1 Joe, Fred, Sam
2 Jane, Kim
Is there a simple way to do this?
Update:
According to the article that figs was kind enough to provide, starting in 9i you can do:
SELECT wmsys.wm_concat(dname) departments FROM dept;
For this example, the answer becomes:
SELECT person_id, wmsys.wm_concat(name) FROM name GROUP BY person_id
|
[
"The short answer is to use a PL/SQL function. For more details, have a look in this post.\n"
] |
[
0
] |
[] |
[] |
[
"join",
"oracle"
] |
stackoverflow_0000105836_join_oracle.txt
|
Q:
Getting the back/fwd history of the WebBrowser Control
In C# WinForms, what's the proper way to get the backward/forward history stacks for the System.Windows.Forms.WebBrowser?
A:
Check out http://www.bsalsa.com/downloads.html. This is a series of Delphi components (free source code; you can see an example of this here: http://staruml.cvs.sourceforge.net/staruml/staruml/staruml/components/plastic-components/src/embeddedwb.pas?revision=1.1&view=markup - it's the StarUML project's code) and they have, among other things, a way to get at the history, favorites, etc. using the IE MSHTML interfaces. It's written in Object Pascal but it shouldn't be too hard to figure out what's going on. If you download the "Embedded Web Browser Components Package" take a look at the stuff in EmbeddedWB_D2005\Source - there's all sorts of goodies there.
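If you end up rolling your own instead, a rough C# sketch of tracking visits yourself (webBrowser1 is the usual designer name; the hard part, left out here, is telling a user's Back click apart from a normal navigation):

private List<Uri> history = new List<Uri>();

private void webBrowser1_Navigated(object sender, WebBrowserNavigatedEventArgs e)
{
    // record each navigation; a real implementation must detect
    // back/forward moves to avoid re-appending old entries
    history.Add(e.Url);
}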
|
Getting the back/fwd history of the WebBrowser Control
|
In C# WinForms, what's the proper way to get the backward/forward history stacks for the System.Windows.Forms.WebBrowser?
|
[
"Check out http://www.bsalsa.com/downloads.html. This is a series of Delphi components (free source code, you can see an example of this here: http://staruml.cvs.sourceforge.net/staruml/staruml/staruml/components/plastic-components/src/embeddedwb.pas?revision=1.1&view=markup - it's the starUML projects code) and they have, among other things, a way to get at the history, favorites, etc using the IE MSHTML interfaces. It's written in Object Pascal but it shouldn't be too hard to figure out what's going on. If you download the \"Embedded Web Browser Components Package\" take a look at the stuff in EmbeddedWB_D2005\\Source - there's all sorts of goodies there.\n"
] |
[
4
] |
[
"It doesn't look like it's possible.\nMy suggestion would be to catch the Navigated event and maintain your own list. A possible problem with that is when the user clicks back in the browser, you don't know to unwind the stack.\n"
] |
[
-1
] |
[
".net",
"c#",
"navigation",
"webbrowser_control",
"winforms"
] |
stackoverflow_0000054758_.net_c#_navigation_webbrowser_control_winforms.txt
|
Q:
How to branch a virtual server in Hyper-V?
We use Hyper-V extensively in our development environment. Each developer has a virtual server that they own and then we have a bunch of build, test, R&D, and staging virtual servers.
Is there any documented or best practice way to duplicate a virtual machine in Hyper-V?
What I would really like to be able to do is to split a machine from a snapshot and have multiple virtual machines that both roll up underneath a common root machines snapshot.
I don't mind having to run some tools or having to rejoin the domain, I just want the ability to spawn new machines from an existing snapshot.
Is this possible?
A:
I think the real problem is the duplication of servers on the Network - plus that evil kerberos-keys-getting-out-of-date issue that any offline copy of a Virtual Server can suffer.
I'd suggest creating a Sysprepped image as the base and then creating multiple machines from that. I don't think branching servers would be very wise (at least not on the same network).
Otherwise I'd just copy and paste the VHD to a new path and create a new server for each branch - keeping them in their own network space (and IP range).
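For reference, on Server 2008 the generalize pass is what strips out the machine-specific identity, along the lines of:

C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

After it shuts the VM down, each copy of the VHD boots with a fresh identity.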
|
How to branch a virtual server in Hyper-V?
|
We use Hyper-V extensively in our development environment. Each developer has a virtual server that they own and then we have a bunch of build, test, R&D, and staging virtual servers.
Is there any documented or best practice way to duplicate a virtual machine in Hyper-V?
What I would really like to be able to do is to split a machine from a snapshot and have multiple virtual machines that both roll up underneath a common root machines snapshot.
I don't mind having to run some tools or having to rejoin the domain, I just want the ability to spawn new machines from an existing snapshot.
Is this possible?
|
[
"I think the real problem is the duplication of servers on the Network - plus that evil kerberos-keys-getting-out-of-date issue that any offline copy of a Virtual Server can suffer.\nI'd suggest creating a SysPreped image as the base and then create multiple machines from that. I don't think branching servers would be very wise (at least not on the same network).\nOtherwise I'd just copy and paste the VHD to a new path and create a new server for each branch - keeping them in their own network space (and IP range).\n"
] |
[
2
] |
[] |
[] |
[
"hyper_v",
"virtualization"
] |
stackoverflow_0000109731_hyper_v_virtualization.txt
|
Q:
When and why should $_REQUEST be used instead of $_GET / $_POST / $_COOKIE?
Question in the title.
And what happens when all 3 of $_GET[foo], $_POST[foo] and $_COOKIE[foo] exist? Which one of them gets included in $_REQUEST?
A:
I'd say never.
If I wanted something to be set via the various methods, I'd code for each of them to remind myself that I'd done it that way - otherwise you might end up with things being overwritten without realising.
Shouldn't it work like this:
$_GET = non destructive actions (sorting, recording actions, queries)
$_POST = destructive actions (deleting, updating)
$_COOKIE = trivial settings (stylesheet preferences etc)
$_SESSION = non trivial settings (username, logged in?, access levels)
A:
Sometimes you might want the same script to be called in several different ways. A form submit and an AJAX call come to mind. In most cases, however, it's better to be explicit.
Also, see http://docs.php.net/manual/en/ini.core.php#ini.request-order on how the different sources of variables overwrite each other if there is a name collision.
A:
$_REQUEST is only a shortcut to prevent you from testing post, get and cookie if the data can come from any of these.
There are some pitfalls:
data is taken from GET, POST and finally COOKIE. The last overrides the first, so be careful with that.
REST architectures require you to separate the POST and GET semantics, you can't rely on $_REQUEST in that case.
Nevertheless, if you know what you're doing, then it's just another handy PHP trick.
I'd use it if I wanted to quickly update a var that may come from several sources, for example:
In your controller, to decide what page to serve without checking if the request comes from a form action or a hypertext link.
To check if a session is still active regardless of the way the session id is transmitted.
A:
To answer the "what happens when all 3 exist" question, the answer is "it depends."
PHP auto-fills $_REQUEST based on the request_order directive (or variables_order if request_order is absent) in PHP.INI. The default is usually "GPC" which means GET is loaded first, then POST is loaded (overwriting GET if there is a collision), then cookies are loaded (overwriting get/post if there is a collision). However, you can change this directive in the PHP.INI file. For example, changing it to "CPG" makes cookies load first, then post, then get.
As far as when to use it? I'll echo the sentiment of "Never." You already don't trust the user, so why give the user more tools? As the developer, you should know where you expect the data to come from. It's all about reducing your attack surface area.
A:
When you're not certain where the values come from, or when you use both methods and want to loop over all values passed by both POST and GET.
A:
I use POST when I don't want people to have easy access to what is being passed and I use GET when I don't mind them seeing the value in the url. I generally don't use cookies for much as I find SESSION to be fine for persisting values (although having a proper registry is the best way to utilize that).
|
When and why should $_REQUEST be used instead of $_GET / $_POST / $_COOKIE?
|
Question in the title.
And what happens when all 3 of $_GET[foo], $_POST[foo] and $_COOKIE[foo] exist? Which one of them gets included in $_REQUEST?
|
[
"I'd say never.\nIf I wanted something to be set via the various methods, I'd code for each of them to remind myself that I'd done it that way - otherwise you might end up with things being overwritten without realising.\nShouldn't it work like this:\n$_GET = non destructive actions (sorting, recording actions, queries)\n$_POST = destructive actions (deleting, updating)\n$_COOKIE = trivial settings (stylesheet preferences etc)\n$_SESSION = non trivial settings (username, logged in?, access levels)\n",
"Sometimes you might want the same script to be called with several different ways. A form submit and an AJAX call comes to mind. In most cases, however, it´s better to be explicit.\nAlso, see http://docs.php.net/manual/en/ini.core.php#ini.request-order on how the different sources of variables overwrite each other if there is a name collision.\n",
"$_REQUEST is only a shortcut to prevent you from testing post, get and cookie if the data can come from any of these.\nThere are some pitfalls:\n\ndata is taken from GET, POST and finally COOKIE. The last overrides the first, so be careful with that.\nREST architectures require you to separate the POST and GET semantics, you can't rely on $_REQUEST in that case.\n\nNevertheless, if you know what you're doing, then it's just another handy PHP trick.\nI'd use it if I wanted to quickly update a var that may come from several sources, for example:\n\nIn your controller, to decide what page to serve without checking if the request comes from a form action or a hypertext link.\n\nTo check if a session is still active regardless of the way the session id is transmitted.\n\n\n",
"To answer the \"what happens when all 3 exist\" question, the answer is \"it depends.\"\nPHP auto-fills $_REQUEST based on the request_order directive (or variables_order if request_order is absent) in PHP.INI. The default is usually \"GPC\" which means GET is loaded first, then POST is loaded (overwriting GET if there is a collision), then cookies are loaded (overwriting get/post if there is a collision). However, you can change this directive in the PHP.INI file. For example, changing it to \"CPG\" makes cookies load first, then post, then get.\nAs far as when to use it? I'll echo the sentiment of \"Never.\" You already don't trust the user, so why give the user more tools? As the developer, you should know where you expect the data to come from. It's all about reducing your attack surface area.\n",
"When you're not certain where the values are populated or when you use them both and want to loop over all values by both POST and GET methods.\n",
"I use POST when I don't want people to have easy access to what is being passed and I use GET when I don't mind them seeing the value in the url. I generally don't use cookies for much as I find SESSION to be fine for persisting values (although having a proper registry is the best way to utilize that).\n"
] |
[
53,
7,
5,
4,
2,
1
] |
[] |
[] |
[
"php"
] |
stackoverflow_0000107683_php.txt
|
Q:
Best practices for configuring Apache / Tomcat
We are currently using Apache 2.2.3 and Tomcat 5 (Embedded in JBoss 4.2.2) using mod_proxy_jk as the connector.
Can someone shed some light on the the correct way to calculate / configure the values below (as well as anything else that may be relevant). Both Apache and Tomcat are running on separate machines and have copious amounts of ram (4gb each).
Relevant server.xml portions:
<Connector port="8009"
address="${jboss.bind.address}"
protocol="AJP/1.3"
emptySessionPath="true"
enableLookups="false"
redirectPort="8443"
maxThreads="320"
connectionTimeout="45000"
/>
Relevant httpd.conf portions:
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 0
</IfModule>
A:
You should consider the workload the servers might get.
The most important factor might be the number of simultaneously connected clients at peak times. Try to determine it and tune your settings in a way where:
there are enough processing threads in both Apache and Tomcat that they don't need to spawn new threads when the server is heavily loaded
there are not way more processing threads in the servers than needed as they would waste resources.
With this kind of setup you can minimize the internal maintenance overhead of the servers, that could help a lot, especially when your load is sporadic.
For example consider an application where you have ~300 new requests/second. Each request requires on average 2.5 seconds to serve. It means that at any given time you have ~750 requests that need to be handled simultaneously. In this situation you probably want to tune your servers so that they have ~750 processing threads at startup and you might want to add something like ~1000 processing threads at maximum to handle extremely high loads.
Also consider exactly what you require a thread for. In the previous example each request was independent from the others; there was no session tracking used. In a more "web-ish" scenario you might have users logged in to your website, and depending on the software used, Apache and/or Tomcat might need to use the same thread to serve the requests that come in one session. In this case, you might need more threads. As far as I know Tomcat, though, you won't really need to worry about this, as it works with thread pools internally anyway.
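Translated into configuration, the ~750/~1000 example above might look something like this (numbers purely illustrative; note that ServerLimit must be raised along with MaxClients, and maxThreads on the Tomcat AJP connector should be raised to match):
<IfModule prefork.c>
StartServers 750
MinSpareServers 100
MaxSpareServers 750
ServerLimit 1000
MaxClients 1000
MaxRequestsPerChild 0
</IfModule>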
A:
MaxClients
This is the fundamental cap of parallel client connections your apache should handle at once.
With prefork, only one request can be handled per process. Therefore the whole apache can process at most $MaxClients requests in the time it takes to handle a single request. Of course, this ideal maximum can only be reached if the application needs less than 1/$MaxClients resources per request.
If, for example, the application takes a second of cpu-time to answer a single request, setting MaxClients to four will limit your throughput to four requests per second: Each request uses up an apache connection and apache will only handle four at a time. But if the server has only two CPUs, not even this can be reached, because every wall-clock second only has two cpu seconds, but the requests would need four cpu seconds.
MinSpareServers
This tells apache how many idle processes should hang around. The bigger this number the more burst load apache can swallow before needing to spawn extra processes, which is expensive and thus slows down the current request.
The correct setting of this depends on your workload. If you have pages with many sub-requests (pictures, iframes, javascript, css) then hitting a single page might use up many more processes for a short time.
MaxSpareServers
Having too many unused apache processes hanging around just wastes memory, thus apache uses the MaxSpareServers number to limit the amount of spare processes it is holding in reserve for bursts of requests.
MaxRequestsPerChild
This limits the number of requests a single process will handle throughout its lifetime. If you are very concerned about stability, you should put an actual limit here to continually recycle the apache processes to prevent resource leaks from affecting the system.
StartServers
This is just the amount of processes apache starts by default. Set this to the usual amount of running apache processes to reduce warm-up time of your system. Even if you ignore this setting, apache will use the Min-/MaxSpareServers values to spawn new processes as required.
More information
See also the documentation for apache's multi-processing modules.
A:
The default settings are generally decent starting points to see what your application is really going to need. I don't know how much traffic you're expecting, so guessing at the MaxThreads, MaxClients, and MaxServers is a bit difficult. I can tell you that most of the customers I deal with (I work for a Linux web host that deals mainly with customers running Java apps in Tomcat) use the default settings for quite some time without too many tweaks needed.
If you're not expecting much traffic, then these settings being "too high" really shouldn't affect you too much either. Apache's not going to allocate resources for the whole 256 potential clients unless it becomes necessary. The same goes for Tomcat as well.
|
Best practices for configuring Apache / Tomcat
|
We are currently using Apache 2.2.3 and Tomcat 5 (Embedded in JBoss 4.2.2) using mod_proxy_jk as the connector.
Can someone shed some light on the the correct way to calculate / configure the values below (as well as anything else that may be relevant). Both Apache and Tomcat are running on separate machines and have copious amounts of ram (4gb each).
Relevant server.xml portions:
<Connector port="8009"
address="${jboss.bind.address}"
protocol="AJP/1.3"
emptySessionPath="true"
enableLookups="false"
redirectPort="8443"
maxThreads="320"
connectionTimeout="45000"
/>
Relevant httpd.conf portions:
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 0
</IfModule>
|
[
"You should consider the workload the servers might get.\nThe most important factor might be the number of simultaneously connected clients at peak times. Try to determine it and tune your settings in a way where:\n\nthere are enough processing threads in both Apache and Tomcat that they don't need to spawn new threads when the server is heavily loaded\nthere are not way more processing threads in the servers than needed as they would waste resources.\n\nWith this kind of setup you can minimize the internal maintenance overhead of the servers, that could help a lot, especially when your load is sporadic.\nFor example consider an application where you have ~300 new requests/second. Each request requires on average 2.5 seconds to serve. It means that at any given time you have ~750 requests that need to be handled simultaneously. In this situation you probably want to tune your servers so that they have ~750 processing threads at startup and you might want to add something like ~1000 processing threads at maximum to handle extremely high loads.\nAlso consider for exactly what do you require a thread for. In the previous example each request was independent from the others, there was no session tracking used. In a more \"web-ish\" scenario you might have users logged in to your website, and depending on your software used, Apache and/or Tomcat might need to use the same thread to serve the requests that come in one session. In this case, you might need more threads. However as I know Tomcat at least, you won't really need to consider this as it works with thread pools internally anyways.\n",
"MaxClients\nThis is the fundamental cap of parallel client connections your apache should handle at once.\nWith prefork, only one request can be handled per process. Therefore the whole apache can process at most $MaxClients requests in the time it takes to handle a single request. Of course, this ideal maximum can only be reached if the application needs less than 1/$MaxClients resources per request.\nIf, for example, the application takes a second of cpu-time to answer a single request, setting MaxClients to four will limit your throughput to four requests per second: Each request uses up an apache connection and apache will only handle four at a time. But if the server has only two CPUs, not even this can be reached, because every wall-clock second only has two cpu seconds, but the requests would need four cpu seconds.\nMinSpareServers\nThis tells apache how many idle processes should hang around. The bigger this number the more burst load apache can swallow before needing to spawn extra processes, which is expensive and thus slows down the current request.\nThe correct setting of this depends on your workload. If you have pages with many sub-requests (pictures, iframes, javascript, css) then hitting a single page might use up many more processes for a short time.\nMaxSpareServers\nHaving too many unused apache processes hanging around just wastes memory, thus apache uses the MaxSpareServers number to limit the amount of spare processes it is holding in reserve for bursts of requests.\nMaxRequestsPerChild\nThis limits the number of requests a single process will handle throughout its lifetime. If you are very concerned about stability, you should put an actual limit here to continually recycle the apache processes to prevent resource leaks from affecting the system.\nStartServers\nThis is just the amount of processes apache starts by default. Set this to the usual amount of running apache processes to reduce warm-up time of your system. Even if you ignore this setting, apache will use the Min-/MaxSpareServers values to spawn new processes as required.\nMore information\nSee also the documentation for apache's multi-processing modules.\n",
"The default settings are generally decent starting points to see what your applications is really going to need. I don't know how much traffic you're expecting, so guessing at the MaxThreads, MaxClients, and MaxServers is a bit difficult. I can tell you that most of the customers I deal with (work for a linux web host, that deals mainly with customers running Java apps in Tomcat) use the default settings for quite some time without too many tweaks needed.\nIf you're not expecting much traffic, then these settings being \"too high\" really shouldn't effect you too much either. Apache's not going to allocate resources for the whole 256 potential clients unless it becomes necessary. The same goes for Tomcat as well. \n"
] |
[
7,
5,
1
] |
[] |
[] |
[
"apache",
"java",
"jboss",
"mod_proxy",
"tomcat"
] |
stackoverflow_0000105754_apache_java_jboss_mod_proxy_tomcat.txt
|
Q:
Best way to stamp an image with another image to create a watermark in ASP.NET?
Anyone know? I want to be able to stamp an image with another image as a watermark on the fly, and also to do large batches. Any type of existing library or a technique you know of would be great.
A:
This will answer your question:
http://www.codeproject.com/KB/GDI-plus/watermark.aspx
Good luck!
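If the link ever rots, the usual GDI+ approach is only a few calls; a minimal C# sketch with placeholder paths:

using System.Drawing;
using System.Drawing.Imaging;

class Watermarker {
    static void Stamp(string srcPath, string destPath, string markPath) {
        using (Image photo = Image.FromFile(srcPath))
        using (Image logo = Image.FromFile(markPath))
        using (Graphics g = Graphics.FromImage(photo)) {
            // stamp the watermark in the bottom-right corner
            g.DrawImage(logo, new Point(photo.Width - logo.Width, photo.Height - logo.Height));
            photo.Save(destPath, ImageFormat.Jpeg);
        }
    }
}

For large batches, wrap the call in a loop over Directory.GetFiles.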
A:
I have had good luck with ImageMagick. It has an API for .NET too.
A:
here is my full article: http://forums.asp.net/p/1323176/2634923.aspx
use the SDK Command Prompt and navigate to the folder containing the source code below... then compile the code using
vbc.exe watermark.vb /t:exe /out:watermark.exe
this will create an exe in the folder.. the exe accepts two parameters:
ex.
watermark.exe "c:\source folder" "c:\destination folder"
this will iterate through the parent folder and all subfolders. all found jpegs will be watermarked with the image you specify in the code and copied to the destination folder. The original image will stay untouched.
' watermark.vb --
Imports System
Imports System.Drawing
Imports System.Drawing.Drawing2D
Imports System.Drawing.Imaging
Imports System.IO
Namespace WatermarkManager
Class Watermark
Shared sourceDirectory As String = "", destinationDirectory As String = ""
Overloads Shared Sub Main(ByVal args() As String)
'See if an argument was passed from the command line
If args.Length = 2 Then
sourceDirectory = args(0)
destinationDirectory = args(1)
' make sure sourceFolder is legit
If Directory.Exists(sourceDirectory) = False Then
TerminateExe("Invalid source folder. Folder does not exist.")
Exit Sub
End If
' try and create destination folder
Try
Directory.CreateDirectory(destinationDirectory)
Catch
TerminateExe("Error creating destination folder. Invalid path cannot be created.")
Exit Sub
End Try
' start the magic
CreateHierarchy(sourceDirectory,destinationDirectory)
ElseIf args.Length = 1 Then
If args(0) = "/?" Then
DisplayHelp()
Else
TerminateExe("expected: watermark.exe [source path] [destination path]")
End If
Exit Sub
Else
TerminateExe("expected: watermark.exe [source path] [destination path]")
Exit Sub
End If
TerminateExe()
End Sub
Shared Sub CreateHierarchy(ByVal sourceDirectory As String, ByVal destinationDirectory As String)
Dim tmpSourceDirectory As String = sourceDirectory
' copy directory hierarchy to destination folder
For Each Item As String In Directory.GetDirectories(sourceDirectory)
Directory.CreateDirectory(destinationDirectory + Item.SubString(Item.LastIndexOf("\")))
If hasSubDirectories(Item) Then
CreateSubDirectories(Item)
End If
Next
' reset destinationDirectory
destinationDirectory = tmpSourceDirectory
' now that folder structure is set up, let's iterate through files
For Each Item As String In Directory.GetDirectories(sourceDirectory)
SearchDirectory(Item)
Next
End Sub
Shared Function hasSubDirectories(ByVal path As String) As Boolean
Dim subdirs() As String = Directory.GetDirectories(path)
If subdirs.Length > 0 Then
Return True
End If
Return False
End Function
Shared Sub CheckFiles(ByVal path As String)
For Each f As String In Directory.GetFiles(path)
If f.SubString(f.Length-3).ToLower = "jpg" Then
WatermarkImage(f)
End If
Next
End Sub
Shared Sub WatermarkImage(ByVal f As String)
Dim img As System.Drawing.Image = System.Drawing.Image.FromFile(f)
Dim graphic As Graphics
Dim indexedImage As New Bitmap(img)
graphic = Graphics.FromImage(indexedImage)
graphic.DrawImage(img, 0, 0, img.Width, img.Height)
img = indexedImage
graphic.SmoothingMode = SmoothingMode.AntiAlias
graphic.InterpolationMode = InterpolationMode.HighQualityBicubic
Dim x As Integer, y As Integer
Dim source As New Bitmap("c:\watermark.png")
Dim logo As New Bitmap(source, CInt(img.Width / 3), CInt(img.Width / 3))
source.Dispose()
x = img.Width - logo.Width
y = img.Height - logo.Height
graphic.DrawImage(logo, New Point(x,y))
logo.Dispose()
img.Save(destinationDirectory+f.SubString(f.LastIndexOf("\")), ImageFormat.Jpeg)
indexedImage.Dispose()
img.Dispose()
graphic.Dispose()
Console.WriteLine("successfully watermarked " + f.SubString(f.LastIndexOf("\")+1))
Console.WriteLine("saved to: " + vbCrLf + destinationDirectory + vbCrLf)
End Sub
Shared Sub SearchDirectory(ByVal path As String)
destinationDirectory = destinationDirectory + path.SubString(path.LastIndexOf("\"))
CheckFiles(path)
For Each Item As String In Directory.GetDirectories(path)
destinationDirectory += Item.SubString(Item.LastIndexOf("\"))
CheckFiles(Item)
If hasSubDirectories(Item) Then
destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf("\"))
SearchDirectory(Item)
destinationDirectory += Item.SubString(Item.LastIndexOf("\"))
End If
destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf("\"))
Next
destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf("\"))
End Sub
Shared Sub CreateSubDirectories(ByVal path As String)
destinationDirectory = destinationDirectory + path.SubString(path.LastIndexOf("\"))
For Each Item As String In Directory.GetDirectories(path)
destinationDirectory += Item.SubString(Item.LastIndexOf("\"))
Directory.CreateDirectory(destinationDirectory)
Console.WriteLine(vbCrlf + "created: " + vbCrlf + destinationDirectory)
If hasSubDirectories(Item) Then
destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf("\"))
CreateSubDirectories(Item)
destinationDirectory += Item.SubString(Item.LastIndexOf("\"))
End If
destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf("\"))
Next
destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf("\"))
End Sub
Shared Sub TerminateExe(Optional ByVal msg As String = "")
If msg ""
Console.WriteLine(vbCrLf + "AN ERROR HAS OCCURRED //" + vbCrLf + msg)
End If
Console.WriteLine(vbCrLf + "Press [enter] to close...")
'Console.Read()
End Sub
Shared Sub DisplayHelp()
Console.WriteLine("watermark.exe accepts two parameters:" + vbCrLf + " - [source folder]")
Console.WriteLine(" - [destination folder]")
Console.WriteLine("ex." + vbCrLf + "watermark.exe ""c:\web_projects\dclr source"" ""d:\new_dclr\copy1 dest""")
Console.WriteLine(vbCrLf + "Press [enter] to close...")
Console.Read()
End Sub
End Class
End Namespace
|
Best way to stamp an image with another image to create a watermark in ASP.NET?
|
Anyone know? I want to be able to stamp an image with another image as a watermark on the fly, and also to do large batches. Any type of existing library or a technique you know of would be great.
|
[
"This will answer your question:\nhttp://www.codeproject.com/KB/GDI-plus/watermark.aspx\nGood luck!\n",
"I have had good luck with ImageMagick. It has an API for .NET too.\n",
"here is my full article: http://forums.asp.net/p/1323176/2634923.aspx\nuse the SDK Command Prompt and navigate the active folder to the folder containing the below source code... then compile the code using\n\nvbc.exe watermark.vb /t:exe /out:watermark.exe\n\nthis will create an exe in the folder.. the exe accepts two parameters:\nex.\n\nwatermark.exe \"c:\\source folder\" \"c:\\destination folder\"\n\nthis will iterate through the parent folder and all subfolders. all found jpegs will be watermarked with the image you specify in the code and copied to the destination folder. The original image will stay untouched.\n// watermark.vb --\n\nImports System\nImports System.Drawing\nImports System.Drawing.Drawing2D\nImports System.Drawing.Imaging\nImports System.IO\n\nNamespace WatermarkManager\n Class Watermark\n Shared sourceDirectory As String = \"\", destinationDirectory As String = \"\"\n\n Overloads Shared Sub Main(ByVal args() As String)\n\n 'See if an argument was passed from the command line\n If args.Length = 2 Then\n sourceDirectory = args(0)\n destinationDirectory = args(1)\n\n ' make sure sourceFolder is legit\n If Directory.Exists(sourceDirectory) = False\n TerminateExe(\"Invalid source folder. Folder does not exist.\")\n Exit Sub\n End If\n\n ' try and create destination folder\n Try\n Directory.CreateDirectory(destinationDirectory)\n Catch\n TerminateExe(\"Error creating destination folder. Invalid path cannot be created.\")\n Exit Sub\n End Try\n\n ' start the magic\n CreateHierarchy(sourceDirectory,destinationDirectory)\n\n ElseIf args.Length = 1\n If args(0) = \"/?\"\n DisplayHelp()\n Else\n TerminateExe(\"expected: watermark.exe [source path] [destination path]\")\n End If\n Exit Sub\n Else\n TerminateExe(\"expected: watermark.exe [source path] [destination path]\")\n Exit Sub\n End If\n\n TerminateExe()\n End Sub\n\n Shared Sub CreateHierarchy(ByVal sourceDirectory As String, ByVal destinationDirectory As String)\n\n Dim tmpSourceDirectory As String = sourceDirectory\n\n ' copy directory hierarchy to destination folder\n For Each Item As String In Directory.GetDirectories(sourceDirectory)\n Directory.CreateDirectory(destinationDirectory + Item.SubString(Item.LastIndexOf(\"\\\")))\n\n If hasSubDirectories(Item)\n CreateSubDirectories(Item)\n End If\n Next\n\n ' reset destinationDirectory\n destinationDirectory = tmpSourceDirectory\n\n ' now that folder structure is set up, let's iterate through files\n For Each Item As String In Directory.GetDirectories(sourceDirectory)\n SearchDirectory(Item)\n Next\n End Sub\n\n Shared Function hasSubDirectories(ByVal path As String) As Boolean\n Dim subdirs() As String = Directory.GetDirectories(path)\n If subdirs.Length > 0\n Return True\n End If\n Return False\n End Function\n\n Shared Sub CheckFiles(ByVal path As String)\n For Each f As String In Directory.GetFiles(path)\n If f.SubString(f.Length-3).ToLower = \"jpg\"\n WatermarkImage(f)\n End If\n Next\n End Sub\n\n Shared Sub WatermarkImage(ByVal f As String)\n\n Dim img As System.Drawing.Image = System.Drawing.Image.FromFile(f)\n Dim graphic As Graphics\n Dim indexedImage As New Bitmap(img)\n graphic = Graphics.FromImage(indexedImage)\n graphic.DrawImage(img, 0, 0, img.Width, img.Height)\n img = indexedImage\n\n graphic.SmoothingMode = SmoothingMode.AntiAlias\n graphic.InterpolationMode = InterpolationMode.HighQualityBicubic\n\n Dim x As Integer, y As Integer\n Dim source As New Bitmap(\"c:\\watermark.png\")\n Dim logo As New Bitmap(source, CInt(img.Width / 3), CInt(img.Width / 3))\n 
source.Dispose()\n x = img.Width - logo.Width\n y = img.Height - logo.Height\n graphic.DrawImage(logo, New Point(x,y))\n logo.Dispose()\n\n img.Save(destinationDirectory+f.SubString(f.LastIndexOf(\"\\\")), ImageFormat.Jpeg)\n indexedImage.Dispose()\n img.Dispose()\n graphic.Dispose()\n\n Console.WriteLine(\"successfully watermarked \" + f.SubString(f.LastIndexOf(\"\\\")+1))\n Console.WriteLine(\"saved to: \" + vbCrLf + destinationDirectory + vbCrLf)\n\n End Sub\n\n Shared Sub SearchDirectory(ByVal path As String)\n destinationDirectory = destinationDirectory + path.SubString(path.LastIndexOf(\"\\\"))\n CheckFiles(path)\n For Each Item As String In Directory.GetDirectories(path)\n destinationDirectory += Item.SubString(Item.LastIndexOf(\"\\\"))\n\n CheckFiles(Item)\n\n If hasSubDirectories(Item)\n destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf(\"\\\"))\n SearchDirectory(Item)\n destinationDirectory += Item.SubString(Item.LastIndexOf(\"\\\"))\n End If\n destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf(\"\\\"))\n Next\n destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf(\"\\\"))\n End Sub\n\n Shared Sub CreateSubDirectories(ByVal path As String)\n destinationDirectory = destinationDirectory + path.SubString(path.LastIndexOf(\"\\\"))\n For Each Item As String In Directory.GetDirectories(path)\n destinationDirectory += Item.SubString(Item.LastIndexOf(\"\\\"))\n Directory.CreateDirectory(destinationDirectory)\n Console.WriteLine(vbCrlf + \"created: \" + vbCrlf + destinationDirectory)\n\n If hasSubDirectories(Item)\n destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf(\"\\\"))\n CreateSubDirectories(Item)\n destinationDirectory += Item.SubString(Item.LastIndexOf(\"\\\"))\n End If\n destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf(\"\\\"))\n Next\n destinationDirectory = destinationDirectory.SubString(0,destinationDirectory.LastIndexOf(\"\\\"))\n End Sub\n\n Shared Sub TerminateExe(ByVal Optional msg As String = \"\")\n If msg \"\"\n Console.WriteLine(vbCrLf + \"AN ERROR HAS OCCURRED //\" + vbCrLf + msg)\n End If\n Console.WriteLine(vbCrLf + \"Press [enter] to close...\")\n 'Console.Read()\n End Sub\n\n Shared Sub DisplayHelp()\n Console.WriteLine(\"watermark.exe accepts two parameters:\" + vbCrLf + \" - [source folder]\")\n Console.WriteLine(\" - [destination folder]\")\n Console.WriteLine(\"ex.\" + vbCrLf + \"watermark.exe \"\"c:\\web_projects\\dclr source\"\" \"\"d:\\new_dclr\\copy1 dest\"\"\")\n Console.WriteLine(vbCrLf + \"Press [enter] to close...\")\n Console.Read()\n End Sub\n End Class\nEnd Namespace\n\n"
] |
[
15,
4,
3
] |
[] |
[] |
[
"asp.net",
"graphics",
"stamp",
"watermark"
] |
stackoverflow_0000110018_asp.net_graphics_stamp_watermark.txt
|
Q:
How does one decrypt a PDF with an owner password, but no user password?
Although the PDF specification is available from Adobe, it's not exactly the simplest document to read through. PDF allows documents to be encrypted so that either a user password and/or an owner password is required to do various things with the document (display, print, etc). A common use is to lock a PDF so that end users can read it without entering any password, but a password is required to do anything else.
I'm trying to parse PDFs that are locked in this way (to get the same privileges as you would get opening them in any reader). Using an empty string as the user password doesn't work, but it seems (section 3.5.2 of the spec) that there has to be a user password to create the hash for the owner password.
What I would like is either an explanation of how to do this, or any code that I can read (ideally Python, C, or C++, but anything readable will do) that does this so that I can understand what I'm meant to be doing. Standalone code, rather than reading through (e.g.) the gsview source, would be best.
A:
A plugin for GSview for viewing encrypted PDFs is here.
If this works for you, you may be able to look at the source.
A:
If I remember correctly, there is a fixed padding string of 32 (?) bytes to apply to any password. All passwords need to be 32 bytes at the start of computing the encryption key, either by truncating or adding some of those padding bytes.
If no user password was set you simply have to pad with all 32 bytes of the string, i.e. use the 32 padding bytes as the starting point for computing the encryption key.
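(To make that padding step concrete, here is a minimal Python sketch of it. The 32-byte PAD constant below is the fixed padding string from the spec; double-check it against section 3.5 before relying on it, and note that deriving the actual RC4 key from the padded password is left out.)
# Minimal sketch of the PDF password padding step.
# PAD is the fixed 32-byte padding string defined by the PDF spec.
PAD = bytes([
    0x28, 0xBF, 0x4E, 0x5E, 0x4E, 0x75, 0x8A, 0x41,
    0x64, 0x00, 0x4E, 0x56, 0xFF, 0xFA, 0x01, 0x08,
    0x2E, 0x2E, 0x00, 0xB6, 0xD0, 0x68, 0x3E, 0x80,
    0x2F, 0x0C, 0xA9, 0xFE, 0x64, 0x53, 0x69, 0x7A,
])

def pad_password(password):
    # Truncate or pad the password to exactly 32 bytes.
    return (password + PAD)[:32]

# No user password set: the key derivation starts from the padding string alone.
assert pad_password(b"") == PAD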
I have to admit it's been a while since I've done this. I do remember that the encryption part of the PDF spec is an absolute mess, as it got changed significantly in nearly every revision, requiring you to cope with a lot of cases to handle all PDFs.
Good luck.
A:
xpdf is probably a good reference implementation for this sort of problem. I have successfully used it to open encrypted PDFs before.
|
How does one decrypt a PDF with an owner password, but no user password?
|
Although the PDF specification is available from Adobe, it's not exactly the simplest document to read through. PDF allows documents to be encrypted so that either a user password and/or an owner password is required to do various things with the document (display, print, etc). A common use is to lock a PDF so that end users can read it without entering any password, but a password is required to do anything else.
I'm trying to parse PDFs that are locked in this way (to get the same privileges as you would get opening them in any reader). Using an empty string as the user password doesn't work, but it seems (section 3.5.2 of the spec) that there has to be a user password to create the hash for the owner password.
What I would like is either an explanation of how to do this, or any code that I can read (ideally Python, C, or C++, but anything readable will do) that does this so that I can understand what I'm meant to be doing. Standalone code, rather than reading through (e.g.) the gsview source, would be best.
|
[
"A plugin for GSview for viewing encrypted PDFs is here.\nIf this works for you, you may be able to look at the source.\n",
"If I remember correctly, there is a fixed padding string of 32 (?) bytes to apply to any password. All passwords need to be 32 bytes at the start of computing the encryption key, either by truncating or adding some of those padding bytes.\nIf no user password was set you simply have to pad with all 32 bytes of the string, i.e. use the 32 padding bytes as the starting point for computing the encryption key.\nI have to admit it's been a while since I've done this, I do remember that the encryption part of the PDF is an absolute mess as it got changed significantly in nearly every revision, requiring you to cope with a lot of cases to handle all PDF's.\nGood luck.\n",
"xpdf is probably a good reference implementation for this sort of problem. I have successfully used them to open encrypted pdfs before.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"c++",
"encryption",
"passwords",
"pdf",
"python"
] |
stackoverflow_0000049455_c++_encryption_passwords_pdf_python.txt
|
Q:
salient concerns and questions to consider in designing a website content management system
I'm designing my website and was curious, before I go rip someone else's ideas, what are the salient considerations and questions one should ask in designing a database?
A:
I think the most important question is, "Why are you doing it (the CMS, not the web site)?" This is very well-trod ground. Unless you have some really innovative ideas and unique insights into how you want it to be done ... and your question suggests that you probably don't ... you would probably be better-served by choosing an existing solution.
A:
In 99% of cases, writing a CMS is simply busy work for re-inventing the wheel. There are so many open-source CMSs out there that I can almost guarantee you can find one that will suit your needs.
That said, if you're still determined to write your own, I would only write exactly as much functionality as you need. Writing a CMS can be a very simple task. But it's one of those things that can become a convoluted nightmare of overlapping, unused features. Only write what you need, and you can add features as the need arises.
A:
This is just off the top of my head:
Content organization should be one of your primary concerns. How are you going to organize all the disparate pieces of content?
Security, and at what levels? Do you need to secure only the ability to edit any content? Certain pieces of content? How about viewing of content, does that need to be secured in any way?
Versioning of content?
Multilingual?
What kind of content? Simple text? images? videos? blog postings?
That should at least get you started in thinking in the right direction.
|
salient concerns and questions to consider in designing a website content management system
|
I'm designing my website and was curious, before I go rip someone else's ideas, what are the salient considerations and questions one should ask in designing a database?
|
[
"I think the most important question is, \"Why are you doing it (the CMS, not the web site)?\" This is very well-trod ground. Unless you have some really innovative ideas and unique insights into how you want it to be done ... and your question suggests that you probably don't ... you would probably be better-served by choosing an existing solution.\n",
"In 99% of cases, writing a CMS is simply busy work for re-inventing the wheel. There are so many open-source CMSs out there that I can almost guarantee you can find one that will suit your needs.\nThat said, if you're still determined to write your own, I would only write exactly as much functionality as you need. Writing a CMS can be a very simple task. But it's one of those things that can become a convoluted nightmare of overlapping, unused features. Only write what you need, and you can add features as the need arises.\n",
"This is just off the top of my head:\n\nContent organization should be one of your primary concerns. How are you going to organize all the disparate pieces of content? \nSecurity, and what levels? do you need to only secure the ability to edit any content? certain pieces of content? How about viewing of content, does that need to be secured in any way?\nVersioning of content?\nMultilingual?\nWhat kind of content? Simple text? images? videos? blog postings?\n\nThat should at least get you started in thinking in the right direction.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"database",
"database_design"
] |
stackoverflow_0000109736_database_database_design.txt
|
Q:
Windows Licensing Question
This is slightly off topic of programming but still has to do with my programming project. I'm writing an app that uses a custom proxy server. I would like to write the server in C# since it would be easier to write and maintain, but I am concerned about the licensing cost of Windows Server + CALs vs a Linux server (obviously, no CALs). There could potentially be many client sites with their own server and 200-500 users at each site.
The proxy will work similar to a content filter. Take returning web pages, process based on the content, and either return the webpage, or redirect to a page on another webserver. There will not be any use of SQL server, user authentication, etc.
Will I need CALs for this? If so, about how much would it cost to set up a Windows Server with proper licensing (per server, in the USA)?
A:
This really is an OT question. In any case, there is nothing easier than contacting your local MS distributor. As stackoverflow is by nature an international site, asking a question like that, where the answer is most likely to vary by location (MS license prices really are highly variable and country-specific) is in my opinion not likely to receive a useful answer.
A:
I realize this isn't exactly answering your question but if you want to use Linux, maybe you want to look into using Mono. .Net on Linux.
A:
If users will not be actually connecting to any MS server apps (such as Exchange, SQL Server, etc) and won't be using any OS features directly (i.e. connecting to UNC paths) then all that should be required is the server license for the machine to run the OS. You need Windows Server CALs when clients connect to shares, Exchange CALs for mail clients, and SQL Server CALs for apps that connect to your databases. If the clients of your server won't be connecting to anything but the ports offered by your service, you should be in the clear, and it shouldn't cost any more to build a server for 100 users than 10.
A:
You may not need any CALs for users depending on how you use the server. Certain functionality requires the purchase of CALs but some doesn't. There's no real good way to answer this question since the requirements are too vague. Does it use domain services? Does it use SQL server? Clustering? There are many variables.
If you are looking at what the most you could possibly pay, go to CDW and look at the Open License/Open Business products to get an estimate.
A:
As said above, if you are using your own connections and nothing else on the server, you won't need the CALs.
A:
I would Google the ROI on Linux vs Windows for a commercial server. I have no strong opinion on this generally, but I have seen that long term they level out; in the grand scheme of things the initial cost of the Windows license is actually minimal and insignificant.
Choose the best technology to solve the end users problem, document why, provide an evaluation report, include maintenance costs, development costs etc. When you do this the answer will be clear to you and your customer.
A:
If your users are not connecting to any other windows resources (Active Directory, SQL Server, File Shares, etc) then you shouldn't need CALs, but I believe there is something like an external connector license. There's also a 'web edition' which looks like it's in the range of ~$400.
Also it looks like Microsoft will be removing the CAL restrictions on web servers completely in Windows Server 2008
Microsoft should call their licensing division Enigma...
|
Windows Licensing Question
|
This is slightly off topic of programming but still has to do with my programming project. I'm writing an app that uses a custom proxy server. I would like to write the server in C# since it would be easier to write and maintain, but I am concerned about the licensing cost of Windows Server + CALs vs a Linux server (obviously, no CALs). There could potentially be many client sites with their own server and 200-500 users at each site.
The proxy will work similar to a content filter. Take returning web pages, process based on the content, and either return the webpage, or redirect to a page on another webserver. There will not be any use of SQL server, user authentication, etc.
Will I need CALs for this? If so, about how much would it cost to set up a Windows Server with proper licensing (per server, in the USA)?
|
[
"This really is an OT question. In any case, there is nothing easier than contacting your local MS distributor. As stackoverflow is by nature an international site, asking a question like that, where the answer is most likely to vary by location (MS license prices really are highly variable and country-specific) is in my opinion not likely to receive an useful answer.\n",
"I realize this isn't exactly answering your question but if you want to use Linux, maybe you want to look into using Mono. .Net on Linux. \n",
"If users will not be actually connecting to any MS server apps (such as Exchange, SQL Server, etc) and won't be using any OS features directly (i.e. connecting to UNC paths) then all that should be required is the server license for the machine to run the OS. You need Windows Server CALs when clients connect to shares, Exchange CALs for mail clients, and SQL Server CALs for apps that connect to your databases. If the clients of your server won't be connecting to anything but the ports offered by your service, you should be in the clear, and it shouldn't cost any more to build a server for 100 users than 10.\n",
"You may not need any CALs for users depending on how you use the server. Certain functionality requires the purchase of CALs but some doesn't. There's no real good way to answer this question since the requirements are too vague. Does it use domain services? Does it use SQL server? Clustering? There are many variables.\nIf you are looking at what the most you could possibly pay, go to CDW and look at the Open License/Open Business products to get an estimate.\n",
"Like said above, if you are using your own connections and nothing else on the server you wont need the cals.\n",
"I would Google the ROI on Linux vs Windows for a commercial server, I have no option generally on this, but I have seen that long term they level out, in the grand scheme of things the initial cost of the Windows license is actually minimal and insignificant.\nChoose the best technology to solve the end users problem, document why, provide an evaluation report, include maintenance costs, development costs etc. When you do this the answer will be clear to you and your customer.\n",
"If your users are not connecting to any other windows resources (Active Directory, SQL Server, File Shares, etc) then you shouldn't need CALs but you I believe there is something like an external connector license. There's also a 'web edition' which looks like it's in the range of ~$400. \nAlso it looks like Microsoft will be removing the CAL restrictions on web servers completely in Windows Server 2008 \nMicrosoft should call their licensing division Enigma...\n"
] |
[
6,
4,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"licensing",
"linux",
"windows"
] |
stackoverflow_0000110008_licensing_linux_windows.txt
|
Q:
What are good sources to study the threading implementation of an XMPP application?
From my understanding, the XMPP protocol is based on an always-on connection where you have no immediate indication of when an XML message ends.
This means you have to evaluate the stream as it comes. This also means that, probably, you have to deal with asynchronous connections since the socket can block in the middle of an XML message, either due to message length or a connection being slow.
I would appreciate one source per answer so we can mod them up and see what's the favourite.
A:
Are you wanting to deal with multiple connections at once? Good asynch socket processing is a must in that case, to avoid one thread per connection.
Otherwise, you just need an XML parser that can deal with a chunk of bytes at a time. Expat is the canonical example; if you're in Java, try XP. These types of XML parsers will fire events as soon as possible, and buffer partial stanzas until the rest arrives.
Now, to address your assertion that there is no notification when a stanza ends, that's not really true. The important thing is not to process the XML stream as if it is a sequence of documents. Use the following pseudo-code:
stanza = null
while parser has more:
switch on token type:
START_TAG:
elem = create element from parser state
if stanza is not null:
add elem as child of stanza
stanza = elem
END_TAG:
parent = parent of stanza
if parent is not null:
fire OnStanza event
stanza = parent
This approach should work with an event-based or pull parser. It only requires holding on to one pointer worth of state. Obviously, you'll also need to handle attributes, character data, entity references (like & and the like), and special-purpose the stream:stream tag, but this should get you started.
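(Here is a rough Python translation of that pseudo-code, using the expat bindings from the standard library. The dict-based element representation and the depth-based firing rule are my own simplifications; attributes on the stream tag and character data are ignored.)
import xml.parsers.expat

class StanzaParser:
    # Feeds raw bytes from the socket to expat and fires a callback
    # whenever a direct child of <stream:stream> closes.
    def __init__(self, on_stanza):
        self.on_stanza = on_stanza
        self.current = None   # element currently being built
        self.depth = 0
        self.parser = xml.parsers.expat.ParserCreate()
        self.parser.StartElementHandler = self._start
        self.parser.EndElementHandler = self._end

    def _start(self, name, attrs):
        self.depth += 1
        elem = {"name": name, "attrs": attrs, "children": [], "parent": self.current}
        if self.current is not None:
            self.current["children"].append(elem)
        self.current = elem

    def _end(self, name):
        if self.depth == 2:              # a stanza (depth-1 child of the stream) closed
            self.on_stanza(self.current)
        self.depth -= 1
        self.current = self.current["parent"]

    def feed(self, data):
        # isfinal=False: expat buffers partial stanzas until more bytes arrive
        self.parser.Parse(data, False)

p = StanzaParser(on_stanza=lambda s: print(s["name"]))
p.feed(b"<stream:stream><message><bo")   # the socket can block mid-element...
p.feed(b"dy>hi</body></message>")        # ...and parsing resumes seamlessly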
A:
Igniterealtime.org provides an open-source XMPP server and client written in Java.
A:
ejabberd is written in Erlang. I don't know the details of the ejabberd implementation, but one advantage of using Erlang is really inexpensive threads. I'll speculate they start a thread per XMPP connection. In Erlang terminology these would be called processes, but these are not protected-memory address spaces they are lightweight user-space threads.
|
What are good sources to study the threading implementation of an XMPP application?
|
From my understanding, the XMPP protocol is based on an always-on connection where you have no immediate indication of when an XML message ends.
This means you have to evaluate the stream as it comes. This also means that, probably, you have to deal with asynchronous connections since the socket can block in the middle of an XML message, either due to message length or a connection being slow.
I would appreciate one source per answer so we can mod them up and see what's the favourite.
|
[
"Are you wanting to deal with multiple connections at once? Good asynch socket processing is a must in that case, to avoid one thread per connection.\nOtherwise, you just need an XML parser that can deal with a chunk of bytes at a time. Expat is the canonical example; if you're in Java, try XP. These types of XML parsers will fire events as possible, and buffer partial stanzas until the rest arrives.\nNow, to address your assertion that there is no notification when a stanza ends, that's not really true. The important thing is not to process the XML stream as if it is a sequence of documents. Use the following pseudo-code:\nstanza = null\nwhile parser has more:\n switch on token type:\n START_TAG:\n elem = create element from parser state\n if stanza is not null:\n add elem as child of stanza\n stanza = elem\n END_TAG:\n parent = parent of stanza\n if parent is not null:\n fire OnStanza event\n stanza = parent\n\nThis approach should work with an event-based or pull parser. It only requires holding on to one pointer worth of state. Obviously, you'll also need to handle attributes, character data, entity references (like & and the like), and special-purpose the stream:stream tag, but this should get you started.\n",
"Igniterealtime.org provides an open source XMPP-server and client written in java\n",
"ejabberd is written in Erlang. I don't know the details of the ejabberd implementation, but one advantage of using Erlang is really inexpensive threads. I'll speculate they start a thread per XMPP connection. In Erlang terminology these would be called processes, but these are not protected-memory address spaces they are lightweight user-space threads.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"multithreading",
"networking",
"xmpp"
] |
stackoverflow_0000107772_multithreading_networking_xmpp.txt
|
Q:
Defining objects when using Jaxer
I've been playing with Jaxer and while the concept is very cool I cannot figure out how to define objects that are available on both the client and the server. None of the examples I can find define objects at all.
I'd like to be able to define an object and specify which methods will be available on the server, which will be available on the client, and which will be available on the client but executed on the server (server-proxy). Can this be done without using three separate <script> tags with different runat attributes? I would like to be able to define all of my methods in the same js file if possible, and it is not practical to define my objects inline in the html with three separate tags...
Basically I'd like to be able to do this in one js file:
function Person(name) {
this.name = name || 'default';
}
Person.runat = 'both';
Person.clientStaticMethod = function () {
log('client static method');
}
Person.clientStaticMethod.runat = 'client';
Person.serverStaticMethod = function() {
log('server static method');
}
Person.serverStaticMethod.runat = 'server';
Person.proxyStaticMethod = function() {
log('proxy static method');
}
Person.proxyStaticMethod.runat = 'server-proxy';
Person.prototype.clientMethod = function() {
log('client method');
};
Person.prototype.clientMethod.runat = 'client';
Person.prototype.serverMethod = function() {
log('server method');
};
Person.prototype.serverMethod.runat = 'server';
Person.prototype.proxyMethod = function() {
log('proxy method');
}
Person.prototype.proxyMethod.runat = 'server-proxy';
Also, assuming I was able to do that, how would I include it into html pages correctly?
A:
I found a post on the Aptana forums (that no longer exists on the web) that states that only global functions can be proxied... Bummer.
However, I've been playing around, and you can control which methods will be available on the client and the server by placing your code in an include file and using <script> tags with runat attributes.
For example, I can create this file named Person.js.inc:
<script runat="both">
function Person(name) {
this.name = name || 'default';
}
</script>
<script runat="server">
Person.prototype.serverMethod = function() {
return 'server method (' + this.name + ')';
};
Person.serverStaticMethod = function(person) {
return 'server static method (' + person.name + ')';
}
// This is a proxied function. It will be available on the server and
// a proxy function will be set up on the client. Note that it must be
// declared globally.
function SavePerson(person) {
return 'proxied method (' + person.name + ')';
}
SavePerson.proxy = true;
</script>
<script runat="client">
Person.prototype.clientMethod = function() {
return 'client method (' + this.name + ')';
};
Person.clientStaticMethod = function (person) {
return 'client static method (' + person.name + ')';
}
</script>
And I can include it on a page using:
<jaxer:include src="People.js.inc"></jaxer:include>
Unfortunately with this method I lose the advantage of browser caching for client-side scripts because all the scripts get inlined. The only technique I can find to avoid that problem is to split the client methods, server methods and shared methods into their own js files:
<script src="Person.shared.js" runat="both" autoload="true"></script>
<script src="Person.server.js" runat="server" autoload="true"></script>
<script src="Person.client.js" runat="client"></script>
And, at that point I might as well split the proxied functions out into their own file as well...
<script src="Person.proxies.js" runat="server-proxy"></script>
Note that I used autoload="true" on the shared and server scripts so that they would be available to the proxied functions.
|
Defining objects when using Jaxer
|
I've been playing with Jaxer and while the concept is very cool I cannot figure out how to define objects that are available on both the client and the server. None of the examples I can find define objects at all.
I'd like to be able to define an object and specify which methods will be available on the server, which will be available on the client, and which will be available on the client but executed on the server (server-proxy). Can this be done without using three separate <script> tags with different runat attributes? I would like to be able to define all of my methods in the same js file if possible, and it is not practical to define my objects inline in the html with three separate tags...
Basically I'd like to be able to do this in one js file:
function Person(name) {
this.name = name || 'default';
}
Person.runat = 'both';
Person.clientStaticMethod = function () {
log('client static method');
}
Person.clientStaticMethod.runat = 'client';
Person.serverStaticMethod = function() {
log('server static method');
}
Person.serverStaticMethod.runat = 'server';
Person.proxyStaticMethod = function() {
log('proxy static method');
}
Person.proxyStaticMethod.runat = 'server-proxy';
Person.prototype.clientMethod = function() {
log('client method');
};
Person.prototype.clientMethod.runat = 'client';
Person.prototype.serverMethod = function() {
log('server method');
};
Person.prototype.serverMethod.runat = 'server';
Person.prototype.proxyMethod = function() {
log('proxy method');
}
Person.prototype.proxyMethod.runat = 'server-proxy';
Also, assuming I was able to do that, how would I include it into html pages correctly?
|
[
"I found a post on the Aptana forums (that no longer exists on the web) that states that only global functions can be proxied... Bummer.\nHowever, I've been playing around, and you can control which methods will be available on the client and the server by placing your code in an include file and using <script> tags with runat attributes.\nFor example, I can create this file named Person.js.inc:\n<script runat=\"both\">\n\n function Person(name) {\n this.name = name || 'default';\n }\n\n</script>\n\n<script runat=\"server\">\n\n Person.prototype.serverMethod = function() {\n return 'server method (' + this.name + ')';\n };\n\n Person.serverStaticMethod = function(person) {\n return 'server static method (' + person.name + ')';\n }\n\n // This is a proxied function. It will be available on the server and\n // a proxy function will be set up on the client. Note that it must be \n // declared globally.\n function SavePerson(person) {\n return 'proxied method (' + person.name + ')';\n }\n SavePerson.proxy = true;\n\n</script>\n\n<script runat=\"client\">\n\n Person.prototype.clientMethod = function() {\n return 'client method (' + this.name + ')';\n };\n\n Person.clientStaticMethod = function (person) {\n return 'client static method (' + person.name + ')';\n }\n\n</script>\n\nAnd I can include it on a page using:\n<jaxer:include src=\"People.js.inc\"></jaxer:include>\n\nUnfortunately with this method I lose the advantage of browser caching for client-side scripts because all the scripts get inlined. The only technique I can find to avoid that problem is to split the client methods, server methods and shared methods into their own js files:\n<script src=\"Person.shared.js\" runat=\"both\" autoload=\"true\"></script>\n<script src=\"Person.server.js\" runat=\"server\" autoload=\"true\"></script>\n<script src=\"Person.client.js\" runat=\"client\"></script>\n\nAnd, at that point I might as well split the proxied functions out into their own file as well...\n<script src=\"Person.proxies.js\" runat=\"server-proxy\"></script>\n\nNote that I used autoload=\"true\" on the shared and server scripts so that they would be available to the proxied functions.\n"
] |
[
2
] |
[] |
[] |
[
"aptana",
"javascript",
"jaxer",
"oop"
] |
stackoverflow_0000109762_aptana_javascript_jaxer_oop.txt
|
Q:
How can I check to make sure a window is being actively used, and if not alert the end user that they are about to be logged out?
Working on a new back end system for my company, and one of their requests is for a window to become locked down and for the user to be sent to the login screen if they leave it idle for too long.
I figure I'd do this with JavaScript by attaching listeners to clicks, mouse moves and key-ups but I worry about messing with other scripts.
Any suggestions?
A:
You could just log the user out if they don't change pages after so long. That's what the Angel Learning Courseware system seems to do.
The other problem you'll face, though, is that some users disable JavaScript.
A:
If you can put code on the page then there's two things:
Javascript looking for mouse movement, keyboard activity, and scrolling.
Put a meta refresh tag in the html - if they're on that page for more than X minutes it'll automatically redirect to the login page.
If you can only put code on the server:
Keep a session (cookie or other) that tracks how long between page changes. If a page is requested longer than X minutes since the last request, don't serve the requested page, serve the login page.
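(That check is just a timestamp comparison per request. A framework-neutral Python sketch, where session is assumed to be some dict-like per-user store:)
import time

IDLE_LIMIT = 15 * 60  # seconds of allowed idle time

def check_idle(session):
    # Returns True if the request may proceed, False to serve the login page.
    now = time.time()
    last = session.get("last_seen")
    session["last_seen"] = now
    return last is None or now - last <= IDLE_LIMIT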
You can use the meta refresh and server techniques together. The refresh will send the user to a "your session is about to expire, click here to go back and continue working within 30 seconds" page.
The button they click resets your server's session, and performs a page back function so any data they had (in most browsers) will still be there. Requires javascript on the refresh page, but none on the original page - just a meta refresh. Javascript activity tracking would be the best though.
-Adam
A:
In the load event for the page you can use setTimeout to fire a function warning the user that they will be logged out if they don't refresh the page.
With 5 minute session timeouts you could do warnings after 4 minutes:
setTimeout(timeoutWarning, 240000);
function timeoutWarning() {
if(confirm('You have been idle for a while. Would you like to remain logged in?'))
window.location.reload();
}
A:
Firstly, for this to be effective, you have to make sure users are logged out on the server at the end of this idle time. Otherwise, nothing you do on the client side is effective. If you send them to a login page, they can just click the back button.
Second, the conventional way to do this is to use a "meta refresh" tag. Adding this to the page:
<meta http-equiv="refresh" content="900;url=http://example.com/login"/>
will send them to the login page after 15 minutes (900 seconds). This will send them there even if they are doing something on the page. It doesn't detect activity. It just knows how long the page has been up in the browser. This is usually good enough because people don't take 15 minutes to fill in a page (stackoverflow.com is a notable exception, I guess.)
If you really need to detect activity on the page, then I think your first instinct is correct. You're going to have to add event handlers to several things. If you are worried about messing with other scripting for validation or other things, you should look at adding event handlers programmatically rather than inline. That is, instead of using
<input type="text" onclick="doSomething();">
Access the object model directly with
Mozilla way: element.addEventListener('click' ...)
Microsoft way: element.attachEvent('onclick' ...)
and then make sure you pass along the events after you receive them so existing code still does whatever (validation?) it is supposed to do.
http://www.quirksmode.org/js/introevents.html has a decent write up on how to do this.
--
bmb
|
How can I check to make sure a window is being actively used, and if not alert the end user that they are about to be logged out?
|
Working on a new back end system for my company, and one of their requests is for a window to become locked down and for the user to be sent to the login screen if they leave it idle for too long.
I figure I'd do this with JavaScript by attaching listeners to clicks, mouse moves and key-ups but I worry about messing with other scripts.
Any suggestions?
|
[
"You could just make it do a log out if the user doesn't change pages after so long. That's what the Angel Learning Courseware system seems to do.\nThe other problem you'll face, though, is that some users disable JavaScript.\n",
"If you can put code on the page then there's two things:\n\nJavascript looking for mouse movement, keyboard activity, and scrolling.\nPut a meta refresh tag in the html - if they're on that page for more than X minutes it'll automatically redirect to the login page.\n\nIf you can only put code on the server:\n\nKeep a session (cookie or other) that tracks how long between page changes. If a page is requested longer than X minutes since the last request, don't serve the requested page, serve the login page.\n\nYou can use the meta refresh and server techniques together. The refreshed page will go to a \"your session is about to expire, click here to go back and continue working within 30 seconds\".\nThe button they click resets your server's session, and performs a page back function so any data they had (in most browsers) will still be there. Requires javascript on the refresh page, but none on the original page - just a meta refresh. Javascript activity tracking would be the best though.\n-Adam\n",
"In the load event for the page you can use setTimeout to fire a function warning the user that they will be logged out if they don't refresh the page.\nWith 5 minute session timeouts you could do warnings after 4 minutes:\nsetTimeout(timeoutWarning, 240000);\n\nfunction timeoutWarning() {\n if(confirm('You have been idle for a while. Would you like to remain logged in?'))\n window.location.refresh();\n}\n\n",
"Firstly, for this to be effective, you have to make sure users are logged out on the server at the end of this idle time. Otherwise, nothing you do on the client side is effective. If you send them to a login page, they can just click the back button.\nSecond, the conventional way to do this is to use a \"meta refresh\" tag. Adding this to the page:\n<meta http-equiv=\"refresh\" content=\"900;url=http://example.com/login\"/>\n\nwill send them to the login page after 15 minutes (900 seconds). This will send them there even if they are doing something on the page. It doesn't detect activity. It just knows how long the page has been up in the browser. This is usually good enough because people don't take 15 minutes to fill in a page (stackoverflow.com is a notable exception, I guess.)\nIf you really need to detect activity on the page, then I think your first instinct is correct. You're going to have to add event handlers to several things. If you are worried about messing with other scripting for validation or other things, you should look at adding event handlers programmatically rather than inline. That is, instead of using\n<input type=\"text\" onClick=\"doSomething;\">\n\nAccess the object model directly with \nMozilla way: element.addEventListener('click' ...)\nMicrosoft way: element.attachEvent('onclick' ...) \n\nand then make sure you pass along the events after you receive them so existing code still does whatever (validation?) it is supposed to do.\nhttp://www.quirksmode.org/js/introevents.html has a decent write up on how to do this.\n--\nbmb\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"javascript",
"session"
] |
stackoverflow_0000105147_javascript_session.txt
|
Q:
From small to large projects
I've been quite used to working on small projects which I coded with 1,000 lines or less (pong, tetris, simple 3d games, etc). However as my abilities in programming are increasing, my organization isn't. I seem to be making everything dependent on one another, so it's very hard for me to change the implementation of something.
Any ideas for keeping my code organized and being able to tackle large projects?
A:
whiteboards are your best friends
prototype designs (not necessarily working prototypes, use notecards or other methods)
plan first! don't code until you know your requirements/goals
A:
Sketch out an architectural design ahead of time. It doesn't have to be too detailed, but imagine how you want things to fit together in general terms.
A:
Read into refactoring first (made famous by Martin Fowler).
By learning refactoring, you can learn how to write code which is easy to change, readable, and simplified.
I would suggest not learning design patterns until you understand refactoring first. With refactoring, you can understand the themes of clean and readable code. Once you understand refactoring, read on to design patterns. Design patterns are very useful when you need to write more complex designs.
A:
Use of design patterns is a good first step.
Also, spend a little time writing good documentation regarding system architecture and requirements for the application.
Using source control will help if you are not already doing this.
Look for libraries that may do what you want before you decide to roll your own.
|
From small to large projects
|
I've been quite used to working on small projects which I coded with 1,000 lines or less (pong, tetris, simple 3d games, etc). However as my abilities in programming are increasing, my organization isn't. I seem to be making everything dependent on one another, so it's very hard for me to change the implementation of something.
Any ideas for keeping my code organized and being able to tackle large projects?
|
[
"whiteboards are your best friends\nprototype designs (not necessarily working prototypes, use notecards or other methods)\nplan first! dont code until you know your requirements/goals\n",
"Sketch out an architectural design ahead of time. It doesn't have to be too detailed, but imagine how you want things to fit together in general terms.\n",
"Read into refactoring first (made famous by Martin Fowler).\nBy learning refactoring, you can learn how to write code which is easy to change, readable, and simplified.\nI would suggest not to learn design patterns until you understand refactoring first. With refactoring, you can understand the themes of clean and readable code. Once you understand refactoring, read on to design patterns. Design patterns is very useful when you need to write more complex designs.\n",
"Use of design patterns is a good first step.\nAlso, spend a little time writing good documentation regarding system architecture and requirements for the application.\nUsing source control will help if you are not already doing this.\nLook for libraries that may do want you want before you decide to roll your own.\n"
] |
[
2,
1,
1,
0
] |
[] |
[] |
[
"oop"
] |
stackoverflow_0000098653_oop.txt
|
Q:
How would you implement a breadcrumb helper in asp.net mvc?
I know you could make a helper pretty easily given the data. So, if possible, please only submit answers that also include getting the data.
A:
We are using an action filter for this.
...
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
var controller = (Controller) filterContext.Controller;
Breadcrumb[] breadcrumbs = _breadcrumbManager.PushBreadcrumb(_breadcrumbLinkText);
controller.ViewData.Add(breadcrumbs);
}
before you mention it, I too have a distaste for service location in the filter attributes - but we are left with few options. IBreadcrumbManager looks like this:
public interface IBreadcrumbManager
{
Breadcrumb[] PushBreadcrumb(string linkText);
}
The implementation puts Breadcrumb objects into the Session. The Url is HttpContext.Current.Request.RawUrl
A:
@Chris: something like this:
<%
foreach (var item in ViewData.Get<Breadcrumb[]>())
{
%>
<a href="<%= Server.HtmlEncode(item.Url) %>"><%= item.LinkText %></a> »
<%
}
%>
|
How would you implement a breadcrumb helper in asp.net mvc?
|
I know you could make a helper pretty easily given the data. So, if possible, please only submit answers that also include getting the data.
|
[
"We are using an action filter for this. \n...\n public override void OnActionExecuting(ActionExecutingContext filterContext)\n {\n var controller = (Controller) filterContext.Controller;\n Breadcrumb[] breadcrumbs = _breadcrumbManager.PushBreadcrumb(_breadcrumbLinkText);\n controller.ViewData.Add(breadcrumbs);\n }\n\nbefore you mention it, I too have a distaste for service location in the filter attributes - but we are left with few options. IBreadcrumbManager looks like this:\npublic interface IBreadcrumbManager\n{\n Breadcrumb[] PushBreadcrumb(string linkText);\n}\n\nThe implementation puts Breadcrumb objects into the Session. The Url is HttpContext.Current.Request.RawUrl\n",
"@Chris: something like this:\n <% \n foreach (var item in ViewData.Get<Breadcrumb[]>())\n {\n %>\n <a href=\"<%= Server.HtmlEncode(item.Url) %>\"><%= item.LinkText %></a> »\n <% \n } \n %>\n\n"
] |
[
12,
2
] |
[] |
[] |
[
"asp.net_mvc",
"navigation"
] |
stackoverflow_0000066009_asp.net_mvc_navigation.txt
|
Q:
Is using Dexter's character sprite okay, or do I have to
Inspiration -- Southpark game
(very popular if you see the download count on download.com... did he ask for permission?)
I am making a 2d game based on the Dexter's Lab theme. I've got the sprite of Dexter from GSA. Basically I'm not an artist, so I have to depend on already available sprites, backgrounds, sfx on websites like GameSpriteArchive etc.
But is it okay/legal to use the Dexter sprite I have got?
I wish to release it publicly too, so will I have to make a lot of changes to do that?
Is it possible to get permission to use the sprite? My hopes of getting permission are not high.
Besides all that my basic plan is -
Dexter's sprite from google search
Enemy sprites from various GBA/SNES/etc games
tiles/objects from these retro games
Background art and style from blogs and portfolios of artists behind dexter, powerpuff girls, and samurai jack
A:
I am not a lawyer. This is not legal advice.
If you made the sprite yourself, you'd be fine. If you got a release to use it from the creator, you'd be fine. If it was released into the public domain, you'd be fine.
Anything else, you'd have a definite problem with.
There's also the possible problem you'd have even if you create the sprite yourself -- the likeness of the character is likely copyrighted. However, that's not as cut-and-dried of an issue.
Unfortunately, this is one of the things you'd need to ask a real lawyer to get a firm answer on. If it's for your own use and that of some close friends, you might be able to get away with hoping you don't get noticed (like most people who speed). If you're planning to include this in something you distribute to the public (even more so if you sell it), you're likely to run into problems.
A:
probably not legal, since Dexter's lab is published by Hanna-Barbera and was created by Genndy Tartakovsky. They would have to grant you a license - but it can't hurt to ask!
A:
You probably won't have to get permission if they don't notice -- it's the old "legal unless you get caught" thing. However, I strongly recommend that you DO get permission from the creators or not use it at all, on purely ethical grounds. After all, you wouldn't want somebody appropriating your work, right?
|
Is using Dexter's character sprite okay, or do I have to
|
Inspiration -- Southpark game
(very popular if you see the download count on download.com... did he ask for permission?)
I am making a 2d game based on the Dexter's Lab theme. I've got the sprite of Dexter from GSA. Basically I'm not an artist, so I have to depend on already available sprites, backgrounds, sfx on websites like GameSpriteArchive etc.
But is it okay/legal to use the Dexter sprite I have got?
I wish to release it publicly too, so will I have to make a lot of changes to do that?
Is it possible to get permission to use the sprite? My hopes of getting permission are not high.
Besides all that my basic plan is -
Dexter's sprite from google search
Enemy sprites from various GBA/SNES/etc games
tiles/objects from these retro games
Background art and style from blogs and portfolios of artists behind dexter, powerpuff girls, and samurai jack
|
[
"I am not a lawyer. This is not legal advice.\nIf you made the sprite yourself, you'd be fine. If you got a release to use it from the creator, you'd be fine. If it was released into the public domain, you'd be fine.\nAnything else, you'd have a definate problem with.\nThere's also the possible problem you'd have even if you create the sprite yourself -- the likeness of the character is likely copyrighted. However, that's not as cut-and-dried of an issue.\nUnfortunately, this is one of the things you'd need to ask a real lawyer to get a firm answer on. If it's for your own use and that of some close friends, you might be able to get away with hoping you don't get noticed (like most people who speed). If you're planning to include this in something you distribute to the public (even more so if you sell it), you're likely to run into problems.\n",
"probably not legal, since Dexter's lab is published by Hanna-Barbera and was created by Genndy Tartakovsky. They would have to grant you a license - but it can't hurt to ask!\n",
"You probably won't have to get permission if they don't notice -- it's the old \"legal unless you get caught\" thing. However, I strongly reccomend that you DO get permission from the creators or not use it at all on purely ethical grounds. After all, you wouldn't want somebody appropriating your work, right?\n"
] |
[
5,
2,
2
] |
[] |
[] |
[
"2d",
"graphics",
"media"
] |
stackoverflow_0000110271_2d_graphics_media.txt
|
Q:
CSS list-style: none; still shows bullet
I am using YUI reset/base, after the reset it sets the ul and li tags to list-style: disc outside;
My markup looks like this:
<div id="nav">
<ul class="links">
<li><a href="">Testing</a></li>
</ul>
</div>
My CSS is:
#nav {}
#nav ul li {
list-style: none;
}
Now that makes the small disc beside each li disappear.
Why doesn't this work though?
#nav {}
#nav ul.links
{
list-style: none;
}
It works if I remove the link to the base.css file. Why?
Updated: sidenav -> nav
A:
I think that Dan was close with his answer, but this isn't an issue of specificity. You can set the list-style on the list (the UL) but you can also override that list-style for individual list items (the LIs).
You are telling the browser to not use bullets on the list, but YUI tells the browser to use them on individual list items (YUI wins):
ul li{ list-style: disc outside; } /* in YUI base.css */
#nav ul.links {
list-style: none; /* doesn't override styles for LIs, just the UL */
}
What you want is to tell the browser not to use them on the list items:
ul li{ list-style: disc outside; } /* in YUI base.css */
#nav ul.links li {
list-style: none;
}
A:
In the first snippet you apply the list-style to the li element, in the second to the ul element.
Try
#nav ul.links li
{
list-style: none;
}
A:
The latter example probably doesn't work because of CSS specificity. (A more serious explanation can be found here.) That is, YUI's base.css rule is:
ul li{ list-style: disc outside; }
This is more 'specific' than yours, so the YUI rule is being used. As has been noted several times, you can make your rule more specific by targeting the li tags:
#nav ul li{ list-style: none; }
Hard to say for sure without looking at your code, but if you don't know about specificity it's certainly worth a read.
A:
shouldn't it be:
#nav ul.links
A:
Maybe a style in base.css overrides your styles with "!important"? Did you try adding a class to this specific li and giving it its own style?
A:
Use this one:
.nav ul li {
list-style: none;
}
or
.links li {
list-style: none;
}
it should work...
|
CSS list-style: none; still shows bullet
|
I am using YUI reset/base, after the reset it sets the ul and li tags to list-style: disc outside;
My markup looks like this:
<div id="nav">
<ul class="links">
<li><a href="">Testing</a></li>
</ul>
</div>
My CSS is:
#nav {}
#nav ul li {
list-style: none;
}
Now that makes the small disc beside each li disappear.
Why doesn't this work though?
#nav {}
#nav ul.links
{
list-style: none;
}
It works if I remove the link to the base.css file. Why?
Updated: sidenav -> nav
|
[
"I think that Dan was close with his answer, but this isn't an issue of specificity. You can set the list-style on the list (the UL) but you can also override that list-style for individual list items (the LIs).\nYou are telling the browser to not use bullets on the list, but YUI tells the browser to use them on individual list items (YUI wins):\nul li{ list-style: disc outside; } /* in YUI base.css */\n\n#nav ul.links {\n list-style: none; /* doesn't override styles for LIs, just the UL */\n}\n\nWhat you want is to tell the browser not to use them on the list items:\nul li{ list-style: disc outside; } /* in YUI base.css */\n\n#nav ul.links li {\n list-style: none;\n}\n\n",
"In the first snippet you apply the list-style to the li element, in the second to the ul element.\nTry\n#nav ul.links li\n{\n list-style: none;\n}\n\n",
"The latter example probably doesn't work because of CSS specificity. (A more serious explanation can be found here.) That is, YUI's base.css rule is:\nul li{ list-style: disc outside; }\n\nThis is more 'specific' than yours, so the YUI rule is being used. As has been noted several times, you can make your rule more specific by targeting the li tags:\n#nav ul li{ list-style: none; }\n\nHard to say for sure without looking at your code, but if you don't know about specificity it's certainly worth a read.\n",
"shouldn't it be:\n#nav ul.links\n\n",
"Maybe the style is the base.css overrides your styles with \"!important\"? Did you try to add a class to this specific li and make an own style for it?\n",
"Use this one:\n.nav ul li {\n list-style: none;\n}\n\nor \n.links li {\n list-style: none;\n}\n\nit should work...\n"
] |
[
9,
2,
2,
1,
0,
0
] |
[] |
[] |
[
"css"
] |
stackoverflow_0000107964_css.txt
|
Q:
Not getting event arguments in IHTMLElement event handler
I've added a callback to an IHTMLElement instance but when the IDispatch::Invoke is called for the event, there are never any arguments (i.e. the pDispParams->cArgs and pDispParams->cNamedArgs are always 0). For example, I add a callback for an onmouseup event. From what I can tell, a callback for this event is supposed to receive a MouseEvent object. Is that correct? If so, what do I need to do to ensure this happens?
This is using the MSHTML for IE 6 sp2 (or better) on Windows XP SP2.
A:
Events arguments for all DOM events including onmouseup are stored in the parent window's event property (IHTMLWindow2::event)
If you don't already have the parent window cached, IHTMLElement has a document property which returns an IHTMLDocument interface. From that you can query for IHTMLDocument2 which has a parentWindow property. The IHTMLWindow2 that is returned has the event property you're looking for. You should be able to query for the event interface from there.
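A rough C++ sketch of that chain using ATL smart pointers (error handling omitted; elem is assumed to be the IHTMLElement* you attached the handler to):
#include <atlbase.h>
#include <mshtml.h>

CComPtr<IDispatch> dispDoc;
elem->get_document(&dispDoc);            // element -> owning document

CComQIPtr<IHTMLDocument2> doc(dispDoc);  // QI for IHTMLDocument2
CComPtr<IHTMLWindow2> window;
doc->get_parentWindow(&window);          // document -> parent window

CComPtr<IHTMLEventObj> eventObj;
window->get_event(&eventObj);            // window -> current event object

// eventObj now exposes the mouse details, e.g. get_clientX / get_clientY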
|
Not getting event arguments in IHTMLElement event handler
|
I've added a callback to an IHTMLElement instance but when the IDispatch::Invoke is called for the event, there are never any arguments (i.e. the pDispParams->cArgs and pDispParams->cNamedArgs are always 0). For example, I add a callback for an onmouseup event. From what I can tell, a callback for this event is supposed to receive a MouseEvent object. Is that correct? If so, what do I need to do to ensure this happens?
This is using the MSHTML for IE 6 sp2 (or better) on Windows XP SP2.
|
[
"Events arguments for all DOM events including onmouseup are stored in the parent window's event property (IHTMLWindow2::event) \nIf you don't already have the parent window cached, IHTMLElement has a document property which returns an IHTMLDocument interface. From that you can query for IHTMLDocument2 which has a parentWindow property. The IHTMLWindow2 that is returned has the event property you're looking for. You should be able to query for the event interface from there.\n"
] |
[
6
] |
[] |
[] |
[
"c++",
"com",
"internet_explorer",
"mshtml"
] |
stackoverflow_0000110015_c++_com_internet_explorer_mshtml.txt
|
Q:
How do you allow the usage of an <img> while preventing XSS?
I'm using ASP.NET Web Forms for blog style comments.
Edit 1: This looks way more complicated than I first thought. How do you filter the src?
I would prefer to still use real html tags but if things get too complicated that way, I might go a custom route. I haven't done any XML yet, so do I need to learn more about that?
A:
If IMG is the only thing you'd allow, I'd suggest you use a simple square-bracket syntax to allow it. This would eliminate the need for a parser and reduce a load of other dangerous edge cases with the parser as well. Say, something like:
Look at this! [http://a.b.c/m.jpg]
Which would get converted to
Look at this! <img src="http://a.b.c/m.jpg" />
You should filter the SRC address so that no malicious things get passed in the SRC part too. Like maybe
Look at this! [javascript:alert('pwned!')]
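A minimal C# sketch of that filtering idea (the regex and method name are mine and not a vetted sanitizer - treat it as a starting point only):
using System.Web;
using System.Text.RegularExpressions;

static readonly Regex SafeImageUrl = new Regex(
    @"^http://[\w\.\-/]+\.(jpg|jpeg|gif|png)$",  // http only, image extensions only
    RegexOptions.IgnoreCase | RegexOptions.Compiled);

static string RenderImageTag(string src)
{
    if (!SafeImageUrl.IsMatch(src))
        return "";                               // reject javascript:, data:, etc.
    return "<img src=\"" + HttpUtility.HtmlAttributeEncode(src) + "\" />";
}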
A:
Use an XML parser to validate your input, and drop or encode all elements, and attributes, that you do not want to allow. In this case, delete or encode all tags except the <img> tag, and all attributes from that except src, alt and title.
A:
If you end up going with a non-HTML format (which makes things easier b/c you can literally escape all HTML), use a standard syntax like markdown. The markdown image syntax is 
There are others also, like Textile. Its syntax for images is !imageurl!
A:
@chakrit suggested using a custom syntax, e.g. bracketed URLs - this might very well be the best solution. You DEFINITELY don't want to start messing with parsing etc.
Just make sure you properly encode the entire comment (according to the context - see my answer on this here: Will HTML Encoding prevent all kinds of XSS attacks?)
(btw I just discovered a good example of custom syntax right there... ;-) )
As also mentioned, restrict the file extension to jpg/gif/etc - even though this can be bypassed, and also restrict the protocol (e.g. http://).
Another issue to be considered besides XSS is CSRF (http://www.owasp.org/index.php/Cross-Site_Request_Forgery). If you're not familiar with this security issue, it basically allows the attacker to force my browser to submit a valid authenticated request to your application, for instance to transfer money or to change my password. If this is hosted on your site, he can anonymously attack any vulnerable application - including yours. (Note that even if other applications are vulnerable, it's not your fault they get attacked, but you still don't want to be the exploit host or the source of the attack...). As far as your own site goes, it's that much easier for the attacker to change the user's password on your site, for instance.
|
How do you allow the usage of an <img> while preventing XSS?
|
I'm using ASP.NET Web Forms for blog style comments.
Edit 1: This looks way more complicated than I first thought. How do you filter the src?
I would prefer to still use real html tags but if things get too complicated that way, I might go a custom route. I haven't done any XML yet, so do I need to learn more about that?
|
[
"If IMG is the only thing you'd allow, I'd suggest you use a simple square-bracket syntax to allow it. This would eliminate the need for a parser and reduce a load of other dangerous edge cases with the parser as well. Say, something like:\nLook at this! [http://a.b.c/m.jpg]\n\nWhich would get converted to\nLook at this! <img src=\"http://a.b.c/m.jpg\" />\n\nYou should filter the SRC address so that no malicious things get passed in the SRC part too. Like maybe\nLook at this! [javascript:alert('pwned!')]\n\n",
"Use an XML parser to validate your input, and drop or encode all elements, and attributes, that you do not want to allow. In this case, delete or encode all tags except the <img> tag, and all attributes from that except src, alt and title.\n",
"If you end up going with a non-HTML format (which makes things easier b/c you can literally escape all HTML), use a standard syntax like markdown. The markdown image syntax is \nThere are others also, like Textile. Its syntax for images is !imageurl!\n",
"@chakrit suggested using a custom syntax, e.g. bracketed URLs - This might very well be the best solution. You DEFINITELY dont want to start messing with parsing etc.\nJust make sure you properly encode the entire comment (according to the context - see my answer on this here Will HTML Encoding prevent all kinds of XSS attacks?)\n(btw I just discovered a good example of custom syntax right there... ;-) )\nAs also mentioned, restrict the file extension to jpg/gif/etc - even though this can be bypassed, and also restrict the protocol (e.g. http://).\nAnother issue to be considered besides XSS - is CSRF (http://www.owasp.org/index.php/Cross-Site_Request_Forgery). If you're not familiar with this security issue, it basically allows the attacker to force my browser to submit a valid authenticated request to your application, for instance to transfer money or to change my password. If this is hosted on your site, he can anonymously attack any vulnerable application - including yours. (Note that even if other applications are vulnerable, its not your fault they get attacked, but you still dont want to be the exploit host or the source of the attack...). As far as your own site goes, it's that much easier for the attacker to change the users password on your site, for instance.\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"image",
"xss"
] |
stackoverflow_0000110123_image_xss.txt
|
Q:
Tools to convert asp.net dynamic site into static site
Are there any tools that will spider an asp.net website and create a static site?
A:
http://www.httrack.com/
I have used it for this purpose a few times. You may need to do a little tidying up of URLs, and some CSS-linked images might not make it; it depends on how good a job you want to do.
If you have dreamweaver, you can use that to manage the links if you need to clean up the file names afterwards.
Optionally use the link checker extension for firefox to check it all afterwards.
A:
You could use OfflineExplorer: http://www.metaproducts.com/mp/Offline_Explorer.htm
This works well as long as you only have GET requests (links). Postbacks will not
be executed.
Be aware that crawling your site might actually change the underlying database, so I would strongly recommend you back up the database and web before using a crawler.
A:
Another solution is wget.
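For example, a typical invocation for mirroring a site into static files might look like this (flags per the GNU wget manual; the URL is a placeholder):
wget --mirror --convert-links --page-requisites --html-extension http://www.example.com/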
A:
I've had good luck with WebZip.
|
Tools to convert asp.net dynamic site into static site
|
Are there any tools that will spider an asp.net website and create a static site?
|
[
"http://www.httrack.com/\nHave used for this purpose a few times, may need to do a little tidying up of urls, and some css linked images might not make it, depends on how good a job you want to do.\nIf you have dreamweaver, you can use that to manage the links if you need to clean up the file names afterwards.\nOptionally use the link checker extension for firefox to check it all afterwards.\n",
"You could use OfflineExplorer: http://www.metaproducts.com/mp/Offline_Explorer.htm \nThis works well as long as you only have GET requests (links). Postbacks will not \nbe executed. \nBe aware that crawling your site might acually change the underlying \ndatabase so I would strongly recommend you back up the database and web before \nusing a crawler.\n",
"Another solution is wget.\n",
"I've had good luck with WebZip. \n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"asp.net",
"web_crawler"
] |
stackoverflow_0000044517_asp.net_web_crawler.txt
|
Q:
MalformedInputException while using Shrinksafe with IBM JRE
While trying to use Shrinksafe custom_rhino.jar to build Dojo I get MalformedInputException. The problem occurs when the build reaches custom widgets/templates which contain French letters stored in UTF-8. The AIX machine has LANG=en_US which should be correct, judging by other documented problems regarding MalformedInputException with IBM JRE.
Switching to Sun's JRE is not an acceptable solution as this build must run on IBM AIX. It is possible that a solution lies in changing something in AIX or a setting in the IBM JRE, or both. So far I've been unsuccessful.
Problem is also described in dojo forum but without proper resolution.
A:
In the linked forum, I didn't see a clarification about the default character encoding on your build machine.
It may be that Dojo is using an encoding of UTF-8, but in fact your files are encoded with something like ISO-8859-1 (I'm assuming western Latin characters are used for French).
Do you have an editor such as Eclipse's that allows you to specify the character encoding to use on a particular file? You could try to open the file with UTF-8 encoding and see if the characters are what you expect.
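If you would rather check programmatically, a small Java sketch like this (the file name is a placeholder) throws MalformedInputException on the first byte sequence that is not valid UTF-8:
import java.io.*;
import java.nio.charset.*;

public class Utf8Check {
    public static void main(String[] args) throws IOException {
        CharsetDecoder decoder = Charset.forName("UTF-8").newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT);  // fail instead of silently replacing
        Reader in = new BufferedReader(new InputStreamReader(
                new FileInputStream("Template.html"), decoder));
        while (in.read() != -1) { /* just scan the file */ }
        in.close();
        System.out.println("File is valid UTF-8");
    }
}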
|
MalformedInputException while using Shrinksafe with IBM JRE
|
While trying to use Shrinksafe custom_rhino.jar to build Dojo I get MalformedInputException. The problem occurs when the build reaches custom widgets/templates which contain French letters stored in UTF-8. The AIX machine has LANG=en_US which should be correct, judging by other documented problems regarding MalformedInputException with IBM JRE.
Switching to Sun's JRE is not an acceptable solution as this build must run on IBM AIX. It is possible that a solution lies in changing something in AIX or a setting in the IBM JRE, or both. So far I've been unsuccessful.
Problem is also described in dojo forum but without proper resolution.
|
[
"In the linked forum, I didn't see a clarification about the default character encoding on your build machine. \nIt may be that Dojo is using an encoding of UTF-8, but in fact your files are encoded with something like ISO-8859-1 (I'm assuming western Latin characters are used for French).\nDo you have an editor such as Eclipse's that allows you to specify the character encoding to use on a particular file? You could try to open the file with UTF-8 encoding and see if the characters are what you expect.\n"
] |
[
2
] |
[] |
[] |
[
"aix",
"dojo",
"ibm_jre",
"java"
] |
stackoverflow_0000108593_aix_dojo_ibm_jre_java.txt
|
Q:
C pointer assignment behavior
temp2, temp1 are pointers to some struct x:
struct FunkyStruct x;
struct FunkyStruct *temp1 = &x, *temp2 = &x;
Now, after execution of following lines:
temp2=temp1;
temp1=temp1->nxt;
...Will temp2 and temp1 still point to the same memory location? If not, please explain why they would be different.
A:
Initially, temp1 and temp2 both contain the memory address of x.
temp2 = temp1 means "assign the value of temp1 to temp2". Since they have the same value to start with, this command does nothing.
The expression temp1->next means "Look inside the data structure that temp1 points to, and return the value of the field next." So temp1 = temp1->next assigns the value of temp1->next to temp1. (Of course, the lookup happens before the assignment.) temp1 will now contain whatever value the next field happened to contain. It could be the same as the old value, or it could be different.
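A small runnable C illustration of the same point (here nxt is simply wired to a second node for demonstration):
#include <stdio.h>

struct FunkyStruct { struct FunkyStruct *nxt; };

int main(void)
{
    struct FunkyStruct a, b;
    a.nxt = &b;                 /* make a.nxt point somewhere else */

    struct FunkyStruct *temp1 = &a, *temp2 = &a;
    temp2 = temp1;              /* both hold &a */
    temp1 = temp1->nxt;         /* temp1 now holds &b, temp2 still holds &a */

    printf("temp1=%p temp2=%p\n", (void *)temp1, (void *)temp2);
    return 0;
}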
A:
This sounds like a question based on a background in Java.
The answer that dysfunctor gave is good.
The important thing to realise is that in C assigning a pointer is no different to assigning an integer.
Consider the following modification to your original code:
int temp1 = 1;
int temp2;
temp2=temp1;
temp1=temp1 + 1;
At the end of this temp1 is 2, temp2 is 1.
It's not like assigning a (non-primitive) object in java, where the assignment actually assigns a reference to the object rather than the value.
A:
temp2 will not be updated, but temp1 will point to the next item. So if temp1 is 0x89abcdef and temp1->next is 0x89b00000, then after you're done, temp1 will be 0x89b00000 and temp2 will be 0x89abcdef.
Assuming you're making a linked list, of course.
A:
You're not really giving us enough information to answer your question. Are they starting out pointing to the same structure, or are they only both of type pointer to structure x? And if it's some struct x, what's the definition of the nxt field?
A:
Different.
You've saved the address that temp1 initially pointed to into temp2. You then changed what temp1 points to, not the variable at the other end of the pointer.
If you had done
temp2 = temp1;
*temp1 = temp1->foo;
then temp1 & temp2 will both be pointing to (the same) modified variable.
A:
No, assuming these are pointers as in C. temp2 would be pointing at the location of x and temp1 would be pointing at whatever the nxt pointer points to. Usually this would be the layout for a singly linked list.
A:
x (and therefore x.nxt) will be initialised to an unspecified value, depending on the combination of compiler, compiler options and the runtime environment. temp1 and temp2 will both point to x (before and after temp1=temp2). Then temp1 will be assigned whatever value x.nxt has.
Final answer: 0 < Pr(temp1 == temp2) << 1, because temp1 == temp2 iff x.nxt == &x.
A:
The short answer is no. But only if nxt is different to both temp1 and temp2 to start with.
The line temp1=temp1->nxt; has two parts, separated by the = operator. These are:
The right hand side temp1->nxt looks up the structure pointed to by temp1 and takes the value of the nxt variable. This is a pointer (new memory location).
The pointer from the right hand side is then used to update the value of temp1.
|
C pointer assignment behavior
|
temp2, temp1 are pointers to some struct x:
struct FunkyStruct x;
struct FunkyStruct *temp1 = &x, *temp2 = &x;
Now, after execution of following lines:
temp2=temp1;
temp1=temp1->nxt;
...Will temp2 and temp1 still point to the same memory location? If not, please explain why they would be different.
|
[
"Initially, temp1 and temp2 both contain the memory address of x.\ntemp2 = temp1 means \"assign the value of temp1 to temp2\". Since they have the same value to start with, this command does nothing.\nThe expression temp1->next means \"Look inside the data structure that temp1 points to, and return the value of the field next.\" So temp1 = temp1->next assigns the value of temp1->next to temp1. (Of course, the lookup happen before the assignment.) temp1 will now contain whatever value the next field happened to contain. It could be the same as the old value, or it could be different.\n",
"This sounds like a question based on a background in java?\nThe answer that dysfunctor gave is good.\nThe important thing to realise is that in C assigning a pointer is no different to assigning an integer.\nConsider the following modification to your original code:\n int temp1 = 1;\n int temp2;\n temp2=temp1;\n temp1=temp1 + 1;\n\nAt the end of this temp1 is 2, temp2 is 1.\nIt's not like assigning a (non-primitive) object in java, where the assignment actually assigns a reference to the object rather than the value.\n",
"temp2 will not be updated, but temp1 will point to the next item. So if temp1 is 0x89abcdef and temp1->next is 0x89b00000, then after you're done, temp1 will be 0x89b00000 and temp2 will be 0x89abcdef.\nAssuming you're making a linked list, of course.\n",
"You're not really giving us enough information to answer your question. Are they starting out pointing to the same structure, or are they only both of type pointer to structure x? And if it's some struct x, what's the definition of the nxt field?\n",
"Different.\nYou've saved the address of what temp1 is initially pointed to into temp2. You then changed what temp1 is pointed to, not the variable at the other end of what temp1 is pointed to.\nIf you had done\ntemp2 = temp1;\n*temp1 = temp1->foo;\n\nthen temp1 & temp2 will both be pointing to (the same) modified variable.\n",
"No, assuming there are pointers like in C. temp2 would be pointing at the location of x and temp1 would be pointing at whatever the nxt pointer points to. Usually this would be the layout for a singly linked list.\n",
"x (and therefore x.nxt) will be initialised to an unspecified value, depending on the combination of compiler, compiler options and the runtime environment. temp1 and temp2 will both point to x (before and after temp1=temp2). Then temp1 will be assigned whatever value x.nxt has.\nFinal answer: 0 < Pr(temp1 == temp2) << 1, because temp1 == temp2 iff x.nxt == &x.\n",
"The short answer is no. But only if nxt is different to both temp1 and temp2 to start with.\nThe line temp1=temp1->nxt; has two parts, separated by the = operator. These are:\n\nThe right hand side temp1->nxt looks up the structure pointed to by temp1 and takes the value of the nxt variable. This is a pointer (new memory location).\nThe pointer from the right hand side is then used to update the value of temp1.\n\n"
] |
[
10,
7,
2,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"c",
"memory",
"pointers"
] |
stackoverflow_0000109644_c_memory_pointers.txt
|
Q:
Single responsiblity principle: granularity of the reason to change
When applying the Single Responsibility Principle and looking at a class's reason to change, how do you determine whether that reason to change is too granular, or not granular enough?
A:
I don't know that there's a good answer to this one other than "apply your judgement, based on your experience." Failing that, get help, which I guess is what you're doing here ;)
Seriously, though, if you find that you're creating a gazillion classes to do what seems like a simple job, then you're probably being too granular. If your classes all seem colossal, then you're probably being too coarse. Please pardon me if that's a statement of the obvious.
I think this is one of those fuzzy, no-hard-and-fast-rules cases that show us why we need human programmers. Just try something, seeking balance, and refactor if you find you're going too far in one direction or the other. And remember: if it's worth doing, it's worth doing badly.
A:
I wouldn't be too worried about granularity initially; I would just go with separation of concerns at a broader level. The basic point is that we should avoid over-engineering here - but only just enough. I agree with Lucas here that this first step will improve with experience.
As the requirements change, as I start to get the 'smells', and as my understanding of the problem improves, I would refactor the design by factoring out the separate concerns as they become obvious. Basically, separation of concerns should be as evolutionary as the overall design.
|
Single responsiblity principle: granularity of the reason to change
|
When applying the Single Responsibility Principle and looking at a class's reason to change, how do you determine whether that reason to change is too granular, or not granular enough?
|
[
"I don't know that there's a good answer to this one other than \"apply your judgement, based on your experience.\" Failing that, get help, which I guess is what you're doing here ;)\nSeriously, though, if you find that you're creating a gazillion classes to do what seems like a simple job, then you're probably being too granular. If your classes all seem collossal, then you're probably being too coarse. Please pardon me if that's a statement of the obvious.\nI think this is one of those fuzzy, no-hard-and-fast-rules cases that show us why we need human programmers. Just try something, seeking balance, and refactor if you find you're going too far in one direction or the other. And remember: if it's worth doing, it's worth doing badly.\n",
"\nI wouldn't be too worried about granularity initially. I will just go with separation of concern at a broader level initially. Basic point is that we should avoid over-engineering here. But just enough. I agree with Lucas here, that this first step will improve with experience.\nAs the requirements change, as I am starting to get the 'smells', as my understanding of the problem improves I would refactor the design by factoring out the separate concerns as they become obvious. Basically separation of concern shall also be evolutionary as with overall design. \n\n"
] |
[
1,
1
] |
[] |
[] |
[
"oop"
] |
stackoverflow_0000027018_oop.txt
|
Q:
What is the best way to collect/report unexpected errors in .NET Window Applications?
I am looking for a better solution than what we currently have to deal with unexpected production errors, without reinventing the wheel.
A large number of our products are WinForms and WPF applications that are installed at remote sites. Inevitably unexpected errors occur, from NullReferenceExceptions to 'General network errors', ranging from programmer errors to environment problems.
Currently all these unhandled exceptions are logged using log4net and then emailed back to us for analysis. However we found that sometimes these error 'reports' contain too little information to identify the problem.
In these reports we need information such as:
Application name
Application Version
Workstation
Maybe a screen shot
Exception details
Operating system
Available RAM
Running processes
And so on...
I don't really want to re-invent the wheel by developing this from scratch. Components that are required:
Error collection (details as mentioned above)
Error 'sender' (Queuing required if DB or Internet is unavailable)
Error database
Analysis and reporting of these errors. E.g. the 10 most frequent errors, or whether timeouts occur between 4:00PM and 5:00PM. How do the errors compare between version x and y?
Note:
We looked at SmartAssembly as a possible solution but although close, it didn't quite meet our needs and I was hoping to hear what other developers do and whether alternatives exist.
Edit: Thanks for the answers so far. Maybe I wasn't clear in my original question: the problem is not how to catch all unhandled exceptions but rather how to deal with them and to create a reporting engine (analysis) around them.
A:
I'd suggest Jeff Atwood's article on User Friendly Exception Handling, which does most of what you ask already (Application Info, Screenshot, Exception Details, OS, Logging to text files and Emailing), and contains the source code so you add the extra stuff you need.
A:
You can attach to the unhandled exception event and log it/hit a webservice/etc.
[STAThread]
static void Main()
{
Application.ThreadException += new ThreadExceptionEventHandler(OnUnhandledException);
Application.Run(new FormStartUp());
}
static void OnUnhandledException(object sender, ThreadExceptionEventArgs t)
{
// Log
}
I also found this code snippet using AppDomain instead of ThreadException:
static class EntryPoint {
[MTAThread]
static void Main() {
// Add Global Exception Handler
AppDomain.CurrentDomain.UnhandledException +=
new UnhandledExceptionEventHandler(OnUnhandledException);
Application.Run(new Form1());
}
// In CF case only, ALL unhandled exceptions come here
private static void OnUnhandledException(Object sender,
UnhandledExceptionEventArgs e) {
Exception ex = e.ExceptionObject as Exception;
if (ex != null) {
// Can't imagine e.IsTerminating ever being false
// or e.ExceptionObject not being an Exception
SomeClass.SomeStaticHandlingMethod(ex, e.IsTerminating);
}
}
}
Here is some documentation on it: AppDomain Unhandled Exception
Outside of just handling it yourself, there isn't really a generic way to do this that is reusable, it really needs to be integrated with the interface of the application properly, but you could setup a webservice that takes application name, exception, and all that good stuff and have a centralized point for all your apps.
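For the data-collection half, a hedged C# sketch of what such a handler might gather (the method name is mine; GetEntryAssembly can be null in some hosts):
using System;
using System.Diagnostics;
using System.Reflection;
using System.Text;

static string BuildErrorReport(Exception ex)
{
    StringBuilder report = new StringBuilder();
    report.AppendLine("Application: " + Assembly.GetEntryAssembly().GetName().Name);
    report.AppendLine("Version:     " + Assembly.GetEntryAssembly().GetName().Version);
    report.AppendLine("Workstation: " + Environment.MachineName);
    report.AppendLine("OS:          " + Environment.OSVersion);
    report.AppendLine("Exception:   " + ex);       // type, message and stack trace
    foreach (Process p in Process.GetProcesses())  // running processes
        report.AppendLine("Process:     " + p.ProcessName);
    return report.ToString();
}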
A:
You may want to study the error reporting feature built into JetBrains' Omea Reader. It has a catch-all error-handling component that pops a dialog when an unexpected error occurs. The user can input more details before submitting the problem to JetBrains' public error-collection web service.
They made Omea open source to allow the community to upgrade the .NET 1.1 code base to v2 or 3.
http://www.jetbrains.net/confluence/display/OMEA/this+link
|
What is the best way to collect/report unexpected errors in .NET Window Applications?
|
I am looking for a better solution than what we currently have to deal with unexpected production errors, without reinventing the wheel.
A large number of our products are WinForms and WPF applications that are installed at remote sites. Inevitably unexpected errors occur, from NullReferenceExceptions to 'General network errors', ranging from programmer errors to environment problems.
Currently all these unhandled exceptions are logged using log4net and then emailed back to us for analysis. However we found that sometimes these error 'reports' contain too little information to identify the problem.
In these reports we need information such as:
Application name
Application Version
Workstation
Maybe a screen shot
Exception details
Operating system
Available RAM
Running processes
And so on...
I don't really want to re-invent the wheel by developing this from scratch. Components that are required:
Error collection (details as mentioned above)
Error 'sender' (Queuing required if DB or Internet is unavailable)
Error database
Analysis and reporting of these errors. E.g. the 10 most frequent errors, or whether timeouts occur between 4:00PM and 5:00PM. How do the errors compare between version x and y?
Note:
We looked at SmartAssembly as a possible solution but although close, it didn't quite meet our needs and I was hoping to hear what other developers do and whether alternatives exist.
Edit: Thanks for the answers so far. Maybe I wasn't clear in my original question: the problem is not how to catch all unhandled exceptions but rather how to deal with them and to create a reporting engine (analysis) around them.
|
[
"I'd suggest Jeff Atwood's article on User Friendly Exception Handling, which does most of what you ask already (Application Info, Screenshot, Exception Details, OS, Logging to text files and Emailing), and contains the source code so you add the extra stuff you need.\n",
"You can attach to the unhandled exception event and log it/hit a webservice/etc.\n[STAThread]\nstatic void Main() \n{\n Application.ThreadException += new ThreadExceptionEventHandler(OnUnhandledException);\n Application.Run(new FormStartUp());\n}\nstatic void OnUnhandledException(object sender, ThreadExceptionEventArgs t) \n{\n // Log\n}\n\nI also found this code snippet using AppDomain instead of ThreadException:\nstatic class EntryPoint {\n [MTAThread]\n static void Main() {\n // Add Global Exception Handler\n AppDomain.CurrentDomain.UnhandledException += \n new UnhandledExceptionEventHandler(OnUnhandledException);\n\n Application.Run(new Form1());\n }\n\n // In CF case only, ALL unhandled exceptions come here\n private static void OnUnhandledException(Object sender, \n UnhandledExceptionEventArgs e) {\n Exception ex = e.ExceptionObject as Exception;\n if (ex != null) {\n // Can't imagine e.IsTerminating ever being false\n // or e.ExceptionObject not being an Exception\n SomeClass.SomeStaticHandlingMethod(ex, e.IsTerminating);\n }\n }\n}\n\nHere is some documentation on it: AppDomain Unhandled Exception\nOutside of just handling it yourself, there isn't really a generic way to do this that is reusable, it really needs to be integrated with the interface of the application properly, but you could setup a webservice that takes application name, exception, and all that good stuff and have a centralized point for all your apps.\n",
"You may want to study the error reporting feature built into JetBrain's Omea Reader. It has a catch-all error-handling component that pops a dialog when an unexpected error occurs. The user can input more details before submitting the problem to JetBrain's public error-collection web service.\nThey made Omea open source to allow the community to upgrade the .NET 1.1 code base to v2 or 3.\nhttp://www.jetbrains.net/confluence/display/OMEA/this+link\n"
] |
[
5,
2,
0
] |
[] |
[] |
[
".net",
"c#",
"error_handling",
"reporting"
] |
stackoverflow_0000110488_.net_c#_error_handling_reporting.txt
|
Q:
Is it possible to start a scheduled Windows task from a package?
Does anyone know if you can and how to start off a scheduled Windows task on a Remote Server from within a SQL Server Integration Services (SSIS) package?
A:
Assuming you run it on Windows Server 2003/2008 or Vista, use SSIS Execute Process Task to start SCHTASKS.EXE with appropriate params (SCHTASKS /Run /? to see details).
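For reference, the command line the Execute Process Task would run looks roughly like this (server, credentials and task name are placeholders):
SCHTASKS /Run /S RemoteServer /U DOMAIN\user /P password /TN "MyTaskName"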
A:
It should be possible as the Task Scheduler has a scriptable COM API that can be used for interacting with tasks.
You could therefore either create a custom task that uses COM interop to call the Task Scheduler API, or it'd probably be quicker to use an ActiveX Script task to do your dirty work.
A:
I invested a lot of time in the aforementioned COM API back in 2002. It was, to put it mildly, "flakey".
What we ended up doing instead is having our tasks run every minute. The first thing the task did was check the database to see if it should continue running or not.
Then "starting" a scheduled task from SSIS was as simple as changing a database field.
|
Is it possible to start a scheduled Windows task from a package?
|
Does anyone know if you can and how to start off a scheduled Windows task on a Remote Server from within a SQL Server Integration Services (SSIS) package?
|
[
"Assuming you run it on Windows Server 2003/2008 or Vista, use SSIS Execute Process Task to start SCHTASKS.EXE with appropriate params (SCHTASKS /Run /? to see details).\n",
"It should be possible as the Task Scheduler has a scriptable COM API that can be used for interacting with tasks.\nYou could therefore either create a custom task that uses COM interop to call the Task Scheduler API, or it'd probably be quicker to use an Active X Script task to do your dirty work.\n",
"I invested a lot of time in the aforementioned COM API back in 2002. It was, to put it mildly, \"flakey\".\nWhat we ended up doing instead is having our tasks run every minute. The first thing the task did was check the database to see if it should continue running or not.\nThen \"starting\" a scheduled task from SSIS was as simple as changing a database field.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"sql_server",
"ssis",
"windows"
] |
stackoverflow_0000037532_sql_server_ssis_windows.txt
|
Q:
how do i add project references to swcs in FlashDevelop
I am trying to add a project reference or SWC for Papervision in FlashDevelop, but IntelliSense isn't picking it up. I've done it before but I forgot how.
Thanks.
A:
Add your swc to the lib folder of your project. Then right-click it and mark "Add To Library".
A:
In the menus:
Project -> Properties -> Compiler Options -> SWC Libraries
(and then add the path or file to the list)
|
how do i add project references to swcs in FlashDevelop
|
I am trying to add a project reference or SWC for Papervision in FlashDevelop, but IntelliSense isn't picking it up. I've done it before but I forgot how.
Thanks.
|
[
"Add your swc to the lib folder of your project. Then right-click it and mark \"Add To Library\".\n",
"In the menus:\nProject -> Properties -> Compiler Options -> SWC Libraries \n\n(and then add the path or file to the list)\n"
] |
[
26,
8
] |
[] |
[] |
[
"actionscript",
"actionscript_3",
"apache_flex",
"flash",
"flashdevelop"
] |
stackoverflow_0000110263_actionscript_actionscript_3_apache_flex_flash_flashdevelop.txt
|
Q:
How to serve files from IIS 6 on Windows Server 2003?
I have files with extensions like ".dae", ".gtc", etc. When I try to hit these files over HTTP, the server returns a 404, but they are in the directories. However, I can serve well-known file extensions; if I just rename them to, say, .xml, they are accessible.
Any suggestions for what the problem may be?
A:
If you request a file with an extension that is not a defined MIME type on your IIS 6.0 Web server, you receive a "HTTP Error 404 - File or directory not found" error message.
To define a MIME type for a specific extension (.dae in your case), follow these steps:
Open the IIS Microsoft Management Console (MMC), right-click the local computer name, and then click Properties.
Click MIME Types.
Click New.
In the Extension box, type the file name extension that you want (in your case .dae).
In the MIME Type box, type application/octet-stream.
Apply the new settings.
Note: you must restart the World Wide Web Publishing Service or wait for the worker process to recycle for the changes to take effect.
A:
You need to define additional MIME types on IIS 6 for the extensions that you mentioned.
Here is the MS article on how to add additional MIME types to IIS6:
http://support.microsoft.com/kb/326965
|
How to serve files from IIS 6 on Windows Server 2003?
|
I have files with extensions like ".dae", ".gtc", etc. When I try to hit these files over HTTP, the server returns a 404, but they are in the directories. However, I can serve well-known file extensions; if I just rename them to, say, .xml, they are accessible.
Any suggestions for what the problem may be?
|
[
"If you request a file with an extension that is not a defined MIME type on your IIS 6.0 Web server, you receive a \"HTTP Error 404 - File or directory not found\" error message.\nTo define a MIME type for a specific extension (.dae in your case), follow these steps:\n\nOpen the IIS Microsoft Management Console (MMC), right-click the local computer name, and then click Properties.\nClick MIME Types.\nClick New.\nIn the Extension box, type the file name extension that you want (in your case .dae). \nIn the MIME Type box, type application/octet-stream.\nApply the new settings.\n\nNote: you must restart the World Wide Web Publishing Service or wait for the worker process to recycle for the changes to take effect.\n",
"You need to define additional MIME types on IIS 6 for the extensions that you mentioned.\nHere is the MS article on how to add additional MIME types to IIS6:\nhttp://support.microsoft.com/kb/326965\n"
] |
[
4,
1
] |
[] |
[] |
[
".net",
"iis",
"iis_6",
"windows"
] |
stackoverflow_0000110426_.net_iis_iis_6_windows.txt
|
Q:
Connectionstring error after encrypted using aspnet_regiis.exe
I've encrypted the connectionstring in my web.config file using the steps in the link below:
http://www.codeproject.com/KB/database/WebFarmConnStringsNet20.aspx
However, whenever I call my application, it will give the following error:
Failed to decrypt using provider
'CustomProvider'. Error message from
the provider: The RSA key container
could not be opened.
The server where I perform the encryption is a 64-bit Windows Server 2003 R2 SP2. Because of that I assign the ACL to NT Authority\Network Service. Yet it still doesn't work.
Hope someone has some ideas what else do I need to check to get this working.
PS. If I used the default rsa key NetFrameworkConfigurationKey for encryption, then the connection string will not have an access problem.
A:
Well, I found the source of the problem, and boy was it embarrassing. In the attribute keyContainerName, I spelled the name incorrectly.
That's it. That's what caused the problem.
Apparently, the encryption will work even if you provide an incorrect keyContainerName, which I had incorrectly assumed would fail. So, once I decrypted the connection string and re-encrypted it with the right keyContainerName, it works fine.
BTW, make sure to decrypt your existing connectionstring before correcting the keyContainerName. The aspnet_regiis.exe will complain about bad data, because the provider is now different.
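For anyone following along, the decrypt/re-encrypt round trip looks something like this (paths and the key container name are placeholders):
rem decrypt with the old (misspelled) settings still in place
aspnet_regiis -pdf "connectionStrings" "C:\inetpub\wwwroot\MyApp"

rem re-encrypt once keyContainerName is corrected
aspnet_regiis -pef "connectionStrings" "C:\inetpub\wwwroot\MyApp" -prov "CustomProvider"

rem grant the worker process access to the key container
aspnet_regiis -pa "MyKeyContainer" "NT AUTHORITY\NETWORK SERVICE"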
A:
Did you remember to add the
<configProtectedData>
to your web.config?
|
Connectionstring error after encrypted using aspnet_regiis.exe
|
I've encrypted the connectionstring in my web.config file using the steps in the link below:
http://www.codeproject.com/KB/database/WebFarmConnStringsNet20.aspx
However, whenever I call my application, it will give the following error:
Failed to decrypt using provider
'CustomProvider'. Error message from
the provider: The RSA key container
could not be opened.
The server where I perform the encryption is a 64-bit Windows Server 2003 R2 SP2. Because of that I assign the ACL to NT Authority\Network Service. Yet it still doesn't work.
Hope someone has some ideas what else do I need to check to get this working.
PS. If I used the default rsa key NetFrameworkConfigurationKey for encryption, then the connection string will not have an access problem.
|
[
"Well, I found the source of the problem, and boy was it embarrassing. In the attribute keyContainerName, I spelled the name incorrectly. \nThat it. That's what caused the problem.\nApparently, the encryption will work even if you provide an incorrect keyContainerName, which I incorrectly assumed will fail. So, once I decrypt the connectionstring and re-encrypt with the right keyContainerName, it works fine. \nBTW, make sure to decrypt your existing connectionstring before correcting the keyContainerName. The aspnet_regiis.exe will complain about bad data, because the provider is now different. \n",
"Did you remember to add the \n<configProtectedData>\n\nto your web.config?\n"
] |
[
1,
0
] |
[] |
[] |
[
"asp.net",
"aspnet_regiis.exe",
"connection_string",
"rsa"
] |
stackoverflow_0000107846_asp.net_aspnet_regiis.exe_connection_string_rsa.txt
|
Q:
How many unit tests should I write per function/method?
Do you write one test per function/method, with multiple checks in the test, or a test for each check?
A:
One test per check and super-descriptive names; for instance:
@Test
public void userCannotVoteDownWhenScoreIsLessThanOneHundred() {
...
}
Using only one assertion together with good names gives me a better report when a test fails. They scream at me: "You broke THAT rule!".
A:
I have a test per capability the function is offering. Each test may have several assertions, however.
The name of the testcase indicates the capability being tested.
Generally, for one function, I have several "sunny day" tests and one or a few "rainy day" scenario, depending of its complexity.
A:
BDD (Behavior Driven Development)
Though I'm still learning, it's basically TDD organized/focused around how your software will actually be used... NOT how it will be developed/built.
Wikipedia
General Info
BTW as far as whether to do multiple asserts per test method I would recommend trying it both ways. Sometimes you'll see where one strategy left you in a bind and it'll start making sense why you normally just use one assert per method.
A:
I think that the rule of single assertion is a little too strict. In my unit tests, I try to follow the rule of single group of assertions -- you can use more than one assertion in one test method, as long as you do the checks one after another (you don't change the state of tested class between the assertions).
So, in Python, I believe a test like this is correct:
def testGetCountReturnsCountAndEnd(self):
count, endReached = self.handler.getCount()
self.assertEqual(count, 0)
self.assertTrue(endReached)
but this one should be split into two test methods:
def testGetCountReturnsOneAfterPut(self):
self.assertEqual(self.handler.getCount(), 0)
self.handler.put('foo')
self.assertEqual(self.handler.getCount(), 1)
Of course, in case of long and frequently used groups of assertions, I like to create custom assertion methods -- these are especially useful for comparing complex objects.
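For example, a custom assertion method in that style might look like this (the name and expected values are illustrative):
def assertHandlerState(self, expectedCount, expectedEndReached):
    # one reusable group of checks for the handler's state
    count, endReached = self.handler.getCount()
    self.assertEqual(count, expectedCount)
    self.assertEqual(endReached, expectedEndReached)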
A:
A test case for each check. It's more granular. It makes it much easier to see what specific test case failed.
A:
I write at least one test per method, and sometimes more if the method requires some different setUp to test the good cases and the bad cases.
But you should NEVER test more than one method in one unit test. It reduces the amount of work and errors in fixing your tests in case your API changes.
A:
I would suggest a test case for every check.
The more atomic you keep things, the better your results are!
Keeping multiple checks in a single test will help you generate a report of how much functionality needs to be corrected.
Keeping atomic test cases will show you the overall quality!
A:
In general one testcase per check. When tests are grouped around a particular function, it makes refactoring (e.g. removing or splitting) that function more difficult because the tests also need a lot of changes. It is much better to write the tests for each type of behaviour that you want from the class. Sometimes when testing a particular behaviour it makes sense to have multiple checks per test case. However, as the tests become more complicated it makes them harder to change when something in the class changes.
A:
In Java/Eclipse/JUnit I use two source directories (src and test) with the same tree.
If I have a src/com/mycompany/whatever/TestMePlease with methods worth testing (e.g. deleteAll(List<?> stuff) throws MyException) I create a test/com/mycompany/whatever/TestMePleaseTest with methods to test different use cases/scenarios:
@Test
public void deleteAllWithNullInput() { ... }
@Test(expected = MyException.class)
public void deleteAllWithEmptyInput() { ... }
@Test
public void deleteAllWithSingleLineInput() { ... }
@Test
public void deleteAllWithMultipleLinesInput() { ... }
Having different checks is simpler to handle for me.
Nonetheless, since every test should be consistent, if I want my initial data set to stay unaltered I sometimes have, for example, to create stuff and delete it in the same check to ensure every other test finds the data set pristine:
@Test
public void insertAndDelete() {
assertTrue(/*stuff does not exist yet*/);
createStuff();
assertTrue(/*stuff does exist now*/);
deleteStuff();
assertTrue(/*stuff does not exist anymore*/);
}
Don't know if there are smarter ways to do that, to tell you the truth...
A:
I like to have a test per check in a method and a meaningful name for the test method. For instance:
testAddUser_shouldThrowIllegalArgumentExceptionWhenUserIsNull
A:
A testcase per check. If you name the method appropriately, it can provide a valuable hint towards the problem when one of these tests causes a regression failure.
A:
I try to separate out Database tests and Business Logic Tests (using BDD as others here recommend). Running the Database ones first ensures your Database is in a good state before asking your application to play with it.
There's a good podcast show with Andy Leonard on what it involves and how to do it, and if you'd like a bit more information, I've written a blog post on the subject (shameless plug ;o)
|
How many unit tests should I write per function/method?
|
Do you write one test per function/method, with multiple checks in the test, or a test for each check?
|
[
"One test per check and super descriptive names, per instance:\n@Test\npublic void userCannotVoteDownWhenScoreIsLessThanOneHundred() {\n ...\n}\n\nBoth only one assertion and using good names gives me a better report when a test fails. They scream to me: \"You broke THAT rule!\".\n",
"I have a test per capability the function is offering. Each test may have several assertions, however. \nThe name of the testcase indicates the capability being tested.\nGenerally, for one function, I have several \"sunny day\" tests and one or a few \"rainy day\" scenario, depending of its complexity. \n",
"BDD (Behavior Driven Development)\nThough I'm still learning, it's basically TDD organized/focused around how your software will actually be used... NOT how it will be developed/built.\nWikipedia\nGeneral Info\nBTW as far as whether to do multiple asserts per test method I would recommend trying it both ways. Sometimes you'll see where one strategy left you in a bind and it'll start making sense why you normally just use one assert per method.\n",
"I think that the rule of single assertion is a little too strict. In my unit tests, I try to follow the rule of single group of assertions -- you can use more than one assertion in one test method, as long as you do the checks one after another (you don't change the state of tested class between the assertions).\nSo, in Python, I believe a test like this is correct:\ndef testGetCountReturnsCountAndEnd(self):\n count, endReached = self.handler.getCount()\n self.assertEqual(count, 0)\n self.assertTrue(endReached)\n\nbut this one should be split into two test methods:\ndef testGetCountReturnsOneAfterPut(self):\n self.assertEqual(self.handler.getCount(), 0)\n self.handler.put('foo')\n self.assertEqual(self.handler.getCount(), 1)\n\nOf course, in case of long and frequently used groups of assertions, I like to create custom assertion methods -- these are especially useful for comparing complex objects.\n",
"A test case for each check. It's more granular. It makes it much easier to see what specific test case failed.\n",
"I write at least one test per method, and somtimes more if the method requires some different setUp to test the good cases and the bad cases.\nBut you should NEVER test more than one method in one unit test. It reduce the amount of work and error in fixing your test in case your API changes.\n",
"I would suggest a test case for every check.\nThe more you keep atomic, the better your results are!\nKeeping multiple checks in a single tests will help you generate report for how much functionality needs to be corrected.\nKeeping atomic test case will show you the overall quality !\n",
"In general one testcase per check. When tests are grouped around a particular function it makes refactoring (eg removing or splitting) that function more difficult because the tests also need a lot of changes. It is much better to write the tests for each type of behaviour that you want from the class. Sometimes when testing a particular behaviour it makes sense to have multiple checks per test case. However, as the tests become more complicated it makes them harder to change when something in the class changes.\n",
"In Java/Eclipse/JUnit I use two source directories (src and test) with the same tree.\nIf I have a src/com/mycompany/whatever/TestMePlease with methods worth testing (e.g. deleteAll(List<?> stuff) throws MyException) I create a test/com/mycompany/whatever/TestMePleaseTest with methods to test differente use case/scenarios:\n@Test\npublic void deleteAllWithNullInput() { ... }\n\n@Test(expect=\"MyException.class\") // not sure about actual syntax here :-P\npublic void deleteAllWithEmptyInput() { ... }\n\n@Test\npublic void deleteAllWithSingleLineInput() { ... }\n\n@Test\npublic void deleteAllWithMultipleLinesInput() { ... }\n\nHaving different checks is simpler to handle for me. \nNonetheless, since every test should be consistent, if I want my initial data set to stay unaltered I sometimes have, for example, to create stuff and delete it in the same check to insure every other test find the data set pristine:\n@Test\npublic void insertAndDelete() { \n assertTrue(/*stuff does not exist yet*/);\n createStuff();\n assertTrue(/*stuff does exist now*/);\n deleteStuff();\n assertTrue(/*stuff does not exist anymore*/);\n}\n\nDon't know if there are smarter ways to do that, to tell you the truth... \n",
"I like to have a test per check in a method and have a meaningfull name for the test-method. For instance:\ntestAddUser_shouldThrowIllegalArgumentExceptionWhenUserIsNull\n",
"A testcase per check. If you name the method appropriately, it can provide valuable hint towards the problem when one of these tests cause a regression failure.\n",
"I try to separate out Database tests and Business Logic Tests (using BDD as others here recommend), running the Database ones first ensures your Database is in a good state before asking your application to play with it.\nThere's a good podcast show with Andy Leonard on what it involves and how to do it, and if you'd like a bit more information, I've written a blog post on the subject (shameless plug ;o)\n"
] |
[
42,
7,
5,
3,
2,
2,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"unit_testing"
] |
stackoverflow_0000110430_unit_testing.txt
|
Q:
Advice on mixing legacy ASP site with .NET 2.0
We've just been tasked with updating an e-commerce application to use PayPal's PayFlow product. The site was originally written in classic ASP except for the credit card processing portion which was a COM component.
Our plan is to replace the COM component with a .NET 2.0 component. I'm looking for tips, gotchas, etc. before we embark.
A:
I think Dan Bartels' blog post about
Replacing Old Classic ASP COM Componenets With .NET Assemblies is the right starting point for you.
Implementing the details described in the blog post, you should be able to instantiate your objects in classic asp and execute code like this:
Dim myObject
Set myObject = Server.CreateObject("MyWebDLL.MyClass")
Response.Write myObject.MyMethod("test")
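On the .NET side, the class behind that ProgID might be sketched like this (names mirror the ASP snippet above, and the method body is placeholder logic; register the built assembly with regasm /codebase so COM can find it):
using System.Runtime.InteropServices;

[ComVisible(true)]
[ProgId("MyWebDLL.MyClass")]
public class MyClass
{
    public string MyMethod(string input)
    {
        return "Processed: " + input;  // placeholder logic
    }
}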
|
Advice on mixing legacy ASP site with .NET 2.0
|
We've just been tasked with updating an e-commerce application to use PayPal's PayFlow product. The site was originally written in classic ASP except for the credit card processing portion which was a COM component.
Our plan is to replace the COM component with a .NET 2.0 component. I'm looking for tips, gotchas, etc. before we embark.
|
[
"I think Dan Bartels' blog post about \nReplacing Old Classic ASP COM Componenets With .NET Assemblies is the right starting point for you.\nImplementing the details described in the blog post, you should be able to instantiate your objects in classic asp and execute code like this:\nDim myObject\nSet myObject = Server.CreateObject(\"MyWebDLL.MyClass\")\nResponse.Write myObject.MyMethod(\"test\")\n\n"
] |
[
1
] |
[] |
[] |
[
"asp.net",
"asp_classic",
"com",
"e_commerce",
"paypal"
] |
stackoverflow_0000110431_asp.net_asp_classic_com_e_commerce_paypal.txt
|
Q:
User Defined Fields with NHibernate
I need to add a user defined fields feature to an ASP.NET C# application that uses NHibernate.
The user must be able to add and remove fields from several objects in the system "on the fly", preferably without any system downtime.
One important constraint is that the database schema can't be changed by the user - that is, I can add whatever fields/tables I need to support this feature but when the user adds or removes a field he can't change the database schema.
EDIT: I also have to sort and filter by the values of the user defined fields.
I know how to do it in C#/SQL with a key/value table, but I don't know how to do it with NHibernate (including filtering and sorting by the user defined fields).
A:
It sounds like you just want to add a name/value properties table.
Have one table defining the name (e.g. ID, FIELDNAME, DESCRIPTION) and another defining the value (e.g. ID, NAME_FK, OBJECT_FK, VALUE).
Have the user adding new rows to the NAME table to add a new property and adding values by adding rows to the VALUE table, foreign-keyed to the NAME table and whatever object you want to attach it to.
Your view can then query the VALUE table keyed against the OBJECT_FK and use the NAME_FK to reference the property name.
Edit: NHibernate won't see the new values as actual properties, but if you map them as collections you should be able to query & filter using ICriteria:
IList<MyProp> props = session
.CreateCriteria(typeof(MyProp))
.Add(Expression.Eq("ObjectName", "Widget"))
.Add(Expression.Eq("Name", "Size"))
.List<MyProp>();
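Sorting by a user-defined field follows the same pattern, e.g. (Order lives in NHibernate's expression/criterion namespace, depending on your version):
IList<MyProp> sorted = session
    .CreateCriteria(typeof(MyProp))
    .Add(Expression.Eq("Name", "Size"))
    .AddOrder(Order.Asc("Value"))  // sort by the stored value
    .List<MyProp>();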
|
User Defined Fields with NHibernate
|
I need to add a user defined fields feature to an ASP.NET C# application that uses NHibernate.
The user must be able to add and remove fields from several objects in the system "on the fly", preferably without any system downtime.
One important constraint is that the database schema can't be changed by the user - that is, I can add whatever fields/tables I need to support this feature but when the user adds or removes a field he can't change the database schema.
EDIT: I also have to sort and filter by the values of the user defined fields.
I know how to do it in C#/SQL with a key/value table, but I don't know how to do it with NHibernate (including filtering and sorting by the user defined fields).
|
[
"It sounds like you just want to add a name/value properties table.\nHave one table defining the name (e.g. ID, FIELDNAME, DESCRIPTION) and another defining the value (e.g. ID, NAME_FK, OBJECT_FK, VALUE).\nHave the user adding new rows to the NAME table to add a new property and adding values by adding rows to the VALUE table, foreign-keyed to the NAME table and whatever object you want to attach it to.\nYour view can then query the VALUE table keyed against the OBJECT_FK and use the NAME_FK to reference the property name.\nEdit: NHibernate won't see the new values as actual properties, but if you map them as collections you should be able to query & filter using ICriteria:\nIList<MyProp> props = session\n .CreateCriteria(typeof(MyProp))\n .Add(Expression.Eq(\"ObjectName\", \"Widget\"))\n .Add(Expression.Eq(\"Name\", \"Size\"))\n .List<MyProp>();\n\n"
] |
[
4
] |
[] |
[] |
[
"c#",
"nhibernate",
"user_defined_fields"
] |
stackoverflow_0000110591_c#_nhibernate_user_defined_fields.txt
|
Q:
Prevent .NET from "lifting" local variables
I have the following code:
string prefix = "OLD:";
Func<string, string> prependAction = (x => prefix + x);
prefix = "NEW:";
Console.WriteLine(prependAction("brownie"));
Because the compiler replaces the prefix variable with a closure, "NEW:brownie" is printed to the console.
Is there an easy way to prevent the compiler from lifting the prefix variable whilst still making use of a lambda expression? I would like a way of making my Func work identically to:
Func<string, string> prependAction = (x => "OLD:" + x);
The reason I need this is I would like to serialize the resulting delegate. If the prefix variable is in a non-serializable class the above function will not serialize.
The only way around this I can see at the moment is to create a new serializable class that stores the string as a member variable and has the string prepend method:
string prefix = "NEW:";
var prepender = new Prepender {Prefix = prefix};
Func<string, string> prependAction = prepender.Prepend;
prefix = "OLD:";
Console.WriteLine(prependAction("brownie"));
With helper class:
[Serializable]
public class Prepender
{
public string Prefix { get; set; }
public string Prepend(string str)
{
return Prefix + str;
}
}
This seems like a lot of extra work to get the compiler to be "dumb".
A:
I see the underlying problem now. It is deeper than I first thought. Basically the solution is to modify the expression tree before serializing it, by replacing all subtrees that do not depend on the parameters with constant nodes. This is apparently called "funcletization".
There is an explanation of it here.
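The heart of that rewrite is small; a hedged C# sketch of collapsing a parameter-free subtree into a constant (a full version also needs an ExpressionVisitor to find such subtrees):
using System.Linq.Expressions;

static Expression Funcletize(Expression subtree)
{
    // compile and run the subtree now, then freeze the result into the tree
    object value = Expression.Lambda(subtree).Compile().DynamicInvoke();
    return Expression.Constant(value, subtree.Type);
}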
A:
Just make another closure...
Say, something like:
var prepend = "OLD:";
Func<string, Func<string, string>> makePrepender = x => y => (x + y);
Func<string, string> oldPrepend = makePrepender(prepend);
prepend = "NEW:";
Console.WriteLine(oldPrepend("Brownie"));
Haven't tested it yet as I don't have access to VS at the moment, but normally this is how I solve such problems.
A:
Lambdas automatically 'suck' in local variables; I'm afraid that's simply how they work by definition.
A:
This is a pretty common problem i.e. variables being modified by a closure unintentionally - a far simpler solution is just to go:
string prefix = "OLD:";
var actionPrefix = prefix;
Func<string, string> prependAction = (x => actionPrefix + x);
prefix = "NEW:";
Console.WriteLine(prependAction("brownie"));
If you're using ReSharper, it will actually identify the places in your code where you're at risk of causing unexpected side effects such as this - so if the file is "all green" your code should be OK.
I think in some ways it would have been nice if we had some syntactic sugar to handle this situation so we could have written it as a one-liner i.e.
Func<string, string> prependAction = (x => ~prefix + x);
Where some prefix operator would cause the variable's value to be evaluated prior to constructing the anonymous delegate/function.
A:
There are already several answers here explaining how you can avoid the lambda "lifting" your variable. Unfortunately that does not solve your underlying problem. Being unable to serialize the lambda has nothing to do with the lambda having "lifted" your variable. If the lambda expression needs an instance of a non-serializable class to compute, it makes perfect sense that it cannot be serialized.
Depending on what you actually are trying to do (I can't quite decide from your post), a solution would be to move the non-serializable part of the lambda outside.
For example, instead of:
NonSerializable nonSerializable = new NonSerializable();
Func<string, string> prependAction = (x => nonSerializable.ToString() + x);
use:
NonSerializable nonSerializable = new NonSerializable();
string prefix = nonSerializable.ToString();
Func<string, string> prependAction = (x => prefix + x);
A:
I get the problem now: the lambda refers to the containing class which might not be serializable. Then do something like this:
public static Func<string, string> MakePrependAction(string prefix){
return (x => prefix + x);
}
(Note the static keyword.) Then the lambda need not reference the containing class.
|
Prevent .NET from "lifting" local variables
|
I have the following code:
string prefix = "OLD:";
Func<string, string> prependAction = (x => prefix + x);
prefix = "NEW:";
Console.WriteLine(prependAction("brownie"));
Because the compiler replaces the prefix variable with a closure, "NEW:brownie" is printed to the console.
Is there an easy way to prevent the compiler from lifting the prefix variable whilst still making use of a lambda expression? I would like a way of making my Func work identically to:
Func<string, string> prependAction = (x => "OLD:" + x);
The reason I need this is I would like to serialize the resulting delegate. If the prefix variable is in a non-serializable class the above function will not serialize.
The only way around this I can see at the moment is to create a new serializable class that stores the string as a member variable and has the string prepend method:
string prefix = "NEW:";
var prepender = new Prepender {Prefix = prefix};
Func<string, string> prependAction = prepender.Prepend;
prefix = "OLD:";
Console.WriteLine(prependAction("brownie"));
With helper class:
[Serializable]
public class Prepender
{
public string Prefix { get; set; }
public string Prepend(string str)
{
return Prefix + str;
}
}
This seems like a lot of extra work to get the compiler to be "dumb".
|
[
"I see the underlying problem now. It is deeper than I first thought. Basically the solution is to modify the expression tree before serializing it, by replacing all subtrees that do not depend on the parameters with constant nodes. This is apparently called \"funcletization\".\nThere is an explanation of it here.\n",
"Just make another closure...\nSay, something like:\nvar prepend = \"OLD:\";\n\nFunc<string, Func<string, string>> makePrepender = x => y => (x + y);\nFunc<string, string> oldPrepend = makePrepender(prepend);\n\nprepend = \"NEW:\";\n\nConsole.WriteLine(oldPrepend(\"Brownie\"));\n\nHavn't tested it yet as I don't have access to VS at the moment, but normally, this is how I solve such problem.\n",
"Lambdas automatically 'suck' in local variables, I'm afraid that's simply how they work by definition.\n",
"This is a pretty common problem i.e. variables being modified by a closure unintentionally - a far simpler solution is just to go:\nstring prefix = \"OLD:\";\nvar actionPrefix = prefix;\nFunc<string, string> prependAction = (x => actionPrefix + x);\nprefix = \"NEW:\";\nConsole.WriteLine(prependAction(\"brownie\"));\n\nIf you're using resharper it will actually identify the places in your code where you're at risk of causing unexpected side effects such as this - so if the file is \"all green\" your code should be OK.\nI think in some ways it would have been nice if we had some syntactic sugar to handle this situation so we could have written it as a one-liner i.e.\nFunc<string, string> prependAction = (x => ~prefix + x);\n\nWhere some prefix operator would cause the variable's value to be evaluated prior to constructing the anonymous delegate/function.\n",
"There are already several answers here explaining how you can avoid the lambda \"lifting\" your variable. Unfortunately that does not solve your underlying problem. Being unable to serialize the lambda has nothing to do with the lambda having \"lifted\" your variable. If the lambda expression needs an instance of a non-serialize class to compute, it makes perfect sense that it cannot be serialized.\nDepending on what you actually are trying to do (I can't quite decide from your post), a solution would be to move the non-serializable part of the lambda outside.\nFor example, instead of:\nNonSerializable nonSerializable = new NonSerializable();\nFunc<string, string> prependAction = (x => nonSerializable.ToString() + x);\n\nuse:\nNonSerializable nonSerializable = new NonSerializable();\nstring prefix = nonSerializable.ToString();\nFunc<string, string> prependAction = (x => prefix + x);\n\n",
"I get the problem now: the lambda refers to the containing class which might not be serializable. Then do something like this:\npublic void static Func<string, string> MakePrependAction(String prefix){\n return (x => prefix + x);\n}\n\n(Note the static keyword.) Then the lambda needs not reference the containing class.\n"
] |
[
8,
2,
1,
0,
0,
0
] |
[
"What about this\nstring prefix = \"OLD:\";\nstring _prefix=prefix;\nFunc<string, string> prependAction = (x => _prefix + x);\nprefix = \"NEW:\";\nConsole.WriteLine(prependAction(\"brownie\"));\n\n",
"How about:\nstring prefix = \"OLD:\";\nstring prefixCopy = prefix;\nFunc<string, string> prependAction = (x => prefixCopy + x);\nprefix = \"NEW:\";\nConsole.WriteLine(prependAction(\"brownie\"));\n\n?\n",
"Well, if we're gonna talk about \"problems\" here, lambdas come from the functional programming world, and in a purely functional programming langauge, there are no assignments and so your problem would never arise because prefix's value could never change. I understand C# thinks it's cool to import ideas from functional programs (because FP is cool!) but it's very hard to make it pretty, because C# is and will always be an imperative programming language.\n"
] |
[
-1,
-1,
-1
] |
[
".net",
"c#",
"lambda"
] |
stackoverflow_0000110536_.net_c#_lambda.txt
|
Q:
How do you change the http header information sent in IIS 6
Currently IIS sends an Expires HTTP header of yesterday minus 1 hour on ASP.NET pages. How do I change this to 60 seconds in the future instead?
A:
You can also add a content-expires page directive to your ASP.NET page (for different expire schedules):
@OutputCache
Or you can set the header inside your code (perhaps a base page class):
Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
A good article on caching can be found on MSDN:
http://support.microsoft.com/?scid=kb%3Ben-us%3B323290&x=11&y=6
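As a slightly fuller sketch (all standard System.Web calls; the 60-second value is only an example), a base page class could emit matching Expires and Cache-Control headers:
protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
    // Cache for 60 seconds on clients and intermediate proxies.
    Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
    Response.Cache.SetMaxAge(TimeSpan.FromSeconds(60));
    Response.Cache.SetCacheability(HttpCacheability.Public);
}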
A:
Go to IIS administration -> your web site -> Properties -> HTTP Headers tab -> click Enable Content Expiration, and set it to whatever you want.
|
How do you change the http header information sent in IIS 6
|
Currently IIS sends an Expires HTTP header of yesterday minus 1 hour on ASP.NET pages. How do I change this to 60 seconds in the future instead?
|
[
"You can also add a content-expires page directive to your ASP.NET page (for different expire schedules):\n@Outputcache \nOr you can set the header inside your code (perhaps a base page class):\nResponse.Cache.SetExpires(DateTime.Now.AddSeconds(60));\nA good article on caching can be found on MSDN:\nhttp://support.microsoft.com/?scid=kb%3Ben-us%3B323290&x=11&y=6\n",
"Go to IIS administration -> -> Properties -> HTTP Headers tab -> click Enable Content Expiration, and set it to whatever you want.\n"
] |
[
3,
2
] |
[] |
[] |
[
"asp.net",
"iis"
] |
stackoverflow_0000110632_asp.net_iis.txt
|
Q:
USRP2: Number of A/D Converters
Why are there two A/D converters on the USRP2 board if you can only use one RX
daughtercard?
A:
Most of the daughterboards do quadrature downconversion and produce
analog I & Q. For those daughterboards we use 1 A/D for I and another
one for Q.
|
USRP2: Number of A/D Converters
|
Why are there two A/D converters on the USRP2 board if you can only use one RX
daughtercard?
|
[
"Most of the daughterboards do quadrature downconversion and produce\nanalog I & Q. For those daughterboards we use 1 A/D for I and another\none for Q.\n"
] |
[
4
] |
[] |
[] |
[
"usrp"
] |
stackoverflow_0000110680_usrp.txt
|
Q:
Bandwidth USRP2
What is the maximum bandwidth I can handle with a USRP2?
A:
The USRP2 A/D samples at 100 MS/s; I & Q are decimated to 25 MS/s complex. We use 16-bit I & Q.
That works out to ~800Mbit/s on the gigabit ethernet, which the USRP2
can sustain, no problem.
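To spell out the arithmetic behind that figure: 25 MS/s × 2 components (I and Q) × 16 bits per component = 800 Mbit/s.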
|
Bandwidth USRP2
|
What is the maximum bandwidth I can handle with a USRP2?
|
[
"USRP2 A/D samples at 100MS/s I & Q is decimated to 25MS/s complex. We use 16-bit I & Q.\nThat works out to ~800Mbit/s on the gigabit ethernet, which the USRP2\ncan sustain, no problem. \n"
] |
[
3
] |
[] |
[] |
[
"gnuradio"
] |
stackoverflow_0000110692_gnuradio.txt
|
Q:
How can I temporarily load a font?
I need to load some fonts temporarily in my program. Preferably from a dll resource file.
A:
And here a Delphi version:
procedure LoadFontFromDll(const DllName, FontName: PWideChar);
var
DllHandle: HMODULE;
ResHandle: HRSRC;
ResSize, NbFontAdded: Cardinal;
ResAddr: HGLOBAL;
begin
DllHandle := LoadLibrary(DllName);
if DllHandle = 0 then
RaiseLastOSError;
ResHandle := FindResource(DllHandle, FontName, RT_FONT);
if ResHandle = 0 then
RaiseLastOSError;
ResAddr := LoadResource(DllHandle, ResHandle);
if ResAddr = 0 then
RaiseLastOSError;
ResSize := SizeOfResource(DllHandle, ResHandle);
if ResSize = 0 then
RaiseLastOSError;
if 0 = AddFontMemResourceEx(Pointer(ResAddr), ResSize, nil, @NbFontAdded) then
RaiseLastOSError;
end;
to be used like:
var
FontName: PChar;
FontHandle: THandle;
...
FontName := 'DEJAVUSANS';
LoadFontFromDll('Project1.dll' , FontName);
FontHandle := CreateFont(0, 0, 0, 0, FW_NORMAL, 0, 0, 0, DEFAULT_CHARSET,
OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS, DEFAULT_QUALITY, DEFAULT_PITCH,
FontName);
if FontHandle = 0 then
RaiseLastOSError;
A:
I found this with Google. I have cut & pasted the relevant code below.
You need to add the font to your resource file:
34 FONT "myfont.ttf"
The following C code will load the font from the DLL resource and release it from memory when you are finished using it.
DWORD Count;
HMODULE Module = LoadLibrary("mylib.dll");
HRSRC Resource = FindResource(Module,MAKEINTRESOURCE(34),RT_FONT);
DWORD Length = SizeofResource(Module,Resource);
HGLOBAL Address = LoadResource(Module,Resource);
HANDLE Handle = AddFontMemResourceEx(Address,Length,0,&Count);
/* Use the font here... */
RemoveFontMemResourceEx(Handle);
FreeLibrary(Module);
A:
Here's some code that will load/make available the font from inside your executable (i.e., the font was embedded as a resource, rather than something you had to install into Windows generally).
Note that the font is available to any application until your program gets rid of it.
I don't know how useful you'll find this, but I have used it a few times. I've never put the font into a dll (I prefer this 'embed into the exe' approach) but don't imagine it changes things too much.
procedure TForm1.FormCreate(Sender: TObject);
var
ResStream : TResourceStream;
sFileName : string;
begin
sFileName:=ExtractFilePath(Application.ExeName)+'SWISFONT.TTF';
ResStream:=nil;
try
ResStream:=TResourceStream.Create(hInstance, 'Swisfont', RT_RCDATA);
try
ResStream.SaveToFile(sFileName);
except
on E:EFCreateError Do ShowMessage(E.Message);
end;
finally
ResStream.Free;
end;
AddFontResource(PChar(sFileName));
SendMessage(HWND_BROADCAST, WM_FONTCHANGE, 0, 0);
end;
procedure TForm1.FormDestroy(Sender: TObject);
var
sFile:string;
begin
sFile:=ExtractFilePath(Application.ExeName)+'SWISFONT.TTF';
if FileExists(sFile) then
begin
RemoveFontResource(PChar(sFile));
SendMessage(HWND_BROADCAST, WM_FONTCHANGE, 0, 0);
DeleteFile(sFile);
end;
end;
|
How can I temporarily load a font?
|
I need to load some fonts temporarily in my program. Preferably from a dll resource file.
|
[
"And here a Delphi version:\nprocedure LoadFontFromDll(const DllName, FontName: PWideChar);\nvar\n DllHandle: HMODULE;\n ResHandle: HRSRC;\n ResSize, NbFontAdded: Cardinal;\n ResAddr: HGLOBAL;\nbegin\n DllHandle := LoadLibrary(DllName);\n if DllHandle = 0 then\n RaiseLastOSError;\n ResHandle := FindResource(DllHandle, FontName, RT_FONT);\n if ResHandle = 0 then\n RaiseLastOSError;\n ResAddr := LoadResource(DllHandle, ResHandle);\n if ResAddr = 0 then\n RaiseLastOSError;\n ResSize := SizeOfResource(DllHandle, ResHandle);\n if ResSize = 0 then\n RaiseLastOSError;\n if 0 = AddFontMemResourceEx(Pointer(ResAddr), ResSize, nil, @NbFontAdded) then\n RaiseLastOSError;\nend;\n\nto be used like:\nvar\n FontName: PChar;\n FontHandle: THandle;\n...\n FontName := 'DEJAVUSANS';\n LoadFontFromDll('Project1.dll' , FontName);\n FontHandle := CreateFont(0, 0, 0, 0, FW_NORMAL, 0, 0, 0, DEFAULT_CHARSET,\n OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS, DEFAULT_QUALITY, DEFAULT_PITCH,\n FontName);\n if FontHandle = 0 then\n RaiseLastOSError;\n\n",
"I found this with Google. I have cut & pasted the relevant code below.\nYou need to add the font to your resource file:\n\n34 FONT \"myfont.ttf\"\n\nThe following C code will load the font from the DLL resource and release it from memory when you are finished using it.\n\nDWORD Count;\nHMODULE Module = LoadLibrary(\"mylib.dll\");\nHRSRC Resource = FindResource(Module,MAKEINTRESOURCE(34),RT_FONT);\nDWORD Length = SizeofResource(Module,Resource);\nHGLOBAL Address = LoadResource(Module,Resource);\nHANDLE Handle = AddFontMemResourceEx(Address,Length,0,&Count);\n\n/* Use the font here... */\n\nRemoveFontMemResourceEx(Handle);\nFreeLibrary(Module);\n\n",
"Here's some code that will load/make available the font from inside your executable (ie, the font was embedded as a resource, rather than something you had to install into Windows generally). \nNote that the font is available to any application until your program gets rid of it.\nI don't know how useful you'll find this, but I have used it a few times. I've never put the font into a dll (I prefer this 'embed into the exe' approach) but don't imagine it changes things too much.\nprocedure TForm1.FormCreate(Sender: TObject);\nvar\n ResStream : TResourceStream;\n sFileName : string;\nbegin\n sFileName:=ExtractFilePath(Application.ExeName)+'SWISFONT.TTF';\n\n ResStream:=nil;\n try\n ResStream:=TResourceStream.Create(hInstance, 'Swisfont', RT_RCDATA);\n try\n ResStream.SaveToFile(sFileName);\n except\n on E:EFCreateError Do ShowMessage(E.Message);\n end;\n finally\n ResStream.Free;\n end;\n\n AddFontResource(PChar(sFileName));\n SendMessage(HWND_BROADCAST, WM_FONTCHANGE, 0, 0);\nend;\n\n\nprocedure TForm1.FormDestroy(Sender: TObject);\nvar\n sFile:string;\nbegin\n sFile:=ExtractFilePath(Application.ExeName)+'SWISFONT.TTF';\n if FileExists(sFile) then\n begin\n RemoveFontResource(PChar(sFile));\n SendMessage(HWND_BROADCAST, WM_FONTCHANGE, 0, 0);\n DeleteFile(sFile);\n end;\nend;\n\n"
] |
[
10,
2,
1
] |
[] |
[] |
[
"delphi",
"winapi"
] |
stackoverflow_0000107611_delphi_winapi.txt
|
Q:
Creating a shiny Graphic/Gloss Effect
I would like to programmatically create a gloss effect on an Image, kinda like on the Apple-inspired design that the Web has adopted when it was updated to 2.0 Beta.
Essentially this:
example icons http://nhc.hcmuns.googlepages.com/web2_icons.jpg
Now, I see two approaches here: I create one image which has an Alpha channel with the gloss effect, and then I just combine the input and the gloss alpha icon to create this.
The second approach: Create the Alpha Gloss Image in code and then merge it with the input graphic.
I would prefer the second solution, but I'm not much of a graphics person and I don't know what the algorithm is called to create such effects. Can someone give me some pointers* for what I am actually looking for here? Is there a "gloss algorithm" that has a name? Or even a .NET implementation already?
*No, not those type of pointers.
A:
Thank you, Devin! Here is my C# code implementing your suggestion. It works quite well. Turning this into a community-owned Wiki post; if someone would like to add some code, feel free to edit this.
(Example uses different values for Alpha and exposure than the code below)
Image img = Image.FromFile("rss-icon.jpg");
pictureBox1.Image = AddCircularGloss(img, 30,25,255,255,255);
// Requires: using System.Drawing; using System.Drawing.Drawing2D;
public static Image AddCircularGloss(Image inputImage, int exposurePercentage, int transparency, int fillColorR, int fillColorG, int fillColorB)
{
Bitmap outputImage = new Bitmap(inputImage);
using (Graphics g = Graphics.FromImage(outputImage))
{
using (Pen p = new Pen(Color.FromArgb(transparency, fillColorR, fillColorG, fillColorB)))
{
// Looks jaggy otherwise
g.SmoothingMode = SmoothingMode.HighQuality;
g.CompositingQuality = CompositingQuality.HighQuality;
int x, y;
// 3 * Height looks best
int diameter = outputImage.Height * 3;
double imgPercent = (double)outputImage.Height / 100;
x = 0 - outputImage.Width;
// How many percent of the image to expose
y = (0 - diameter) + (int)(imgPercent * exposurePercentage);
g.FillEllipse(p.Brush, x, y, diameter, diameter);
}
}
return outputImage;
}
(Changed after John's suggestion. I cannot dispose the Bitmap here, though; that has to be done by the caller of the function.)
A:
I can explain that effect in graphic terms.
Create an image around 3* the size of the icon.
Within this image, create a circle where (the height of the icon) < radius < 2*(the height of the icon).
Fill the circle with an alpha blend/transparency (to white) of say 10%.
Crop that circle image into a new image of equal size to your icons, where the center of the circle is centered outside the viewing area but upwards by 1/2 the height of the smaller image.
If you then superimpose this image onto the original icon, the effect should look approximately like the above icons. It should be doable with ImageMagick if you're keen on that, or you could go for one of the graphics APIs depending on what language you want to use. From the steps above it should be straightforward to do programmatically.
A:
Responding to the C# code ... Overall, good job on getting the imaging going. I've had to do similar things with some of my apps in the past.
One piece of advice, however: All graphics objects in .NET are based on Windows GDI+ primitives. This means these objects require correct disposal to clean up their non-memory resources, much like file handles or database connections. You'll want to tweak the code a bit to support that correctly.
All of the GDI+ objects implement the IDisposable interface, making them functional with the using statement. Consider rewriting your code similarly to the following:
// Experiment with this value
int exposurePercentage = 40;
using (Image img = Image.FromFile("rss-icon.jpg"))
{
using (Graphics g = Graphics.FromImage(img))
{
// First Number = Alpha, Experiment with this value.
using (Pen p = new Pen(Color.FromArgb(75, 255, 255, 255)))
{
// Looks jaggy otherwise
g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
int x, y;
// 3 * Height looks best
int diameter = img.Height * 3;
double imgPercent = (double)img.Height / 100;
x = 0 - img.Width;
// How many percent of the image to expose
y = (0 - diameter) + (int)(imgPercent * exposurePercentage);
g.FillEllipse(p.Brush, x, y, diameter, diameter);
pictureBox1.Image = img;
}
}
}
(Bear in mind, unlike most of my samples, I haven't had a chance to compile and test this ... It's meant more as a sample of structuring the code for ensuring that there are no resource leaks, not as a finished product. There are probably better ways to abstract/structure that anyway. And strongly consider doing so -- toss this in a graphics library DLL that you can just reference in any project which needs these capabilities in the future!)
|
Creating a shiny Graphic/Gloss Effect
|
I would like to programmatically create a gloss effect on an Image, kinda like on the Apple-inspired design that the Web has adopted when it was updated to 2.0 Beta.
Essentially this:
example icons http://nhc.hcmuns.googlepages.com/web2_icons.jpg
Now, I see two approaches here: I create one image which has an Alpha channel with the gloss effect, and then I just combine the input and the gloss alpha icon to create this.
The second approach: Create the Alpha Gloss Image in code and then merge it with the input graphic.
I would prefer the second solution, but I'm not much of a graphics person and I don't know what the algorithm is called to create such effects. Can someone give me some pointers* for what I am actually looking for here? Is there a "gloss algorithm" that has a name? Or even a .NET implementation already?
*No, not those type of pointers.
|
[
"Thank you, Devin! Here is my C# Code for implementing your suggestion. It works quite good. Turning this into a community owned Wiki post, If someone likes to add some code, feel free to edit this.\n\n(Example uses different values for Alpha and exposure than the code below)\nImage img = Image.FromFile(\"rss-icon.jpg\");\npictureBox1.Image = AddCircularGloss(img, 30,25,255,255,255);\n\npublic static Image AddCircularGloss(Image inputImage, int exposurePercentage, int transparency, int fillColorR, int fillColorG, int fillColorB)\n{\n Bitmap outputImage = new Bitmap(inputImage);\n using (Graphics g = Graphics.FromImage(outputImage))\n {\n using (Pen p = new Pen(Color.FromArgb(transparency, fillColorR, fillColorG, fillColorB)))\n {\n // Looks jaggy otherwise\n g.SmoothingMode = SmoothingMode.HighQuality;\n g.CompositingQuality = CompositingQuality.HighQuality;\n int x, y;\n\n // 3 * Height looks best\n int diameter = outputImage.Height * 3;\n double imgPercent = (double)outputImage.Height / 100;\n x = 0 - outputImage.Width;\n\n // How many percent of the image to expose\n y = (0 - diameter) + (int)(imgPercent * exposurePercentage);\n g.FillEllipse(p.Brush, x, y, diameter, diameter);\n }\n }\n return outputImage;\n}\n\n(Changed after John's suggestion. I cannot dispose the Bitmap though, this has to be done by the caller of the function)\n",
"I can explain that effect in graphic terms. \n\nCreate an image around 3* the size of the icon.\nWithin this image, create a circle where (the height of the icon) < radius < 2*(the height of the icon).\nFill the circle with an alpha blend/transparency (to white) of say 10%. \nCrop that circle image into a new image of equal size to your icons, where the center of the circle is centered outside the viewing area but upwards by 1/2 the height of the smaller image. \n\nIf you then superimpose this image onto the original icon, the effect should look approximately like the above icons. It should be doable with imagemagick if you're keen on that, or you could go for one of the graphics API's depending on what language you want to use. From the steps above it should be straightforward to do programatically.\n",
"Responding to the C# code ... Overall, good job on getting the imaging going. I've had to do similar things with some of my apps in the past.\nOne piece of advice, however: All graphics objects in .NET are based on Windows GDI+ primitives. This means these objects require correct disposal to clean up their non-memory resources, much like file handles or database connections. You'll want to tweak the code a bit to support that correctly.\nAll of the GDI+ objects implement the IDisposable interface, making them functional with the using statement. Consider rewriting your code similarly to the following:\n// Experiment with this value\nint exposurePercentage = 40;\n\nusing (Image img = Image.FromFile(\"rss-icon.jpg\"))\n{\n using (Graphics g = Graphics.FromImage(img))\n { \n // First Number = Alpha, Experiment with this value.\n using (Pen p = new Pen(Color.FromArgb(75, 255, 255, 255)))\n {\n // Looks jaggy otherwise\n g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;\n\n int x, y;\n\n // 3 * Height looks best\n int diameter = img.Height * 3;\n double imgPercent = (double)img.Height / 100;\n x = 0 - img.Width;\n\n // How many percent of the image to expose\n y = (0 - diameter) + (int)(imgPercent * exposurePercentage);\n\n g.FillEllipse(p.Brush, x, y, diameter, diameter);\n\n pictureBox1.Image = img;\n }\n }\n}\n\n(Bear in mind, unlike most of my samples, I haven't had a chance to compile and test this ... It's meant more as a sample of structuring the code for ensuring that there are no resource leaks, not as a finished product. There are probably better ways to abstract/structure that anyway. And strongly consider doing so -- toss this in a graphics library DLL that you can just reference in any project which needs these capabilities in the future!)\n"
] |
[
17,
8,
3
] |
[] |
[] |
[
".net",
"image"
] |
stackoverflow_0000109753_.net_image.txt
|
Q:
Load Balancing of Process in 1 Server
I have 1 process that receives incoming connections on port 1000 on 1 Linux server. However, 1 process is not fast enough to handle all the incoming requests.
I want to run multiple processes in the server but with 1 end-point. In this way, the client will only see 1 end-point/process not multiple.
I have checked LVS and other Load Balancing Solution. Those solutions seem geared towards multiple servers load-balancing.
Any other solution to help on my case?
I am looking for something more like nginx, where I will need to run multiple copies of my app.
Let me try it out.
Thanks for the help.
A:
The question is a little unclear to me, but I suspect the answer you are looking for is to have a single process accepting tasks from the network, and then forking off 'worker processes' to actually perform the work (before returning the result to the user).
In that way, the work which is being done does not block the acceptance of more requests.
As you point out, the term load balancing carries the implication of multiple servers - what you want to look for is information about how to write a linux network daemon.
The two key system calls you'll want to look at are fork and exec.
A:
It sounds like you just need to integrate your server with xinetd.
This is a server that listens on predefined ports (that you control through config) and forks off processes to handle the actual communication on that port.
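A minimal xinetd entry might look like this (the service name and server path are placeholders; type = UNLISTED is needed when the service is not listed in /etc/services):
service myservice
{
    type        = UNLISTED
    port        = 1000
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = nobody
    server      = /usr/local/bin/myservice
}

With wait = no, xinetd accepts each connection and forks one server process per client, handing it the connected socket on stdin/stdout.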
A:
You also may want to go with a web server like nginx. It can load balance your app against multiple ports of the same app, and is commonly used to load balance Ruby on Rails apps (which are single threaded). The downside is that you need to run multiple copies of your app (one on each port) for this load balancing to work.
A:
You need multi-processing or multi-threading. You aren't specific on the details of the server, so I can't give you advice on what to do exactly. fork and exec as Matt suggested can be a solution, but really: what kind of protocol/server are we talking about?
A:
I am thinking of running multiple applications, similar to ypops.
A:
nginx is great, but if you don't fancy a whole new web server, Apache 2.2 with mod_proxy_balancer will do the same job.
A:
Perhaps you can modify your client to round-robin ports (say) 1000-1009 and run 10 copies of the process?
Alternatively there must be some way of internally refactoring it.
It's possible for several processes to listen to the same socket at once by having it opened before calling fork(), but (if it's a TCP socket) once accept() is called the resulting socket then belongs to whichever process successfully accepted the connection.
So essentially you could use:
Prefork, where you open the socket, fork a specified number of children which then share the load
Post-fork, where you have one master process which accepts all the connections and forks children to handle individual sockets
Threads - you can share the sockets in whatever way you like with those, as the file descriptors are not cloned, they're just available to any thread.
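To illustrate the threads option, here is a minimal sketch in C# (the names are made up; the same pattern applies to pthreads on Linux). Several threads block in Accept on the one listening socket, and the kernel hands each incoming connection to exactly one of them:
using System.Net;
using System.Net.Sockets;
using System.Threading;

class ThreadedServer
{
    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Any, 1000);
        listener.Start();
        // Extra acceptor threads, all sharing the single listening socket.
        for (int i = 0; i < 3; i++)
            new Thread(() => AcceptLoop(listener)).Start();
        AcceptLoop(listener); // the main thread accepts too
    }

    static void AcceptLoop(TcpListener listener)
    {
        while (true)
        {
            using (TcpClient client = listener.AcceptTcpClient())
            {
                // Handle one connection per iteration (work omitted).
            }
        }
    }
}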
|
Load Balancing of Process in 1 Server
|
I have 1 process that receives incoming connections on port 1000 on 1 Linux server. However, 1 process is not fast enough to handle all the incoming requests.
I want to run multiple processes in the server but with 1 end-point. In this way, the client will only see 1 end-point/process not multiple.
I have checked LVS and other Load Balancing Solution. Those solutions seem geared towards multiple servers load-balancing.
Any other solution to help on my case?
I am looking for something more like nginx, where I will need to run multiple copies of my app.
Let me try it out.
Thanks for the help.
|
[
"The question is a little unclear to me, but I suspect the answer you are looking for is to have a single process accepting tasks from the network, and then forking off 'worker processes' to actually perform the work (before returning the result to the user).\nIn that way, the work which is being done does not block the acceptance of more requests.\nAs you point out, the term load balancing carries the implication of multiple servers - what you want to look for is information about how to write a linux network daemon.\nThe two kes system calls you'll want to look at are called fork and exec.\n",
"It sounds like you just need to integrate your server with xinetd.\nThis is a server that listens on predefined ports (that you control through config) and forks off processes to handle the actual communication on that port.\n",
"You also may want to go with a web server like nginx. It can load balance your app against multiple ports of the same app, and is commonly used to load balance Ruby on Rails apps (which are single threaded). The downside is that you need to run multiple copies of your app (one on each port) for this load balancing to work.\n",
"You need multi-processing or multi-threading. You aren't specific on the details of the server, so I can't give you advice on what to do exactly. fork and exec as Matt suggested can be a solution, but really: what kind of protocol/server are we talking about?\n",
"i am thinking to run multiple application similar to ypops.\n",
"nginx is great but if you don't fancy a whole new web server, apache 2.2 with mod proxy balancer will do the same job\n",
"Perhaps you can modify your client to round-robin ports (say) 1000-1009 and run 10 copies of the process?\nAlternatively there must be some way of internally refactoring it.\nIt's possible for several processes to listen to the same socket at once by having it opened before calling fork(), but (if it's a TCP socket) once accept() is called the resulting socket then belongs to whichever process successfully accepted the connection.\nSo essentially you could use:\n\nPrefork, where you open the socket, fork a specified number of children which then share the load\nPost-fork, where you have one master process which accepts all the connections and forks children to handle individual sockets\nThreads - you can share the sockets in whatever way you like with those, as the file descriptors are not cloned, they're just available to any thread.\n\n"
] |
[
2,
2,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"linux",
"load_balancing"
] |
stackoverflow_0000045252_linux_load_balancing.txt
|
Q:
Regex for Specific Tag
I'm working on a regular expression in a .NET project to get a specific tag. I would like to match the entire DIV tag and its contents:
<html>
<head><title>Test</title></head>
<body>
<p>The first paragraph.</p>
<div id='super_special'>
<p>The Store paragraph</p>
</div>
</body>
</html>
Code:
Regex re = new Regex("(<div id='super_special'>.*?</div>)", RegexOptions.Multiline);
if (re.IsMatch(test))
Console.WriteLine("it matches");
else
Console.WriteLine("no match");
I want to match this:
<div id="super_special">
<p>Anything could go in here...doesn't matter. Let's get it all</p>
</div>
I thought . was supposed to get all characters, but it seems to be having trouble with the carriage returns. What is my regex missing?
Thanks.
A:
Please, pretty please, do yourself a huge favor: use an HTML parser for parsing HTML. Seriously. That's what they are there for.
HTML is a very complex language. No matter how long you will be tweaking, fiddling, fixing, honing your Regexp, there will always be a case you're missing.
Anyway, you have to tell your Regexp engine to let the dot match across line breaks instead of stopping at them. In Perl-style flavors you do that by applying the /s modifier (Ruby spells it /m).
But let me repeat: please use an HTML parser. Every time someone uses a Regexp to parse HTML, a kitten dies ...
A:
Depends what language you're working in.
For example, in perl you'd use the regex modifier s:
m{<div id="super_special">.*?</div>}s
A:
What language are you using? In .NET you must set the RegexOptions.Singleline option so that . matches newlines too.
A:
Depends on the language. If on Python, you are missing the re.S flag, like this (to remove the match):
re.compile('<div id="super_special">.*?</div>', re.S).sub('', your_html)
Similar flags exist for other regexps implementations, they are called "Single Line" or "Multi Line" or something like that.
But DO NOT USE REGEXPS TO PARSE HTML. It's a direct path to maintenance hell. Use an HTML parser like Beautiful Soup. Check these links for useful resources in that direction.
A:
The problem is that the . metacharacter doesn't match newlines by default. You have to use the single-line modifier to achieve this. In .NET, you can either use RegexOptions.SingleLine as the last parameter to the method you're using, or use the modifier directly in the pattern, e.g:
(?s)(<div id="super_special">.*?</div>)
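Applied to the code in the question, that just means swapping RegexOptions.Multiline for RegexOptions.Singleline:
Regex re = new Regex("(<div id='super_special'>.*?</div>)", RegexOptions.Singleline);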
A:
Most languages have some way to make . match newlines:
In Java: Pattern.compile("pattern", Pattern.DOTALL);
In Perl: /pattern/s (in Ruby, /pattern/m has this effect)
In .NET/VB: Regex.IsMatch(s, "pattern", RegexOptions.Singleline)
In general it's not a good idea to use regexp to match XML/HTML, because XML/HTML tags can be nested, for example:
<div id="super_special">
<div>Nothing</div>
<p>Anything could go in here...doesn't matter. Let's get it all</p>
</div>
... here you could easily end up matching:
<div id="super_special">
<div>Nothing</div>
On the other hand, if you know for sure that the HTML you are matching will always be safe for your regexp, then don't let me stop you (although even then you should think twice, if only to save your future self from a potential debugging headache).
A:
Out-of-the-box, without special modifiers, most regex implementations don't go beyond the end-of-line to match text. You probably should look in the documentation of the regex engine you're using for such modifier.
I have one other advice: beware of greed! Traditionally, regex are greedy which means that your regex would probably match this:
<div id="super_special">
I'm the wanted div!
</div>
<div id="not_special">
I'm not wanted, but I've been caught too :(
</div>
You should check for a "not-greedy" modifier, so that your regex would stop matching text at the first occurrence of </div>, not at the last one.
Also, as others have said, consider using an HTML parser instead of regexes. It will save you a lot of headache.
Edit: even a non-greedy regex wouldn't work as expected either, if <div>s are nested! Another reason to consider using an HTML parser.
A:
. (dot) matches any single character except the line break characters \r and \n. Most regex flavors have an option to make the dot match line break characters too.
A:
maybe: .[\r\n].[\r\n]
A:
None of these regex suggestions will work. Depending on whether they're greedy or not, they will match either the very last </div> in the document, or the very first </div> after your starting string, which may be a div nested inside the one you're interested in.
Regular expressions are not really the ideal tool for this purpose, but if your situation is simple enough that you don't really want to parse the HTML, you can do this using a Microsoft-proprietary extension to regex available in .NET. For an explanation, see this nice article by Morten Maate.
A:
Regular expressions alone are simply not powerful enough to solve your problem. You need something more powerful, such as context-free grammars. See Chomsky hierarchy at Wikipedia.
In other words (as has been said before), don't use regex to parse HTML.
|
Regex for Specific Tag
|
I'm working on a regular expression in a .NET project to get a specific tag. I would like to match the entire DIV tag and its contents:
<html>
<head><title>Test</title></head>
<body>
<p>The first paragraph.</p>
<div id='super_special'>
<p>The Store paragraph</p>
</div>
</body>
</html>
Code:
Regex re = new Regex("(<div id='super_special'>.*?</div>)", RegexOptions.Multiline);
if (re.IsMatch(test))
Console.WriteLine("it matches");
else
Console.WriteLine("no match");
I want to match this:
<div id="super_special">
<p>Anything could go in here...doesn't matter. Let's get it all</p>
</div>
I thought . was supposed to get all characters, but it seems to be having trouble with the carriage returns. What is my regex missing?
Thanks.
|
[
"Please, pretty please, do yourself a huge favor: use an HTML parser for parsing HTML. Seriously. That's what they are there for.\nHTML is a very complex language. No matter how long you will be tweaking, fiddling, fixing, honing your Regexp, there will always be a case you're missing.\nAnyway, you have to tell your Regexp engine to match multiple lines instead of just one. In some of the most popular ones you do that by applying the /m modifier.\nBut let me repeat: please use an HTML parser. Everytime someone uses a Regexp to parse HTML, a kitten dies ...\n",
"Depends what language you're working in. \nFor example, in perl you'd use the regex modifier s:\nm{<div id=\"super_special\">.*?</span>}s\n\n",
"What language are you using? In .NET you must set an option to ensure that it isn't single line. \n",
"Depends on the language. If on python, you are missing the re.S flag, like this (to remove the match):\nre.compile('<div id=\"super_special\">.*?</div>',re.S).sub(your_html,'')\n\nSimilar flags exist for other regexps implementations, they are called \"Single Line\" or \"Multi Line\" or something like that.\nBut DO NOT USE REGEXPS TO PARSE HTML. It's a direct path to maintenance hell. Use a HTML parser like Beautiful Soup. Check these links for useful resources in that direction.\n",
"The problem is that the . metacharacter doesn't match newlines by default. You have to use the single-line modifier to achieve this. In .NET, you can either use RegexOptions.SingleLine as the last parameter to the method you're using, or use the modifier directly in the pattern, e.g:\n(?s)(<div id=\"super_special\">.*?</div>)\n\n",
"Most languages have some way to make . match newlines:\n\nIn Java: Pattern.compile(\"pattern\", Pattern.MULTILINE);\nIn Perl and Ruby: /pattern/m\nIn VB: Regex.IsMatch(s, \"pattern\", RegexOptions.Multiline)\n\nIn general it's not a good idea to use regexp to match XML/HTML, because XML/HTML tags can be nested, for example:\n <div id=\"super_special\">\n <div>Nothing</div>\n <p>Anything could go in here...doesn't matter. Let's get it all</p>\n </div>\n\n... here you could easily end up matching:\n <div id=\"super_special\">\n <div>Nothing</div>\n\nOn the other hand, if you know for sure that the HTML you are matching will always be safe for your regexp, then don't let me stop you (although, even then you should think twice about saving your future self from a potential debugging headache).\n",
"Out-of-the-box, without special modifiers, most regex implementations don't go beyond the end-of-line to match text. You probably should look in the documentation of the regex engine you're using for such modifier.\nI have one other advice: beware of greed! Traditionally, regex are greedy which means that your regex would probably match this:\n<div id=\"super_special\">\n I'm the wanted div!\n</div>\n<div id=\"not_special\">\n I'm not wanted, but I've been caught too :(\n</div>\n\nYou should check for a \"not-greedy\" modifier, so that your regex would stop matching text at the first occurence of </div>, not at the last one.\nAlso, as others have said, consider using an HTML parser instead of regexes. It will save you a lot of headache.\nEdit: even a non-greedy regex wouldn't work as expected either, if <div>s are nested! Another reason to consider using an HTML parser.\n",
". (dot) Matches any single character except line break characters \\r and \\n. Most regex flavors have an option to make the dot match line break characters too. . matches x or (almost) any other character \n",
"maybe: .[\\r\\n].[\\r\\n]\n",
"None of these regex suggestions will work. Depending on whether they're greedy or not, they will match either the very last </div> in the document, or the very first </div> after your starting string, which may be a div nested inside the one you're interested in.\nRegular expressions are not really the ideal tool for this purpose, but if your situation is simple enough that you don't really want to parse the HTML, you can do this using a Microsoft-proprietary extension to regex available in .NET. For a nice explanation, see this nice article by Morten Maate.\n",
"Regular expressions alone are simply not powerful enough to solve your problem. You need something more powerful, such as context-free grammars. See Chomsky hierarchy at Wikipedia.\nIn other words (as has been said before), don't use regex to parse HTML.\n"
] |
[
6,
1,
1,
1,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"regex"
] |
stackoverflow_0000078978_.net_regex.txt
|
Q:
How do you load an embedded icon from an exe file with PyWin32?
I have an exe file generated with py2exe. In the setup.py I specify an icon to be embedded in the exe:
windows=[{'script': 'my_script.py','icon_resources': [(0, 'my_icon.ico')], ...
I tried loading the icon using:
hinst = win32api.GetModuleHandle(None)
hicon = win32gui.LoadImage(hinst, 0, win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE)
But this produces a (very unspecific) error:
pywintypes.error: (0, 'LoadImage', 'No error message is available')
If I try specifying 0 as a string
hicon = win32gui.LoadImage(hinst, '0', win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE)
then I get the error:
pywintypes.error: (1813, 'LoadImage', 'The specified resource type cannot be found in the image file.')
So, what's the correct method/syntax to load the icon?
Also please notice that I don't use any GUI toolkit - just the Windows API via PyWin32.
A:
@efotinis: You're right.
Here is a workaround until py2exe gets fixed and you don't want to include the same icon twice:
hicon = win32gui.CreateIconFromResource(win32api.LoadResource(None, win32con.RT_ICON, 1), True)
Be aware that 1 is not the ID you gave the icon in setup.py (which is the icon group ID), but the resource ID automatically assigned by py2exe to each icon in each icon group. At least that's how I understand it.
If you want to create an icon with a specified size (as CreateIconFromResource uses the system default icon size), you need to use CreateIconFromResourceEx, which isn't available via PyWin32:
icon_res = win32api.LoadResource(None, win32con.RT_ICON, 1)
hicon = ctypes.windll.user32.CreateIconFromResourceEx(icon_res, len(icon_res), True,
0x00030000, 16, 16, win32con.LR_DEFAULTCOLOR)
A:
If you're using wxPython, you can use the following simple code:
wx.Icon(sys.argv[0], wx.BITMAP_TYPE_ICO)
I usually have code that checks whether it's running from an EXE or not, and acts accordingly:
def get_app_icon():
if hasattr(sys, "frozen") and getattr(sys, "frozen") == "windows_exe":
return wx.Icon(sys.argv[0], wx.BITMAP_TYPE_ICO)
else:
return wx.Icon("gfx/myapp.ico", wx.BITMAP_TYPE_ICO)
A:
Well, well... I installed py2exe and I think it's a bug. In py2exe_util.c they should init rt_icon_id to 1 instead of 0. The way it is now, it's impossible to load the first format of the first icon using LoadIcon/LoadImage.
I'll notify the developers about this if it's not already a known issue.
A workaround, in the meantime, would be to include the same icon twice in your setup.py:
'icon_resources': [(1, 'my_icon.ico'), (2, 'my_icon.ico')]
You can load the second one, while Windows will use the first one as the shell icon. Remember to use non-zero IDs though. :)
A:
You should set the icon ID to something other than 0:
'icon_resources': [(42, 'my_icon.ico')]
Windows resource IDs must be between 1 and 32767.
|
How do you load an embedded icon from an exe file with PyWin32?
|
I have an exe file generated with py2exe. In the setup.py I specify an icon to be embedded in the exe:
windows=[{'script': 'my_script.py','icon_resources': [(0, 'my_icon.ico')], ...
I tried loading the icon using:
hinst = win32api.GetModuleHandle(None)
hicon = win32gui.LoadImage(hinst, 0, win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE)
But this produces a (very unspecific) error:
pywintypes.error: (0, 'LoadImage', 'No error message is available')
If I try specifying 0 as a string
hicon = win32gui.LoadImage(hinst, '0', win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE)
then I get the error:
pywintypes.error: (1813, 'LoadImage', 'The specified resource type cannot be found in the image file.')
So, what's the correct method/syntax to load the icon?
Also please notice that I don't use any GUI toolkit - just the Windows API via PyWin32.
|
[
"@efotinis: You're right. \nHere is a workaround until py2exe gets fixed and you don't want to include the same icon twice:\nhicon = win32gui.CreateIconFromResource(win32api.LoadResource(None, win32con.RT_ICON, 1), True)\n\nBe aware that 1 is not the ID you gave the icon in setup.py (which is the icon group ID), but the resource ID automatically assigned by py2exe to each icon in each icon group. At least that's how I understand it.\nIf you want to create an icon with a specified size (as CreateIconFromResource uses the system default icon size), you need to use CreateIconFromResourceEx, which isn't available via PyWin32:\nicon_res = win32api.LoadResource(None, win32con.RT_ICON, 1)\nhicon = ctypes.windll.user32.CreateIconFromResourceEx(icon_res, len(icon_res), True,\n 0x00030000, 16, 16, win32con.LR_DEFAULTCOLOR)\n\n",
"If you're using wxPython, you can use the following simple code:\nwx.Icon(sys.argv[0], wx.BITMAP_TYPE_ICO)\n\nI usually have code that checks whether it's running from an EXE or not, and acts accordingly:\ndef get_app_icon():\n if hasattr(sys, \"frozen\") and getattr(sys, \"frozen\") == \"windows_exe\":\n return wx.Icon(sys.argv[0], wx.BITMAP_TYPE_ICO)\n else:\n return wx.Icon(\"gfx/myapp.ico\", wx.BITMAP_TYPE_ICO)\n\n",
"Well, well... I installed py2exe and I think it's a bug. In py2exe_util.c they should init rt_icon_id to 1 instead of 0. The way it is now, it's impossible to load the first format of the first icon using LoadIcon/LoadImage.\nI'll notify the developers about this if it's not already a known issue.\nA workaround, in the meantime, would be to include the same icon twice in your setup.py:\n'icon_resources': [(1, 'my_icon.ico'), (2, 'my_icon.ico')]\n\nYou can load the second one, while Windows will use the first one as the shell icon. Remember to use non-zero IDs though. :)\n",
"You should set the icon ID to something other than 0:\n'icon_resources': [(42, 'my_icon.ico')]\n\nWindows resource IDs must be between 1 and 32767.\n"
] |
[
5,
1,
1,
0
] |
[] |
[] |
[
"exe",
"icons",
"python",
"pywin32"
] |
stackoverflow_0000090775_exe_icons_python_pywin32.txt
|
Q:
How to do Channel measurements in Gnuradio?
What is the best way to measure the channel for use in space-time
coding schemes using an RFX2400 board?
As far as I know you can only get the I and Q streams out of the USRP,
and I'm not sure how you would get a set of channel coefficients.
I am planning on using the conjugate of the measured channel to
'reverse' the damage done by transmission.
A:
If you are trying to measure the impulse response of the channel, then
one technique would be to transmit a known pseudo-random bit sequence (an
m-sequence) using BPSK modulation at the carrier frequency of interest. The
chip rate of the sequence determines the measurement system bandwidth, while
the sequence length determines the 'dynamic range' of the measurement.
At the receiver set the LO to the same carrier frequency as that at the
transmitter. Here you need to cross-correlate the equivalent low-pass
received signal with the known m-sequence to give the (complex) impulse
response of the channel. Any 'peaks' that exceed your definition of a
threshold noise level would be your channel coefficients in the time domain.
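In symbols (a sketch; here r is the received complex baseband signal, m the known m-sequence of length N, and * denotes complex conjugation), the estimated impulse response is the sliding cross-correlation

\hat{h}[\tau] = \frac{1}{N} \sum_{n=0}^{N-1} r[n] \, m^{*}[n-\tau]

and the peaks of |\hat{h}[\tau]| above your noise threshold are the channel coefficients.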
This is actually implemented in gr-sounder.
The channel sounder transmitter is sending the PRNG modulated BPSK at
32 Mchips/sec. You need to do the correlation at this speed; it's not
possible to send that much data over the USB to the host.
A channel sounder in software would work for chip rates less than 4
Mchip/sec. But that limits the resolution of your impulse response to
about 250 ns per bin, or 75 meters per bin in the spatial domain.
Unfortunately, the cross-correlation done on the very limited space
FPGA has no frequency offset compensation, so the resulting impulse
response vectors "roll" in the time domain.
--
answer (c) by Johnathan Corgan
|
How to do Channel measurements in Gnuradio?
|
What is the best way to measure the channel for use in space-time
coding schemes using an RFX2400 board?
As far as I know you can only get the I and Q streams out of the USRP,
and I'm not sure how you would get a set of channel coefficients.
I am planning on using the conjugate of the measured channel to
'reverse' the damage done by transmission.
|
[
"If you trying to measure the impulse response of the channel, then\none technique would be to transmit a known pseudo-random bit sequence (an\nm-sequence) using BPSK modulation at the carrier frequency of interest. The\nchip rate of the sequence determines the measurement system bandwidth, while\nthe sequence length determines the 'dynamic range' of the measurement.\nAt the receiver set the LO to the same carrier frequency as that at the\ntransmitter. Here you need to cross-correlate the equivalent low-pass\nreceived signal with the known m-sequence to give the (complex) impulse\nresponse of the channel. Any 'peaks' that exceed your definition of a\nthreshold noise level would be your channel coefficients in the time domain.\nThis is actually implemented in gr-sounder.\nThe channel sounder transmitter is sending the PRNG modulated BPSK at\n32 Mchips/sec. You need to do the correlation at this speed; it's not\npossible to send that much data over the USB to the host.\nA channel sounder in software would work for chip rates less than 4\nMchip/sec. But that limits the resolution of your impulse response to\nabout 250 ns per bin, or 75 meters per bin in the spatial domain.\nUnfortunately, the cross-correlation done on the very limited space\nFPGA has no frequency offset compensation, so the resulting impulse\nresponse vectors \"roll\" in the time domain.\n-- \nanswer (c) by Johnathan Corgan\n"
] |
[
1
] |
[] |
[] |
[
"gnuradio"
] |
stackoverflow_0000110781_gnuradio.txt
|
Q:
How to allow different content types in different folders of the same document library in WSS 3.0?
I have a document library with about 50 available content types. This document library is divided into several folders. When a user clicks the "New" button in a folder, all available content types are offered. I need to limit the content types according to the folder. For example, in the folder "Legal" I want to have only content types containing legal documents. I tried to use the UniqueContentTypeOrder property of SPFolder but it does not work. What is wrong?
private void CreateFolder(SPFolder parent, string name)
{
SPFolder z = parent.SubFolders.Add(name);
List<SPContentType> col = new List<SPContentType>();
foreach (SPContentType type in myDocumentLibrary.ContentTypes)
{
if (ContentTypeMatchesName(name, type))
{
col.Add(type);
}
}
z.UniqueContentTypeOrder = col;
z.Update();
}
A:
Have you looked at this article by Ton Stegeman?
A:
I think Magnus' answer will be exactly what you need, but why are you storing so many content types and document types in one library? Wouldn't it make more sense to have more than one document library? This would make it much easier to manage.
|
How to allow different content types in different folders of the same document library in WSS 3.0?
|
I have a document library with about 50 available content types. This document library is divided into several folders. When a user clicks the "New" button in a folder, all available content types are offered. I need to limit the content types according to the folder. For example, in the folder "Legal" I want to have only content types containing legal documents. I tried to use the UniqueContentTypeOrder property of SPFolder but it does not work. What is wrong?
private void CreateFolder(SPFolder parent, string name)
{
SPFolder z = parent.SubFolders.Add(name);
List<SPContentType> col = new List<SPContentType>();
foreach (SPContentType type in myDocumentLibrary.ContentTypes)
{
if (ContentTypeMatchesName(name, type))
{
col.Add(type);
}
}
z.UniqueContentTypeOrder = col;
z.Update();
}
|
[
"Have you looked at this article by Ton Stegeman?\n",
"I think Magnus' answer will be exactly what you need but why are you storing to many content types and document types in one library? Wouldn't it make more sense to have more than one document library? this would make it much more easily managed.\n"
] |
[
3,
0
] |
[] |
[] |
[
"sharepoint",
"wss"
] |
stackoverflow_0000110584_sharepoint_wss.txt
|
Q:
Are there any OS X equivalents to `hcitool`?
I'd like to write some quick scripts to play with bluetooth devices (scan etc…), for the Mac. Under linux I'd probably use hcitool, or the python bluez library.
What tools are there for the Mac?
A:
hcitool is a command that comes with BlueZ, which is specific to the Linux kernel.
Unfortunately, I don't have Mac OS X, so I can't test this, but as far as I know, Darwin shares a lot with BSD, so they both use the Netgraph framework for Bluetooth drivers. There's some information on how to use Bluetooth in FreeBSD, which might be helpful, at least as a starting point. The hcitool equivalent in FreeBSD is hccontrol.
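For example, a device inquiry (scan) on FreeBSD looks like this, assuming the default ng_ubt device node name (this is the FreeBSD handbook's usage, so treat it as the BSD reference point rather than something that runs on OS X):
hccontrol -n ubt0hci inquiry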
|
Are there any OS X equivalents to `hcitool`?
|
I'd like to write some quick scripts to play with bluetooth devices (scan etc…), for the Mac. Under linux I'd probably use hcitool, or the python bluez library.
What tools are there for the Mac?
|
[
"hcitool is a command that comes with BlueZ, which is specific to the Linux kernel.\nUnfortunately, I don't have Mac OSX, so I can't test this, but as far as I know, Darwin shares a lot with BSD, so they both use Netgraph framework for bluetooth drivers. There's some information on how to use Bluetooth in FreeBSD, I think they might be helpful, at least as a starting point. The hcitool equivalent in FreeBSD is hccontrol.\n"
] |
[
4
] |
[] |
[] |
[
"bluetooth",
"macos"
] |
stackoverflow_0000110661_bluetooth_macos.txt
|
Q:
Firefox status bar is 3/4" above the bottom border
Somehow my FireFox 2 got corrupted, and the status bar is about 3/4" above the bottom window border.
Anyone have an idea on how to get it back to being flush with the bottom window border?
A:
Not sure it is programming-related... but:
firefox -safe-mode
Choose "Reset toolbars and controls"
More at Firefox support
|
Firefox status bar is 3/4" above the bottom border
|
Somehow my FireFox 2 got corrupted, and the status bar is about 3/4" above the bottom window border.
Anyone have an idea on how to get it back to being flush with the bottom window border?
|
[
"Not sure it is programming-related... but:\nfirefox -safe-mode\nReset toolbar and control\n\nMore at Firefox support\n"
] |
[
2
] |
[] |
[] |
[
"firefox"
] |
stackoverflow_0000110822_firefox.txt
|
Q:
Alternatives to Live/GamerServices for XNA projects?
Using the GamerServices component for XNA to access Xbox/GfW Live for networking purposes requires developers and players each to have a US$100/year subscription to Microsoft's Creators Club. That's not much of an issue for Xbox360 XNA projects as you need the subscription anyway to be able to put your game on the 360.
But for PC games using XNA, requiring developers and players to put that much up each year is pretty crazy just for the access to a player's gamer card. Are there any solutions for XNA games that provide similar benefits to GamerServices? Or are developers pretty much restricted to building their own networking functionality if they don't want to subject their players (and themselves) to that $100/head hit?
A:
Perhaps you could try Lidgren
A:
Please note that games for windows live is now free:
http://www.engadget.com/2008/07/22/games-for-windows-live-now-free/
Since using the Live APIs is your only option on xbox and zune, it makes it a pretty compelling option since your only issue was the cost on windows :-) Especially considering the fact that once game studio 3.0 launches, you'll be able to sell your games on xbox live's new community games section
Edit, upon further investigation, it turns out that the games for windows live stuff is kind of half-baked. The gamerservices library doesn't seem to be included in the redistributable bits. So unless you want to break the EULA, your player would have to install gamestudio. That being said, I do still believe that it's free nonetheless, if not inconvenient.
A:
Well, you can use sockets, obviously, and using sockets you can create a separate, dedicated server app, which you can't do with Live (as far as I know). You could also try SteamWorks; I haven't heard of anyone trying that, however.
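To sketch what the plain-sockets route looks like in C# (a minimal, hypothetical echo server; the port number and class name are placeholders, and a real game server would run this on its own thread with proper packet framing):
using System.Net;
using System.Net.Sockets;

class EchoServer
{
    static void Main()
    {
        // Listen on an arbitrary port for a single client
        TcpListener listener = new TcpListener(IPAddress.Any, 14242);
        listener.Start();
        using (TcpClient client = listener.AcceptTcpClient())
        {
            NetworkStream stream = client.GetStream();
            byte[] buffer = new byte[1024];
            int read = stream.Read(buffer, 0, buffer.Length);
            stream.Write(buffer, 0, read); // echo the payload straight back
        }
        listener.Stop();
    }
}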
|
Alternatives to Live/GamerServices for XNA projects?
|
Using the GamerServices component for XNA to access Xbox/GfW Live for networking purposes requires developers and players each to have a US$100/year subscription to Microsoft's Creators Club. That's not much of an issue for Xbox360 XNA projects as you need the subscription anyway to be able to put your game on the 360.
But for PC games using XNA, requiring developers and players to put that much up each year is pretty crazy just for the access to a player's gamer card. Are there any solutions for XNA games that provide similar benefits to GamerServices? Or are developers pretty much restricted to building their own networking functionality if they don't want to subject their players (and themselves) to that $100/head hit?
|
[
"Perhaps you could try Lidgren\n",
"Please note that games for windows live is now free: \nhttp://www.engadget.com/2008/07/22/games-for-windows-live-now-free/\nSince using the Live APIs is your only option on xbox and zune, it makes it a pretty compelling option since your only issue was the cost on windows :-) Especially considering the fact that once game studio 3.0 launches, you'll be able to sell your games on xbox live's new community games section\nEdit, upon further investigation, it turns out that the games for windows live stuff is kind of half-baked. The gamerservices library doesn't seem to be included in the redistributable bits. So unless you want to break the EULA, your player would have to install gamestudio. That being said, I do still believe that it's free nonetheless, if not inconvenient.\n",
"Well, you can use sockets, obviously, and using sockets you can create a seperate, dedicated server app, which you can't do with Live (as far as I know). You could also try SteamWorks; I haven't heard of anyone trying that, however.\n"
] |
[
7,
2,
0
] |
[] |
[] |
[
"networking",
"xna"
] |
stackoverflow_0000100459_networking_xna.txt
|
Q:
Keeping CL and Scheme straight in your head
Depending on my mood I seem to waffle back and forth between wanting a Lisp-1 and a Lisp-2. Unfortunately beyond the obvious name space differences, this leaves all kinds of amusing function name/etc problems you run into. Case in point, trying to write some code tonight I tried to do (map #'function listvar) which, of course, doesn't work in CL, at all. Took me a bit to remember I wanted mapcar, not map. Of course it doesn't help when slime/emacs shows map IS defined as something, though obviously not the same function at all.
So, pointers on how to minimize this short of picking one or the other and sticking with it?
A:
Map is more general than mapcar; for example, you could do the following rather than using mapcar:
(map 'list #'function listvar)
How do I keep scheme and CL separate in my head? I guess when you know both languages well enough you just know what works in one and not the other. Despite the syntactic similarities they are quite different languages in terms of style.
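To make the difference concrete (return values shown in comments):
(mapcar #'1+ '(1 2 3))        ; CL => (2 3 4)
(map 'list #'1+ '(1 2 3))     ; CL => (2 3 4), result type is explicit
(map 'vector #'1+ '(1 2 3))   ; CL => #(2 3 4)
;; Scheme's map needs no result type:
;; (map (lambda (x) (+ x 1)) '(1 2 3)) => (2 3 4)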
A:
Well, I think that as soon as you get enough experience in both languages this becomes a non-issue (just as with similar natural languages, like Italian and Spanish). If you usually program in one language and switch to the other only occasionally, then unfortunately you are doomed to write Common Lisp in Scheme or vice versa ;)
One thing that helps is to have a distinct visual environment for both languages, using syntax highlighting in some other colors etc. Then at least you will always know whether you are in Common Lisp or Scheme mode.
A:
I'm definitely aware that there are syntactic differences, though I'm certainly not fluent enough yet to automatically use them, making the code look much more similar currently ;-).
And I had a feeling your answer would be the case, but can always hope for a shortcut <_<.
A:
The easiest way to keep both languages straight is to do your thinking and code writing in Common Lisp. Common Lisp code can be converted into Scheme code with relative ease; however, going from Scheme to Common Lisp can cause a few headaches. I remember one time when I was using a letrec in Scheme to store both variables and functions and had to split it up into the separate CL functions for the variable and function namespaces respectively.
In all practicality, though, I don't make a habit of writing CL code, which makes the times that I do have to all the more painful.
|
Keeping CL and Scheme straight in your head
|
Depending on my mood I seem to waffle back and forth between wanting a Lisp-1 and a Lisp-2. Unfortunately beyond the obvious name space differences, this leaves all kinds of amusing function name/etc problems you run into. Case in point, trying to write some code tonight I tried to do (map #'function listvar) which, of course, doesn't work in CL, at all. Took me a bit to remember I wanted mapcar, not map. Of course it doesn't help when slime/emacs shows map IS defined as something, though obviously not the same function at all.
So, pointers on how to minimize this short of picking one or the other and sticking with it?
|
[
"Map is more general than mapcar, for example you could do the following rather than using mapcar:\n(map 'list #'function listvar)\n\nHow do I keep scheme and CL separate in my head? I guess when you know both languages well enough you just know what works in one and not the other. Despite the syntactic similarities they are quite different languages in terms of style.\n",
"Well, I think that as soon you get enough experience in both languages this becomes a non-issue (just with similar natural languages, like Italian and Spanish). If you usually program in one language and switch to the other only occasionally, then unfortunately you are doomed to write Common Lisp in Scheme or vice versa ;)\nOne thing that helps is to have a distinct visual environment for both languages, using syntax highlighting in some other colors etc. Then at least you will always know whether you are in Common Lisp or Scheme mode.\n",
"I'm definitely aware that there are syntactic differences, though I'm certainly not fluent enough yet to automatically use them, making the code look much more similar currently ;-).\nAnd I had a feeling your answer would be the case, but can always hope for a shortcut <_<.\n",
"The easiest way to keep both languages straight is to do your thinking and code writing in Common Lisp. Common Lisp code can be converted into Scheme code with relative ease; however, going from Scheme to Common Lisp can cause a few headaches. I remember once where I was using a letrec in Scheme to store both variables and functions and had to split it up into the separate CL functions for the variable and function namespaces respectively.\nIn all practicality though I don't make a habit of writing CL code, which makes the times that I do have to all the more painful.\n"
] |
[
5,
2,
0,
0
] |
[] |
[] |
[
"clisp",
"lisp",
"scheme"
] |
stackoverflow_0000031561_clisp_lisp_scheme.txt
|
Q:
Regular expression to convert Markdown to HTML
How would you write a regular expression to convert Markdown into HTML? For example, you would type in the following:
This would be *italicized* text and this would be **bold** text
This would then need to be converted to:
This would be <em>italicized</em> text and this would be <strong>bold</strong> text
Very similar to the Markdown edit control used by Stack Overflow.
Clarification
For what it is worth, I am using C#. Also, these are the only real tags/markdown that I want to allow. The amount of text being converted would be less than 300 characters or so.
A:
The best way is to find a version of the Markdown library ported to whatever language you are using (you did not specify in your question).
Now that you have clarified that you only want STRONG and EM to be processed, and that you are using C#, I recommend you take a look at Markdown.NET to see how those tags are implemented. As you can see, it is in fact two expressions. Here is the code:
private string DoItalicsAndBold (string text)
{
// <strong> must go first:
text = Regex.Replace (text, @"(\*\*|__) (?=\S) (.+?[*_]*) (?<=\S) \1",
new MatchEvaluator (BoldEvaluator),
RegexOptions.IgnorePatternWhitespace | RegexOptions.Singleline);
// Then <em>:
text = Regex.Replace (text, @"(\*|_) (?=\S) (.+?) (?<=\S) \1",
new MatchEvaluator (ItalicsEvaluator),
RegexOptions.IgnorePatternWhitespace | RegexOptions.Singleline);
return text;
}
private string ItalicsEvaluator (Match match)
{
return string.Format ("<em>{0}</em>", match.Groups[2].Value);
}
private string BoldEvaluator (Match match)
{
return string.Format ("<strong>{0}</strong>", match.Groups[2].Value);
}
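For completeness, the port's public entry point is a Transform method on its Markdown class (DoItalicsAndBold itself is private), so usage would look roughly like this -- names assumed from the port, and Transform wraps its output in paragraph tags:
var md = new Markdown();
string html = md.Transform("This would be *italicized* text and this would be **bold** text");
// => "<p>This would be <em>italicized</em> text and this would be <strong>bold</strong> text</p>"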
A:
A single regex won't do. Every text markup will have its own HTML translator. Better to look into how the existing converters are implemented to get an idea of how they work.
http://en.wikipedia.org/wiki/Markdown#See_also
A:
I don't know about C# specifically, but in Perl it would be:
s/\*\*(.*?)\*\*/<strong>$1<\/strong>/g
s/\*(.*?)\*/<em>$1<\/em>/g
A:
I came across the following post that recommends against doing this. In my case, though, I am looking to keep it simple, so I thought I would post this per jop's recommendation in case someone else wanted to do this.
|
Regular expression to convert Markdown to HTML
|
How would you write a regular expression to convert Markdown into HTML? For example, you would type in the following:
This would be *italicized* text and this would be **bold** text
This would then need to be converted to:
This would be <em>italicized</em> text and this would be <strong>bold</strong> text
Very similar to the Markdown edit control used by Stack Overflow.
Clarification
For what it is worth, I am using C#. Also, these are the only real tags/markdown that I want to allow. The amount of text being converted would be less than 300 characters or so.
|
[
"The best way is to find a version of the Markdown library ported to whatever language you are using (you did not specify in your question). \n\nNow that you have clarified that you only want STRONG and EM to be processed, and that you are using C#, I recommend you take a look at Markdown.NET to see how those tags are implemented. As you can see, it is in fact two expressions. Here is the code:\nprivate string DoItalicsAndBold (string text)\n{\n // <strong> must go first:\n text = Regex.Replace (text, @\"(\\*\\*|__) (?=\\S) (.+?[*_]*) (?<=\\S) \\1\", \n new MatchEvaluator (BoldEvaluator),\n RegexOptions.IgnorePatternWhitespace | RegexOptions.Singleline);\n\n // Then <em>:\n text = Regex.Replace (text, @\"(\\*|_) (?=\\S) (.+?) (?<=\\S) \\1\",\n new MatchEvaluator (ItalicsEvaluator),\n RegexOptions.IgnorePatternWhitespace | RegexOptions.Singleline);\n return text;\n}\n\nprivate string ItalicsEvaluator (Match match)\n{\n return string.Format (\"<em>{0}</em>\", match.Groups[2].Value);\n}\n\nprivate string BoldEvaluator (Match match)\n{\n return string.Format (\"<strong>{0}</strong>\", match.Groups[2].Value);\n}\n\n",
"A single regex won't do. Every text markup will have it's own html translator. Better look into how the existing converters are implemented to get an idea on how it works.\nhttp://en.wikipedia.org/wiki/Markdown#See_also\n",
"I don't know about C# specifically, but in perl it would be:\n\\\\\\*\\\\\\*(.*?)\\\\\\*\\\\\\*/\n\\< bold\\>$1\\<\\/bold\\>/g\n\n\\\\\\*(.\\*?)\\\\\\*/\n\\< em\\>$1\\<\\/em\\>/g\n\n",
"I came across the following post that recommends to not do this. In my case though I am looking to keep it simple, but thought I would post this per jop's recommendation in case someone else wanted to do this.\n"
] |
[
6,
5,
1,
0
] |
[] |
[] |
[
"c#",
"html",
"markdown",
"regex"
] |
stackoverflow_0000110749_c#_html_markdown_regex.txt
|
Q:
javascript XSLTProcessor occasionally not working
The following JavaScript is supposed to read the popular tags from an XML file, apply the XSL stylesheet, and output the result to the browser as HTML.
function ShowPopularTags() {
xml = XMLDocLoad("http://localhost/xml/tags/popular.xml?s=94987898");
xsl = XMLDocLoad("http://localhost/xml/xsl/popular-tags.xsl");
if (window.ActiveXObject) {
// code for IE
ex = xml.transformNode(xsl);
ex = ex.replace(/\\/g, "");
document.getElementById("popularTags").innerHTML = ex;
} else if (document.implementation && document.implementation.createDocument) {
// code for Mozilla, Firefox, Opera, etc.
xsltProcessor = new XSLTProcessor();
xsltProcessor.importStylesheet(xsl);
resultDocument = xsltProcessor.transformToFragment(xml, document);
document.getElementById("popularTags").appendChild(resultDocument);
var ihtml = document.getElementById("popularTags").innerHTML;
ihtml = ihtml.replace(/\\/g, "");
document.getElementById("popularTags").innerHTML = ihtml;
}
}
ShowPopularTags();
The issue with this script is that sometimes it manages to output the resulting HTML code and sometimes it doesn't. Does anyone know where it's going wrong?
A:
To avoid problems with things loading in parallel (as hinted by Dan), it is always a good idea to call such scripting only when the page has fully loaded.
Ideally you put the script-tags in the page head and call ShowPopularTags(); in the body Onload item. I.e.
<BODY onLoad="ShowPopularTags();">
That way you are completely sure that your document.getElementById("popularTags") doesn't fail because the scripting is called before the HTML containing the element is fully loaded.
Also, can we see your XMLDocLoad function? If that contains non-sequential elements as well, you might be facing a problem where the XSLT transformation takes place before the objects xml and xsl are fully loaded.
A:
Are you forced into the synchronous solution you are using now, or is an asynchronous solution an option as well? I recall Firefox has had its share of problems with synchronous calls in the past, and I don't know how much of that is still carried with it. I have seen situations where the entire Firefox interface would lock up for as long as the request was running (which, depending on timeout settings, can take a very long time).
It would require a bit more work on your end, but the solution would be something like the following. This is the code I use for handling XSLT stuff with Ajax (rewrote it slightly because my code is object oriented and contains a loop that parses out the appropriate XSL document from the XML document first loaded)
Note: make sure you declare oCurrentRequest and oXMLRequest outside of the functions, since they will be carried over.
if (window.XMLHttpRequest)
{
oCurrentRequest = new XMLHttpRequest();
oCurrentRequest.onreadystatechange = processReqChange;
oCurrentRequest.open('GET', sURL, true);
oCurrentRequest.send(null);
}
else if (window.ActiveXObject)
{
oCurrentRequest = new ActiveXObject('Microsoft.XMLHTTP');
if (oCurrentRequest)
{
oCurrentRequest.onreadystatechange = processReqChange;
oCurrentRequest.open('GET', sURL, true);
oCurrentRequest.send();
}
}
After this you'd just need a function named processReqChange that contains something like the following:
function processReqChange()
{
if (oCurrentRequest.readyState == 4)
{
if (oCurrentRequest.status == 200)
{
oXMLRequest = oCurrentRequest;
oCurrentRequest = null;
loadXSLDoc();
}
}
}
And of course you'll need to produce a second set of functions to handle the XSL loading (starting from loadXSLDoc, for example).
Then at the end of your processXSLReqChange you can grab your XML result and XSL result and do the transformation.
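A minimal sketch of that second set, in the same global-variable style as above (the URL is taken from the question; oXMLRequest is set by the first handler, and the ActiveX branch is omitted for brevity):
function loadXSLDoc()
{
    oCurrentRequest = new XMLHttpRequest();
    oCurrentRequest.onreadystatechange = processXSLReqChange;
    oCurrentRequest.open('GET', 'http://localhost/xml/xsl/popular-tags.xsl', true);
    oCurrentRequest.send(null);
}

function processXSLReqChange()
{
    if (oCurrentRequest.readyState == 4 && oCurrentRequest.status == 200)
    {
        // Both documents are now fully loaded, so the transform is safe
        var xsltProcessor = new XSLTProcessor();
        xsltProcessor.importStylesheet(oCurrentRequest.responseXML);
        var result = xsltProcessor.transformToFragment(oXMLRequest.responseXML, document);
        document.getElementById('popularTags').appendChild(result);
    }
}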
A:
Well, that code follows entirely different paths for IE and everything-else. I assume the problem is limited to one of them. What browsers have you tried it on, and which exhibit this error?
The only other thing I can think of is that the popularTags element may not exist when you're trying to do stuff to it. How is this function being executed? In an onload/domready event?
A:
Dan. IE executes the script with no issue. I am facing the problem in Firefox. The popularTags element exists in the HTML document that calls the function.
<div id="popularTags" style="line-height:18px"></div>
<script language="javascript" type="text/javascript">
function ShowPopularTags()
{
xml=XMLDocLoad("http://localhost/xml/tags/popular.xml?s=29497105");
xsl=XMLDocLoad("http://localhost/xml/xsl/popular-tags.xsl");
if (window.ActiveXObject){
// code for IE
ex=xml.transformNode(xsl);
ex = ex.replace(/\\/g, "");
document.getElementById("popularTags").innerHTML=ex;
}
else if (document.implementation && document.implementation.createDocument){
// code for Mozilla, Firefox, Opera, etc.
xsltProcessor=new XSLTProcessor();
xsltProcessor.importStylesheet(xsl);
resultDocument = xsltProcessor.transformToFragment(xml,document);
document.getElementById("popularTags").appendChild(resultDocument);
var ihtml = document.getElementById("popularTags").innerHTML;
ihtml = ihtml.replace(/\\/g, "");
document.getElementById("popularTags").innerHTML = ihtml;
}
}
ShowPopularTags();
</script>
A:
The following is the XMLDocLoad function.
function XMLDocLoad(fname)
{
var xmlDoc;
if (window.ActiveXObject){
// code for IE
xmlDoc=new ActiveXObject("Microsoft.XMLDOM");
xmlDoc.async=false;
xmlDoc.load(fname);
return(xmlDoc);
}
else if(document.implementation && document.implementation.createDocument){
// code for Mozilla, Firefox, Opera, etc.
xmlDoc=document.implementation.createDocument("","",null);
xmlDoc.async=false;
xmlDoc.load(fname);
return(xmlDoc);
}
else{
alert('Your browser cannot handle this script');
}
}
|
javascript XSLTProcessor occasionally not working
|
The following JavaScript is supposed to read the popular tags from an XML file, apply the XSL stylesheet, and output the result to the browser as HTML.
function ShowPopularTags() {
xml = XMLDocLoad("http://localhost/xml/tags/popular.xml?s=94987898");
xsl = XMLDocLoad("http://localhost/xml/xsl/popular-tags.xsl");
if (window.ActiveXObject) {
// code for IE
ex = xml.transformNode(xsl);
ex = ex.replace(/\\/g, "");
document.getElementById("popularTags").innerHTML = ex;
} else if (document.implementation && document.implementation.createDocument) {
// code for Mozilla, Firefox, Opera, etc.
xsltProcessor = new XSLTProcessor();
xsltProcessor.importStylesheet(xsl);
resultDocument = xsltProcessor.transformToFragment(xml, document);
document.getElementById("popularTags").appendChild(resultDocument);
var ihtml = document.getElementById("popularTags").innerHTML;
ihtml = ihtml.replace(/\\/g, "");
document.getElementById("popularTags").innerHTML = ihtml;
}
}
ShowPopularTags();
The issue with this script is that sometimes it manages to output the resulting HTML code and sometimes it doesn't. Does anyone know where it's going wrong?
|
[
"To avoid problems with things loading in parallel (as hinted by Dan), it is always a good idea to call such scripting only when the page has fully loaded.\nIdeally you put the script-tags in the page head and call ShowPopularTags(); in the body Onload item. I.e.\n<BODY onLoad=\"ShowPopularTags();\">\n\nThat way you are completely sure that your document.getElementById(\"popularTags\") doesn't fail because the scripting is called before the HTML containing the element is fully loaded.\nAlso, can we see your XMLDocLoad function? If that contains non-sequential elements as well, you might be facing a problem where the XSLT transformation takes place before the objects xml and xsl are fully loaded.\n",
"Are you forced into the synchronous solution you are using now, or is an asynchronous solution an option as well? I recall Firefox has had it's share of problems with synchronous calls in the past, and I don't know how much of that is still carried with it. I have seen situations where the entire Firefox interface would lock up for as long as the request was running (which, depending on timeout settings, can take a very long time).\nIt would require a bit more work on your end, but the solution would be something like the following. This is the code I use for handling XSLT stuff with Ajax (rewrote it slightly because my code is object oriented and contains a loop that parses out the appropriate XSL document from the XML document first loaded)\nNote: make sure you declare your version of oCurrentRequest and oXMLRequest outside of the functions, since it will be carried over.\nif (window.XMLHttpRequest)\n{\n oCurrentRequest = new XMLHttpRequest();\n oCurrentRequest.onreadystatechange = processReqChange;\n oCurrentRequest.open('GET', sURL, true);\n oCurrentRequest.send(null);\n}\nelse if (window.ActiveXObject)\n{\n oCurrentRequest = new ActiveXObject('Microsoft.XMLHTTP');\n if (oCurrentRequest)\n {\n oCurrentRequest.onreadystatechange = processReqChange;\n oCurrentRequest.open('GET', sURL, true);\n oCurrentRequest.send();\n }\n}\n\nAfter this you'd just need a function named processReqChange that contains something like the following:\nfunction processReqChange()\n{\n if (oCurrentRequest.readyState == 4)\n {\n if (oCurrentRequest.status == 200)\n {\n oXMLRequest = oCurrentRequest;\n oCurrentRequest = null;\n loadXSLDoc();\n }\n }\n}\n\nAnd ofcourse you'll need to produce a second set of functions to handle the XSL loading (starting from loadXSLDoc on, for example).\nThen at the end of you processXSLReqChange you can grab your XML result and XSL result and do the transformation.\n",
"Well, that code follows entirely different paths for IE and everything-else. I assume the problem is limited to one of them. What browsers have you tried it on, and which exhibit this error?\nThe only other thing I can think of is that the popularTags element may not exist when you're trying to do stuff to it. How is this function being executed? In an onload/domready event?\n",
"Dan. IE executes the script with no issue. I am facing the problem in Firefox. The popularTags element exists in the HTML document that calls the function.\n\n<div id=\"popularTags\" style=\"line-height:18px\"></div>\n<script language=\"javascript\" type=\"text/javascript\">\n function ShowPopularTags()\n {\n xml=XMLDocLoad(\"http://localhost/xml/tags/popular.xml?s=29497105\");\n xsl=XMLDocLoad(\"http://localhost/xml/xsl/popular-tags.xsl\");\n\n if (window.ActiveXObject){\n // code for IE\n ex=xml.transformNode(xsl);\n ex = ex.replace(/\\\\/g, \"\");\n document.getElementById(\"popularTags\").innerHTML=ex;\n }\n else if (document.implementation && document.implementation.createDocument){\n // code for Mozilla, Firefox, Opera, etc.\n xsltProcessor=new XSLTProcessor();\n xsltProcessor.importStylesheet(xsl);\n resultDocument = xsltProcessor.transformToFragment(xml,document);\n document.getElementById(\"popularTags\").appendChild(resultDocument);\n\n var ihtml = document.getElementById(\"popularTags\").innerHTML;\n ihtml = ihtml.replace(/\\\\/g, \"\");\n document.getElementById(\"popularTags\").innerHTML = ihtml;\n }\n }\n\n ShowPopularTags();\n</script> \n\n\n",
"The following is the XMLDocLoad function.\n\nfunction XMLDocLoad(fname)\n{\n var xmlDoc;\n\n if (window.ActiveXObject){\n // code for IE\n xmlDoc=new ActiveXObject(\"Microsoft.XMLDOM\");\n xmlDoc.async=false;\n xmlDoc.load(fname);\n\n return(xmlDoc);\n }\n else if(document.implementation && document.implementation.createDocument){\n // code for Mozilla, Firefox, Opera, etc.\n xmlDoc=document.implementation.createDocument(\"\",\"\",null);\n\n xmlDoc.async=false;\n xmlDoc.load(fname);\n\n return(xmlDoc);\n\n }\n else{\n alert('Your browser cannot handle this script');\n }\n\n\n}\n\n"
] |
[
1,
1,
0,
0,
0
] |
[] |
[] |
[
"firefox",
"javascript",
"xml",
"xslt"
] |
stackoverflow_0000100774_firefox_javascript_xml_xslt.txt
|
Q:
Dynamically importing a C++ class from a DLL
What is the correct way to import a C++ class from a DLL? We're using Visual C++.
There's the dllexport/exports.def+LoadLibrary+GetProcAddress trifecta, but it doesn't work on C++ classes, only C functions. Is this due to C++ name-mangling? How do I make this work?
A:
You need to add the following:
extern "C"
{
...
}
to avoid name mangling.
You might also consider writing two simple C functions:
SomeClass* CreateObjectInstance()
{
return new SomeClass();
}
void ReleaseObject(SomeClass* someClass)
{
delete someClass;
}
By using only those functions, you can later add or change the functionality of your object creation/deletion. This is sometimes called a Factory.
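On the consuming side, loading those factory functions at runtime looks roughly like this (a sketch; the DLL and header names are placeholders, and SomeClass would be an abstract interface declared in a header shared by both sides):
#include <windows.h>
#include "SomeClass.h" // shared interface header (hypothetical name)

typedef SomeClass* (*CreateFn)();
typedef void (*ReleaseFn)(SomeClass*);

int main()
{
    HMODULE dll = LoadLibrary(TEXT("MyLibrary.dll")); // placeholder DLL name
    if (dll)
    {
        CreateFn create = (CreateFn)GetProcAddress(dll, "CreateObjectInstance");
        ReleaseFn release = (ReleaseFn)GetProcAddress(dll, "ReleaseObject");
        if (create && release)
        {
            SomeClass* obj = create();
            // ... call virtual methods on obj through the shared interface ...
            release(obj);
        }
        FreeLibrary(dll);
    }
    return 0;
}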
A:
Found the solution at http://www.codeproject.com/KB/DLL/XDllPt4.aspx
Thanks for your efforts guys & girls
A:
I normally declare an interface base class, use this declaration in my application, then use LoadLibrary and GetProcAddress to get the factory function. The factory always returns a pointer of the interface type.
Here is a practical example, exporting an MFC document/view from a DLL, dynamically loaded
A:
dllexport/dllimport works, place it before your class name in the header file and you're good to go.
Typically you want to use dllexport in the DLL and dllimport in the exe (but you can just use dllexport everywhere and it works; doing it 'right' makes loading marginally faster).
Obviously that is for link-time compilation. You can use /delayload linker directive to make it 'dynamic', but that's probably not what you want from the subject line.
If you truly want a LoadLibrary style loading, you're going to have to wrap your C++ functions with "extern C" wrappers. The problem is because of name mangling, you could type in the fully-mangled name and it'd work.
The workarounds are generally to provide a C function that returns a pointer to the correct class - COM works this way, as it exports 4 C functions from a dll that are used to get the interface methods inside the object in the dll.
A:
Check out this question. Basically, there are two ways. You can mark the class with __declspec(dllexport) and then link with the import library, and the DLL will be loaded automatically. Or if you want to load the DLL dynamically yourself, you can use the factory function idea that @titanae suggested.
|
Dynamically importing a C++ class from a DLL
|
What is the correct way to import a C++ class from a DLL? We're using Visual C++.
There's the dllexport/exports.def+LoadLibrary+GetProcAddress trifecta, but it doesn't work on C++ classes, only C functions. Is this due to C++ name-mangling? How do I make this work?
|
[
"You need to add the following:\nextern \"C\"\n{\n...\n}\n\nto avoid function mangling.\nyou might consider writing two simple C functions:\nSomeClass* CreateObjectInstace()\n{\n return new SomeClass();\n}\n\nvoid ReleaseObject(SomeClass* someClass)\n{\n delete someClass;\n}\n\nBy only using those functions you can afterward add/change functionality of your object creation/deletion. This is sometimes called a Factory.\n",
"Found the solution at http://www.codeproject.com/KB/DLL/XDllPt4.aspx\nThanks for your efforts guys & girls\n",
"I normally declare an interface base class, use this declaration in my application, then use LoadLibrary, GetProcAddress to get the factory function. The factor always returns pointer of the interface type.\nHere is a practical example, exporting an MFC document/view from a DLL, dynamically loaded\n",
"dllexport/dllimport works, place it before your class name in the header file and you're good to go.\nTypically you want to use dllexport in the dll, and dllimport in the exe (but you can just use dllexport everywhere and it works, doing it 'right' makes it tinily faster to load).\nObviously that is for link-time compilation. You can use /delayload linker directive to make it 'dynamic', but that's probably not what you want from the subject line.\nIf you truly want a LoadLibrary style loading, you're going to have to wrap your C++ functions with \"extern C\" wrappers. The problem is because of name mangling, you could type in the fully-mangled name and it'd work.\nThe workarounds are generally to provide a C function that returns a pointer to the correct class - COM works this way, as it exports 4 C functions from a dll that are used to get the interface methods inside the object in the dll.\n",
"Check out this question. Basically, there are two ways. You can mark the class using _dllexport and then link with the import library, and the DLL will be loaded automatically. Or if you want to load the DLL dynamically yourself, you can use the factory function idea that @titanae suggested\n"
] |
[
13,
6,
2,
2,
2
] |
[] |
[] |
[
"c++",
"dll",
"import",
"windows"
] |
stackoverflow_0000110833_c++_dll_import_windows.txt
|
Q:
Saving information in "sub" model in CakePHP
I've got a simple CakePHP site (1.2). I've got a page where you can edit and save a Person. So I have a Person model and controller.
Each Person has zero or more comments, in the comment table. So I have a Comment model, and I have a hasMany association on my Person model to the Comment model. The view is working great.
My question is, on the view Person page, I have an add comment button. How should this work? Should I expect the Person controller to include a save for the comment record, or create a comment controller and save it outside of its association for a person?
I'm experienced with PHP, but brand new to Cake.
Any ideas? I think I'm just missing something obvious, but I'm not sure what to do. I feel like if this was PHP I would reference the Person_id in my add comment form, and thus use a separate controller, but I feel like having a controller for a simple Model is useless, since Comments are only edited in the context of a Person record.
Ideas?
A:
I'm not a CakePHP expert, but I still think it would make sense to have your own controller. From what I remember from doing one of those CakePHP blog tutorials, you need to link the comments and the post in the comment model. This is some of the code I have from it:
class Comment extends AppModel
{
    var $name = 'Comment';
    var $belongsTo = array('Person');
}
And then you need a controller (comments_controller.php):
class CommentsController extends AppController
{
    var $name = 'Comments';
    var $scaffold;
}
Some SQL:
CREATE TABLE comments (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
author VARCHAR(50),
comment TEXT,
person_id INT,
created DATETIME DEFAULT NULL,
modified DATETIME DEFAULT NULL
);
The $scaffold creates a CRUD application for you, so when you go to /comments in your browser you can create, read, update and delete comments. So, as you see, there is not much involved here. All you need is your database tables and a little logic to provide person_id.
To save a comment (in your Person/view):
<h2>Add comment</h2>
<?php
echo $form->create('Comment', array('action' => 'add/' . $person['Person']['id']));
echo $form->input('author');
echo $form->input('content');
echo $form->submit('Add comment');
echo $form->end();
?>
And in your CommentsController:
function add($id = NULL) {
if (!empty($this->data)) {
$this->data['Comment']['person_id'] = $id;
$this->data['Comment']['id'] = '';
if ($this->Comment->save($this->data)) {
$this->Session->setFlash('Commented added');
$this->redirect($this->referer());
}
}
}
So you basically overwrite the standard add action, which Cake adds by itself. Hope that makes sense now. Also, you might need a route so it picks up /comments/add/ID. I don't know about this part. :)
|
Saving information in "sub" model in CakePHP
|
I've got a simple CakePHP site (1.2). I've got a page where you can edit and save a Person. So I have a Person model and controller.
Each Person has zero or more comments, in the comment table. So I have a Comment model, and I have a hasMany association on my Person model to the Comment model. The view is working great.
My question is, on the view Person page, I have an add comment button. How should this work? Should I expect the Person controller to include a save for the comment record, or create a comment controller and save it outside of its association for a person?
I'm experienced with PHP, but brand new to Cake.
Any ideas? I think I'm just missing something obvious, but I'm not sure what to do. I feel like if this was PHP I would reference the Person_id in my add comment form, and thus use a separate controller, but I feel like having a controller for a simple Model is useless, since Comments are only edited in the context of a Person record.
Ideas?
|
[
"I'm not a CakePHP expert, but I still think it would make sense to have your own controller. From what I remember from doing one of those CakePHP blog tutorials is, that you need to link the comments and the post in the comment model. This is some of the code I have from it:\nclass Comment extends AppModel\n{\n var $name = ‘Comment’;\n var $belongsTo = array(‘Person’);\n}\n\nAnd then you need a controller (comments_controller.php):\nclass CommentsController extends AppController\n{\n var $name = ‘Comments’;\n var $scaffold;\n}\n\nSome SQL:\nCREATE TABLE comments (\n id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,\n author VARCHAR(50),\n comment TEXT,\n person_id INT,\n created DATETIME DEFAULT NULL,\n modified DATETIME DEFAULT NULL\n);\n\nThe $scaffold creates a CRUD application for you, so when you go to /comments in your browser you can create, read, update and delete comments. So, as you see, there is not much involved here. All you need is your database tables and a little logic to provide person_id.\nTo save a comment (in your Person/view):\n<h2>Add comment</h2>\n<?php\necho $form->create(‘Comment’, array(‘action’=>‘add/’.$person[‘Person’][‘id’]);\necho $form->input(‘author’);\necho $form->input(‘content’);\necho $form->submit(‘Add comment’);\necho $form->end();\n?>\n\nAnd in your CommentsController:\nfunction add($id = NULL) {\n if (!empty($this->data)) {\n $this->data['Comment']['person_id'] = $id;\n $this->data['Comment']['id'] = '';\n if ($this->Comment->save($this->data)) {\n $this->Session->setFlash('Commented added');\n $this->redirect($this->referer());\n }\n }\n}\n\nSo you basically overwrite the standard add action, which Cake adds by itself. Hope that makes sense now. Also, you might need a route so it picks up /comments/add/ID. I don't know about this part. :)\n"
] |
[
1
] |
[] |
[] |
[
"cakephp",
"cakephp_1.2",
"php"
] |
stackoverflow_0000110825_cakephp_cakephp_1.2_php.txt
|
Q:
Mod_rails and mongrel running on the same server?
I'm currently running mongrel clusters with monit watching over them for 8 Rails applications on one server.
I'd like to move 7 of these applications to mod_rails, with one remaining on mongrel. The 7 smaller applications are low-volume, while the one I'd like to keep on mongrel is a high-volume app.
As I understand it, this would be the best solution, as the setting PassengerPoolIdleTime can only be applied at a global level.
What configuration gotchas should I look out for with this type of setup?
A:
I would probably just move all the apps to mod_rails, as the performance seems comparable to Mongrel and there's less administration overhead.
With regards to configuration gotchas, just make sure that you allow your public directory, or you'll find static assets failing:
<Directory "/var/www/app/current/public">
Options FollowSymLinks
AllowOverride None
Order allow,deny
Allow from all
</Directory>
Aside from that, if you know how to configure Apache, mod_rails is very painless.
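For reference, the global side of the configuration looks something like this (a sketch with placeholder paths; PassengerPoolIdleTime is the global-only setting mentioned in the question, in seconds):
LoadModule passenger_module /opt/passenger/ext/apache2/mod_passenger.so
PassengerRoot /opt/passenger
PassengerRuby /usr/bin/ruby
PassengerPoolIdleTime 300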
A:
Ended up moving everything to mod_rails.
Works like a champ!
|
Mod_rails and mongrel running on the same server?
|
I'm currently running mongrel clusters with monit watching over them for 8 Rails applications on one server.
I'd like to move 7 of these applications to mod_rails, with one remaining on mongrel. The 7 smaller applications are low-volume, while the one I'd like to keep on mongrel is a high-volume app.
As I understand it, this would be the best solution, as the setting PassengerPoolIdleTime can only be applied at a global level.
What configuration gotchas should I look out for with this type of setup?
|
[
"I would probably just move all the apps to mod_rails, as the performance seems comparable to Mongrel and there's less administration overhead.\nWith regards to configuration gotchas, just make sure that you allow your public directory, or you'll find static assets failing:\n<Directory \"/var/www/app/current/public\">\n Options FollowSymLinks\n AllowOverride None\n Order allow,deny\n Allow from all\n</Directory>\n\nAside from that, if you know how to configure Apache, mod_rails is very painless.\n",
"Ended up moving everything to mod_rails.\nWorks like a champ!\n"
] |
[
4,
1
] |
[] |
[] |
[
"apache",
"mod_rails",
"mongrel",
"ruby",
"ruby_on_rails"
] |
stackoverflow_0000110512_apache_mod_rails_mongrel_ruby_ruby_on_rails.txt
|
Q:
Always do it with the same method every time, is this usable in Software projects?
I was out running.. listening to a podcast about Toyota.. anyway.
I think this principle does not come into use in software projects (maybe in project management). The art is still too young. We don't know what we are doing at the moment. But eventually, we will.
Or, does someone see how to use this core principle?
Ok, here is the podcast. I think it is interesting
http://itc.conversationsnetwork.org/shows/detail3798.html
A:
I would suggest a small modification: if the method has been proven to work properly (performance/maintenance/security/etc.), THEN use it every time.
The trick is the "proven to work", and also the "properly".
So basically, unless there is a problem with the current method, don't change it for the sake of change. (Note that a method that works provably better in actuality highlights that the other method has a problem, namely that it does not work as well.)
Particularly in our field it is especially applicable, because of the productivity/scalability gains you get when most code is built the same way. E.g. maintenance, developer training, etc.
In other, more familiar words from the famed philosopher:
If it ain't broke, don't fix it.
A:
Well, I think it absolutely depends. If the method you have already used has good execution time, is (mostly) free of bugs, and works just how you want, then there is no need to write a new way of doing this task. Especially if you are programming for money, or for a company.
However, if you want to learn some new features of a programming language, or simply a different way of doing things, purely for your personal interest, why not?
In a company like Toyota, saving time and money is of utmost importance. However, your personal time has whatever importance you assign to it. If learning a new method of doing something is good for your bottom line then do it. If your bottom line is to learn as much as possible, then this is probably the right thing to do. If, on the other hand, your bottom line is to get as many projects done as fast as possible then it is not.
However, trying a different method could still be useful, even if your bottom line is to save time and money, because doing something you've already done with a different methodology may introduce ideas that could potentially save you time (and time is money) in the long run.
So I'd pretty much say, if redoing something in a completely different way is what you want to do, then just do it.
|
Always do it with the same method every time, is this usable in Software projects?
|
I was out running.. listening to a podcast about Toyota.. anyway.
I think this principle does not come into use in software projects (maybe in project management). The art is still too young. We don't know what we are doing at the moment. But eventually, we will.
Or, does someone see how to use this core principle?
Ok, here is the podcast. I think it is interesting
http://itc.conversationsnetwork.org/shows/detail3798.html
|
[
"I would suggest a small modification, if the method has been proven to work properly (performance/maintenance/security/etc.), THEN use that every time.\nThe trick is the \"proven to work\", and also the \"properly\".\nSo basically, unless there is a problem with the current method, don't change it for the sake of change. (Note that a method that works provably better, in actuality highlights that the other method has a problem, notably does not work as well).\nParticularly in our field it is especially applicable, because of the productivity/scalability gains you get when most code is built the same way. E.g. maintenance, developer training, etc.\nIn other, more familiar words from the famed philosopher:\n\nIf it ain't broke, don't fix it.\n\n",
"Well, I think it absolutely depends. If the method you have already used has good execution time, is (mostly) free of bugs, and works just how you want, then there is no need to write a new way of doing this task. Especially if you are programming for money, or for a company.\nHowever, if you are wanting to learn some new features of a programming language, or simply a different way of doing things, completely for you personal interest, why not?\nIn a company like Toyota, saving time and money is of utmost importance. However, your personal time has whatever importance you assign to it. If learning a new method of doing something is good for your bottom line then do it. If your bottom line is to learn as much as possible, then this is probably the right thing to do. If, on the other hand, your bottom line is to get as many projects done as fast as possible then it is not.\nHowever, trying a different method could still be useful, even if your bottom line is to save time and money; because, by doing something you've already done with a different methodology may introduce ideas to you that could potentially save you time (and time is money) in the long run.\nSo I'd pretty much say, if redoing something in a completely different way is what you want to do, then just do it.\n"
] |
[
0,
0
] |
[] |
[] |
[
"principles",
"project_management"
] |
stackoverflow_0000110876_principles_project_management.txt
|
Q:
Never do anything until you are ready to use it, in software too? [Toyota principle]
I was listening to a podcast where they talked about principles Toyota was using:
Never do anything until you are ready to use it.
I think this tells us to look in other places, to learn what other practices have been known for years.
A:
It may apply to software construction, but I am not sure it does.
If we consider the five elements in a "toyota-way of decision making", based on the principle that "how you arrive at the decision is just as important as the quality of the decision":
[mode humour ON]
Finding out what is really going on, including genchi gembutsu.
Except that sometimes one only finally understands what is going on when the client explains it to us at the end of the project ;)
Understanding underlying causes that explain surface appearances—asking “Why?” five times.
Sure but the client is not available enough during the project ;)
Broadly considering alternative solutions and developing a detailed rationale for the preferred solution.
Too late, the programmers are already coding like madmen :)
Building consensus within the team, including Toyota employees and outside partners.
Oops, that programmer is already re-writing the authentication system even though the old one was working fine
Using very efficient communication vehicles to do one through four, preferably one side of one sheet of paper.
Did you hear "death by powerpoint" ? This is not always our strong suit ;)
[mode humour OFF]
Seriously, as stated in the previous answers, the Agile philosophy does address some of the core tenets of this Toyota principle.
And it may be a little richer than just "You Ain't Gonna Need It", as described in the book "The Toyota Way".
A:
Sort of, yes. This is a core part of the agile philosophy.
Basically, favour flexibility and speed of response over big design up front and unwieldy specifications. One of the best ways of doing that is to only build enough to meet your current requirements, because you never know when they're going to change.
A:
It is old news, a little. It's often called "You ain't gonna need it" ("You Aren't Going to Need It" in non-idiomatic English), and abbreviated YAGNI.
Problems associated with implementing a feature when you don't need it:
the implementation takes time away from developing features that are needed
the feature is hard to document and test, since if you don't need it, who knows what it's supposed to do exactly?
maintaining the feature will take additional time
the feature adds extra code, complicating the codebase
the feature may have a snowball effect, whereby it suggests other features that you may then want to add, even though they're not needed
A:
It is a good agile practice to think just like that. There is also something called Test-Driven Development, which helps you get software without bugs (almost), but also has the side effect that NOTHING is implemented that you don't use.
An example is your own collection class. If you only need an Add method and a ToArray method, then why spend the time implementing the Remove and Count methods?
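A sketch of that idea in C# (a hypothetical class, just to illustrate -- only the two members the tests actually demand exist):
using System.Collections.Generic;

public class TagCollection
{
    private readonly List<string> items = new List<string>();

    public void Add(string item)
    {
        items.Add(item);
    }

    public string[] ToArray()
    {
        return items.ToArray();
    }

    // No Remove or Count yet -- they get written when a test needs them.
}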
So yep. Follow that principle :)
|
Never do anything until you are ready to use it, in software too? [Toyota principle]
|
I was listening to a podcast where they talked about principles Toyota was using:
Never do anything until you are ready to use it.
I think this tells us to look in other places, to learn what other practices have been known for years.
|
[
"It may apply to software construction, but I am not sure it does apply\nIf we consider the five elements in a \"toyota-way of decision making\", based on the principle that \"how you arrive at the decision is just as important as the quality of the decision\":\n[mode humour ON]\n\nFinding out what is really going on, including genchi gembutsu. \n\nExcept that sometime, one does finally understand what is going on when the client explain to us at the end of the project;)\n \n\nUnderstanding underlying causes that explain surface appearances—asking “Why?” five times.\n\nSure but the client is not available enough during the project ;)\n\nBroadly considering alternative solutions and developing a detailed rationale for the preferred solution.\n\nToo late, the programmers are already coding like madmen :)\n\nBuilding consensus within the team, including Toyota employees and outside partners.\n\nOops that programmer is already re-writing the authentification system even though the old one was working fine\n\nUsing very efficient communication vehicles to do one through four, preferably one side of one sheet of paper.\n\nDid you hear \"death by powerpoint\" ? This is not always our strong suit ;)\n \n\n\n[mode humour OFF]\nSeriously, as stated by the previous answers, the Agile philosophy does address some of the core tenants of this Toyota principle.\nAnd it may be a little richer that just \"You Ain't Gonna Need It\", as described in the book \"The Toyota way\"\n",
"Sort of, yes. This is a core part of the agile philosophy.\nBasically, favour flexibility and speed of response over big design up front and unwieldy specifications. One of the best ways of doing that is to only build enough to meet your current requirements, because you never know when they're going to change.\n",
"It is old news a little. It's often called \"You ain't gonna need it\" ( \"You Arent' Going to Need It\" in non-idomatic English), and abbreviated YAGNI.\nProblems associated with implementing a feature when you don't need it:\n\nthe implementation takes time away from developing features that are needed\nthe feature is hard to document and test, since if you don't need it, who knows what it's supposed to do exactly?\nmaintaining the feature will take additional time\nthe feature adds extra code, complicating the codebase\nthe feature may have a snowball effect, whereby it suggests other features that you may then want to add, even though they're not needed\n\n",
"It is a good agile practice to think just like that. There is also something called Test-Driven-Development, that helps you get software without bugs (almost), but also have that side effect that NOTHING is implemented that you don't use.\nA example is you're own collection class. If you only are needing a Add method, and a ToArray method, then why use the time to implement the Remove and Count methods?\nSo yep. Follow that principle :)\n"
] |
[
3,
2,
2,
1
] |
[] |
[] |
[
"design_patterns",
"principles",
"process",
"process_management"
] |
stackoverflow_0000110855_design_patterns_principles_process_process_management.txt
|
Q:
Can I pass an arbitrary block of commands to a bash function?
I am working on a bash script where I need to conditionally execute some things if a particular file exists. This is happening multiple times, so I abstracted the following function:
function conditional-do {
if [ -f $1 ]
then
echo "Doing stuff"
$2
else
echo "File doesn't exist!"
fi
}
Now, when I want to execute this, I do something like:
function exec-stuff {
echo "do some command"
echo "do another command"
}
conditional-do /path/to/file exec-stuff
The problem is, I am bothered that I am defining 2 things: the function of a group of commands to execute, and then invoking my first function.
I would like to pass this block of commands (often 2 or more) directly to "conditional-do" in a clean manner, but I have no idea how this is doable (or if it is even possible)... does anyone have any ideas?
Note, I need it to be a readable solution... otherwise I would rather stick with what I have.
A:
This should be readable to most C programmers:
function file_exists {
if ( [ -e $1 ] ) then
echo "Doing stuff"
else
echo "File $1 doesn't exist"
false
fi
}
file_exists filename && (
echo "Do your stuff..."
)
or the one-liner
file_exists filename && echo "Do your stuff..."
Now, if you really want the code to be run from the function, this is how you can do that:
function file_exists {
if ( [ -e $1 ] ) then
echo "Doing stuff"
shift
$*
else
echo "File $1 doesn't exist"
false
fi
}
file_exists filename echo "Do your stuff..."
I don't like that solution though, because you will eventually end up doing escaping of the command string.
EDIT: Changed "eval $*" to $ *. Eval is not required, actually. As is common with bash scripts, it was written when I had had a couple of beers ;-)
A:
One (possibly-hack) solution is to store the separate functions as separate scripts altogether.
A:
The canonical answer:
[ -f $filename ] && echo "it has worked!"
or you can wrap it up if you really want to:
function file-exists {
[ "$1" ] && [ -f $1 ]
}
file-exists $filename && echo "It has worked"
|
Can I pass an arbitrary block of commands to a bash function?
|
I am working on a bash script where I need to conditionally execute some things if a particular file exists. This is happening multiple times, so I abstracted the following function:
function conditional-do {
if [ -f $1 ]
then
echo "Doing stuff"
$2
else
echo "File doesn't exist!"
fi
}
Now, when I want to execute this, I do something like:
function exec-stuff {
echo "do some command"
echo "do another command"
}
conditional-do /path/to/file exec-stuff
The problem is, I am bothered that I am defining 2 things: the function of a group of commands to execute, and then invoking my first function.
I would like to pass this block of commands (often 2 or more) directly to "conditional-do" in a clean manner, but I have no idea how this is doable (or if it is even possible)... does anyone have any ideas?
Note, I need it to be a readable solution... otherwise I would rather stick with what I have.
|
[
"This should be readable to most C programmers:\nfunction file_exists {\n if ( [ -e $1 ] ) then \n echo \"Doing stuff\"\n else\n echo \"File $1 doesn't exist\" \n false\n fi\n}\n\nfile_exists filename && (\n echo \"Do your stuff...\"\n)\n\nor the one-liner\nfile_exists filename && echo \"Do your stuff...\"\n\nNow, if you really want the code to be run from the function, this is how you can do that:\nfunction file_exists {\n if ( [ -e $1 ] ) then \n echo \"Doing stuff\"\n shift\n $*\n else\n echo \"File $1 doesn't exist\" \n false\n fi\n}\n\nfile_exists filename echo \"Do your stuff...\"\n\nI don't like that solution though, because you will eventually end up doing escaping of the command string.\nEDIT: Changed \"eval $*\" to $ *. Eval is not required, actually. As is common with bash scripts, it was written when I had had a couple of beers ;-)\n",
"One (possibly-hack) solution is to store the separate functions as separate scripts altogether.\n",
"The cannonical answer:\n[ -f $filename ] && echo \"it has worked!\"\n\nor you can wrap it up if you really want to:\nfunction file-exists {\n [ \"$1\" ] && [ -f $1 ]\n}\n\nfile-exists $filename && echo \"It has worked\"\n\n"
] |
[
6,
0,
0
] |
[] |
[] |
[
"bash"
] |
stackoverflow_0000105971_bash.txt
|
Q:
What's the best tool to track a process's memory usage over a long period of time in Windows?
What is the best available tool to monitor the memory usage of my C#/.NET Windows service over a long period of time? As far as I know, tools like perfmon can monitor memory usage over a short period of time, but not graphically over a long period of time. I need trend data over days, not seconds.
To be clear, I want to monitor the memory usage at a fine level of detail over a long time, and have the graph show both the whole time frame and the level of detail. I need a small sampling interval, and a large graph.
A:
Perfmon, in my opinion, is one of the best tools to do this, but make sure you properly configure the sampling interval according to the time you wish to monitor.
For example if you want to monitor a process:
for 1 hour : I would use 1 second intervals (this will generate 60*60 samples)
for 1 day : I would use 30 second intervals (this will generate 2*60*24 samples)
for 1 week : I would use 1 minute intervals (this will generate 60*24*7 samples)
With these sampling intervals Perfmon should have no problem generating a nice graphical output of your counters.
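You can also script this setup; something like the following (counter path, log name, and output path are placeholders) creates and starts a 30-second-interval CSV log with logman:
logman create counter MemLog -c "\Process(MyService)\Private Bytes" -si 30 -f csv -o C:\perflogs\memlog
logman start MemLog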
A:
Well I used perfmon, exported the results to a csv and used excel for statistics afterwards. That worked pretty well last time I needed to monitor a process
A:
Playing around with Computer Management (assuming you're running Windows here) and it seems like you can make it monitor a process over time. Go to computer management -> performance logs and alerts and look at the counter/trace logs. Right click on counter logs and add a new log. Now click add object and select memory. Now click add counters and change the "Performance Object" to Process, and select your process.
A:
As good as monitoring the memory is by itself, you're probably thinking of memory profiling to identify leaks or stale objects - http://memprofiler.com/ is a good choice here, but there are plenty of others.
If you want to do something very specific, don't be afraid to write your own WMI-based logger running on a timer - you could get this to email you process statistics, warn when it grows too fast or too high, send it as XML for charting, etc.
A:
If you're familiar with Python, it's pretty easy to write a script for this.
Activestate Python (which is free) exposes the relevant parts of the Win32 API through the win32process module.
You can also check out all win32 related modules or use gotAPI to browse the Python standard libs.
A:
I would recommend using the .Net Memory Validator tool from software verify.
This tool helped me to solve many different issues related to memory management in .Net application I have to work with.
I use more frequently the C++ version but they are quite similar and the fact that you can really see in real-time the type of the objects being allocated will be invaluable to you.
|
What's the best tool to track a process's memory usage over a long period of time in Windows?
|
What is the best available tool to monitor the memory usage of my C#/.NET Windows service over a long period of time? As far as I know, tools like perfmon can monitor memory usage over a short period of time, but not graphically over a long period of time. I need trend data over days, not seconds.
To be clear, I want to monitor the memory usage at a fine level of detail over a long time, and have the graph show both the whole time frame and the level of detail. I need a small sampling interval, and a large graph.
|
[
"Perfmon in my opinion is one of the best tools to do this but make sure you properly configure the sampling interval according to the time you wish to monitor.\nFor example if you want to monitor a process:\n\nfor 1 hour : I would use 1 second intervals (this will generate 60*60 samples)\nfor 1 day : I would use 30 second intervals (this will generate 2*60*24 samples)\nfor 1 week : I would use 1 minute intervals (this will generate 60*24*7 samples)\n\nWith these sampling intervals Perfmon should have no problem generating a nice graphical output of your counters.\n",
"Well I used perfmon, exported the results to a csv and used excel for statistics afterwards. That worked pretty well last time I needed to monitor a process\n",
"Playing around with Computer Management (assuming you're running Windows here) and it seems like you can make it monitor a process over time. Go to computer management -> performance logs and alerts and look at the counter/trace logs. Right click on counter logs and add a new log. Now click add object and select memory. Now click add counters and change the \"Performance Object\" to Process, and select your process.\n",
"As good as monitoring the memory is by itself, you're probably thinking of memory profiling to identify leaks or stale objects - http://memprofiler.com/ is a good choice here, but there are plenty of others.\nIf you want to do something very specific, don't be afraid to write your own WMI-based logger running on a timer - you could get this to email you process statistics, warn when it grows too fast or too high, send it as XML for charting, etc.\n",
"If you're familiar with Python, it's pretty easy to write a script for this.\nActivestate Python (which is free) exposes the relevant parts of the Win32 API through the win32process module. \nYou can also check out all win32 related modules or use gotAPI to browse the Python standard libs. \n",
"I would recommend using the .Net Memory Validator tool from software verify.\nThis tool helped me to solve many different issues related to memory management in .Net application I have to work with.\nI use more frequently the C++ version but they are quite similar and the fact that you can really see in real-time the type of the objects being allocated will be invaluable to you.\n"
] |
[
5,
3,
1,
1,
0,
0
] |
[
"I've used ProcessMonitor if you need something more powerful than perfmon.\n"
] |
[
-1
] |
[
".net",
"c#",
"memory",
"performance"
] |
stackoverflow_0000097590_.net_c#_memory_performance.txt
|
Q:
PHP parse configuration ini files
Is there a way to read a module's configuration ini file?
For example I installed php-eaccelerator (http://eaccelerator.net) and it put an eaccelerator.ini file in /etc/php.d. My PHP installation won't read this .ini file because the --with-config-file-scan-dir option wasn't used when compiling PHP. Is there a way to manually specify a path to the ini file somewhere so PHP can read the module's settings?
A:
This is just a wild guess, but try to add all the directives from eaccelerator.ini to php.ini. First create a page containing <?php phpinfo(); ?> and check where your php.ini is located.
For example, try this:
[eAccelerator]
extension="eaccelerator.so"
eaccelerator.shm_size="32"
eaccelerator.cache_dir="/tmp"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
eaccelerator.check_mtime="1"
eaccelerator.debug="0"
eaccelerator.filter=""
eaccelerator.shm_max="0"
eaccelerator.shm_ttl="0"
eaccelerator.shm_prune_period="0"
eaccelerator.shm_only="0"
eaccelerator.compress="1"
eaccelerator.compress_level="9"
Another thing you could do is set all the settings at runtime using ini_set(). I am not sure if that works though, or how effective that is. :) I am not familiar enough with eAccelerator to know for sure.
A:
The standard way in this instance is to copy the relevant .ini lines to the bottom of the php.ini file. There is no 'include "file.ini"' functionality in the php.ini file itself.
You can't do it at run time either, since the extension has already been initialised by then.
A:
If using Apache and mod_php, you can configure/override some PHP settings locally with a .htaccess file. Your webserver has to "AllowOverride" appropriately in the main config file to allow you to override these settings locally. In my experience, many hosting companies will let you set PHP settings via .htaccess.
(thanks commenter for pointing out this only works with mod-php)
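For the settings that can be changed at that level, a sketch of such a .htaccess (the directives and values shown are just common examples; the extension= line itself can only ever be loaded from php.ini):
# example per-directory overrides; require AllowOverride Options in the main config
php_value memory_limit 64M
php_flag display_errors off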
|
PHP parse configuration ini files
|
Is there a way to read a module's configuration ini file?
For example I installed php-eaccelerator (http://eaccelerator.net) and it put an eaccelerator.ini file in /etc/php.d. My PHP installation won't read this .ini file because the --with-config-file-scan-dir option wasn't used when compiling PHP. Is there a way to manually specify a path to the ini file somewhere so PHP can read the module's settings?
|
[
"This is just a wild guess, but try to add all the directives from eaccelerator.ini to php.ini. First create a <?php phpinfo(); ?> and check where it's located.\nFor example, try this:\n[eAccelerator]\nextension=\"eaccelerator.so\"\neaccelerator.shm_size=\"32\"\neaccelerator.cache_dir=\"/tmp\"\neaccelerator.enable=\"1\"\neaccelerator.optimizer=\"1\"\neaccelerator.check_mtime=\"1\"\neaccelerator.debug=\"0\"\neaccelerator.filter=\"\"\neaccelerator.shm_max=\"0\"\neaccelerator.shm_ttl=\"0\"\neaccelerator.shm_prune_period=\"0\"\neaccelerator.shm_only=\"0\"\neaccelerator.compress=\"1\"\neaccelerator.compress_level=\"9\"\n\nAnother thing you could do is set all the settings on run-time using ini_set(). I am not sure if that works though or how effective that is. :) I am not familiar with eAccelerator to know for sure.\n",
"The standard way in this instance is to copy the relevant .ini lines to the bottom of the php.ini file. There is no 'include \"file.ini\"' functionality in the php.ini file itself.\nYou can't do it at run time either, since the extension has already been initialised by then.\n",
"If using Apache, and mod-php, you can configure/override some php settings locally with a .htaccess file. Your webserver has to \"AlloweOverride\" appropriately in the main config file to allow you to override these settings locally. In my experience, many hosting companies will let you set php settings via htaccess.\n(thanks commenter for pointing out this only works with mod-php)\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"apache",
"php"
] |
stackoverflow_0000110887_apache_php.txt
|
Q:
Never produce to an unknown pathway, in software too? [Toyota principle]
In Toyota manufacturing lines they always know what path a part has traveled, just so they can be sure they can fix it if something goes wrong. Is this applicable in software too?
All error messages should tell me exactly what path they traveled. Some do: the error messages with a stack trace. Is this a correct interpretation? Could it be used somewhere else?
Ok, here is the podcast. I think it is interesting
http://itc.conversationsnetwork.org/shows/detail3798.html
A:
A good idea where practicable. Unfortunately, it is usually prohibitively difficult to keep track of the entire history of the state of the machine. You just can't tag each data structure with where you got it from, and the entire state of that object. You might be able to store just the external events and in that way reproduce where everything came from.
Some examples:
I did work on a project where it was practicable and it helped immensely. When we were getting close to shipping, and running out of bugs to fix, we would have our game play in "zero players mode", where the computer would repeatedly play itself all night long with all variations of characters and locales. If it asserted, it would display the random key that started the match. When we came to work in the morning we'd write the key down from our screen (there usually was one) and start it again using that key. Then we'd just watch it until the assert came up, and track it down. The important thing is that we could recreate all the original inputs that led to the error, and rerun it as many times as we wanted, even after recompiles (within limits... the number of fetches from the random number generator could not be changed, although we had a separate RNG for non-game stuff like visual fx). This only worked because each match started after a warm reboot and took only a very small amount of data as input.
I have heard that Bungie used a similar method to try to discover bad geometry in their Halo levels. They would set the dev kits running overnight in a special mode where the indestructible protagonist would move and jump randomly. In the morning they'd look and see if he got stuck in the geometry at some location where he couldn't get out. There may have been grenades involved, too.
On another project we actually logged all user interaction with a timestamp so we could replay it. That works great if you can, but most people have interactions with a changing DB whose entire state might not be stored so easily.
A:
It's less vital with software. If something goes wrong in software, you can usually reproduce the fault and analyse it in captivity. Even if it only happens 1 time in 1000, you can often switch on all the logging and run it 1000 times (a simple soak test).
That's much more expensive and time-consuming on a manufacturing line, to the point of being impossible.
Having as much information available as possible the first time it goes wrong is no bad thing, but it's not as important to me as it is to Toyota.
A:
This is a good approach. But be aware that you shouldn't overdo logging. Otherwise you won't be able to find the interesting information in all the noise, and it reduces the overall performance (e.g. anonymous object creation, depending on the language).
A:
Producing error messages with a full stack trace is usually bad security practice.
On the other hand, and more in line with Toyota's intent, every developed module should be traceable back to the original programmer(s) - and they should be held accountable for shoddy work, bug fixes, security vulnerabilities, etc. Not for disciplinary purposes, but for maintenance, and education if necessary. And maybe for bonuses, in the contrary situation... ;-)
|
Never produce to an unknown pathway, in software too? [Toyota principle]
|
In Toyota manufacturing lines they always know what path a part has traveled, just so they can be sure they can fix it if something goes wrong. Is this applicable in software too?
All error messages should tell me exactly what path they traveled. Some do: the error messages with a stack trace. Is this a correct interpretation? Could it be used somewhere else?
Ok, here is the podcast. I think it is interesting
http://itc.conversationsnetwork.org/shows/detail3798.html
|
[
"A good idea where practicable. Unfortunately, it is usually prohibitively difficult to keep track of the entire history of the state of the machine. You just can't tag each data structure with where you got it from, and the entire state of that object. You might be able to store just the external events and in that way reproduce where everything came from.\nSome examples:\nI did work on a project where it was practicable and it helped immensely. When we were getting close to shipping, and running out of bugs to fix, we would have our game play in \"zero players mode\", where the computer would repeatedly play itself all night long with all variations of characters and locales. If it asserted, it would display the random key that started the match. When we came to work in the morning we'd write the key down from our screen (there usually was one) and start it again using that key. Then we'd just watch it until the assert came up, and track it down. The important thing is that we could recreate all the original inputs that led to the error, and rerun it as many times as we wanted, even after recompiles (within limits... the number of fetches from the random number generator could not be changed, although we had a separate RNG for non-game stuff like visual fx). This only worked because each match started after a warm reboot and took only a very small amount of data as input. \nI have heard that Bungie used a similar method to try to discover bad geometry in their Halo levels. They would set the dev kits running overnight in a special mode where the indestructable protagonist would move and jump randomly. In the morning they'd look and see if he got stuck in the geometry at some location where he couldn't get out. There may have been grenades involved, too.\nOn another project we actually logged all user interaction with a timestamp so we could replay it. That works great if you can, but most people have interactions with a changing DB whose entire state might not be stored so easily.\n",
"It's less vital with software. If something goes wrong in software, you can usually reproduce the fault and analyse it in captivity. Even if it only happens 1 time in 1000, you can often switch on all the logging and run it 1000 times (a simple soak test).\nThat's much more expensive and time-consuming on a manufacturing line, to the point of being impossible.\nHaving as much information available as possible the first time it goes wrong is no bad thing, but it's not as important to me as it is to Toyota.\n",
"This is a good approach. But be aware that you shouldn't over-do logging. Otherwise you couldn't find the interesting informations in all the noise, and it reduces the overall performance (e.g. anonymous object creation, depending on the language).\n",
"Producing error messages with a full stack trace is usually bad security practice.\nOn the other hand, and more in line with Toyota's intent, every developed module should be traced back to the original programmer(s) - and they should be held accountable for shoddy work, bug fixes, security vulnerabilities, etc. Not for disciplinary purposes, but both maintenance, and education if necessary. And maybe for bonuses, in the contrary situation... ;-)\n"
] |
[
5,
2,
0,
0
] |
[] |
[] |
[
"design_patterns",
"error_handling",
"principles",
"process"
] |
stackoverflow_0000110868_design_patterns_error_handling_principles_process.txt
|
Q:
Does the unmodifiable wrapper for java collections make them thread safe?
I need to make an ArrayList of ArrayLists thread safe. I also cannot have the client making changes to the collection. Will the unmodifiable wrapper make it thread safe or do I need two wrappers on the collection?
A:
It depends. The wrapper will only prevent changes to the collection it wraps, not to the objects in the collection. If you have an ArrayList of ArrayLists, the global List as well as each of its element Lists need to be wrapped separately, and you may also have to do something for the contents of those lists. Finally, you have to make sure that the original list objects are not changed, since the wrapper only prevents changes through the wrapper reference, not to the original object.
You do NOT need the synchronized wrapper in this case.
A:
On a related topic - I've seen several replies suggesting using synchronized collection in order to achieve thread safety.
Using the synchronized version of a collection doesn't make it "thread safe": although each individual operation (insert, count, etc.) is protected by a mutex, there is no guarantee that a combination of two operations will execute atomically.
For example, the following check-then-act sequence is not thread safe (even with a synchronized queue):
if (queue.size() > 0) {
    queue.add(...);
}
A:
The unmodifiable wrapper only prevents changes to the structure of the list that it applies to. If this list contains other lists and you have threads trying to modify these nested lists, then you are not protected against concurrent modification risks.
A:
From looking at the Collections source, it looks like Unmodifiable does not make it synchronized.
static class UnmodifiableSet<E> extends UnmodifiableCollection<E>
implements Set<E>, Serializable;
static class UnmodifiableCollection<E> implements Collection<E>, Serializable;
the synchronized class wrappers have a mutex object in them to do the synchronized parts, so it looks like you need to use both wrappers to get both behaviours. Or roll your own!
A:
I believe that because the UnmodifiableList wrapper stores the ArrayList to a final field, any read methods on the wrapper will see the list as it was when the wrapper was constructed as long as the list isn't modified after the wrapper is created, and as long as the mutable ArrayLists inside the wrapper aren't modified (which the wrapper can't protect against).
A:
An immutable object is by definition thread safe (assuming no-one retains references to the original collections), so synchronization is not necessary.
Wrapping the outer ArrayList using Collections.unmodifiableList() prevents the client from changing its contents (and thus makes it thread safe), but the inner ArrayLists are still mutable.
Wrapping the inner ArrayLists using Collections.unmodifiableList() too prevents the client from changing their contents (and thus makes them thread safe), which is what you need.
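Putting the two wrapping steps together, a minimal sketch of a helper (the helper name and the defensive copy of the outer list are my own choices, not from the question):
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Lists {
    // Wraps every inner list, then the outer list, so clients can modify
    // neither level through the returned reference. The outer list is
    // copied, so later structural changes to the original won't leak through.
    public static <T> List<List<T>> deepUnmodifiable(List<List<T>> original) {
        List<List<T>> wrapped = new ArrayList<List<T>>(original.size());
        for (List<T> inner : original) {
            wrapped.add(Collections.unmodifiableList(inner));
        }
        return Collections.unmodifiableList(wrapped);
    }
}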
Let us know if this solution causes problems (overhead, memory usage etc); other solutions may be applicable to your problem. :)
EDIT: Of course, if the lists are modified they are NOT thread safe. I assumed no further edits were to be made.
A:
It will be thread-safe if the unmodifiable view is safely published, and the modifiable original is never ever modified (including all objects recursively contained in the collection!) after publication of the unmodifiable view.
If you want to keep modifying the original, then you can either create a defensive copy of the object graph of your collection and return an unmodifiable view of that, or use an inherently thread-safe list to begin with, and return an unmodifiable view of that.
You cannot return an unmodifiableList(synchronizedList(theList)) if you still intend to access theList unsynchronized afterwards; if mutable state is shared between multiple threads, then all threads must synchronize on the same locks when they access that state.
A:
This is necessary if:
There is still a reference to the original modifiable list.
The list will possibly be accessed through an iterator.
If you intend to read from the ArrayList by index only, you could assume this is thread-safe.
When in doubt, choose the synchronized wrapper.
A:
Not sure if I understood what you are trying to do, but I'd say the answer in most cases is "No".
If you setup an ArrayList of ArrayList and both, the outer and inner lists can never be changed after creation (and during creation only one thread will have access to either inner and outer lists), they are probably thread safe by a wrapper (if both, outer and inner lists are wrapped in such a way that modifying them is impossible). All read-only operations on ArrayLists are most likely thread-safe. However, Sun does not guarantee them to be thread-safe (also not for read-only operations), so even though it might work right now, it could break in the future (if Sun creates some internal caching of data for quicker access for example).
|
Does the unmodifiable wrapper for java collections make them thread safe?
|
I need to make an ArrayList of ArrayLists thread safe. I also cannot have the client making changes to the collection. Will the unmodifiable wrapper make it thread safe or do I need two wrappers on the collection?
|
[
"It depends. The wrapper will only prevent changes to the collection it wraps, not to the objects in the collection. If you have an ArrayList of ArrayLists, the global List as well as each of its element Lists need to be wrapped separately, and you may also have to do something for the contents of those lists. Finally, you have to make sure that the original list objects are not changed, since the wrapper only prevents changes through the wrapper reference, not to the original object.\nYou do NOT need the synchronized wrapper in this case.\n",
"On a related topic - I've seen several replies suggesting using synchronized collection in order to achieve thread safety.\nUsing synchronized version of a collection doesn't make it \"thread safe\" - although each operation (insert, count etc.) is protected by mutex when combining two operations there is no guarantee that they would execute atomically.\nFor example the following code is not thread safe (even with a synchronized queue):\nif(queue.Count > 0)\n{\n queue.Add(...);\n}\n\n",
"The unmodifiable wrapper only prevents changes to the structure of the list that it applies to. If this list contains other lists and you have threads trying to modify these nested lists, then you are not protected against concurrent modification risks.\n",
"From looking at the Collections source, it looks like Unmodifiable does not make it synchronized.\nstatic class UnmodifiableSet<E> extends UnmodifiableCollection<E>\n implements Set<E>, Serializable;\n\nstatic class UnmodifiableCollection<E> implements Collection<E>, Serializable;\n\nthe synchronized class wrappers have a mutex object in them to do the synchronized parts, so looks like you need to use both to get both. Or roll your own!\n",
"I believe that because the UnmodifiableList wrapper stores the ArrayList to a final field, any read methods on the wrapper will see the list as it was when the wrapper was constructed as long as the list isn't modified after the wrapper is created, and as long as the mutable ArrayLists inside the wrapper aren't modified (which the wrapper can't protect against).\n",
"An immutable object is by definition thread safe (assuming no-one retains references to the original collections), so synchronization is not necessary.\nWrapping the outer ArrayList using Collections.unmodifiableList()\nprevents the client from changing its contents (and thus makes it thread\nsafe), but the inner ArrayLists are still mutable.\nWrapping the inner ArrayLists using Collections.unmodifiableList() too\nprevents the client from changing their contents (and thus makes them\nthread safe), which is what you need.\nLet us know if this solution causes problems (overhead, memory usage etc);\nother solutions may be applicable to your problem. :)\nEDIT: Of course, if the lists are modified they are NOT thread safe. I assumed no further edits were to be made.\n",
"It will be thread-safe if the unmodifiable view is safely published, and the modifiable original is never ever modified (including all objects recursively contained in the collection!) after publication of the unmodifiable view.\nIf you want to keep modifying the original, then you can either create a defensive copy of the object graph of your collection and return an unmodifiable view of that, or use an inherently thread-safe list to begin with, and return an unmodifiable view of that.\nYou cannot return an unmodifiableList(synchonizedList(theList)) if you still intend to access theList unsynchronized afterwards; if mutable state is shared between multiple threads, then all threads must synchronize on the same locks when they access that state.\n",
"This is neccessary if:\n\nThere is still a reference to the original modifiable list.\nThe list will possibly be accessed though an iterator.\n\nIf you intend to read from the ArrayList by index only you could assume this is thread-safe.\nWhen in doubt, chose the synchronized wrapper.\n",
"Not sure if I understood what you are trying to do, but I'd say the answer in most cases is \"No\".\nIf you setup an ArrayList of ArrayList and both, the outer and inner lists can never be changed after creation (and during creation only one thread will have access to either inner and outer lists), they are probably thread safe by a wrapper (if both, outer and inner lists are wrapped in such a way that modifying them is impossible). All read-only operations on ArrayLists are most likely thread-safe. However, Sun does not guarantee them to be thread-safe (also not for read-only operations), so even though it might work right now, it could break in the future (if Sun creates some internal caching of data for quicker access for example).\n"
] |
[
10,
5,
2,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"collections",
"java",
"multithreading",
"unmodifiable"
] |
stackoverflow_0000088036_collections_java_multithreading_unmodifiable.txt
|
Q:
Changing Hostname / IP Address of Windows System Mounted as an Image
I'm looking for a way to change the hostname and IP address of a Windows XP system that is mounted via a loop-back image on a Linux system. So basically I have access to the Windows XP system on a file level, but I cannot execute any programs on it. A way similar to editing the /etc/hostname and whatever network configuration file under Linux.
The only ways I've found so far would include running a tool after boot, e.g. MS sysprep or use a solution like Acronis Snap Deploy.
A:
You can use the chntpw tool to edit the Windows registry offline. Here's an example of how to use it.
The keys you're looking for are these:
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\ComputerName\ComputerName
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\Tcpip\Parameters\Interfaces\{<Interface GUID>}
Under your interface's GUID you'll find many keys, the ones you need are:
IPAddress (REG_MULTI_SZ) = x.x.x.x
SubnetMask (REG_MULTI_SZ) = x.x.x.x
DefaultGateway (REG_MULTI_SZ) = x.x.x.x
Do take a look at the rest of the keys in there, you might find some interesting information.
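For example, assuming the image is mounted under /mnt/winxp, an offline editing session might look roughly like this (the mount path is an example, and the exact prompts and commands depend on your chntpw version):
chntpw -e /mnt/winxp/WINDOWS/system32/config/system
# at the interactive prompt:
# cd ControlSet001\Control\ComputerName\ComputerName
# ed ComputerName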
|
Changing Hostname / IP Address of Windows System Mounted as an Image
|
I'm looking for a way to change the hostname and IP address of a Windows XP system that is mounted via a loop-back image on a Linux system. So basically I have access to the Windows XP system on a file level, but I cannot execute any programs on it. A way similar to editing the /etc/hostname and whatever network configuration file under Linux.
The only ways I've found so far would include running a tool after boot, e.g. MS sysprep or use a solution like Acronis Snap Deploy.
|
[
"You can use chntpw tool to edit Windows registry offline. Here's an example of how to use it.\nThe keys you're looking for are these:\nHKEY_LOCAL_MACHINE\\SYSTEM\\ControlSet001\\Control\\ComputerName\\ComputerName\n\nHKEY_LOCAL_MACHINE\\SYSTEM\\Current Control Set\\\nServices\\Tcpip\\Parameters\\Interfaces\\{<Interface GUID>}\n\nUnder your interface's GUID you'll find many keys, the ones you need are:\nIPAddress (REG_MULTI_SZ) = x.x.x.x\n\nSubnetMask (REG_MULTI_SZ) = x.x.x.x\n\nDefaultGateway (REG_MULTI_SZ) = x.x.x.x\n\nDo take a look at the rest of they keys in there, you might find some interesting information.\n"
] |
[
5
] |
[] |
[] |
[
"sysadmin",
"windows"
] |
stackoverflow_0000110920_sysadmin_windows.txt
|
Q:
Are writes from within a Tomcat 6 CometProcessor non-blocking
I have a CometProcessor implementation that is effectively doing a multicast to a potentially large number of clients. When an event occurs that needs to be propagated to all the clients, the CometProcessor will need to loop through the list of clients writing out the response. If writing responses blocks, then there is the possibility that potentially slow clients could have an adverse effect on the distribution of the event. Example:
public class MyCometProcessor implements CometProcessor {
    private List<Event> connections = new ArrayList<Event>();

    public void onEvent(byte[] someInfo) {
        synchronized (connections) {
            for (Event e : connections) {
                HttpServletResponse r = e.getHttpResponse();
                // -- Does this line block while waiting for I/O --
                r.getOutputStream().write(someInfo);
            }
        }
    }

    public void event(CometEvent event) {
        switch (event.getEventType()) {
            case READ:
                synchronized (connections) {
                    connections.add(event);
                }
                break;
            // ...
        }
    }
}
Update: Answering my own question. Writes from a CometProcessor are blocking:
http://tomcat.apache.org/tomcat-6.0-doc/config/http.html
See the table at the bottom of the page.
A:
Tomcat6's implementation of HttpServletResponse is the Response class. Internally it uses a CoyoteOutputStream wrapped around an OutputBuffer. As the name suggests, this class is a buffer, default size 8k. So I would say that, at the very least, if you are writing less than 8k then you aren't going to block. You may need to flush, though, for your clients to see the data, which means that ultimately it depends on which connector variant you are using. In your Connector config, if you want non-blocking writes, then specify
protocol=org.apache.coyote.http11.Http11NioProtocol
This Connector/Protocol is massively configurable:
http://tomcat.apache.org/tomcat-6.0-doc/config/http.html
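For example, a server.xml Connector along these lines (the port and timeout values are only illustrative):
<!-- NIO connector; adjust port and timeouts for your deployment -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           redirectPort="8443" />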
|
Are writes from within a Tomcat 6 CometProcessor non-blocking
|
I have a CometProcessor implementation that is effectively doing a multicast to a potentially large number of clients. When an event occurs that needs to be propagated to all the clients, the CometProcessor will need to loop through the list of clients writing out the response. If writing responses blocks, then there is the possibility that potentially slow clients could have an adverse effect on the distribution of the event. Example:
public class MyCometProcessor implements CometProcessor {
private List<Event> connections = new ArrayList<Event>();
public void onEvent(byte[] someInfo) {
synchronized (connections) {
for (Event e : connections) {
HttpServletResponse r = e.getHttpResponse();
// -- Does this line block while waiting for I/O --
r.getOutputStream().write(someInfo);
}
}
}
public void event(CometEvent event) {
switch (event.getEventType()) {
case READ:
synchronized (connections) {
connections.add(event);
}
break;
// ...
}
}
}
Update: Answering my own question. Writes from a CometProcessor are blocking:
http://tomcat.apache.org/tomcat-6.0-doc/config/http.html
See the table at the bottom of the page.
|
[
"Tomcat6's implementation of HttpServlerResponse is the Response class. Internally it uses a CoyoteOutputStream wrapped around an OutputBuffer. As the name suggests, this class is a buffer, default size 8k. So I would say at the very least if you are writing less than 8k then you arent going to block. You may need to flush though for your clients to see the data which means that ultimately it depends on which connector variant you are using. In your Connector config if you want non-blocking writes then specify \nprotocol=org.apache.coyote.http11.Http11NioProtocol\nThis Connector/Protocol is massively configurable:\nhttp://tomcat.apache.org/tomcat-6.0-doc/config/http.html\n"
] |
[
1
] |
[] |
[] |
[
"comet",
"java",
"servlets",
"tomcat"
] |
stackoverflow_0000109294_comet_java_servlets_tomcat.txt
|
Q:
What is the reasoning behind the recommended layout for Subversion repositories?
Version Control with Subversion recommends the following layout for (single-project) repositories (complemented by this question):
/trunk
/tags
    /rel.1 (approximately)
    ...
/branches
    /rel1fixes
What are the relative merits of this arrangement when compared with a (perhaps) more process-oriented one?:
/development
    /current
    /stable
/qa (maybe)
...
/production
    /stable
    /Prod.2
    /Prod.1
/vendor
    /Rel.5.1
    /Rel.5.2
Please note that I'm thinking of in-house deployment, rather than building a product.
Disclaimer: although I'm a Subversion user, I've never had to deploy with it in a real live environment.
A:
The main difference between the recommended layout and your proposed layout is that the recommended layout is somewhat self-documenting as to where to commit things, and how it behaves.
For example, in the recommended layout, it's obvious that all new development is committed to trunk, and most branches are made from trunk. Also, it's obvious that you should never commit anything into /tags. Finally, it's safe to assume that branches are truly branches, which may contain changes specific to that particular branch purpose.
With the proposed layout, some of these things are less certain. Is /development/stable branched from /current? What's the relation between /development/stable and /production/stable? Which of these directories are tags, and which ones can I actually check stuff into?
Certainly this behavior can be documented, but by sticking to the accepted layout that everybody uses, you'll have an easier time getting new hires up to speed on how it works.
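For instance, under the recommended layout the day-to-day operations are single, self-explanatory copies (the repository URL below is a placeholder):
svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/tags/rel.1 -m "Tag release 1"
svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/branches/rel1fixes -m "Branch for release 1 fixes"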
A:
I'll try and sum up the answers so far:
Simple
The "classic" layout (trunk/ + branches/ + tags/) has the advantage of growable simplicity
The Trunk is (usually) the main development line
Branches attend to special development needs such as complex subprojects and post-release maintenance
Tags are fixed, immutable marker posts
This classic layout is well-known so your developers get up to speed faster
Expandable
Vendor development of products integrated into your development (perhaps with adaptations) can, if required, be handled as a vendor branch (normally one is enough)
The "Process" axis (e.g. Development, Test if done separately, QA if used, and Production) can be handled by appropriate branch or tag conventions (depending on whether any changes are required or permitted outside "Development"). These additional sets of branches can be handled by naming conventions, or by an additional directory level within tags/ or branches/.
See Other Questions
What does branch, tag and trunk really mean?
What is a good repository layout for releases and projects in Subversion?
Do you use the branches-tags-trunk convention?
I have made this a community answer; please feel free to correct or extend any deficiencies, for which I apologise.
A:
You've described the two pretty much standard models for repository organization: dev-test-prod and trunk-branch. Eric Sink does a nice job of describing them in his Source Control HOWTO. One thing to note is that the way most people use trunk-branch is to create a branch for each version as it is released to customers, which then becomes the maintenance branch.
I would tend to prefer trunk-branch since it doesn't require migrating every single change from development to test to production. Only changes that need to be backported to maintenance branches, or bugfixes that migrate from the maintenance branch to the trunk, need to be migrated.
However, one circumstance where dev-test-prod might be preferable is in web development, where the concept of versions released to customers doesn't really exist. Prod, in this case, would be whatever's running on the server right now, while code is being worked on in dev and test and constantly migrated into the application, rather than being released in one big chunk.
A:
I think flexibility and avoiding ambiguity is your answer.
By using version numbers you do not tie yourself to where that version is deployed.
For example you might have version 1.3 which is deployed as development, 1.2 which is in test and 1.1 which is in production. If you wanted you could easily add another staging environment for another version without having to change your subversion layout.
Nobody can argue about what version 1.1 of the code is, but a "production-stable" version is ambiguous.
A:
Whenever you deal with real live environments, you would want your developers to be able to understand your repository as easily as possible. A good way to do this is by adhering to the recommended Subversion standard layout.
A:
Although I personally use the layout recommended in the SVN book, you probably should not restrict yourself to it if your layout works better for you. I would keep the branch directory since its usage and purpose is pretty clear from the name. Apart from that, really, anything goes if it works for you.
|
What is the reasoning behind the recommended layout for Subversion repositories?
|
Version Control with Subversion recommends the following layout for (single-project) repositories (complemented by this question):
/trunk
/tags
/rel.1 (approximately)
...
/branches
/rel1fixes
What are the relative merits of this arrangement when compared with a (perhaps) more process-oriented one?:
/development
/current
/stable
/qa (maybe)
...
/production
/stable
/Prod.2
/Prod.1
/vendor
/Rel.5.1
/Rel.5.2
Please note that I'm thinking of in-house deployment, rather than building a product.
Disclaimer: although I'm a Subversion user, I've never had to deploy with it in a real live environment.
|
[
"The main difference between the recommended layout and your proposed layout is that the recommended layout is somewhat self-documenting as to where to commit things, and how it behaves.\nFor example, in the recommended layout, it's obvious that all new development is committed to trunk, and most branches are made from trunk. Also, it's obvious that you should never commit anything into /tags. Finally, it's safe to assume that branches are truly branches, which may contain changes specific to that particular branch purpose.\nWith the proposed layout, some of these things are less certain. Is /development/stable branched from /current? What's the relation between /development/stable and /production/stable? Which of these directories are tags, and which ones can I actually check stuff into? \nCertainly this behavior can be documented, but by sticking to the accepted layout that everybody uses, you'll have an easier time getting new hires up to speed on how it works.\n",
"I'll try and sum up the answers so far:\n\nSimple\n\nThe \"classic\" layout (trunk/ +\nbranches/ + tags/) has the advantage\nof growable simplicity\nThe Trunk is (usually) the main\ndevelopment line\nBranches attend to special\ndevelopment needs such as complex\nsubprojects and post-release\nmaintenance\nTags are fixed, immutable marker\nposts\nThis classic layout is well-known so\nyour developers get up to speed\nfaster\n\nExpandable\n\nVendor development of products\nintegrated into your development\n(perhaps with adaptations) can, if\nrequired be handled as a vendor\nbranch (normally one is enough)\nThe \"Process\" axis (Eg. Development,\nTest if done separately, QA if used, and\nProduction) can be handled by\nappropriate branch or tag\nconventions (depending on whether\nany changes are required or\npermitted outside \"Development\"). \nThese additional sets of branches\ncan be handled by naming\nconventions, or by an additional\ndirectory level within tags/ or\nbranches/.\n\nSee Other Questions\n\nWhat does branch, tag and trunk really mean?\nWhat is a good repository layout for releases and projects in Subversion?\nDo you use the branches-tags-trunk convention?\n\n\nI have made this a community answer; please feel free to correct or extend any deficiencies, for which I apologise.\n",
"You've described the two pretty much standard models for repository organization: dev-test-prod and trunk-branch. Eric Sink does a nice job of describing them in his Source Control HOWTO. One thing to note is that the way most people use trunk-branch is to create a branch for each version as it is released to customers, which then becomes the maintenance branch.\nI would tend to prefer trunk-branch since it doesn't require migrating every single change from development to test to production. Only changes that need to be backported to maintance branches or bugfixes that migrate from the maintance branch to the trunk need to be migrated.\nHowever, one circumstance were dev-test-prod might be preferable is in web development, where the concept of versions released to customers doesn't really exist. Prod, in this case, would be whatever's running on the server right now, while code is being worked on in dev and test and constantly migrated into the application, rather than being released in one big chunk.\n",
"I think flexibility and avoiding ambiguity is your answer.\nBy using version numbers you do not tie yourself to where that version is deployed.\nFor example you might have version 1.3 which is deployed as development, 1.2 which is in test and 1.1 which is in production. If you wanted you could easily add another staging environment for another version without having to change your subversion layout.\nNobody can argument what version 1.1 of the code is, but \"production-stable\" version is ambiguous. \n",
"Whenever you deal with real live environments, you would want your developers to be able to understand your repository as easily as possible. A good way to do this is by adhering to the recommended Subversion standard layout.\n",
"Although I personally use the layout recommended in the SVN book, you probably should not restrict yourself to it if your layout works better for you. I would keep the branch directory since its usage and purpose is pretty clear from the name. Apart from that, really, anything goes if it works for you.\n"
] |
[
18,
9,
5,
2,
0,
0
] |
[
"I think your plan is pretty good, really. How will you account for branches where a programmer is wandering off on their own just trying something? Maybe like /development/jfm3-messing-around ?\n"
] |
[
-1
] |
[
"svn"
] |
stackoverflow_0000108682_svn.txt
|
Q:
Requiring users to update .NET
I'm working on some production software, using C# on the .NET framework. I really would like to be able to use LINQ on the project. I believe it requires .NET version 3.5 (correct me if I'm wrong). This application is a commercial software app, required to run on a client's work PC. Is it reasonable to assume they have .NET 3.5, or assume that they won't mind upgrading to the latest version?
I just wanted to feel out what the consensus was as far as mandating framework upgrades to run apps.
A:
I would say that it isn't safe to assume they have .NET 3.5.
While it is very, very unlikely they will have any problems when upgrading, changing anything always carries a risk. I know I wouldn't mind upgrading, but I am a developer.
I think it's one of those things that could go either way, they either won't think twice about it and just upgrade, or they might make an issue out of it. I think it would depend on your customers, 'low-tech' clients may think twice as they may not fully understand it, which would make them nervous.
A:
To use LINQ, as you have said, you need to have .NET 3.5. Just to confirm this, the Wikipedia page for LINQ says:
Language Integrated Query (LINQ, pronounced "link") is a Microsoft .NET Framework component that adds native data querying capabilities to .NET languages using a syntax reminiscent of SQL. Many of the concepts that LINQ has introduced were originally tested in Microsoft's Cω research project. LINQ was released as a part of .NET Framework 3.5 on November 19, 2007.
Due to the fact that machines may have some of the previous versions of .NET already installed, you may find that this site, Smallest Dot NET by Scott Hanselman (Microsoft employee) is useful. It works out the smallest updates you need to get up to date (currently 3.5 SP1).
As for whether it is reasonable to expect it on the client's machine, I guess it depends upon what you're creating. My feelings are:
Small low cost applications = PERHAPS NOT YET
For a tiny application sold at low cost, targeting 3.5 is perhaps a little early and likely to reduce the size of your audience because of the annoyance factor.
Large commercial applications, with installers = YES
If it is a large commercial application (your baseline specifications are already WinXP or newer running on .NET 2.0), I don't think the customer would care. Put the redistributable on the installer disk!
Remember that adopting any new technology should be done for a number of reasons. What is your need to use LINQ? Is it something that would be tough to replicate? If LINQ gives you functionality you really need, your costs and timetable are likely to benefit from selecting it. Your company gains by being able to sell the product for less or increase their margins.
One final option, as pointed out by Nescio: if all you need is LINQ to Objects (e.g. you don't need LINQ to SQL or LINQ to XML), then LinqBridge may be an option.
A:
Since .NET Framework itself is distributed for free, people are rarely against upgrading it. However there may be problems with system administrator availability or problems with installation.
A:
Check out: LinqBridge
A:
Talk to your V.P. of Sales. Seriously. If 3.5 is bleeding edge (I honestly don't know), then odds are he/she will not like the idea very much. If it is a couple of years old, then they'll be more accepting. Being a product that forces upgrades of third party SW is not an insurmountable shortcoming, but it doesn't help.
A:
It depends on your target audience and the importance of your app. Generally speaking at this point you probably can't assume that your audience already has .NET 3.5. Installing it can take quite a while, and can be quite tedious if they don't already have the other prerequisites to .NET 3.5.
So unless it's a fairly comprehensive and/or important piece of enterprise software, I would strongly advise against it.
A:
You should read this Hanselman's entry: http://www.hanselman.com/blog/SmallestDotNetOnTheSizeOfTheNETFramework.aspx
It's really interesting if it comes to installing and thus minimalizing installation size of .NET framework. It should be somehow an answer to your question.
A:
So long as you know that you don't need to support Windows 2000 or any older versions of Windows then requiring the latest and greatest framework version doesn't feel too onerous.
Some less fortunate developers are stuck with older framework versions because they need to support older OS versions.
A:
.NET 3.5 is not yet auto-updated on Windows PCs, so I would not bet on a standard customer having it "as is".
Notice you may have to decide if you go for .NET 3.5 SP1, since there is a small DataSet backward incompatibility between 3.5 and 3.5 SP1 (and maybe some others I did not see).
If your client is a big company, you may want to consider that they are often very conservative (my clients are still XP/IE6 and sometimes even W2K/IE6).
A:
Beware: Windows 2000 is not supported on any frameworks above 2.0. So your application would then only support the following operating systems:
Microsoft Windows XP
Microsoft Windows Server 2003
Windows Vista
Windows Server 2008
Good Luck!
|
Requiring users to update .NET
|
I'm working on some production software, using C# on the .NET framework. I really would like to be able to use LINQ on the project. I believe it requires .NET version 3.5 (correct me if I'm wrong). This application is a commercial software app, required to run on a client's work PC. Is it reasonable to assume they have .NET 3.5, or assume that they won't mind upgrading to the latest version?
I just wanted to feel out what the consensus was as far as mandating framework upgrades to run apps.
|
[
"I would say that it isn't safe to assume they have .NET 3.5.\nWhere as it is very, very unlikely they will have any problems when upgrading, changing anything always carries a risk. I know I wouldn't mind upgrading, but I am a developer.\nI think it's one of those things that could go either way, they either won't think twice about it and just upgrade, or they might make an issue out of it. I think it would depend on your customers, 'low-tech' clients may think twice as they may not fully understand it, which would make them nervous.\n",
"To use LINQ, as you have said, you need to have .NET 3.5. Just to confirm this, the Wikipedia page for LINQ says:\n\nLanguage Integrated Query (LINQ,\n pronounced \"link\") is a Microsoft .NET\n Framework component that adds native\n data querying capabilities to .NET\n languages using a syntax reminiscent\n of SQL. Many of the concepts that LINQ\n has introduced were originally tested\n in Microsoft's Cω research project.\n LINQ was released as a part of .NET\n Framework 3.5 on November 19, 2007.\n\nDue to the fact that machines may have some of the previous versions of .NET already installed, you may find that this site, Smallest Dot NET by Scott Hanselman (Microsoft employee) is useful. It works out the smallest updates you need to get up to date (currently 3.5 SP1).\nAs for whether it is reasonable to expect it on the client's machine, I guess it depends upon what you're creating. My feelings are:\nSmall low cost applications = PERHAPS NOT YET\nA tiny application sold at low cost, perhaps targeting 3.5 is a little early and likely to reduce the size of your audience because of the annoyance factor. \nLarge commercial applications, with installers = YES\nIf it is a large commercial application (your baseline specifications are already WInXP or newer running on .NET 2.0), I don't think the customer would care. Put the redistributable on the installer disk! \nRemember that adopting any new technology should be done for a number of reasons. What is your need to use LINQ, is it something that would be tough to replicate? If LINQ gives you functionality you really need, your costs and timetable are likely to benefit from selecting it. Your company gain by being able to sell the product for less or increase their margins. \nOne final option, as pointed out by Nescio, if all you need is Linq to Objects (eg. you don't need Linq to SQL or Linq to XML) then LinqBridge may be an option.\n",
"Since .NET Framework itself is distributed for free, people are rarely against upgrading it. However there may be problems with system administrator availability or problems with installation.\n",
"Check out: LinqBridge\n",
"Talk to your V.P. of Sales. Seriously. If 3.5 is bleeding edge (I honestly don't know), then odds are he/she will not like the idea very much. If it is a couple of years old, then they'll be more accepting. Being a product that forces upgrades of third party SW is not an insurmountable shortcoming, but it doesn't help. \n",
"It depends on your target audience and the importance of your app. Generally speaking at this point you probably can't assume that your audience already has .NET 3.5. Installing it can take quite a while, and can be quite tedious if they don't already have the other prerequisites to .NET 3.5.\nSo unless it's a fairly comprehensive and/or important piece of enterprise software, I would strongly advise against it.\n",
"You should read this Hanselman's entry: http://www.hanselman.com/blog/SmallestDotNetOnTheSizeOfTheNETFramework.aspx\nIt's really interesting if it comes to installing and thus minimalizing installation size of .NET framework. It should be somehow an answer to your question.\n",
"So long as you know that you don't need to support Windows 2000 or any older versions of Windows then requiring the latest and greatest framework version doesn't feel too onerous. \nSome less fortunate developers are stuck with older framework versions because they need to support older OS versions.\n",
".Net 3.5 is not yet auto updated on Windows PC, I would not bet on a standard customer having it \"as is\".\nNotice you may have to decide if you go for .Net3.5 SP1, since there is a small DataSet backward incompatibility between 3.5 and 3.5SP1 (and maybe some others I did not see).\nIf your client is a big company you may want to consider that they are often very conservative (My clients are still XP/IE6 and sometime even W2K/IE6).\n",
"Beware Windows 2000 is not supported on any frameworks above 2.0. So you're application would then only support the following operating systems:\n\nMicrosoft Windows XP\nMicrosoft Windows Server 2003\nWindows Vista\nWindows Server 2008\n\nGood Luck!\n"
] |
[
5,
4,
3,
1,
1,
1,
1,
0,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"client_applications",
"linq"
] |
stackoverflow_0000106439_.net_c#_client_applications_linq.txt
|
Q:
Large array arithmetics in C#
Which is the best way to store a 2D array in C# in order to optimize performance when performing lots of arithmetic on the elements in the array?
We have large (approx 1.5G) arrays, which for example we want to multiply with each other element by element. Performance is critical. The context in which this is done is in C#. Is there any smart way of storing the arrays and iterating over them? Could we write these parts in unmanaged C++ and will this really increase performance? The arrays need to be accessible to the rest of the C# program.
Currently (in C) the array is stored as a single long vector. We perform calculations on each element in the array and overwrite the old value. The calculations are usually unique for each element in the vector.
Timing experiments show that storing and iterating over the data as a one-dimensional array in C# is slower than storing it as a 2D array. I would like to know if there is an even better way of handling the data. The specific arithmetics performed are not relevant for the question.
A:
Anna,
Here is a great page that discusses the performance difference between traditional scientific programming languages (Fortran, C++) and C#.
http://msdn.microsoft.com/en-us/magazine/cc163995.aspx
According to the article, C# can be a very good performer when using rectangular (2D) arrays. Here is a graph that shows the difference in performance between jagged arrays (an array of arrays) and rectangular (multi-dimensional) arrays.
alt text http://i.msdn.microsoft.com/cc163995.fig08.gif
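For reference, these are the two array shapes being compared (the sizes are illustrative):
// Rectangular (multi-dimensional) array: one contiguous block of memory.
double[,] rect = new double[1000, 1000];
rect[5, 7] = 1.0;

// Jagged array: an array of separately allocated row arrays.
double[][] jagged = new double[1000][];
for (int i = 0; i < jagged.Length; i++)
    jagged[i] = new double[1000];
jagged[5][7] = 1.0;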
I would suggest experimenting yourself, and use the Performance Analysis in VS 2008 to compare.
If using C# is "fast enough" then your application will be that much easier to maintain.
Good Luck!
A:
For best array performance, make sure you're using a single-dimension array with a lower index of 0.
To access the elements of the array as fast as possible, you can use unsafe pointers like so:
// Note: Enumerable.Range needs "using System.Linq;", and this must be compiled with /unsafe.
int[] array = Enumerable.Range(0, 1000).ToArray();

int count = 0;
unsafe {
    fixed (int* pArray = array) {
        for (int i = 0; i < array.Length; i++) {
            count += *(pArray + i);
        }
    }
}
EDIT Drat! Didn't notice you said 2D array. This trick won't work with a multi-dimensional array so I'm not sure how much help it will be. Although you could turn any array into a single-dimension array by doing some arithmetic on the array index. Just depends on if you care about the performance hit in indexing the array or in iterating over the array.
A:
If you download F#, you can reference one of the runtime libraries (I think it's FSharp.PowerPack) and use Microsoft.FSharp.Math.Matrix. It optimises itself based on whether you are using a dense or sparse matrix.
A:
Do you iterate the matrix by row or by column or both? Do you always access nearby elements, or do you do random accesses on the matrix?
If there is some locality in your accesses but you're not accessing it sequential (typical in matrix multiplication for example) then you can get a huge performance difference by storing your matrix in a more cache-friendly way.
A pretty easy way to do that is to write a little access function to turn your row/column indices into an index and work on a one-dimensional matrix, the cache-friendly way.
The function should group nearby coordinates into nearby indices. The Morton order can be used if you work on power-of-two sizes. For non-power-of-two sizes you can often bring just the lowest 4 bits into Morton order and use normal index arithmetic for the upper bits. You'll still get a significant speed-up, even if the coordinate-to-index conversion seems to be a costly operation.
http://en.wikipedia.org/wiki/Z-order_(curve) <-- sorry, can't link that; SO does not like URLs with a dash in it. You have to cut'n'paste.
A speed-up of a factor of 10 or more is realistic, btw. It depends on the algorithm you run over your matrices though.
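A sketch of such an access function in C# (my own illustration of the bit interleave described above; it assumes a power-of-two square matrix up to 32768 elements on a side):
static int MortonIndex(int x, int y)
{
    // Interleave the low 15 bits of x and y: bit i of x lands on bit 2i,
    // bit i of y lands on bit 2i+1, keeping nearby (x, y) pairs close in memory.
    int index = 0;
    for (int bit = 0; bit < 15; bit++)
    {
        index |= ((x >> bit) & 1) << (2 * bit);
        index |= ((y >> bit) & 1) << (2 * bit + 1);
    }
    return index;
}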
|
Large array arithmetics in C#
|
Which is the best way to store a 2D array in c# in order to optimize performance when performing lots of arithmetic on the elements in the array?
We have large (approx 1.5G) arrays, which for example we want to multiply with each other element by element. Performance is critical. The context in which this is done is in c#. Is there any smart way of storing the arrays and iterating over them? Could we write these parts in unmanaged C++ and will this really increase performance? The arrays need to be accessible to the rest of the c# program.
Currently (in c) the array is stored as a single long vector. We perform calculations on each element in the array and overwrite the old value. The calculations are usually unique for each element in the vector.
Timing experiments show that storing and iterating over the data as an array in C# is slower than storing it as a 2D array. I would like to know if there is an even better way of handling the data. The specific arithmetics performed are not relevant for the question.
|
[
"Anna,\nHere is a great page that discusses the performance difference between tradition scientific programming languages (fortran, C++) and c#.\nhttp://msdn.microsoft.com/en-us/magazine/cc163995.aspx\nAccording to the article C#, when using rectangular arrays (2d) can be a very good performer. Here is a graph that shows the difference in performance between jagged arrays (an array of arrays) and rectangular arrays (multi-dimensional) arrays.\nalt text http://i.msdn.microsoft.com/cc163995.fig08.gif\nI would suggest experimenting yourself, and use the Performance Analysis in VS 2008 to compare.\nIf using C# is \"fast enough\" then your application will be that much easier to maintain.\nGood Luck!\n",
"For best array performance, make sure you're using a single dimension array with lower index of 0.\nTo access the elements of the array as fast as possible, you can use unsafe pointers like so:\nint[] array = Enumerable.Range(0, 1000).ToArray();\n\nint count = 0;\nunsafe {\n fixed (int* pArray = array) {\n for (int i = 0; i < array.Length; i++) {\n count += *(pArray + i);\n }\n }\n}\n\nEDIT Drat! Didn't notice you said 2D array. This trick won't work with a multi-dimensional array so I'm not sure how much help it will be. Although you could turn any array into a single-dimension array by doing some arithmetic on the array index. Just depends on if you care about the performance hit in indexing the array or in iterating over the array.\n",
"If you download F#, and reference one of the runtime libraries (I think it's FSharp.PowerPack), and use Microsoft.FSharp.Maths.Matrix. It optimises itself based on whether you are using a dense or sparse matrix.\n",
"Do you iterate the matrix by row or by colum or both? Do you always access nearby elements or do you do random accesses on the matrix.\nIf there is some locality in your accesses but you're not accessing it sequential (typical in matrix multiplication for example) then you can get a huge performance difference by storing your matrix in a more cache-friendly way.\nA pretty easy way to do that is to write a little access function to turn your row/colum indices into an index and work on a one dimensional matrix, the cache-friendy way.\nThe function should group nearby coordinates into nearby indices. The morton-order can be used if you work on power of two sizes. For non-power sizes you can often bring just the lowest 4 bits into morton order and use normal index-arithmetic for the upper bits. You'll still get a significant speed-up, even if the coordinate to index conversion looks seems to be a costly operation.\nhttp://en.wikipedia.org/wiki/Z-order_(curve) <-- sorry, can't link that SO does not like URL's with a dash in it. You have to cut'n'paste.\nA speed up of factor 10 and more are realistic btw. It depends on the algorithm you ron over your matrices though.\n"
] |
[
8,
5,
2,
0
] |
[] |
[] |
[
"arrays",
"c#"
] |
stackoverflow_0000111026_arrays_c#.txt
|
Q:
Report generation
I am writing a web app using TurboGears, and in that app the users must be able to generate different reports.
The data the reports need is stored in a database (MySQL).
The reports must be returned either as an easily printable html document, or a pdf file.
I have previously used jasper and iReport for creating the documents, but if I can avoid having to fire up Java to create the report I would be happy.
Ideally I would like to specify reports in an easily readable markup language and then just feed the template data with some kind of library that produces the report.
I gladly accept any kind of hints on how I should go about generating these reports!
A:
Pod is my favorite solution to your problem.
A:
You can build some fancy PDFs from Python with the ReportLab toolkit.
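For a flavour of what that looks like, here is a minimal sketch using ReportLab's canvas API (the file name and text are illustrative):
from reportlab.pdfgen import canvas

c = canvas.Canvas("report.pdf")
c.drawString(72, 750, "Monthly report")  # coordinates are points from the bottom-left
c.showPage()
c.save()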
A:
A partial answer: the easily readable format you are looking for might be DocBook. From there it is very easy to go to PDF, html, RTF, etc. etc.
|
Report generation
|
I am writing a web app using TurboGears, and in that app the users must be able to generate different reports.
The data the reports need is stored in a database (MySQL).
The reports must be returned either as a easily printable html document, or a pdf file.
I have previously used jasper and iReport for creating the documents, but if I can avoid having to fire up Java to create the report I would be happy.
Ideally I would like to specify reports in a easily readable markup language and then just feed the template data with some kind of library that produces the report.
I gladly accept any kind of hints on how I should go about generating these reports!
|
[
"Pod is my favorite solution to your problem.\n",
"You can build some fancy PDFs from Python with the ReportLab toolkit.\n",
"A partial answer: the easily readable format you are looking for might be DocBook. From there it is very easy to go to PDF, html, RTF, etc. etc.\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"python",
"report"
] |
stackoverflow_0000110760_python_report.txt
|
Q:
Why isn't there a standard memswap function
Why doesn't the C standard have a memswap function, which would probably look like:
int memswap(void *ptr1, void *ptr2, size_t nbytes)?
I know it'd be easy to write, but I think libc could do some awesome tricks to speed it up, like some implementations do for memcpy.
A:
I think because it's not needed very often. However, there is an easy way to do this in C++:
#include <algorithm>
swap_ranges(ptr1, ptr1 + nbytes, ptr2)
It may not be quite as optimized as a compiler built-in, but it has the potential of being faster than a loop you write for yourself, since it may have platform-specific optimizations that you would not implement.
You do need to be careful with the above, because it assumes that ptr1 and ptr2 are char pointers. The more canonical way to do this is:
#include <algorithm>
swap_ranges(ptr1, ptr1 + num_items, ptr2)
A:
This isn't something that is routinely required.
The idea may have been considered and discarded because it is quite difficult to come up with an algorithm that is general purpose. Don't forget that C is an old language and extensions need to be generally useful.
Possible error conditions :-
behaviour when the ranges being swapped overlap
length of zero
running out of memory (an optimal implementation might allocate memory to do this)
null pointer
The best algorithm might also depend upon what you are doing, and so could be better coded directly by you.
swapping structures likely to be quicker using a temp structure and assignment
small lengths - may be better allocating temporary memory
long lengths - 'section' by section swap (where section is some optimal length)
use of hardware copy functions
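As a rough sketch of the "section by section" idea from the list above (the chunk size is arbitrary, and the ranges are assumed not to overlap):
#include <string.h>

void memswap(void *ptr1, void *ptr2, size_t nbytes)
{
    unsigned char buf[256];                 /* one "section" at a time */
    unsigned char *a = ptr1, *b = ptr2;
    while (nbytes > 0) {
        size_t chunk = nbytes < sizeof buf ? nbytes : sizeof buf;
        memcpy(buf, a, chunk);              /* save a section of the first range */
        memcpy(a, b, chunk);                /* copy the second over the first */
        memcpy(b, buf, chunk);              /* restore the saved section into the second */
        a += chunk;
        b += chunk;
        nbytes -= chunk;
    }
}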
A:
Probably because it's not needed very often - I memset and memcpy reasonably often, but I don't know that I'd ever have used memswap if it was available.
A:
It probably isn't required very often in C programming; in C++, where swap is a regular thing to do on class members, there's the std::swap algorithm, which is highly optimized for different types.
|
Why isn't there a standard memswap function
|
Why doesn't have the c standard a memswap function, which would probably look like:
int memswap(void *ptr1, void *ptr2, size_t nbytes)?
I know it'd be easy to write, but i think the libc could do some awesome tricks to speed it up like some implementations do it for memcpy.
|
[
"I think because it's not needed very often. However, there is an easy way to do this in C++:\n#include <algorithm>\n\nswap_ranges(ptr1, ptr1 + nbytes, ptr2)\n\nIt's it may not be quite as optimized as a compiler built in, but it has the potential of being faster than a loop you write for yourself, since it may have platform specific optimization that you would not implement.\nYou do need to be careful with the above, because it assumes that ptr1 and ptr2 are char pointers. The more canonical way to do this is:\n#include <algorithm>\n\nswap_ranges(ptr1, ptr1 + num_items, ptr2)\n\n",
"This isn't something that is routinely required.\nThe ideas may have been considered and discarded because it is quite difficult to come up with an algorithm that is general purpose. Don't forget that C is an old language and extensions need to be generally useful.\nPossible error conditions :-\n\nbehaviour when the ranges being swapped overlap\nlength of zero\nrunning out of memory (an optimal implementation might allocate memory to do this)\nnull pointer\n\nThe best algorithm might also depend upon what you are doing, and so could be better coded directly by you.\n\nswapping structures likely to be quicker using a temp structure and assignment\nsmall lengths - may be better allocating temporary memory\nlong lengths - 'section' by section swap (where section is some optimal length)\nuse of hardware copy functions\n\n",
"Probably because it's not needed very often - I memset and memcpy reasonably often, but I don't know that I'd ever have used memswap if it was available.\n",
"It probably isn't required very often in C programming, in C++ where swap is a regular thing to do on class members there's the std::swap algorithm which is highly optimized for different types.\n"
] |
[
14,
6,
3,
3
] |
[] |
[] |
[
"c"
] |
stackoverflow_0000109249_c.txt
|
Q:
How do I implement a two-pass scanner using Flex?
As a pet-project, I'd like to attempt to implement a basic language of my own design that can be used as a web-scripting language. It's trivial to run a C++ program as an Apache CGI, so the real work lies in how to parse an input file containing non-code (HTML/CSS markup) and server-side code.
In my undergrad compiler course, we used Flex and Bison to generate a scanner and a parser for a simple language. We were given a copy of the grammar and wrote a parser that translated the simple language to a simple assembly for a virtual machine. The flex scanner tokenizes the input, and passes the tokens to the Bison parser.
The difference between that and what I'd like to do is that like PHP, this language could have plain HTML markup and the scripting language interspersed like the following:
<p>Hello,
<? echo "World ?>
</p>
Am I incorrect in assuming that it would be efficient to parse the input file as follows:
The first scanner scans the input until a script start tag ('<?') is found.
A second scanner tokenizes the server-side script section of the input file (from the open tag '<?' to the close tag '?>') and passes the tokens to the parser, which has no need to know about the markup in the file.
Control is returned to the first scanner, which continues this general pattern.
Basically, the first scanner only differentiates between Markup (which is returned directly to the browser unmodified) and code, which is passed to the second scanner, which in turn tokenizes the code and passes the tokens to the parser.
If this is not a solid design pattern, how do languages such as PHP handle scanning input and parsing code efficiently?
A:
You want to look at start conditions. For example:
"<?" { BEGIN (PHP); }
<PHP>[a-zA-Z]* { return PHP_TOKEN; }
<PHP>">?" { BEGIN (0); }
[a-zA-Z]* { return HTML_TOKEN; }
You start off in state 0, use the BEGIN macro to change states.
To match a RE only while in a particular state, prefix the RE with the state name surrounded by angle-brackets.
In the example above, "PHP" is a state. "PHP_TOKEN" and "HTML_TOKEN" are %tokens defined by your yacc file.
A:
PHP doesn't differentiate between the scanning and the Markup. It simply outputs to buffer when in Markup mode, and then switches to parsing when in code mode. You don't need a two pass scanner, and you can do this with just a single flex lexer.
If you are interested in how PHP itself works, download the source (try the PHP4 source it is a lot easier to understand). What you want to look at is in the Zend Directory, zend_language_scanner.l.
Having written something similar myself, I would really recommend rethinking going the Flex and Bison route, and go with something modern like Antlr. It is a lot easier, easier to understand (the macros employed in a lex grammar get very confusing and hard to read) and it has a built in debugger (AntlrWorks) so you don't have to spend hours looking at 3 Meg debug files. It also supports many languages (Java, c#, C, Python, Actionscript) and has an excellent book and a very good website that should be able to get you up and running in no time.
|
How do I implement a two-pass scanner using Flex?
|
As a pet-project, I'd like to attempt to implement a basic language of my own design that can be used as a web-scripting language. It's trivial to run a C++ program as an Apache CGI, so the real work lies in how to parse an input file containing non-code (HTML/CSS markup) and server-side code.
In my undergrad compiler course, we used Flex and Bison to generate a scanner and a parser for a simple language. We were given a copy of the grammar and wrote a parser that translated the simple language to a simple assembly for a virtual machine. The flex scanner tokenizes the input, and passes the tokens to the Bison parser.
The difference between that and what I'd like to do is that like PHP, this language could have plain HTML markup and the scripting language interspersed like the following:
<p>Hello,
<? echo "World ?>
</p>
Am I incorrect in assuming that it would be efficient to parse the input file as follows:
Scan input until a script start tag is found ('
Second scanner tokenizes the server-side script section of the input file (from the open tag: '') and passes the token to the parser, which has no need to know about the markup in the file.
Control is returned to the first scanner that continues this general pattern.
Basically, the first scanner only differentiates between Markup (which is returned directly to the browser unmodified) and code, which is passed to the second scanner, which in turn tokenizes the code and passes the tokens to the parser.
If this is not a solid design pattern, how do languages such as PHP handle scanning input and parsing code efficiently?
|
[
"You want to look at start conditions. For example:\n\"<?\" { BEGIN (PHP); }\n<PHP>[a-zA-Z]* { return PHP_TOKEN; }\n<PHP>\">?\" { BEGIN (0); }\n[a-zA-Z]* { return HTML_TOKEN; }\n\nYou start off in state 0, use the BEGIN macro to change states.\nTo match a RE only while in a particular state, prefix the RE with the state name surrounded by angle-brackets.\nIn the example above, \"PHP\" is state. \"PHP_TOKEN\" and \"HTML_TOKEN\" are _%token_s defined by your yacc file.\n",
"PHP doesn't differentiate between the scanning and the Markup. It simply outputs to buffer when in Markup mode, and then switches to parsing when in code mode. You don't need a two pass scanner, and you can do this with just a single flex lexer. \nIf you are interested in how PHP itself works, download the source (try the PHP4 source it is a lot easier to understand). What you want to look at is in the Zend Directory, zend_language_scanner.l. \nHaving written something similar myself, I would really recommend rethinking going the Flex and Bison route, and go with something modern like Antlr. It is a lot easier, easier to understand (the macros employed in a lex grammar get very confusing and hard to read) and it has a built in debugger (AntlrWorks) so you don't have to spend hours looking at 3 Meg debug files. It also supports many languages (Java, c#, C, Python, Actionscript) and has an excellent book and a very good website that should be able to get you up and running in no time.\n"
] |
[
7,
2
] |
[] |
[] |
[
"bison",
"flex_lexer",
"lexical_analysis",
"parsing"
] |
stackoverflow_0000104967_bison_flex_lexer_lexical_analysis_parsing.txt
|
Q:
interval rails caching
I need to cache a single page. I've used ActionController's caches_page for this. But now, I'd like to expire AND regenerate it once in every 10 minutes. What are my options?
later: I'd like to not use any external tools for this, like cron. The important point is interval-based expiry of the cache.
A:
You can also use this if you want to have fragments timeout.
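One interval-based trick that needs no external tools is to key the fragment on the current time window, so a fresh fragment is generated once per interval. A sketch with illustrative names (old fragments would still need sweeping eventually):
<% ten_minute_window = Time.now.to_i / 600 %>
<% cache("report_#{ten_minute_window}") do %>
  <%= render :partial => 'expensive_report' %>
<% end %>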
A:
AFAIK rails page caching compares the cache time on request and regenerates if necessary. If you need to forcibly flush that cache check out Sweepers.
http://www.railsenvy.com/2007/2/28/rails-caching-tutorial#sweepers
|
interval rails caching
|
I need to cache a single page. I've used ActionController's caches_page for this. But now, I'd like to expire AND regenerate it once in every 10 minutes. What are my options?
later: I'd like to not use any external tools for this, like cron. The important point is interval-based expiry of the cache.
|
[
"You can also use this if you want to have fragments timeout.\n",
"AFAIK rails page caching compares the cache time on request and regenerates if necessary. If you need to forcibly flush that cache check out Sweepers.\nhttp://www.railsenvy.com/2007/2/28/rails-caching-tutorial#sweepers\n"
] |
[
1,
0
] |
[] |
[] |
[
"caching",
"ruby",
"ruby_on_rails"
] |
stackoverflow_0000110930_caching_ruby_ruby_on_rails.txt
|
Q:
Is there a tool to monitor the SQL statements being executed by an .EXE?
I'd like to be able to hook into a 3rd party application to see what SQL Statements are being executed. Specifically, it is a VB6 application running on SQL Server 2005.
For example, when the application fills out a grid, I'd like to be able to see exactly what query produced that data.
A:
If you have the appropriate rights (sysadmin or ALTER TRACE permission) on the DB you could watch using SQL Profiler.
A:
If the application does not write a log or something, the only way to watch them is on the database side. SQL Profiler is the proper tool for the task on MSSQL 2005.
A:
You can view it server side by connecting to the SQL server with the SQL Server Profiler included in the tools. Here's a usage run down of it from Microsoft
A:
Reviewing it on the server as other answers indicate is most likely the best way to go. However, if that's not available, you can also turn on ODBC logging which may be helpful.
|
Is there a tool to monitor the SQL statements being executed by an .EXE?
|
I'd like to be able to hook into a 3rd party application to see what SQL Statements are being executed. Specifically, it is a VB6 application running on SQL Server 2005.
For example, when the application fills out a grid, I'd like to be able to see exactly what query produced that data.
|
[
"If you have the appropriate rights (sysadmin or ALTER TRACE permission) on the DB you could watch using SQL Profiler.\n",
"If the application does not write a log or something, the only way to watch them is on the database side. SQL Profiler is the proper tool for the task on MSSQL 2005.\n",
"You can view it server side by connecting to the SQL server with the SQL Server Profiler included in the tools. Here's a usage run down of it from Microsoft\n",
"Reviewing it on the server as other answers indicate is most likely the best way to go. However, if that's not available, you can also turn on ODBC logging which may be helpful.\n"
] |
[
12,
1,
1,
0
] |
[] |
[] |
[
"monitoring",
"sql_server",
"sql_server_2005",
"vb6"
] |
stackoverflow_0000111181_monitoring_sql_server_sql_server_2005_vb6.txt
|
Q:
How does nunit work?
Can someone explain to me how it works, starting from when you select to run a test?
A:
When you select to run a test,
it will create an instance of the parent class of that test method.
It then proceeds to run the method marked with the TestFixtureSetup attribute if one exists (once for the test class).
Next, the method marked with the Setup attribute is called if one exists (once before every test in that class).
Next, your selected method (with the Test attribute) is executed. All assertions are checked. If all assertions are valid, the test is marked as Pass (Green in the GUI), else Fail (Red). If any exceptions pop up that were not specified with the ExpectedException attribute, the test fails.
Then the method marked with the Teardown attribute is called if one exists. (Cleanup code.. called once after every test in the class)
Finally method marked with TestFixtureTeardown attribute is executed. (once after all tests in the test class)
That's it in a nutshell. The power of xUnit is its simplicity. Is that what you were looking for ?
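Spelled out as a minimal (hypothetical) fixture, that lifecycle looks like this:
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [TestFixtureSetUp]
    public void FixtureSetUp() { /* runs once, before all tests in this class */ }

    [SetUp]
    public void SetUp() { /* runs before every test */ }

    [Test]
    public void TwoPlusTwoIsFour()
    {
        Assert.AreEqual(4, 2 + 2);  // all assertions must hold for a green (Pass) result
    }

    [TearDown]
    public void TearDown() { /* runs after every test */ }

    [TestFixtureTearDown]
    public void FixtureTearDown() { /* runs once, after all tests in this class */ }
}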
A:
I use it at work, but I'm not an expert. Here's a link to the NUnit documentation: http://www.nunit.org/index.php?p=getStarted&r=2.4.8
A:
1) Have a class you want to test in a .NET project (MyClass is the class name, MyProject is the project name, for example)
2) Add another project to your solution called MyProject.Tests
3) Add a reference from MyProject.Tests to MyProject so that you can access the class you want to test from the testing code
4) In this new project add a new class file called MyClass (the same as the class in MyProject)
5) In that class, add your testing code like this page explains -- http://www.nunit.org/index.php?p=quickStart&r=2.4.8
6) When you've written your tests, build the solution. In the MyProject.Tests project folder a new folder will appear -- 'MyProject.Tests\bin\Debug'. That's assuming you built in Debug mode. If you built in Release mode it'll be MyProject.Tests\bin\Release. Either will work. In this folder, you'll find a dll file called MyProject.Tests.dll
7) Open the nUnit testing utility, File > Open, then navigate to the folder in #6 to find that MyProject.Tests.dll. Open it.
8) The tests from the dll should be listed in the nUnit utility window, and you can now select which tests to run, and run them.
Note: The naming convention isn't necessary, it's just the way I do it. If you have a project called 'MyProject' and you want your testing project to be called 'ArbitraryName' instead of 'MyProject.Test', then it'll still work... the naming convention just helps keep track of what exactly is being tested.
A:
What do you mean how does it work?
You define your test classes with [TestFixture] and your tests with [Test]
It's nothing more than a testing framework, you still have to write the tests and all of that jazz :)
|
How does nunit work?
|
Can someone explain me how it works, starting from when you select to run a test
|
[
"When you select to run a test, \n\nit will create an instance of the parent class of that test method. \nIt then proceeds to run the method marked with TestFixtureSetup attribute if one exists (once for the the test class).\nNext is the method marked with the Setup attribute is called if one exists (once before every test in that class)\nNext your selected method (with the Test attribute) is executed. All assertions are checked. If all assertions are valid, the test is marked as Pass (Green in the GUI) else Fail (Red). If any exceptions pop up that were not specified with the ExpectedException attribute, test fails.\nThen the method marked with the Teardown attribute is called if one exists. (Cleanup code.. called once after every test in the class)\nFinally method marked with TestFixtureTeardown attribute is executed. (once after all tests in the test class)\n\nThat's it in a nutshell. The power of xUnit is its simplicity. Is that what you were looking for ?\n",
"I use it at work, but I'm not an expert. Here's a link to the NUnit documentation: http://www.nunit.org/index.php?p=getStarted&r=2.4.8\n",
"1) Have a class you want to test in a .NET project (MyClass is the class name, MyProject is the project name, for example)\n2) Add another project to your solution called MyProject.Tests\n3) Add a reference from MyProject to MyProject.Tests so that you can access the class you want to test from the testing code\n3) In this new project add a new class file called MyClass (the same as the class in MyProject)\n4) In that class, add your testing code like this page explains -- http://www.nunit.org/index.php?p=quickStart&r=2.4.8\n5) When you've written your tests, build the solution. In the MyProject.Tests project folder a new folder will appear -- 'MyProject.Tests\\bin\\Debug'. That's assuming you built in Debug mode. If you built in Release mode it'll be MyProject.Test\\bin\\Release. Either will work. In this folder, you'll find a dll file called MyProject.Tests.dll \n6) Open the nUnit testing utility, File > Open, then navigate to the folder in #5 to find that MyProject.Tests.dll. Open it.\n7) The tests from the dll should be listed in the nUnit utility window, and you can now select which tests to run, and run them.\nNote: The naming convention isn't necessary, it's just the way I do it. If you have a project called 'MyProject' and you want your testing project to be called 'ArbitraryName' instead of 'MyProject.Test', then it'll still work... the naming convention just helps keep track of what exactly is being tested.\n",
"What do you mean how does it work? \nYou define your test classes with [TestFixture] and your tests with [Test]\nIt's nothing more than a testing framework, you still have to write the tests and all of that jazz :)\n"
] |
[
5,
1,
1,
0
] |
[] |
[] |
[
".net",
"nunit"
] |
stackoverflow_0000111140_.net_nunit.txt
|
Q:
Closing a minimized/iconized process from C#
Here's my issue: I need to close a process, already running, from a C# program.
The problem is that the process now runs as an icon (minimized to taskbar), and unless the user opens it at least once (which will never happen on unattended machines), it'll never
have a main window.
The other requirement that I have is that the application be closed, not killed. I need it to write its memory buffers to disk - and killing it causes data loss.
Here's what I tried so far:
foreach (Process proc in Process.GetProcesses())
{
if (proc.ProcessName.ToLower().StartsWith("myapp"))
{
if (proc.MainWindowHandle.ToInt32() != 0)
{
proc.CloseMainWindow();
proc.Close();
//proc.Kill(); <--- not good!
}
}
}
I've added the if clause, after discovering that MainWindowHandle == 0 when the window was minimized. Removing the if doesn't help. Neither the CloseMainWindow() nor the Close() work. The Kill() does, but as mentioned above - it's not what I need.
Any idea would be accepted, including the use of arcane Win32 API functions :)
A:
This should work:
[DllImport("user32.dll", CharSet=CharSet.Auto)]
private static extern IntPtr FindWindow(string className, string windowName);
[DllImport("user32.dll", CharSet=CharSet.Auto)]
private static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);
private const int WM_CLOSE = 0x10;
private const int WM_QUIT = 0x12;
public void SearchAndDestroy(string windowName)
{
IntPtr hWnd = FindWindow(null, windowName);
if (hWnd == IntPtr.Zero)
throw new Exception("Couldn't find window!");
SendMessage(hWnd, WM_CLOSE, IntPtr.Zero, IntPtr.Zero);
}
Since some windows don't respond to WM_CLOSE, WM_QUIT might have to be sent instead. These declarations should work on both 32bit and 64bit.
A:
If it's on the taskbar, it'll have a window. Or did you mean that it's in the taskbar notification area (aka the SysTray)? In which case, it'll still have a window.
Win32 applications don't really have a "main window", except by convention (the main window is the one that calls PostQuitMessage in response to WM_DESTROY, causing the message loop to exit).
With the program running, run Spy++. To find all of the windows owned by a process, you should select Spy -> Processes from the main menu. This will display a tree of processes. From there, you can drill down to threads, and then to windows. This will tell you which windows the process has. Note down the window class and caption. With these, you can use FindWindow (or EnumWindows) to find the window handle in future.
With the window handle, you can send a WM_CLOSE or WM_SYSCOMMAND/SC_CLOSE (equivalent to clicking on the 'X' on the window caption) message. This ought to cause the program to shut down nicely.
Note that I'm talking from a Win32 point-of-view here. You might need to use P/Invoke or other tricks to get this to work from a .NET program.
A:
Here are some answers and clarifications:
rpetrich:
I tried your method before, and the problem is that I don't know the window name; it differs from user to user and version to version - just the exe name remains constant. All I have is the process name. And as you can see in the code above, the MainWindowHandle of the process is 0.
Roger:
Yes, I did mean the taskbar notification area - thanks for the clarification.
I NEED to call PostQuitMessage. I just don't know how, given only a process, and not a window.
Craig:
I'd be glad to explain the situation: the application has a command line interface, allowing you to specify parameters that dictate what it would do and where it will save the results. But once it's running, the only way to stop it and get the results is to right-click it in the tray notification area and select 'exit'.
Now my users want to script/batch the app. They had absolutely no problem starting it from a batch (just specify the exe name and a bunch of flags) but then got stuck with a running process. Assuming no one will change the process to provide an API to stop it while running (it's quite old), I need a way to artificially close it.
Similarly, on unattended computers, the script to start the process can be started by a task scheduling or operations control program, but there's no way to shut the process down.
Hope that clarifies my situation, and again, thanks everyone who's trying to help!
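Since only the exe name is known, one possible approach (a sketch of mine, not from the answers above) is to enumerate all top-level windows, match each one to the process id, and send WM_CLOSE to those that match:
using System;
using System.Runtime.InteropServices;

static class WindowCloser
{
    delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

    [DllImport("user32.dll")]
    static extern bool EnumWindows(EnumWindowsProc callback, IntPtr lParam);

    [DllImport("user32.dll")]
    static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);

    const int WM_CLOSE = 0x10;

    // Sends WM_CLOSE to every top-level window owned by the given process id,
    // including hidden ones that Process.MainWindowHandle reports as zero.
    public static void CloseWindowsOfProcess(int pid)
    {
        EnumWindows(delegate(IntPtr hWnd, IntPtr lParam)
        {
            uint windowPid;
            GetWindowThreadProcessId(hWnd, out windowPid);
            if (windowPid == (uint)pid)
                SendMessage(hWnd, WM_CLOSE, IntPtr.Zero, IntPtr.Zero);
            return true;  // keep enumerating
        }, IntPtr.Zero);
    }
}
The pid can come from Process.GetProcessesByName; whether the app actually flushes its buffers on WM_CLOSE still depends on how it handles the message.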
|
Closing a minimized/iconized process from C#
|
Here's my issue: I need to close a process, already running, from a C# program.
The problem is that the process now runs as an icon (minimized to taskbar), and unless the user opens it at least once (which will never happen on unattended machines), it'll never
have a main window.
The other requirement that I have is that the application be closed not killed. I need it to write it's memory buffers to disk - and killing it causes data loss.
Here's what I tried so far:
foreach (Process proc in Process.GetProcesses())
{
if (proc.ProcessName.ToLower().StartsWith("myapp"))
{
if (proc.MainWindowHandle.ToInt32() != 0)
{
proc.CloseMainWindow();
proc.Close();
//proc.Kill(); <--- not good!
}
}
}
I've added the if clause, after discovering that MainWindowHandle == 0 when the window was minimized. Removing the if doesn't help. Neither the CloseMainWindow() nor the Close() work. The Kill() does, but as mentioned above - it's not what I need.
Any idea would be accepted, including the use of arcane Win32 API functions :)
|
[
"This should work:\n[DllImport(\"user32.dll\", CharSet=CharSet.Auto)]\nprivate static extern IntPtr FindWindow(string className, string windowName);\n[DllImport(\"user32.dll\", CharSet=CharSet.Auto)]\nprivate static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);\n\nprivate const int WM_CLOSE = 0x10;\nprivate const int WM_QUIT = 0x12;\n\npublic void SearchAndDestroy(string windowName) \n{\n IntPtr hWnd = FindWindow(null, windowName);\n if (hWnd == IntPtr.Zero)\n throw new Exception(\"Couldn't find window!\");\n SendMessage(hWnd, WM_CLOSE, IntPtr.Zero, IntPtr.Zero);\n}\n\nSince some windows don't respond to WM_CLOSE, WM_QUIT might have to be sent instead. These declarations should work on both 32bit and 64bit.\n",
"If it's on the taskbar, it'll have a window. Or did you mean that it's in the taskbar notification area (aka the SysTray)? In which case, it'll still have a window.\nWin32 applications don't really have a \"main window\", except by convention (the main window is the one that calls PostQuitMessage in response to WM_DESTROY, causing the message loop to exit).\nWith the program running, run Spy++. To find all of the windows owned by a process, you should select Spy -> Processes from the main menu. This will display a tree of processes. From there, you can drill down to threads, and then to windows. This will tell you which windows the process has. Note down the window class and caption. With these, you can use FindWindow (or EnumWindows) to find the window handle in future.\nWith the window handle, you can send a WM_CLOSE or WM_SYSCOMMAND/SC_CLOSE (equivalent to clicking on the 'X' on the window caption) message. This ought to cause the program to shut down nicely.\nNote that I'm talking from a Win32 point-of-view here. You might need to use P/Invoke or other tricks to get this to work from a .NET program.\n",
"Here are some answers and clarifications:\nrpetrich:\nTried your method before and the problem is, I don't know the window name, it differs from user to user, version to version - just the exe name remains constant. All I have is the process name. And as you can see in the code above the MainWindowHandle of the process is 0.\nRoger:\nYes, I did mean the taskbar notification area - thanks for the clarification.\nI NEED to call PostQuitMessage. I just don't know how, given a processs only, and not a Window.\nCraig:\nI'd be glad to explain the situation: the application has a command line interface, allowing you to specify parameters that dictate what it would do and where will it save the results. But once it's running, the only way to stop it and get the results is right-click it in the tray notification are and select 'exit'.\nNow my users want to script/batch the app. They had absolutely no problem starting it from a batch (just specify the exe name and and a bunch of flags) but then got stuck with a running process. Assuming no one will change the process to provide an API to stop it while running (it's quite old), I need a way to artificially close it.\nSimilarly, on unattended computers, the script to start the process can be started by a task scheduling or operations control program, but there's no way to shut the process down.\nHope that clarifies my situation, and again, thanks everyone who's trying to help!\n"
] |
[
2,
1,
0
] |
[
"Question to clarify why you're attempting this: If the only user interface on the process is the system tray icon, why would you want to kill that and but leave the process running? How would the user access the process? And if the machine is \"unattended\", why concern yourself with the tray icon?\n"
] |
[
-1
] |
[
"c#",
"kill",
"process",
"winapi"
] |
stackoverflow_0000110336_c#_kill_process_winapi.txt
|
Q:
Distributed python
What is the best python framework to create distributed applications? For example to build a P2P app.
A:
I think you mean "Networked Apps"? Distributed means an app that can split its workload among multiple worker clients over the network.
You probably want Twisted.
A:
You probably want Twisted. There is a P2P framework for Twisted called "Vertex". While not actively maintained, it does allow you to tunnel through NATs and make connections directly between users in a very abstract way; if there were more interest in this sort of thing I'm sure it would be more actively maintained.
A:
You could checkout pyprocessing which will be included in the standard library as of 2.6. It allows you to run tasks on multiple processes using an API similar to threading.
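A minimal sketch of that API, using multiprocessing (the name pyprocessing took when it entered the standard library):
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    pool = Pool(processes=4)             # four worker processes
    print(pool.map(square, range(10)))   # the work is split across the workers
    pool.close()
    pool.join()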
A:
You could download the source of BitTorrent for starters and see how they did it.
http://download.bittorrent.com/dl/
A:
If it's something where you're going to need tons of threads and need better concurrent performance, check out Stackless Python. Otherwise you could just use the SOAP or XML-RPC protocols. In response to Ben's post, if you don't want to look over the BitTorrent source, you could just look at the article on the BitTorrent protocol.
|
Distributed python
|
What is the best python framework to create distributed applications? For example to build a P2P app.
|
[
"I think you mean \"Networked Apps\"? Distributed means an app that can split its workload among multiple worker clients over the network.\nYou probably want.\nTwisted\n",
"You probably want Twisted. There is a P2P framework for Twisted called \"Vertex\". While not actively maintained, it does allow you to tunnel through NATs and make connections directly between users in a very abstract way; if there were more interest in this sort of thing I'm sure it would be more actively maintained.\n",
"You could checkout pyprocessing which will be included in the standard library as of 2.6. It allows you to run tasks on multiple processes using an API similar to threading.\n",
"You could download the source of BitTorrent for starters and see how they did it.\nhttp://download.bittorrent.com/dl/\n",
"If it's something where you're going to need tons of threads and need better concurrent performance, check out Stackless Python. Otherwise you could just use the SOAP or XML-RPC protocols. In response to Ben's post, if you don't want to look over the BitTorrent source, you could just look at the article on the BitTorrent protocol.\n"
] |
[
9,
3,
2,
1,
1
] |
[] |
[] |
[
"distributed",
"python"
] |
stackoverflow_0000094334_distributed_python.txt
|
Q:
Best content type to serve JSONP?
I have a webservice that when called without specifying a callback will return a JSON string using application/json as the content type.
When a callback is specified it will wrap the JSON string in a callback function, so it's not really valid JSON anymore. My question is, should I serve it as application/javascript in this case or still use application/json?
A:
Use application/javascript. In that way, clients can rely on the content-type without having to manually check whether a response has padding or not.
A:
Use application/json as per rfc4627.txt if what you return is plain JSON.
If you return JavaScript (which is really what JSONP is), then use application/javascript as per rfc4329.txt
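A sketch of that branching, assuming (purely for illustration) an ASP.NET handler; the payload and parameter name are made up:
using System.Web;

public class DataHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string json = "{\"value\":42}";  // illustrative payload
        string callback = context.Request.QueryString["callback"];

        if (string.IsNullOrEmpty(callback))
        {
            context.Response.ContentType = "application/json";  // plain JSON
            context.Response.Write(json);
        }
        else
        {
            // wrapped in a callback it is JavaScript, not JSON
            context.Response.ContentType = "application/javascript";
            context.Response.Write(callback + "(" + json + ");");
        }
    }
}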
|
Best content type to serve JSONP?
|
I have a webservice that when called without specifying a callback will return a JSON string using application/json as the content type.
When a callback is specified it will wrap the JSON string in a callback function, so it's not really valid JSON anymore. My question is, should I serve it as application/javascript in this case or still use application/json?
|
[
"Use application/javascript. In that way, clients can rely on the content-type without having to manually check whether a response has padding or not.\n",
"Use application/json as per rfc4627.txt if what you return is plain JSON.\nIf you return JavaScript (which is really what JSONP is), then use application/javascript as per rfc4329.txt \n"
] |
[
147,
122
] |
[] |
[] |
[
"javascript",
"json",
"jsonp"
] |
stackoverflow_0000111302_javascript_json_jsonp.txt
|
Q:
How to repaint a Word 2003 menubar
I have a Word 2003 .dot template that changes its menu based on the condition of the active document.
The DocumentChange, DocumentOpen and NewDocument events of Word.Application trigger setting the .Visible and .Enabled properties of CommandBarButton controls.
On switching active documents, controls exposed by changing the Visible property display correctly, but text buttons which have been enabled/disabled do not change appearance. You can show enabled controls by hovering over them, but the disabled ones do not repaint until you place a window in front.
Is there a simple way to send a repaint message to the menubar, to simulate hiding and exposing?
A:
You are playing with the visible & enabled properties of the controls. But did you try to hide/unhide the whole commandbar to refresh it?
application.CommandBars.ActiveMenuBar.visible = false
application.CommandBars.ActiveMenuBar.visible = true
|
How to repaint a Word 2003 menubar
|
I have a Word 2003 .dot template that changes its menu based on the condition of the active document.
The DocumentChange, DocumentOpen and NewDocument events of Word.Application trigger setting the .Visible and .Enabled properties of CommandBarButton controls.
On switching active documents, controls exposed by changing the Visible property display correctly, but text buttons which have been enabled/disabled do not change appearance. You can show enabled controls by hovering over them, but the disabled ones do not repaint until you place a window in front.
Is there a simple way to send a repaint message to the menubar, to simulate hiding and exposing?
|
[
"You are playing with the visible & enabled properties of the controls. But did you try to hide/unhide the whole commandbar to refresh it? \napplication.CommandBars.ActiveMenuBar.visible = false\napplication.CommandBars.ActiveMenuBar.visible = true\n\n"
] |
[
1
] |
[] |
[] |
[
"ms_word",
"vba"
] |
stackoverflow_0000107254_ms_word_vba.txt
|
Q:
Changing another Process Locale
From my own "key logger like" process I figured out that another process's locale is wrong (i.e. by sniffing a few keys, I figured out what the foreground process's locale should be, while it is set to something else). What's the best way to change it?
A:
I'd use setlocale from within that process to change it, and notify the process about this with some form of IPC, such as:
signals
sockets
pipes
sent from the process that knows the correct locale.
You didn't specify operating system or anything, but in Linux this is quite hard unless the target process is willing to help (i.e. there's some IPC mechanism available where you can ask the process to do it for you)
What you can do is attach to the process, like a debugger or strace does, and then make the appropriate call (like setlocale()).
The result on the target process is of course undetermined since it probably doesn't expect to get its locale changed under its feet :)
|
Changing another Process Locale
|
From my own "key logger like" process I figured out that another process Locale is wrong (i.e. by sniffing few keys, I figured out that the foreground process Locale should be something while it is set to another). What's the best way to do this?
|
[
"I'd use setLocale from within that process to change it, and notify the process about this with some form of IPC like:\n\nsignals\nsockets\npipes\n\nfrom the process who knows\n",
"You didn't specify operating system or anything, but in Linux this is quite hard unless the target process is willing to help (i.e. there's some IPC mechanism available where you can ask the process to do it for you)\nWhat you can do is to attach to the process, like a debugger or strace does, and the call the appropriate system call (like setlocale())\nThe result on the target process is of course undetermined since it probably doesn't expect to get its locale changed under its feet :)\n"
] |
[
2,
1
] |
[] |
[] |
[
"process"
] |
stackoverflow_0000111339_process.txt
|
Q:
What are some excellent examples of user sign-up forms on the web?
I'm trying to get a sampling of what people think are the best sign-up forms. Good design, usability. Smart engineering. Helpful feedback.
A:
One of my all-time fave sign-up forms was the original Vox one, which has since been changed; there was a great break-down of it published online, and it goes into the things that made it so great to me. How they implemented the CSS layout of their forms, how they used in-form validation with pop-up tips, etc. -- it was nice.
A:
Two good links to start with:
CSS-Based Forms: Modern Solutions
Label Placement in Forms
A:
I like Geni's one (www.geni.com). It's an example of a signup form that doesn't feel like one. You can get started straight away with the site, and are able to add further information as and when you want to.
A:
I think that Reddit's registration is pretty good. If you attempt to use an action that requires you to be logged in, it will pop up in front all Javascripty. It just requires your username and password, and takes just a few seconds.
A:
Surprisingly enough, my all-time favorite, of ones I've encountered in the wild, is Dell's, on their IdeaStorm.
If you click on a control that requires a login (to vote up an idea, for example), it automatically refocuses on the login element. If you don't already have an account you can hit the 'register' tab and no page load is required.
The register form is totally lightweight (four fields I think) and uses AJAX to check if the name is already taken. Once you register you're automatically logged in.
Bottom line, it's visually compact, asks for a minimal amount of information, and lets you login or register without ever leaving the original page.
A:
Is it vain to suggest my own? It's not perfect, but I think it's a good mix of simple, friendly, and optionally thorough:
https://www.woot.com/User/Register.aspx
A:
37signals' Screens Around Town column often has interesting ones. Worth a peek.
A:
There are some nice shots of sign up forms in the flickr set to go along with Luke Wroblewski's "Web Form Design" book.
(which is jolly good - worth picking up if you're interested in this sort of thing).
A:
The perfect example of a login form, in my opinion, is the one on 2chan. Read the linked Wikipedia article to understand.
A:
A couple of examples I find interesting are Tripit, a site for organizing your travel plans. Although there is a link to Sign-up for the service the easiest and quickest way is to forward a confirmation email from a travel service (orbitz, travelocity, united.com, hertz.com etc), doing this will automatically sign you up and get you going (once you log in to the site it will ask for more info).
Another quick and easy registration is Marco Arment's Instapaper. All you need is to fill in your email address or username.
|
What are some excellent examples of user sign-up forms on the web?
|
I'm trying to get a sampling of what people think are the best sign-up forms. Good design, usability. Smart engineering. Helpful feedback.
|
[
"One of my all-time fave sign-up forms was the original Vox one, which has since been changed; there was a great break-down of it published online, and it goes into the things that made it so great to me. How they implemented the CSS layout of their forms, how they used in-form validation with pop-up tips, etc. -- it was nice.\n",
"Two good links to start with:\nCSS-Based Forms: Modern Solutions\nLabel Placement in Forms\n",
"I like Geni's one (www.geni.com). It's an example of a signup form that doesn't feel like one. You can get started straight away with the site, and are able to add further information as an when you want to.\n",
"I think that Reddit's registration is pretty good. If you attempt to use an action that requires you to be logged in it will pop up in front all Javascripty. It just requires your username and password, and just takes a few second.\n",
"Surprisingly enough, my all-time favorite, of ones I've encountered in the wild, is Dell's, on their IdeaStorm.\nIf you click on a control that requires a login (to vote up an idea, for example), it automatically refocuses on the login element. If you don't already have an account you can hit the 'register' tab and no page load is required.\nThe register form is totally lightweight (four fields I think) and uses AJAX to check if the name is already taken. Once you register you're automatically logged in.\nBottom line, it's visually compact, asks for a minimal amount of information, and lets you login or register without ever leaving the original page.\n",
"Is it vain to suggest my own? It's not perfect, but I think it's a good mix of simple, friendly, and optionally thorough:\nhttps://www.woot.com/User/Register.aspx\n",
"37signals' Screens Around Town column often has interesting ones. Worth a peek.\n",
"There are some nice shots of sign up forms in the flickr set to go along with Luke Wroblewski's \"Web Form Design\" book.\n(which is jolly good - worth picking up if you're interested in this sort of thing).\n",
"The perfect example of a login form, in my opinion, is the one on 2chan. Read linked wikipedia article to understand.\n",
"A couple of examples I find interesting are Tripit, a site for organizing your travel plans. Although there is a link to Sign-up for the service the easiest and quickest way is to forward a confirmation email from a travel service (orbitz, travelocity, united.com, hertz.com etc), doing this will automatically sign you up and get you going (once you log in to the site it will ask for more info).\nAnother quick and easy registration is Marco Arment's Instapaper. All you need is to fill in your email address or username.\n"
] |
[
5,
3,
3,
2,
2,
1,
1,
1,
1,
0
] |
[] |
[] |
[
"css",
"html",
"javascript",
"usability"
] |
stackoverflow_0000105720_css_html_javascript_usability.txt
|
Q:
How does Jan Willem Klop's "(L L L...)" Y combinator work?
I understand what a Y Combinator is, but I don't understand this example of a "novel" combinator, from the Wikipedia page:
Yk = (L L L L L L L L L L L L L L L L L L L L L L L L L L)
Where:
L = λabcdefghijklmnopqstuvwxyzr. (r (t h i s i s a f i x e d p o i n t c o m b i n a t o r))
How does this work?
A:
The essence of a fixed-point combinator C is that C f reduces to f (C f). It doesn't matter what you take for C as long as it does this. So instead of
(\y f. f (y y f)) (\y f. f (y y f))
you can just as well take
(\y z f. f (y y y f)) (\y z f. f (y y y f)) (\y z f. f (y y y f))
Basically you need something of the form
C t1 t2 ... tN
where ti = C for some i and
C = \x1 x2 .. xN f. f (xi u1 u2 ... xi ... u(N-1) f)
The other terms tj and uj are not actually "used". You can see that Klop's L has this form (although he uses the fact that all ti are L such that the second xi can also be any other xj).
|
How does Jan Willem Klop's "(L L L...)" Y combinator work?
|
I understand what a Y Combinator is, but I don't understand this example of a "novel" combinator, from the Wikipedia page:
Yk = (L L L L L L L L L L L L L L L L L L L L L L L L L L)
Where:
L = λabcdefghijklmnopqstuvwxyzr. (r (t h i s i s a f i x e d p o i n t c o m b i n a t o r))
How does this work?
|
[
"The essence of a fixed-point combinator C is that C f reduces to f (C f). It doesn't matter what you take for C as long as does this. So instead of\n(\\y f. f (y y f)) (\\y f. f (y y f))\n\nyou can just as well take\n(\\y z f. f (y y y f)) (\\y z f. f (y y y f)) (\\y z f. f (y y y f))\n\nBasically you need something of the form\nC t1 t2 ... tN\n\nwhere ti = C for some i and\nC = \\x1 x2 .. xN f. f (xi u1 u2 ... xi ... u(N-1) f)\n\nThe other terms tj and uj are not actually \"used\". You can see that Klop's L has this form (although he uses the fact that all ti are L such that the second xi can also be any other xj).\n"
] |
[
12
] |
[] |
[] |
[
"computer_science",
"functional_programming",
"lisp"
] |
stackoverflow_0000111295_computer_science_functional_programming_lisp.txt
|
Q:
Shut-down script on Windows to delete a registry key?
EDIT: This was formerly more explicitly titled: - "Best solution to stop Kontiki's KHOST.EXE from loading automatically at start-up on Windows XP?"
Essentially, whenever the 40D application is run it sets up khost.exe to automatically start up with Windows. This is annoying, as it increases my boot-up time by a couple of minutes, and I don't even use the P2P aspect of 4OD anyway.
The registry keys that are set are:
Command: C:\Program Files\Kontiki\KHost.exe -all
Description: kdx
Location: HKU\S-1-5-21-1757981266-1960408961-839522115-1003\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
Name: kdx
Setting ID:
User: LAPTOP\Me
Command: "C:\Program Files\Kontiki\KHost.exe" -all
Description: 4oD
Location: HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
Name: 4oD
Setting ID:
User: All Users
I'm assuming some kind of start-up or shut-down script to delete these registry keys would be the best solution, but I'm not that up with .vbs or .bat scripting or where I'd put them to automatically run at an appropriate time.
I know there is a TV On-Demand Monitor application, but I don't really need to be running yet another process, I just need to delete the registry keys as I describe above.
A:
What I ended up doing in the end:
1) Stopped 40D from the task tray with a right-click > exit which terminated the Khost.exe process.
2) Opened Start > All Programs > Administrative Tools > Services and stopped KService then set the Startup Type to 'Manual'.
3) Created a ShutdownScript.vbs with the following content:
Set SH = CreateObject("WScript.Shell")
RemoveRegKey "HKU\S-1-5-21-1757981266-1960408961-839522115-1003\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\kdx"
RemoveRegKey "HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\kdx"
RemoveRegKey "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\4oD"
Shutdown
Set Shell = Nothing
Set SH = Nothing
WScript.Quit
Sub RemoveRegKey(sKey)
On Error Resume Next
SH.RegDelete sKey
End Sub
Sub Shutdown()
SH.Run "shutdown -s -t 1", 0, TRUE
End Sub
4) Put a shortcut to the script in my Start Menu and now use that to shut the PC down.
Now 40D will work when I need it, and all I have to do is quit it and shut down with the script to stop it auto-starting every time I boot up the PC.
THANKS FOR ALL YOUR HELP WITH THIS! :)
A:
Why not just copy the executable to some other name, and put a do-nothing exe in its place. Then change your shortcuts to the copied and renamed EXE. If the program is sensitive to its name, then point your shortcuts to a VBS file to temporarily rename the EXE file.
A:
for the vb script you would use something like this:
Dim WSHShell
Set WSHShell = WScript.CreateObject("WScript.Shell")
'repeat the line below for each key to delete
WSHShell.RegDelete "[Location of Key]"
Just drop the code into a text file and re-name it something like shutdown.vbs.
As for when to run it, if you are in a corporate environment you could use a group policy and set it as a machine shutdown script. Alternatively, see this page here about adding it manually
A:
Another method:
Create a VBS file that runs the program and then deletes the registry keys.
Set objShell = CreateObject("WScript.Shell")
objShell.Exec("C:\Program Files\Kontiki\KHost.exe")
strRoot = "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\4oD"
objShell.RegDelete strRoot
...
And point your shortcuts at that.
A:
May I suggest you give AutoIt (http://www.autoitscript.com/autoit3/) a try: a freeware scripting language designed for automating the Windows GUI and general scripting.
If you choose to use it, the AutoIt code for your need would be a 2-liner:
RegDelete("YourKey", "YourValue");
ShutDown(1);
And you can compile it into a standalone exe that can run on any computer (no runtime library needed)
|
Shut-down script on Windows to delete a registry key?
|
EDIT: This was formerly more explicitly titled: - "Best solution to stop Kontiki's KHOST.EXE from loading automatically at start-up on Windows XP?"
Essentially, whenever the 40D application is run it sets up khost.exe to automatically start-up with Windows. This is annoying as it increases my boot up time by a couple of minutes and I don't even use the P2P aspect of 4OD anyway.
The registry keys that are set are:
Command: C:\Program Files\Kontiki\KHost.exe -all
Description: kdx
Location: HKU\S-1-5-21-1757981266-1960408961-839522115-1003\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
Name: kdx
Setting ID:
User: LAPTOP\Me
Command: "C:\Program Files\Kontiki\KHost.exe" -all
Description: 4oD
Location: HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
Name: 4oD
Setting ID:
User: All Users
I'm assuming some kind of start-up or shut-down script to delete these registry keys would be the best solution, but I'm not that up with .vbs or .bat scripting or where I'd put them to automatically run at an appropriate time.
I know there is a TV On-Demand Monitor application, but I don't really need to be running yet another process, I just need to delete the registry keys as I describe above.
|
[
"What I ended up doing in the end:\n1) Stopped 40D from the task tray with a right-click > exit which terminated the Khost.exe process.\n2) Opened Start > All Programs > Administrative Tools > Services and stopped KService then set the Startup Type to 'Manual'.\n3) Created a ShutdownScript.vbs with the following content:\nSet SH = CreateObject(\"WScript.Shell\")\n\nRemoveRegKey \"HKU\\S-1-5-21-1757981266-1960408961-839522115-1003\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\\kdx\"\nRemoveRegKey \"HKEY_CURRENT_USER\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\\kdx\"\nRemoveRegKey \"HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\\4oD\"\n\nShutdown\n\nSet Shell = Nothing\nSet SH = Nothing\nWScript.Quit\n\nSub RemoveRegKey(sKey)\n On Error Resume Next\n SH.RegDelete sKey\nEnd Sub\n\nSub Shutdown()\n SH.Run \"shutdown -s -t 1\", 0, TRUE\nEnd Sub\n\n4) Put a shortcut to the script in my Start Menu and now use that to shut the PC down.\nNow 40D will work when I need it, and all I have to do is quit it and shutdown with the script to stop it auto-starting everytime I boot up the PC.\nTHANKS FOR ALL YOUR HELP WITH THIS! :)\n",
"Why not just copy the executable to some other name, and put a do-nothing exe in its place. Then change your shortcuts to the copied and renamed EXE. If the program is sensitive to its name, then point your shortcuts to a VBS file to temporarily rename the EXE file.\n",
"for the vb script you would use something like this:\nDim WSHShell\nSet WSHShell = WScript.CreateObject(\"WScript.Shell\")\n'repeat the line below for each key to delete \nWSHShell.RegDelete \"[Location of Key]\"\n\nJust drop the code into a text file and re-name it something like shutdown,vbs.\nAs for when to run it, if you are in a corporate environment you could use a group policy and set it as a machine shutdown script. Alternatively, see this page here about adding it manually\n",
"Another method:\nCreate a VBS file that runs the program and then deletes the registry keys.\nSet objShell = CreateObject(\"WScript.Shell\") \n\nobjShell.Exec(\"C:\\Program Files\\Kontiki\\KHost.exe\")\n\nstrRoot = \"HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\\4oD\" \nstrDelete = objShell.RegDelete(strRoot) \n...\n\nAnd point your shortcuts at that.\n",
"Should I suggest you give a try to AutoIt (http://www.autoitscript.com/autoit3/), a freeware scripting language designed for automating the Windows GUI and general scripting.\nIf you choose to use it, the AutoIt code for your need would be a 2-liner:\nRegDelete(\"YourKey\", \"YourValue\");\nShutDown(1);\n\nAnd you can compile it into a standalone exe that can run on any computer (no runtime library needed)\n"
] |
[
2,
1,
1,
1,
1
] |
[] |
[] |
[
"batch_file",
"registry",
"scripting",
"windows"
] |
stackoverflow_0000111097_batch_file_registry_scripting_windows.txt
|
Q:
How to stream binary data to standard output in .NET?
I'm trying to stream binary data to standard output in .NET, but the Console class only lets you write characters. I want to use the output with redirection. Is there a way to do this?
A:
You can access the output stream using Console.OpenStandardOutput.
using System;
using System.IO;
using System.Text;

static void Main(string[] args) {
    MemoryStream data = new MemoryStream(Encoding.UTF8.GetBytes("Some data"));
    using (Stream console = Console.OpenStandardOutput()) {
        data.CopyTo(console);
    }
}
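A side note on portability: Stream.CopyTo only exists from .NET 4 onward; on older versions, copy manually by looping Read into a byte buffer and writing the bytes out. Because OpenStandardOutput returns the raw stream, the bytes bypass the Console's text encoding, which is what makes redirection binary-safe.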
|
How to stream binary data to standard output in .NET?
|
I'm trying to stream binary data to standard output in .NET, but the Console class only lets you write characters. I want to use the output with redirection. Is there a way to do this?
|
[
"You can access the output stream using Console.OpenStandardOutput. \n static void Main(string[] args) {\n MemoryStream data = new MemoryStream(Encoding.UTF8.GetBytes(\"Some data\"));\n using (Stream console = Console.OpenStandardOutput()) {\n data.CopyTo(console);\n }\n }\n\n"
] |
[
6
] |
[] |
[] |
[
".net",
"interop"
] |
stackoverflow_0000111387_.net_interop.txt
|
Q:
Is it a problem if multiple different accepting sockets use the same OpenSSL context?
Is it OK if the same OpenSSL context is used by several different accepting sockets?
In particular I'm using the same boost::asio::ssl::context with 2 different listening sockets.
A:
Yep, SSL_CTX--which I believe is the underlying data structure--is just a global data structure used by your program. From ssl(3):
SSL_CTX (SSL Context)
That's the global context structure which is created by a server or client once per program life-time and which holds mainly default values for the SSL structures which are later created for the connections.
A:
It should be OK.
For example a typical RFC4217 FTPS server will use the same SSL context for the control socket and all data sockets within that session.
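For illustration, a minimal sketch of the pattern under discussion, assuming a reasonably recent Boost; the certificate file names and port numbers are placeholders:

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

using boost::asio::ip::tcp;
namespace ssl = boost::asio::ssl;

int main()
{
    boost::asio::io_service io;                    // spelled io_context in newer Boost
    ssl::context ctx(ssl::context::sslv23);        // one context for the whole process
    ctx.use_certificate_chain_file("server.pem");  // placeholder file names
    ctx.use_private_key_file("server.pem", ssl::context::pem);

    // Two listening sockets; every accepted connection borrows the same context.
    tcp::acceptor a1(io, tcp::endpoint(tcp::v4(), 4433));
    tcp::acceptor a2(io, tcp::endpoint(tcp::v4(), 4434));

    ssl::stream<tcp::socket> s1(io, ctx);
    ssl::stream<tcp::socket> s2(io, ctx);
    a1.accept(s1.lowest_layer());
    a2.accept(s2.lowest_layer());
    s1.handshake(ssl::stream_base::server);
    s2.handshake(ssl::stream_base::server);
}

The context only holds defaults (certificates, verification mode, cipher list), so sharing it across acceptors is the normal usage.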
|
Is it a problem if multiple different accepting sockets use the same OpenSSL context?
|
Is it OK if the same OpenSSL context is used by several different accepting sockets?
In particular I'm using the same boost::asio::ssl::context with 2 different listening sockets.
|
[
"Yep, SSL_CTX--which I believe is the underlying data structure--is just a global data structure used by your program. From ssl(3):\n\nSSL_CTX (SSL Context)\nThat's the global context structure which is created by a server or client once per program life-time and which holds mainly default values for the SSL structures which are later created for the connections.\n\n",
"It should be OK.\nFor example a typical RFC4217 FTPS server will use the same SSL context for the control socket and all data sockets within that session.\n"
] |
[
2,
1
] |
[] |
[] |
[
"boost_asio",
"c++",
"openssl",
"ssl"
] |
stackoverflow_0000111391_boost_asio_c++_openssl_ssl.txt
|
Q:
Optimal multiplayer maze generation algorithm
I'm working on a simple multiplayer game in which 2-4 players are placed at separate entrypoints in a maze and need to reach a goal point. Generating a maze in general is very easy, but in this case the goal of the game is to reach the goal before everyone else and I don't want the generation algorithm to drastically favor one player over others.
So I'm looking for a maze generation algorithm where the optimal path for each player from the startpoint to the goal is no more than 10% more steps than the average path. This way the players are on more or less an equal playing field. Can anyone think up such an algorithm?
(I've got one idea as it stands, but it's not well thought out and seems far less than optimal -- I'll post it as an answer.)
A:
An alternative to freespace's answer would be to generate a random maze, then assign each cell a value representing the number of moves to reach the end of the maze (you can do both at once if you decide that you're starting at the 'end'). Then pick a distance (perhaps the highest one with n points at that distance?) and place the players at squares with that value.
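A rough sketch of that labelling pass, assuming a grid maze with four-way movement (the grid encoding and function name are made up for illustration):

#include <queue>
#include <utility>
#include <vector>

// dist[r][c] = moves from cell (r,c) to the goal; -1 marks walls or unreachable cells.
std::vector<std::vector<int>> label_from_goal(
        const std::vector<std::vector<bool>>& open,   // true = walkable
        int goal_r, int goal_c)
{
    int rows = (int)open.size(), cols = (int)open[0].size();
    std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, -1));
    std::queue<std::pair<int, int>> q;
    dist[goal_r][goal_c] = 0;
    q.push({goal_r, goal_c});
    const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
    while (!q.empty()) {
        std::pair<int, int> cell = q.front(); q.pop();
        for (int k = 0; k < 4; ++k) {
            int nr = cell.first + dr[k], nc = cell.second + dc[k];
            if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                    && open[nr][nc] && dist[nr][nc] == -1) {
                dist[nr][nc] = dist[cell.first][cell.second] + 1;
                q.push({nr, nc});
            }
        }
    }
    return dist;  // any N cells sharing one distance value make fair start points
}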
A:
What about first selecting the positions of the players and the goal, together with equal-length paths, and afterwards building a maze that respects those predefined paths? If the paths do not intersect, this should work easily, I presume.
A:
I would approach this by setting the goal and each player's entry point, then generating paths of similar length for each of them to the goal. Then I would start adding false branches along these paths, being careful to avoid linking to other player's paths, or having a branch connect back to the path. So essentially every branch is a dead end.
This way, you guarantee the paths are similar in length. However it won't allow players to interact with each other. You can however put this in, by creating links between branches such that branch entry points on either path are at a similar distance away from the goal. And on this branch you can branch off more dead ends for fun and profit :-)
A:
The easiest solution I can come up with is to randomly generate an entire maze like normal, then randomly pick the goal point and player startpoints. Once this is done, calculate the shortest path from each startpoint to the goal. Find the average and start 'smoothing' (remove/move barriers -- don't know how this will work) the paths that are significantly above it, until all of the paths are within the proper margin. In addition, it could be possible to take the ones that are significantly below the average and insert additional barriers.
A:
Pick your exit point somewhere in the middle
Start your N paths from there, adding 1 to each path per loop,
until they are as long as you want them to be.
There are your N start points, and they are all the same length.
Add additional branches off of the lines, until the maze is full.
|
Optimal multiplayer maze generation algorithm
|
I'm working on a simple multiplayer game in which 2-4 players are placed at separate entrypoints in a maze and need to reach a goal point. Generating a maze in general is very easy, but in this case the goal of the game is to reach the goal before everyone else and I don't want the generation algorithm to drastically favor one player over others.
So I'm looking for a maze generation algorithm where the optimal path for each player from the startpoint to the goal is no more than 10% more steps than the average path. This way the players are on more or less an equal playing field. Can anyone think up such an algorithm?
(I've got one idea as it stands, but it's not well thought out and seems far less than optimal -- I'll post it as an answer.)
|
[
"An alternative to freespace's answer would be to generate a random maze, then assign each cell a value representing the number of moves to reach the end of the maze (you can do both at once if you decide that you're starting at the 'end'). Then pick a distance (perhaps the highest one with n points at that distance?) and place the players at squares with that value.\n",
"What about first selecting the position of the players and goal and an equal length path and afterwards build a maze respecting the defined paths? If the paths do not intersect this should easily work, I presume\n",
"I would approach this by setting the goal and each player's entry point, then generating paths of similar length for each of them to the goal. Then I would start adding false branches along these paths, being careful to avoid linking to other player's paths, or having a branch connect back to the path. So essentially every branch is a dead end.\nThis way, you guarantee the paths are similar in length. However it won't allow players to interact with each other. You can however put this in, by creating links between branches such that branch entry points on either path are at a similar distance away from the goal. And on this branch you can branch off more dead ends for fun and profit :-) \n",
"The easiest solution I can come up with is to randomly generate an entire maze like normal, then randomly pick the goal point and player startpoints. Once this is done, calculate the shortest path from each startpoint to the goal. Find the average and start 'smoothing' (remove/move barriers -- don't know how this will work) the paths that are significantly above it, until all of the paths are within the proper margin. In addition, it could be possible to take the ones that are significantly below the average and insert additional barriers.\n",
"Pick your exit point somewhere in the middle\nStart your N paths from there, adding 1 to each path per loop,\nuntil they are as long as you want them to be.\nThere are your N start points, and they are all the same length.\nAdd additional branches off of the lines, until the maze is full.\n"
] |
[
8,
1,
1,
0,
0
] |
[] |
[] |
[
"algorithm",
"language_agnostic",
"maze"
] |
stackoverflow_0000108000_algorithm_language_agnostic_maze.txt
|
Q:
How to configure asp.net process to run under a domain account?
I would like to configure asp.net process to run under an account with domain credentials.
My requirement is to access some files on a network share.
What are the steps for this? Is there any built-in account I can use?
A:
Check this article from MSDN.
How To: Create a Service Account for an ASP.NET 2.0 Application
This How To shows you how to create and configure a custom least-privileged service account to run an ASP.NET Web application. By default, an ASP.NET application on Microsoft Windows Server 2003 and IIS 6.0 runs using the built-in Network Service account. In production environments, you usually run your application using a custom service account. By using a custom service account, you can audit and authorize your application separately from others, and your application is protected from any changes made to the privileges or permissions associated with the Network Service account. To use a custom service account, you must configure the account by running the Aspnet_regiis.exe utility with the -ga switch, and then configure your application to run in a custom application pool that uses the custom account's identity.
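For reference, the registration step the article describes boils down to one command (the account name here is just an example):
aspnet_regiis.exe -ga MYDOMAIN\MyAppAccount
After that, create an application pool whose identity is that account and assign the web application to it.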
|
How to configure asp.net process to run under a domain account?
|
I would like to configure asp.net process to run under an account with domain credentials.
My requirement is to access some files on a network share.
What are the steps for this? Is there any built-in account I can use?
|
[
"Check this article from MSDN.\nHow To: Create a Service Account for an ASP.NET 2.0 Application\nThis How To shows you how to create and configure a custom least-privileged service account to run an ASP.NET Web application. By default, an ASP.NET application on Microsoft Windows Server 2003 and IIS 6.0 runs using the built-in Network Service account. In production environments, you usually run your application using a custom service account. By using a custom service account, you can audit and authorize your application separately from others, and your application is protected from any changes made to the privileges or permissions associated with the Network Service account. To use a custom service account, you must configure the account by running the Aspnet_regiis.exe utility with the -ga switch, and then configure your application to run in a custom application pool that uses the custom account's identity.\n"
] |
[
9
] |
[] |
[] |
[
".net",
"asp.net",
"iis_6"
] |
stackoverflow_0000111410_.net_asp.net_iis_6.txt
|
Q:
How do I set the thickness of a line in VB.NET
In VB.NET I'm drawing an ellipse using some code like this.
aPen = New Pen(Color.Black)
g.DrawEllipse(aPen, n.boxLeft, n.boxTop, n.getWidth(), n.getHeight)
But I want to set the thickness of the line. How do I do it? Is it a property of the Pen or an argument to the DrawEllipse method?
(NB: For some reason, the help in Visual Studio is failing me, so I've got to hit the web anyway. Thought I'd try here first.)
A:
Use the pen's Width property.
aPen.Width = 10.0F
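Alternatively, System.Drawing.Pen has a constructor overload that takes the width directly, so the two lines can be collapsed into one:
aPen = New Pen(Color.Black, 10.0F)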
|
How do I set the thickness of a line in VB.NET
|
In VB.NET I'm drawing an ellipse using some code like this.
aPen = New Pen(Color.Black)
g.DrawEllipse(aPen, n.boxLeft, n.boxTop, n.getWidth(), n.getHeight)
But I want to set the thickness of the line. How do I do it? Is it a property of the Pen or an argument to the DrawEllipse method?
(NB: For some reason, the help in Visual Studio is failing me, so I've got to hit the web anyway. Thought I'd try here first.)
|
[
"Use the pen's Width property.\naPen.Width = 10.0F\n\n"
] |
[
8
] |
[] |
[] |
[
".net",
"drawing",
"vb.net"
] |
stackoverflow_0000111424_.net_drawing_vb.net.txt
|
Q:
looking for a tuple matching algorithm
I need to implement an in-memory tuple-of-strings matching feature in C. There will be a large list of tuples associated with different actions and a high volume of events to be matched against the list.
List of tuples:
("one", "four")
("one")
("three")
("four", "five")
("six")
event ("one", "two", "three", "four") should match list item ("one", "four") and ("one") and ("three") but not ("four", "five") and not ("six")
my current approach uses a map of all tuple field values as keys for lists of each tuple using that value. there is a lot of redundant hashing and list insertion.
is there a right or classic way to do this?
A:
If you only have a small number of possible tuple values it would make sense to write some sort of hashing function which could turn them into integer indexes for quick searching.
If there are < 32 values you could do something with bitmasks:
unsigned int hash(char *value){...}
typedef struct _tuple {
unsigned int bitvalues;
void * data;
} tuple;
tuple a,b,c,d;
a.bitvalues = hash("one");
a.bitvalues |= hash("four");
//a.data = something;
unsigned int event = 0;
//foreach value in event;
event |= hash(string_val);
// foreach tuple x: it matches if every bit of the tuple is set in the event
if((x->bitvalues & event) == x->bitvalues)
{
//matches
}
If there are too many values to do a bitmask solution you could have an array of linked lists. Go through each item in the event. If the item matches key_one, walk through the tuples with that first key and check the event for the second key:
typedef struct _tuple {
unsigned int key_one;
unsigned int key_two;
_tuple *next;
void * data;
} tuple;
tuple a,b,c,d;
a.key_one = hash("one");
a.key_two = hash("four");
tuple **list = calloc(/*number of hash buckets*/, sizeof *list); /* zeroed table of list heads */

//foreach tuple item
if(list[item->key_one])
    put item on the end of that bucket's list;
else
    list[item->key_one] = item;

//foreach event
    //foreach key
    if(item_ptr = list[key])
        while(item_ptr)   /* walk the whole chain */
        {
            if(!item_ptr->key_two || /*event also contains key_two*/)
                //match
            item_ptr = item_ptr->next;
        }
This code is in no way tested and probably has many small errors but you should get the idea. (one error that was corrected was the test condition for tuple match)
If event processing speed is of utmost importance it would make sense to iterate through all of your constructed tuples, count the number of occurrences and go through possibly re-ordering the key one/key two of each tuple so the most unique value is listed first.
A:
A possible solution would be to assign a unique prime number to each of the words.
Then if you multiply the words together in each tuple, then you have a number that represents the words in the list.
Divide the product for one list by the product for the other; if there is no remainder, then the second list is contained in the first.
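One caveat: the product grows very quickly, so with more than a handful of words per tuple it will overflow a 64-bit integer and you would need an arbitrary-precision type.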
A:
I don't know of any classical or right way to do this, so here is what I would do :P
It looks like you want to decide if A is a superset of B, using set theory jargon. One way you can do it is to sort A and B, and do a merge sort-esque operation on A and B, in that you try to find where in A a value in B goes. Those elements of B which are also in A, will have duplicates, and the other elements won't. Because both A and B are sorted, this shouldn't be too horrible.
For example, you take the first value of B, and walk A until you find its duplicate in A. Then you take the second value of B, and start walking A from where you left off previously. If you get to end of A without finding a match, then A is not a superset of B, and you return false.
If these tuples can stay sorted, then the sorting cost is only incurred once.
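In C++ this merge-style subset test is exactly what std::includes does for sorted ranges; a minimal sketch (string elements assumed, though interning the strings to integers first would be faster):

#include <algorithm>
#include <string>
#include <vector>

// True if every element of tuple_sorted appears in event_sorted.
// Both vectors must be sorted with the same ordering.
bool tuple_matches(const std::vector<std::string>& event_sorted,
                   const std::vector<std::string>& tuple_sorted)
{
    return std::includes(event_sorted.begin(), event_sorted.end(),
                         tuple_sorted.begin(), tuple_sorted.end());
}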
A:
If you have a smallish number of possible strings, you can assign an index to each and use bitmaps. That way a simple bitwise and will tell you if there's overlap.
If that's not practical, your inverted index setup is probably going to be hard to match for speed, especially if you only have to build it once. (does the list of tuples change at runtime?)
A:
public static void Main()
{
List<List<string>> tuples = new List<List<string>>();
string [] tuple = {"one", "four"};
tuples.Add(new List<string>(tuple));
tuple = new string [] {"one"};
tuples.Add(new List<string>(tuple));
tuple = new string [] {"three"};
tuples.Add(new List<string>(tuple));
tuple = new string[]{"four", "five"};
tuples.Add(new List<string>(tuple));
tuple = new string[]{"six"};
tuples.Add(new List<string>(tuple));
tuple = new string[] {"one", "two", "three", "four"};
List<string> checkTuple = new List<string>(tuple);
List<List<string>> result = new List<List<string>>();
foreach (List<string> ls in tuples)
{
bool ok = true;
foreach(string s in ls)
if(!checkTuple.Contains(s))
{
ok = false;
break;
}
if (ok)
result.Add(ls);
}
}
|
looking for a tuple matching algorithm
|
I need to implement an in-memory tuple-of-strings matching feature in C. There will be a large list of tuples associated with different actions and a high volume of events to be matched against the list.
List of tuples:
("one", "four")
("one")
("three")
("four", "five")
("six")
event ("one", "two", "three", "four") should match list item ("one", "four") and ("one") and ("three") but not ("four", "five") and not ("six")
my current approach uses a map of all tuple field values as keys for lists of each tuple using that value. there is a lot of redundant hashing and list insertion.
is there a right or classic way to do this?
|
[
"If you only have a small number of possible tuple values it would make sense to write some sort of hashing function which could turn them into integer indexes for quick searching.\nIf there are < 32 values you could do something with bitmasks:\nunsigned int hash(char *value){...}\n\ntypedef struct _tuple {\n unsigned int bitvalues;\n void * data\n} tuple;\n\ntuple a,b,c,d;\na.bitvalues = hash(\"one\");\na.bitvalues |= hash(\"four\");\n//a.data = something;\n\nunsigned int event = 0;\n//foreach value in event;\nevent |= hash(string_val);\n\n// foreach tuple\nif(x->bitvalues & test == test)\n{\n //matches\n}\n\nIf there are too many values to do a bitmask solution you could have an array of linked lists. Go through each item in the event. If the item matches key_one, walk through the tuples with that first key and check the event for the second key:\ntypedef struct _tuple {\n unsigned int key_one;\n unsigned int key_two;\n _tuple *next;\n void * data;\n} tuple;\n\ntuple a,b,c,d;\na.key_one = hash(\"one\");\na.key_two = hash(\"four\");\n\ntuple * list = malloc(/*big enough for all hash indexes*/\nmemset(/*clear list*/);\n\n//foreach touple item\nif(list[item->key_one])\n put item on the end of the list;\nelse\n list[item->key_one] = item;\n\n\n//foreach event\n //foreach key\n if(item_ptr = list[key])\n while(item_ptr.next)\n if(!item_ptr.key_two || /*item has key_two*/)\n //match\n item_ptr = item_ptr.next;\n\nThis code is in no way tested and probably has many small errors but you should get the idea. (one error that was corrected was the test condition for tuple match)\n\nIf event processing speed is of utmost importance it would make sense to iterate through all of your constructed tuples, count the number of occurrences and go through possibly re-ordering the key one/key two of each tuple so the most unique value is listed first. \n",
"A possible solution would be to assign a unique prime number to each of the words.\nThen if you multiply the words together in each tuple, then you have a number that represents the words in the list. \nDivide one list by another, and if you get an integer remainder, then the one list is contained in the other.\n",
"I don't know of any classical or right way to do this, so here is what I would do :P\nIt looks like you want to decide if A is a superset of B, using set theory jargon. One way you can do it is to sort A and B, and do a merge sort-esque operation on A and B, in that you try to find where in A a value in B goes. Those elements of B which are also in A, will have duplicates, and the other elements won't. Because both A and B are sorted, this shouldn't be too horrible.\nFor example, you take the first value of B, and walk A until you find its duplicate in A. Then you take the second value of B, and start walking A from where you left off previously. If you get to end of A without finding a match, then A is not a superset of B, and you return false.\nIf these tuples can stay sorted, then the sorting cost is only incurred once.\n",
"If you have a smallish number of possible strings, you can assign an index to each and use bitmaps. That way a simple bitwise and will tell you if there's overlap.\nIf that's not practical, your inverted index setup is probably going to be hard to match for speed, especially if you only have to build it once. (does the list of tuples change at runtime?)\n",
" public static void Main()\n {\n List<List<string>> tuples = new List<List<string>>();\n\n string [] tuple = {\"one\", \"four\"};\n tuples.Add(new List<string>(tuple));\n\n tuple = new string [] {\"one\"};\n tuples.Add(new List<string>(tuple));\n\n tuple = new string [] {\"three\"};\n tuples.Add(new List<string>(tuple));\n\n tuple = new string[]{\"four\", \"five\"};\n tuples.Add(new List<string>(tuple));\n\n tuple = new string[]{\"six\"};\n tuples.Add(new List<string>(tuple));\n\n tuple = new string[] {\"one\", \"two\", \"three\", \"four\"};\n\n List<string> checkTuple = new List<string>(tuple);\n\n List<List<string>> result = new List<List<string>>();\n\n foreach (List<string> ls in tuples)\n {\n bool ok = true;\n foreach(string s in ls)\n if(!checkTuple.Contains(s))\n {\n ok = false;\n break;\n }\n if (ok)\n result.Add(ls);\n }\n }\n\n"
] |
[
3,
2,
1,
0,
0
] |
[] |
[] |
[
"algorithm",
"c"
] |
stackoverflow_0000103989_algorithm_c.txt
|
Q:
Is it ok to have multiple threads writing the same values to the same variables?
I understand about race conditions and how with multiple threads accessing the same variable, updates made by one can be ignored and overwritten by others, but what if each thread is writing the same value (not different values) to the same variable; can even this cause problems? Could this code:
GlobalVar.property = 11;
(assuming that property will never be assigned anything other than 11), cause problems if multiple threads execute it at the same time?
A:
The problem comes when you read that state back, and do something about it. Writing is a red herring - it is true that as long as this is a single word most environments guarantee the write will be atomic, but that doesn't mean that a larger piece of code that includes this fragment is thread-safe. Firstly, presumably your global variable contained a different value to begin with - otherwise if you know it's always the same, why is it a variable? Second, presumably you eventually read this value back again?
The issue is that presumably, you are writing to this bit of shared state for a reason - to signal that something has occurred? This is where it falls down: when you have no locking constructs, there is no implied order of memory accesses at all. It's hard to point to what's wrong here because your example doesn't actually contain the use of the variable, so here's a trivialish example in neutral C-like syntax:
int x = 0, y = 0;
//thread A does:
x = 1;
y = 2;
if (y == 2)
print(x);
//thread B does, at the same time:
if (y == 2)
print(x);
Thread A will always print 1, but it's completely valid for thread B to print 0. The order of operations in thread A is only required to be observable from code executing in thread A - thread B is allowed to see any combination of the state. The writes to x and y may not actually happen in order.
This can happen even on single-processor systems, where most people do not expect this kind of reordering - your compiler may reorder it for you. On SMP even if the compiler doesn't reorder things, the memory writes may be reordered between the caches of the separate processors.
If that doesn't seem to answer it for you, include more detail of your example in the question. Without the use of the variable it's impossible to definitively say whether such a usage is safe or not.
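As an aside, a sketch of how that example is usually repaired with C++11 release/acquire atomics (std::atomic postdates this question, so this is purely illustrative):

#include <atomic>
#include <cstdio>

int x = 0;
std::atomic<int> y{0};

void thread_a()
{
    x = 1;                                  // plain write...
    y.store(2, std::memory_order_release);  // ...published by the release store
}

void thread_b()
{
    if (y.load(std::memory_order_acquire) == 2)
        std::printf("%d\n", x);             // now guaranteed to print 1
}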
A:
It depends on the work actually done by that statement. There can still be some cases where Something Bad happens - for example, if a C++ class has overloaded the = operator, and does anything nontrivial within that statement.
I have accidentally written code that did something like this with POD types (builtin primitive types), and it worked fine -- however, it's definitely not good practice, and I'm not confident that it's dependable.
Why not just lock the memory around this variable when you use it? In fact, if you somehow "know" this is the only write statement that can occur at some point in your code, why not just use the value 11 directly, instead of writing it to a shared variable?
(edit: I guess it's better to use a constant name instead of the magic number 11 directly in the code, btw.)
If you're using this to figure out when at least one thread has reached this statement, you could use a semaphore that starts at 1, and is decremented by the first thread that hits it.
A:
I would expect the result to be undetermined, in that it would vary from compiler to compiler, language to language, and OS to OS. So no, it is not safe.
Why would you want to do this, though? Adding a line to obtain a mutex lock is only one or two lines of code (in most languages) and would remove any possibility of a problem. If that is going to be too expensive, then you need to find an alternative way of solving the problem.
A:
In general, this is not considered a safe thing to do unless your system provides atomic operations (operations that are guaranteed to execute in a single cycle).
The reason is that while the "C" statement looks simple, there are often a number of underlying assembly operations taking place.
Depending on your OS, there are a few things you could do:
Take a mutual exclusion semaphore (mutex) to protect access.
In some OSes, you can temporarily disable preemption, which guarantees your thread will not be swapped out.
Some OSes provide a reader/writer semaphore, which is more performant than a plain old mutex.
A:
Here's my take on the question.
You have two or more threads running that write to a variable...like a status flag or something, where you only want to know if one or more of them was true. Then in another part of the code (after the threads complete) you want to check and see if at least one thread set that status... for example
bool flag = false
threadContainer tc
threadInputs inputs
check(input)
{
...do stuff to input
if(success)
flag = true
}
start multiple threads
foreach(i in inputs)
t = startthread(check, i)
tc.add(t) // Keep track of all the threads started
foreach(t in tc)
t.join( ) // Wait until each thread is done
if(flag)
print "One of the threads were successful"
else
print "None of the threads were successful"
I believe the above code would be OK, assuming you're fine with not knowing which thread set the status to true, and you can wait for all the multi-threaded stuff to finish before reading that flag. I could be wrong though.
|
Is it ok to have multiple threads writing the same values to the same variables?
|
I understand about race conditions and how with multiple threads accessing the same variable, updates made by one can be ignored and overwritten by others, but what if each thread is writing the same value (not different values) to the same variable; can even this cause problems? Could this code:
GlobalVar.property = 11;
(assuming that property will never be assigned anything other than 11), cause problems if multiple threads execute it at the same time?
|
[
"The problem comes when you read that state back, and do something about it. Writing is a red herring - it is true that as long as this is a single word most environments guarantee the write will be atomic, but that doesn't mean that a larger piece of code that includes this fragment is thread-safe. Firstly, presumably your global variable contained a different value to begin with - otherwise if you know it's always the same, why is it a variable? Second, presumably you eventually read this value back again?\nThe issue is that presumably, you are writing to this bit of shared state for a reason - to signal that something has occurred? This is where it falls down: when you have no locking constructs, there is no implied order of memory accesses at all. It's hard to point to what's wrong here because your example doesn't actually contain the use of the variable, so here's a trivialish example in neutral C-like syntax:\nint x = 0, y = 0;\n\n//thread A does:\nx = 1;\ny = 2;\nif (y == 2)\n print(x);\n\n//thread B does, at the same time:\nif (y == 2)\n print(x);\n\nThread A will always print 1, but it's completely valid for thread B to print 0. The order of operations in thread A is only required to be observable from code executing in thread A - thread B is allowed to see any combination of the state. The writes to x and y may not actually happen in order.\nThis can happen even on single-processor systems, where most people do not expect this kind of reordering - your compiler may reorder it for you. On SMP even if the compiler doesn't reorder things, the memory writes may be reordered between the caches of the separate processors.\nIf that doesn't seem to answer it for you, include more detail of your example in the question. Without the use of the variable it's impossible to definitively say whether such a usage is safe or not.\n",
"It depends on the work actually done by that statement. There can still be some cases where Something Bad happens - for example, if a C++ class has overloaded the = operator, and does anything nontrivial within that statement.\nI have accidentally written code that did something like this with POD types (builtin primitive types), and it worked fine -- however, it's definitely not good practice, and I'm not confident that it's dependable.\nWhy not just lock the memory around this variable when you use it? In fact, if you somehow \"know\" this is the only write statement that can occur at some point in your code, why not just use the value 11 directly, instead of writing it to a shared variable?\n(edit: I guess it's better to use a constant name instead of the magic number 11 directly in the code, btw.)\nIf you're using this to figure out when at least one thread has reached this statement, you could use a semaphore that starts at 1, and is decremented by the first thread that hits it.\n",
"I would expect the result to be undetermined. As in it would vary from compiler to complier, langauge to language and OS to OS etc. So no, it is not safe\nWHy would you want to do this though - adding in a line to obtain a mutex lock is only one or two lines of code (in most languages), and would remove any possibility of problem. If this is going to be two expensive then you need to find an alternate way of solving the problem\n",
"In General, this is not considered a safe thing to do unless your system provides for atomic operation (operations that are guaranteed to be executed in a single cycle).\nThe reason is that while the \"C\" statement looks simple, often there are a number of underlying assembly operations taking place.\nDepending on your OS, there are a few things you could do: \n\nTake a mutual exclusion semaphore (mutex) to protect access \nin some OS, you can temporarily disable preemption, which guarantees your thread will not swap out.\nSome OS provide a writer or reader semaphore which is more performant than a plain old mutex.\n\n",
"Here's my take on the question.\nYou have two or more threads running that write to a variable...like a status flag or something, where you only want to know if one or more of them was true. Then in another part of the code (after the threads complete) you want to check and see if at least on thread set that status... for example\nbool flag = false\nthreadContainer tc\nthreadInputs inputs\n\ncheck(input)\n{\n ...do stuff to input\n if(success)\n flag = true\n}\n\nstart multiple threads\nforeach(i in inputs) \n t = startthread(check, i)\n tc.add(t) // Keep track of all the threads started\n\nforeach(t in tc)\n t.join( ) // Wait until each thread is done\n\nif(flag)\n print \"One of the threads were successful\"\nelse\n print \"None of the threads were successful\"\n\nI believe the above code would be OK, assuming you're fine with not knowing which thread set the status to true, and you can wait for all the multi-threaded stuff to finish before reading that flag. I could be wrong though. \n"
] |
[
9,
3,
1,
1,
1
] |
[
"If the operation is atomic, you should be able to get by just fine. But I wouldn't do that in practice. It is better just to acquire a lock on the object and write the value.\n",
"Assuming that property will never be assigned anything other than 11, then I don't see a reason for assigment in the first place. Just make it a constant then.\nAssigment only makes sense when you intend to change the value unless the act of assigment itself has other side effects - like volatile writes have memory visibility side-effects in Java. And if you change state shared between multiple threads, then you need to synchronize or otherwise \"handle\" the problem of concurrency.\nWhen you assign a value, without proper synchronization, to some state shared between multiple threads, then there's no guarantees for when the other threads will see that change. And no visibility guarantees means that it it possible that the other threads will never see the assignt.\nCompilers, JITs, CPU caches. They're all trying to make your code run as fast as possible, and if you don't make any explicit requirements for memory visibility, then they will take advantage of that. If not on your machine, then somebody elses.\n"
] |
[
-1,
-1
] |
[
"multithreading"
] |
stackoverflow_0000072116_multithreading.txt
|
Q:
Strange call stack, could it be problem in asio's usage of openssl?
I have this strange call stack and I am stumped to understand why.
It seems to me that asio calls OpenSSL's read and then gets a negative return value (-37).
Asio then seems to use that value inside the memcpy function.
The function that causes this call stack is used hundreds of thousands of times without this error.
It happens only rarely, about once a week.
ulRead = (boost::asio::read(spCon->socket(), boost::asio::buffer(_requestHeader, _requestHeader.size()), boost::asio::transfer_at_least(_requestHeader.size()), error_));
Note that request header's size is exactly 3 bytes always.
Could anyone shed some light on possible reasons?
Note: I'm using boost asio 1.36
Here is the crashing call stack; the crash happens in memcpy because of the huge "count":
A:
A quick look at evp_lib.c shows that it tries to pull a length from the cipher context, and in your case gets a Very Bad Value(tm). It then uses this value to copy a string (which does the memcpy). My guess is something is trashing your cipher, be it a thread safety problem, or a reading more bytes into a buffer than allowed.
Relevant source:
int EVP_CIPHER_set_asn1_iv(EVP_CIPHER_CTX *c, ASN1_TYPE *type)
{
int i=0,j;
if (type != NULL)
{
j=EVP_CIPHER_CTX_iv_length(c);
OPENSSL_assert(j <= sizeof c->iv);
i=ASN1_TYPE_set_octetstring(type,c->oiv,j);
}
return(i);
}
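If the thread-safety guess is right, one thing worth checking is whether the OpenSSL locking callbacks were ever installed: in OpenSSL 0.9.x/1.0.x the library is not thread-safe until the application registers them (the requirement was removed in 1.1.0). A minimal pthreads sketch:

#include <openssl/crypto.h>
#include <pthread.h>

static pthread_mutex_t* locks;

static void locking_cb(int mode, int n, const char* /*file*/, int /*line*/)
{
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&locks[n]);
    else
        pthread_mutex_unlock(&locks[n]);
}

static unsigned long id_cb(void)
{
    return (unsigned long)pthread_self();  // assumes pthread_t converts to an integer
}

void init_openssl_locking(void)
{
    locks = new pthread_mutex_t[CRYPTO_num_locks()];
    for (int i = 0; i < CRYPTO_num_locks(); ++i)
        pthread_mutex_init(&locks[i], NULL);
    CRYPTO_set_locking_callback(locking_cb);
    CRYPTO_set_id_callback(id_cb);
}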
|
Strange call stack, could it be problem in asio's usage of openssl?
|
I have this strange call stack and I am stumped to understand why.
It seems to me that asio calls OpenSSL's read and then gets a negative return value (-37).
Asio then seems to use that value inside the memcpy function.
The function that causes this call stack is used hundreds of thousands of times without this error.
It happens only rarely, about once a week.
ulRead = (boost::asio::read(spCon->socket(), boost::asio::buffer(_requestHeader, _requestHeader.size()), boost::asio::transfer_at_least(_requestHeader.size()), error_));
Note that request header's size is exactly 3 bytes always.
Could anyone shed some light on possible reasons?
Note: I'm using boost asio 1.36
Here is the crashing call stack; the crash happens in memcpy because of the huge "count":
|
[
"A quick look at evp_lib.c shows that it tries to pull a length from the cipher context, and in your case gets a Very Bad Value(tm). It then uses this value to copy a string (which does the memcpy). My guess is something is trashing your cipher, be it a thread safety problem, or a reading more bytes into a buffer than allowed.\nRelevant source:\nint EVP_CIPHER_set_asn1_iv(EVP_CIPHER_CTX *c, ASN1_TYPE *type)\n{\nint i=0,j;\n\nif (type != NULL)\n {\n j=EVP_CIPHER_CTX_iv_length(c);\n OPENSSL_assert(j <= sizeof c->iv);\n i=ASN1_TYPE_set_octetstring(type,c->oiv,j);\n }\nreturn(i);\n}\n\n"
] |
[
2
] |
[] |
[] |
[
"boost",
"boost_asio",
"c++",
"openssl",
"ssl"
] |
stackoverflow_0000111415_boost_boost_asio_c++_openssl_ssl.txt
|