Q:
How are Integer arrays stored internally, in the JVM?
An array of ints in Java is stored as a block of 32-bit values in memory. How is an array of Integer objects stored? i.e.
int[] vs. Integer[]
I'd imagine that each element in the Integer array is a reference to an Integer object, and that the Integer object has object storage overheads, just like any other object.
I'm hoping, however, that the JVM does some magical cleverness under the hood, given that Integers are immutable, and stores it just like an array of ints.
Is my hope woefully naive? Is an Integer array much slower than an int array in an application where every last ounce of performance matters?
A:
No VM I know of will store an Integer[] array like an int[] array for the following reasons:
There can be null Integer objects in the array and you have no bits left for indicating this in an int array. The VM could store this 1-bit information per array slot in a hidden bit-array, though.
You can synchronize on the elements of an Integer array. This is much harder to overcome than the first point, since you would have to store a monitor object for each array slot.
The elements of Integer[] can be compared for identity. You could for example create two Integer objects with the value 1 via new, store them in different array slots, and later retrieve them and compare them via ==. This must lead to false, so you would have to store this information somewhere. Or you keep a reference to one of the Integer objects somewhere and use this for comparison, and you have to make sure one of the == comparisons is false and one true. This means the whole concept of object identity is quite hard to handle for the optimized Integer array.
You can cast an Integer[] to e.g. Object[] and pass it to methods expecting just an Object[]. This means all the code which handles Object[] must now be able to handle the special Integer[] object too, making it slower and larger.
Taking all this into account, it would probably be possible to make a special Integer[] which saves some space in comparison to a naive implementation, but the additional complexity will likely affect a lot of other code, making it slower in the end.
The overhead of using Integer[] instead of int[] can be quite large in space and time. On a typical 32-bit VM an Integer object will consume 16 bytes (8 bytes for the object header, 4 for the payload and 4 additional bytes for alignment), while the Integer[] itself uses as much space per slot as an int[]. In 64-bit VMs (using 64-bit pointers, which is not always the case) an Integer object will consume 24 bytes (16 for the header, 4 for the payload and 4 for alignment). In addition, a slot in the Integer[] will use 8 bytes instead of 4 as in the int[]. This means you can expect an overhead of 16 to 28 bytes per slot, which is a factor of 4 to 7 compared to plain int arrays.
The performance overhead can be significant too for mainly two reasons:
Since you use more memory, you put much more pressure on the memory subsystem, making cache misses more likely in the case of Integer[]. For example, if you traverse the contents of the int[] in a linear manner, the cache will have most of the entries already fetched when you need them (since the layout is linear too). But in the case of the Integer array, the Integer objects themselves might be scattered randomly in the heap, making it hard for the cache to guess where the next memory reference will point.
The garbage collector has to do much more work because of the additional memory used and because it has to scan and move each Integer object separately, while in the case of int[] it is just one object and the contents of the object don't have to be scanned (they contain no references to other objects).
To sum it up, using an int[] in performance-critical work will be both much faster and more memory-efficient than using an Integer array in current VMs, and it is unlikely this will change much in the near future.
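To make the difference concrete, here is a rough sketch of the kind of comparison this answer describes - filling and summing both array types and timing it. It is an illustration only, not a rigorous benchmark (no warmup, single run):
public class ArraySum {
    public static void main(String[] args) {
        final int n = 1000000;
        int[] primitives = new int[n];
        Integer[] boxed = new Integer[n];
        for (int i = 0; i < n; i++) {
            primitives[i] = i;
            boxed[i] = i; // autoboxing: allocates an object for values outside the small cache
        }

        long start = System.nanoTime();
        long sum = 0;
        for (int v : primitives) {
            sum += v; // contiguous 32-bit values, cache-friendly
        }
        System.out.println("int[]:     " + (System.nanoTime() - start) / 1000000 + " ms, sum = " + sum);

        start = System.nanoTime();
        sum = 0;
        for (Integer v : boxed) {
            sum += v; // a pointer chase plus unboxing per element
        }
        System.out.println("Integer[]: " + (System.nanoTime() - start) / 1000000 + " ms, sum = " + sum);
    }
}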
A:
John Rose is working on fixnums in the JVM to fix this problem.
A:
I think your hope is woefully naive. Specifically, it needs to deal with the issue that Integer can potentially be null, whereas int cannot be. That alone is reason enough to store the object pointer.
That said, the actual object pointer will be to an immutable Integer instance, which the JVM caches and shares for a select subset of integers.
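That caching is easy to observe with ==. A small demo (the cached range is typically -128 to 127, though it can vary by VM):
public class CacheDemo {
    public static void main(String[] args) {
        Integer a = Integer.valueOf(100);
        Integer b = Integer.valueOf(100);
        System.out.println(a == b);   // true: both references come from the small-value cache

        Integer c = Integer.valueOf(1000);
        Integer d = Integer.valueOf(1000);
        System.out.println(c == d);   // false: distinct objects outside the cache

        Integer e = new Integer(100);
        Integer f = new Integer(100);
        System.out.println(e == f);   // false: new always allocates a fresh object
    }
}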
A:
It won't be much slower, but because an Integer[] must accept "null" as an entry and int[] doesn't have to, there will be some amount of bookkeeping involved, even if Integer[] is backed by an int[].
So if every last ounce of performance matters, use int[].
A:
The reason that Integer can be null, whereas int cannot, is because Integer is a full-fledged Java object, with all of the overhead that includes. There's value in this since you can write
Integer foo = null;
which is good for saying that foo will have a value, but it doesn't yet.
Another difference is that int performs no overflow calculation. For instance,
int bar = Integer.MAX_VALUE;
bar++;
will merrily increment bar and you end up with a very negative number, which is probably not what you intended in the first place.
foo = Integer.MAX_VALUE;
foo++;
will also wrap around silently: foo is unboxed to an int, incremented, and boxed again, so neither type complains about overflow (an exception here would arguably be better behavior).
One last point is that Integer, being a Java object, carries with it the space overhead of an object. I think that someone else may need to chime in here, but I believe that every object consumes 12 bytes for overhead, and then the space for the data storage itself. If you're after performance and space, I wonder whether Integer is the right solution.
Q:
How do I get the most recently updated form item to "stick" in Firefox when I copy its container?
I have a dl containing some input boxes that I "clone" with a bit of JavaScript like:
var newBox = document.createElement('dl');
var sourceBox = document.getElementById(oldkey);
newBox.innerHTML = sourceBox.innerHTML;
newBox.id = newkey;
document.getElementById('boxes').appendChild(newBox);
In IE, the form in sourceBox is duplicated in newBox, complete with user-supplied values. In Firefox, the last value entered in the original sourceBox is not present in newBox. How do I make this "stick?"
A:
You could try the cloneNode method. It might do a better job of copying the contents, and it should also be faster in most cases.
var newBox;
var sourceBox = document.getElementById(oldkey);
if (sourceBox.cloneNode)
    newBox = sourceBox.cloneNode(true);
else {
    newBox = document.createElement(sourceBox.tagName);
    newBox.innerHTML = sourceBox.innerHTML;
}
newBox.id = newkey;
document.getElementById('boxes').appendChild(newBox);
A:
Thanks folks.
I got things to work by using prototype and changing document.getElementById(oldkey)
to $(oldkey).
<script src="j/prototype.js" type="text/javascript"></script>
var newBox;
var sourceBox = $(oldkey);
if (sourceBox.cloneNode)
    newBox = sourceBox.cloneNode(true);
else {
    newBox = document.createElement(sourceBox.tagName);
    newBox.innerHTML = sourceBox.innerHTML;
}
newBox.id = newkey;
document.getElementById('boxes').appendChild(newBox);
A:
Firefox vs. IE: innerHTML handling?
Q:
What are the best practices for moving between version control systems?
There are about 200 projects in CVS and at least 100 projects in VSS. Some are inactive code in maintenance mode. Some are legacy apps. Some are old apps no longer in use. About 10% are in active development. The plan is to move everything to Perforce by end of year 2009.
Has anyone done a large migration like this?
Has anyone come across best practices for moving from CVS to Perforce, or a similar migration? Any gotchas to look out for?
A:
On the VSS side, there are conversion tools available to help with migration. They can mostly maintain version history (there are caveats that are explained in the readme and docs). I have migrated well over 50 VSS projects into Perforce using the VSS-to-Perforce tool. Getting the data out of VSS can be a bit finicky and not terribly speedy, but it works. If you have direct access to the disks (i.e. not over a network share) of the VSS repository, the conversion can go much quicker. You can find information about the scripts here.
There is a similar page for CVS-to-Perforce conversion here, although I don't have direct experience with that. These links are good places to start. You can also search through the Perforce mailing lists at the Perforce Knowledge Base located here. I'm pretty sure that you might find some conversion information in the mailing list archives.
Migrate your old projects first so you can make sure that your process works. When we migrated active code to Perforce, I took one weekend, basically took down access to the servers, and moved the code over to Perforce. Honestly, it was a pretty easy migration, and when people came back on Monday they were ready to go. You might think about preparing your employees with Perforce cheat sheets before you start doing the migration.
The biggest gotcha might actually be preparing your people to use Perforce. If I had to do it over again, I would have migrated our smaller active projects first and prepared smaller numbers of people to use Perforce at once. As it was, I had to train 120+ people on day 1 after the migration, and that was a bit much. Also, make sure that you don't have 100+ people hitting your server for a fresh sync on day 1 either. We wound up taking our server down multiple times during the first few days. We used a Windows 32-bit server, which I would not recommend; we have a Windows 64-bit server now and it's much more robust. If you can, I would actually use Linux as the OS for your Perforce server. Again, there should be good info on the Perforce site about performance.
A:
I haven't had to do something of this scale, but I have a few ideas. First off, start by taking a small, unimportant project and migrating that. That will give you an idea of how much trouble it is going to take to migrate the rest of the projects. Immediately after that you should choose a medium-size project, as there may be issues with migrating a larger project (say, with branches) that might not be apparent on a small project.
Make sure you spend a bit of time seeing how easy it is to convert CVS projects to VSS, or the other way around. If converting from VSS to Perforce is a real pain, you can convert VSS to CVS, and then to Perforce. Don't sink days into it, but it could back you out of a sticky situation. I think the key here is to go incremental.
Backups are good. Period.
Consider a cutoff date: any projects that are inactive and older than that should be mothballed. Check out the final revision and store that in Perforce. Do you really need 15-year-old Visual Basic code?
A:
Whatever you do, keep the old repositories in read-only mode somewhere.
A:
Forgive my answering a question with a question, but doesn't Perforce provide tools for this? Or, at the very least, documentation? I'd be beating up my Perforce salesperson...
A:
Consider not migrating dead and inactive projects. Simply put their repositories in read-only mode. The data will still be available if needed, and you save the time and effort of migrating them. Just migrate the 10% that are in use. Document the process thoroughly.
If one of the un-migrated projects gets resurrected some time in the future you can easily migrate it using your documentation as a reference.
A:
We migrated our SVN repository with a tool that we wrote, and just took the head revision of our StarTeam projects.
Watch out for differences between single-file checkins (CVS) and multi-file changesets (Perforce).
Watch out for branches in separate space (CVS) vs. branches in filepath-space (Perforce).
Q:
Visual Studio: How to trigger an alarm when a breakpoint is hit?
Is there a way to trigger a beep/alarm/sound when my breakpoint is hit? I'm using Visual Studio 2005/2008.
A:
Windows XP
Control Panel -> Sounds and Audio... -> Program Events - Microsoft Developer -> Breakpoint Hit
Windows 7
Control Panel -> All Control Panel Items -> Sounds -> Sounds (tab) - Microsoft Visual Studio -> Breakpoint Hit
A:
Yes, you can do it with a Macro assigned to a breakpoint. This works in VS 2005, I assume 2008 will work as well. I assume you don't want a sound on EVERY breakpoint, or the other answer will work fine. There is probably a way to play a specific sound, but I didn't dig that hard. Here are the basic steps:
Add A New Macro Module (steps below the code)
Imports System.Runtime.InteropServices

Public Module Beeps
    Public Sub WindowsBeep()
        Interaction.Beep()
    End Sub

    Public Sub ForceBeep()
        Beep(900, 300)
    End Sub

    <DllImport("Kernel32.dll")> _
    Private Function Beep(ByVal frequency As UInt32, ByVal duration As UInt32) As Boolean
    End Function
End Module
Tools => Macros => Macros IDE
My Macros (In Project Explorer) => Add New Module => Name: "Beeps"
Copy the above code in. It has 2 methods
First one uses the windows "Beep" sound
Second one forces a "Beep" tone, not a .wav file. This works with all sounds disabled (e.g. Control Panel -> Sounds -> Sound Scheme: No Sounds), but sounds ugly.
View the Macro Explorer in VS.Net (not the macro IDE) to make sure it is there :)
Assign To A Breakpoint
Add a break point to a line
Right click on the little red dot
Select "When Hit"
Check the box to enable macros
Select your macro from the pulldown
Uncheck "continue execution" if you want to stop. It is checked by default.
Also, there are ways to play an arbitrary wav file, but that seems excessive for an alert. Perhaps the forced "beep" is the best, since that at least sounds different than Ding.
A:
You can create a macro that runs in response to a breakpoint firing. In your macro, you could do whatever it takes to make a beeping noise.
Q:
best tool to reverse-engineer a WinXP PS/2 touchpad driver?
I have a PS/2 touchpad which I would like to write a driver for (I'm just a web guy so this is unfamiliar territory to me). The touchpad comes with a Windows XP driver, which apparently sends messages to enable/disable tap-to-click. I'm trying to find out what message it is sending but I'm not sure how to start. Would software like "Syser Debugger" work? I want to intercept outgoing messages being sent to the PS/2 bus.
A:
IDA Pro won't be much use to you if you want to find out what 'messages' are being sent. You should realise that this is a very big step up for most web developers, but you already knew that?
I would start by deciding if you really need to work at the driver-level, often this is the Kernel level. The user mode level may be where you want to look first. Use a tool like WinSpy or other Windows debug tool to find out what messages are getting passed around by your driver software, and the mouse configuration applet in control panel. You can use the Windows API function called SendMessage() to send your messages to the application from user mode.
Your first stop for device driver development should be the Windows DDK docs and OSR Online.
A:
I suggest reading the Synaptics touchpad specs (most of the touchpads installed on notebooks are Synaptics'), available here: http://www.synaptics.com/decaf/utilities/ACF126.pdf
I believe on page 18 you'll find the feature you are looking for. At least you'll know what to expect.
So, very likely, the touchpad driver "converts" the command coming from user mode to this PS/2 command.
I don't know the specifics of the touchpad PS/2 driver but I see two major ways for the user mode panel to communicate with the driver:
- update some key in the registry (this is actually very common)
- the driver provides an alternate "channel" that the user mode app opens and writes specific commands to
You may want to try using the process monitor from sysinternals to log registry activity when setting/resetting the feature.
As for option 2, you may want to try IRP Tracker from OSR and see if there's any specific communication between the panel and the driver (in the form of IRPs going back and forth). In this case, kernel programming knowledge is somewhat required.
The Windows kernel debugger may also be useful to see if the PS/2 driver has some alternate channel.
A:
Have a look at IDA Pro - The Interactive Disassembler. It is an amazing disassembler.
If you want to debug, not just reverse engineer, try PEBrowse Professional Interactive from SmidgeonSoft
Q:
Best approach to write/read binary data in Little or Big Endian with C#?
OK, if I've got a binary file encoded either in little endian or big endian under .NET, what is the best way to read/write it?
In the .NET Framework I've only managed to find BinaryWriters/BinaryReaders, which use little endian as the default, so my approach was to implement my own BinaryReader/BinaryWriter for reading/writing data in big endian, but I wonder if there is a better approach.
A:
I like this one:
Miscellaneous Utility Library
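If you do roll your own, the core trick is small: read the raw bytes and reverse them whenever the file's byte order differs from the platform's. A minimal sketch (the class name and API shape here are illustrative, not taken from the library above):
using System;
using System.IO;

// Reads big-endian data from any stream by reversing bytes on little-endian platforms.
public class BigEndianReader : IDisposable
{
    private readonly BinaryReader reader;

    public BigEndianReader(Stream stream)
    {
        reader = new BinaryReader(stream);
    }

    public int ReadInt32()
    {
        byte[] bytes = reader.ReadBytes(4);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(bytes); // the file is big-endian, the platform is not
        return BitConverter.ToInt32(bytes, 0);
    }

    public void Dispose()
    {
        reader.Close();
    }
}
The same pattern extends to ReadInt16, ReadInt64 and so on; writing works the same way in reverse.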
Q:
How to sync a database that exists in various (not networked) SQL Server 2005 instances
I am working on a database application that runs on various independent servers.
Each server runs an instance of SQL Server 2005 with the same database. We would have a Master server that would be the definitive source of information, and various "client" servers that would be distributed around (with no network connection of any kind). These client servers would return from time to time (let's say once a week) to be synchronized with the Master. Simply put, the process would be:
1) Update the database on the master server with all the modifications from a client server (taking into account not overwriting changes made by the update process of a different client server [that would update the same master server])
2) Copy an updated version of the master server database to the client server.
Thanks for any help
A:
MS SQL Integration Services may help:
http://www.microsoft.com/sql/technologies/integration/default.mspx
A:
Also check for database replication. Check the Master-Remote part too.
Q:
How to: Pass an ampersand in a lousy filename to a flash object on a webpage
Argghh. I have a site that offers audio previews of songs hosted elsewhere. Some file names have an ampersand in them - see below where it passes "soundFile." Anytime there's an ampersand, Flash can't get the file - I think it drops the filename after the ampersand. It doesn't matter if I pass it as an "&" or an HTML entity ("& a m p ;")
<object type="application/x-shockwave-flash" data="includes/player.swf" id="audioplayer" height="24" width="290">
<param name="movie" value="includes/player.swf"><param name="FlashVars" value="playerID=1&soundFile=http://www.divideandkreate.com/mp3/Divide_&_Kreate_-_Party_Kisser.mp3">
<param name="quality" value="high"><param name="menu" value="false"><param name="wmode" value="transparent">
</object>
A:
Sounds like you might have to URL-encode it, rather than HTML-encode it. Not sure without the code sample though. The URL-encoded code for ampersand is '%26'.
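For this markup, that means percent-encoding only the ampersand inside the soundFile URL, so the ampersand that separates the FlashVars parameters stays intact:
<param name="FlashVars" value="playerID=1&soundFile=http://www.divideandkreate.com/mp3/Divide_%26_Kreate_-_Party_Kisser.mp3">
If you build the value dynamically, JavaScript's encodeURIComponent() will produce the same %26 escaping for you.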
Q:
makefiles CFLAGS
In the process of learning TinyOS I have discovered that I am totally clueless about makefiles.
There are many optional compile time features that can be used by way of declaring preprocessor variables.
To use them you have to do things like:
CFLAGS="-DPACKET_LINK" this enables a certain feature.
and
CFLAGS="-DPACKET_LINK" "-DLOW_POWER" enables two features.
Can someone dissect these lines for me and tell me what's going on? Not in terms of TinyOS, but in terms of makefiles!
A:
CFLAGS is a variable that is most commonly used to add arguments to the compiler. In this case, it defines macros.
So the -DPACKET_LINK is the equivalent of putting #define PACKET_LINK 1 at the top of all .c and .h files in your project. Most likely, you have code inside your project that looks if these macros are defined and does something depending on that:
#ifdef PACKET_LINK
// This code will be ignored if PACKET_LINK is not defined
do_packet_link_stuff();
#endif
#ifdef LOW_POWER
// This code will be ignored if LOW_POWER is not defined
handle_powersaving_functions();
#endif
If you look further down in your makefile, you should see that $(CFLAGS) is probably used like:
$(CC) $(CFLAGS) ...some-more-arguments...
A:
Somewhere in the makefile the CFLAGS variable will be used in a compilation line like this:
$(CC) $(CFLAGS) $(C_INCLUDES) $<
which during execution will be translated to:
gcc -DPACKET_LINK -DLOW_POWER -c filename.c -o filename.o
The defines will be passed to the source code as if they were defined in a header file.
A:
The -D option sets preprocessor variables, so in your case, all code that is in the specified "#ifdef / #endif" blocks will be compiled.
I.e.
#ifdef PACKET_LINK
/* whatever code here */
#endif
CFLAGS is a variable used in the makefile which will be expanded to its contents when the compiler is invoked.
E.g.
gcc $(CFLAGS) source.c
A:
-D stands for define (in gcc, at least), which lets you #define on the command line instead of in a file somewhere. A common thing to see would be -DDEBUG or -DNDEBUG, which respectively activate or disable debugging code.
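To tie the pieces together, here is a minimal, hypothetical GNU makefile (the file names are made up) showing where CFLAGS typically lives; note that recipe lines must start with a tab:
CC     = gcc
CFLAGS = -DPACKET_LINK -DLOW_POWER

# Pattern rule: every .c file is compiled with $(CFLAGS),
# so each translation unit sees the -D defines.
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

app: main.o radio.o
	$(CC) -o $@ $^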
A:
Just for completeness in this - if you're using Microsoft's nmake utility, you might not actually see the $(CFLAGS) macro used in the makefile because nmake has some defaults for things like compiling C/C++ files. Among others, the following are pre-defined in nmake (I'm not sure if GNU Make does anything like this), so you might not see it in a working makefile on Windows:
.c.exe:
commands: $(CC) $(CFLAGS) $<
.c.obj:
commands: $(CC) $(CFLAGS) /c $<
.cpp.exe:
commands: $(CXX) $(CXXFLAGS) $<
.cpp.obj:
commands: $(CXX) $(CXXFLAGS) /c $<
Q:
Is there another way to do screen scraping apart from regular expressions?
I'm doing a personal, just-for-fun project that uses screen scraping to give me a system tray notification in case a line in an HTML table is added, modified or deleted.
Having done this before, I thought: well, let's go with the regular expression thing and that's it. But being a curious person, I started to wonder whether there could be something else out there with another paradigm that is just as simple to use.
I know about DOM and XPath and all the XML-ish approaches. I'm looking for something outside the box, something that can even be defined in a set of rules so you can make a plugin system to aggregate various sites.
A:
See Options for HTML Scraping
A:
Here's an idea: assuming your main use case is getting a notification whenever an HTML file changes, why not use a standard diff tool and then loop through the changed lines, applying your rules?
Also, if this is a situation where you have access to the server and the files you're watching, you might be able to put everything under source control with CVS (or similar) and just watch for commits. If you want to use this approach for random sites on the web, just write a script that periodically downloads the html for the appropriate URLs and then commits it to source control and watch the diffs.
Not very practical, but outside the box.
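A minimal sketch of that diff idea in C# (the URL and cache file name are placeholders): download the page, compare it line by line with the previously saved copy, then save the new copy for the next run.
using System;
using System.IO;
using System.Linq;
using System.Net;

class PageWatcher
{
    static void Main()
    {
        const string url = "http://example.com/table.html"; // placeholder
        const string cacheFile = "last-download.html";      // placeholder

        string current;
        using (WebClient client = new WebClient())
            current = client.DownloadString(url);

        if (File.Exists(cacheFile))
        {
            string[] oldLines = File.ReadAllLines(cacheFile);
            string[] newLines = current.Split('\n');
            // Naive "diff": report lines that appear in only one version.
            foreach (string line in newLines.Except(oldLines))
                Console.WriteLine("added/changed: " + line.Trim());
            foreach (string line in oldLines.Except(newLines))
                Console.WriteLine("removed/changed: " + line.Trim());
        }

        File.WriteAllText(cacheFile, current);
    }
}
Run it from a scheduled task and raise the tray notification whenever it reports differences.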
A:
If you can convert the source into valid XHTML/XML using something like SgmlReader or HtmlTidy, then you could use XSLT. Simply create an XSL template for each site you wish to scrape.
Q:
How do I use LogParser to find out the LENGTH of a field in an IIS Log?
I'm trying to find LONG UserAgent strings with LogParser.exe in my IIS logs. This example searches for entries with the string 'poo' in them.
LogParser.exe -i:IISW3C
"SELECT COUNT(cs(User-Agent)) AS Client
FROM *.log WHERE cs(User-Agent) LIKE '%poo%'"
I'm trying to say "How many entries have a User-Agent that is longer than 'x'".
A:
Well, looks like I answered my own question.
LogParser.exe -i:IISW3C
"SELECT COUNT(cs(User-Agent)) AS Client
FROM *.log WHERE STRLEN(cs(User-Agent)) > 100"
Q:
How to move SharePoint sites from one active directory domain to another?
I have a SharePoint virtual machine in one active directory domain (for example domain1) and I want to transfer all the sites it has to another active directory domain (domain2).
I don't know what the best procedure would be to do this. If I detach my virtual machine from domain1 and attach it to domain2, it probably won't work, since all the accounts used by SharePoint would no longer be valid. (The two domains are not on the same network and don't trust each other.)
Additionally I could export the sites in domain1 and import them on domain2 using stsadm, but if I use this technique I have to manually install all the features, solutions and personalization I made on my original server.
Does anybody know the best approach to “move” the sites from one domain to another?
A:
There is a STSADM Custom Extension: move web that should be what you are looking for:
C:\>stsadm -help gl-moveweb

stsadm -o gl-moveweb

Moves a web.

Parameters:
    -url
    -parenturl
    [-haltonwarning (only considered if moving to a new site collection)]
    [-haltonfatalerror (only considered if moving to a new site collection)]
    [-includeusersecurity (only considered if moving to a new site collection)]
    [-retainobjectidentity (only considered if moving to a new site collection)]
A:
You may have some success by adding a local account to the administrators group, joining the server to the new domain, and then manually updating all of the AD accounts that are used on the server. I should note that all of your users will then have new accounts that are not related to the old ones.
You should ask your domain admins about SID history for the new accounts, so they also carry the SIDs from the old domain.
Q:
Best way to make events asynchronous in C#
Events are synchronous in C#. I have this application where my main form starts a thread with a loop in it that listens to a stream. When something comes along on the stream an event is fired from the loop to the main form.
If the main form is slow or shows a messagebox or something the loop will be suspended. What is the best way around this? By using a callback and invoke on the main form?
A:
Since you're using a form, the easiest way is to use the BackgroundWorker component.
The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution.
A:
Hmmm, I've used different scenarios that depended on what I needed at the time.
I believe BeginInvoke would probably be the easiest to code, since you're almost there. Either way you should be using Invoke already, so just change to BeginInvoke. Using a callback on a separate thread will accomplish the same thing as using BeginInvoke (as long as you use the thread pool to queue up the callback).
A:
Events are just delegates, so use BeginInvoke. (see Making Asynchronous Method Calls in the .NET Environment)
A:
You have a few options, as already detailed, but in my experience you're better off leaving delegates and BeginInvoke behind and using BackgroundWorker instead (v2.0+), as it is easier to use and also allows you to interact with the main form on the thread's completion. All in all, a very well implemented solution, I have found.
A:
System.ComponentModel.BackgroundWorker is indeed a good starting point. It will do your asynchronous work, give you notifications of important events, and has ways to better integrate with your forms.
For example, you can activate progress notifications by registering a handler for the ProgressChanged event. (which is highly recommended if you have a long, asynchronous process and you don't want your user to think the application froze)
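A minimal sketch of that pattern applied to the original question (the stream read and the statusLabel control are stand-ins): the loop keeps running on the worker thread, and ReportProgress marshals each message to the UI thread without blocking the loop.
using System.ComponentModel;
using System.Windows.Forms;

public partial class MainForm : Form
{
    private readonly BackgroundWorker worker = new BackgroundWorker();

    public MainForm()
    {
        InitializeComponent();
        worker.WorkerReportsProgress = true;
        worker.DoWork += Worker_DoWork;                   // runs on a thread-pool thread
        worker.ProgressChanged += Worker_ProgressChanged; // raised on the UI thread
        worker.RunWorkerAsync();
    }

    private void Worker_DoWork(object sender, DoWorkEventArgs e)
    {
        while (true)
        {
            string message = ReadNextMessageFromStream(); // stand-in for the blocking read
            // Queues the notification and returns immediately, so a slow form
            // or an open message box no longer suspends this loop.
            worker.ReportProgress(0, message);
        }
    }

    private void Worker_ProgressChanged(object sender, ProgressChangedEventArgs e)
    {
        statusLabel.Text = (string)e.UserState; // safe: we are on the UI thread here
    }

    private string ReadNextMessageFromStream()
    {
        return "message"; // placeholder
    }
}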
Q:
Detect "Clone Mode" display setup
How can I determine if my displays are in "Clone Mode" without using either COPP (Computer Output Protection Protocol) or OPM (Output Protection Protocol) on Windows?
Vista solution:
hMonitor = MonitorFromWindow (HWND_DESKTOP, MONITOR_DEFAULTTOPRIMARY);
bSuccess = GetNumberOfPhysicalMonitorsFromHMONITOR (hMonitor, &dwMonitorCount);
A:
I assume you've already tried EnumDisplayMonitors() and it didn't work. So if that returns a single HMONITOR for each set of cloned displays, you could compare this set of results to the result of EnumDisplayDevices(). Devices returned by EnumDisplayDevices() that are attached to the desktop but aren't returned by EnumDisplayMonitors() should be clones.
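A rough Win32 sketch of that comparison (a heuristic under the answer's assumption, not a guaranteed test): count the HMONITORs the desktop reports, count the display devices attached to the desktop, and treat a surplus of devices as evidence of cloning.
#include <windows.h>

static BOOL CALLBACK CountMonitor(HMONITOR, HDC, LPRECT, LPARAM lParam)
{
    ++*reinterpret_cast<int*>(lParam);
    return TRUE;
}

bool DisplaysAreCloned()
{
    int monitors = 0;
    EnumDisplayMonitors(NULL, NULL, CountMonitor, reinterpret_cast<LPARAM>(&monitors));

    int attachedDevices = 0;
    DISPLAY_DEVICE dd = { sizeof(dd) };
    for (DWORD i = 0; EnumDisplayDevices(NULL, i, &dd, 0); ++i)
        if (dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
            ++attachedDevices;

    return attachedDevices > monitors;  // heuristic, per the answer above
}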
|
Detect "Clone Mode" display setup
|
How can I determine if my displays are in "Clone Mode" without using either COPP (Computer Output Protection Protocol) or OPM (Output Protection Protocol) on Windows?
Vista solution:
hMonitor = MonitorFromWindow (HWND_DESKTOP, MONITOR_DEFAULTTOPRIMARY);
bSuccess = GetNumberOfPhysicalMonitorsFromHMONITOR (hMonitor, &dwMonitorCount);
|
[
"I assume you've already tried EnumDisplayMonitors() and it didn't work. So if that returns a single HMONITOR for each set of cloned displays, you could compare this set of results to the result of EnumDisplayDevices(). Devices returned by EnumDisplayDevices() that are attached to the desktop but aren't returned by EnumDisplayMonitors() should be clones.\n"
] |
[
3
] |
[] |
[] |
[
"multiple_monitors",
"windows"
] |
stackoverflow_0000080103_multiple_monitors_windows.txt
|
Q:
Firefox vs. IE: innerHTML handling
After hours of debugging, it appears to me that in FireFox, the innerHTML of a DOM reflects what is actually in the markup, but in IE, the innerHTML reflects what's in the markup PLUS any changes made by the user or dynamically (i.e. via Javascript).
Has anyone else found this to be true? Any interesting work-arounds to ensure both behave the same way?
A:
I use jQuery's .html() to get a consistent result across browsers.
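For example (the #container element is an assumed placeholder), reads and writes go through the library instead of raw innerHTML:
// jQuery normalizes markup handling across browsers:
var markup = $('#container').html();       // read
$('#container').html('<p>replaced</p>');   // write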
A:
I agree with Pat. At this point in the game, writing your own code to deal with cross-browser compatibility given the available Javascript frameworks doesn't make a lot of sense. There's a framework for nearly any taste (some really quite tiny) and they've focused on really abstracting out all of the differences between the browsers. They're doing WAY more testing of it than you're likely to.
Something like jQuery or Yahoo's YUI (think how many people hit the Yahoo Javascript in a day and the variety of browsers) is just way more road-tested than any snippet you or I come up with.
A:
Using a good library is a great way to get around browser inconsistencies, and jQuery is the one that I typically recommend. If you're running into issues altering the elements in a form in particular, jQuery boasts a few really useful plugins focused specifically on form manipulation and evaluation.
A:
Using prototype and the $("thisid") syntax instead of document.getElementById("thisid") might do the trick for you. It worked for me.
|
Firefox vs. IE: innerHTML handling
|
After hours of debugging, it appears to me that in FireFox, the innerHTML of a DOM reflects what is actually in the markup, but in IE, the innerHTML reflects what's in the markup PLUS any changes made by the user or dynamically (i.e. via Javascript).
Has anyone else found this to be true? Any interesting work-arounds to ensure both behave the same way?
|
[
"I use jQuery's .html() to get a consistent result across browsers.\n",
"I agree with Pat. At this point in the game, writing your own code to deal with cross-browser compatibility given the available Javascript frameworks doesn't make a lot of sense. There's a framework for nearly any taste (some really quite tiny) and they've focused on really abstracting out all of the differences between the browsers. They're doing WAY more testing of it than you're likely to.\nSomething like jQuery or Yahoo's YUI (think how many people hit the Yahoo Javascript in a day and the variety of browsers) is just way more road-tested than any snippet you or I come up with.\n",
"using a good library is a great way to get around browser inconsistencies, and jquery is the one that I typically recommend - and if you're running into issues altering the elements in a form in particular, jquery boasts a few really useful plugins focused specifically on form manipulation and evaluation.\n",
"Using prototype and the $(\"thisid\") syntax instead of document.getElementById(\"thisid\") might do the trick for you. It worked for me.\n"
] |
[
11,
9,
2,
1
] |
[] |
[] |
[
"dom",
"firefox",
"internet_explorer",
"javascript"
] |
stackoverflow_0000036778_dom_firefox_internet_explorer_javascript.txt
|
Q:
XSLT processing in/from ruby
Can anyone recommend an efficient method to execute XSLT transforms of XML data within a Ruby application? The XSL gem (REXSL) is not available yet, and while I have seen a project or two that implement it, I'm wary of using them so early on. A friend had recommended a shell out call to Perl, but I'm worried about resources.
This is for a linux environment.
A:
Try the "libxslt-ruby" gem. It depends on the "libxmlr-ruby" bindings for libxml library, which you probably already have installed if you're developing on Linux.
A:
I would recommend shelling out to "xsltproc", which comes with the libxslt libraries on Linux and does the job.
Or, if you happen to be using JRuby, there are several XSLT processors for Java that you can very easily use from your Ruby program.
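A minimal sketch of the xsltproc shell-out (file names are placeholders); it avoids native bindings entirely at the cost of one process per transform:
# Shell out to xsltproc, which ships with libxslt on most Linux distros.
def xslt_transform(xsl_path, xml_path)
  output = `xsltproc #{xsl_path} #{xml_path}`
  raise "xsltproc failed (exit #{$?.exitstatus})" unless $?.success?
  output
end

html = xslt_transform('stylesheet.xsl', 'input.xml')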
|
XSLT processing in/from ruby
|
Can anyone recommend an efficient method to execute XSLT transforms of XML data within a Ruby application? The XSL gem (REXSL) is not available yet, and while I have seen a project or two that implement it, I'm wary of using them so early on. A friend had recommended a shell out call to Perl, but I'm worried about resources.
This is for a linux environment.
|
[
"Try the \"libxslt-ruby\" gem. It depends on the \"libxmlr-ruby\" bindings for libxml library, which you probably already have installed if you're developing on Linux.\n",
"I would recomment to shell out call to \"xsltproc\", which comes with the libxslt libraries in linux and does the work.\nOr if you are using JRuby by any chance, then you have several xslt parsers for java that you can really really easily use from your ruby program.\n"
] |
[
1,
1
] |
[] |
[] |
[
"linux",
"ruby",
"xml",
"xslt"
] |
stackoverflow_0000075340_linux_ruby_xml_xslt.txt
|
Q:
What do you use to capture webpages, diagram/pictures and code snippets for later reference?
What do you use to capture webpages, diagram/pictures and code snippets for later reference?
A:
Evernote http://www.evernote.com and delicious http://www.delicious.com
A:
Evernote
Notepad2's clipboard feature (Notepad2.exe /c as a link in Launchy)
Windows Clippings or PrintKey
Firefox extension Page Saver
Delicious
A:
Microsoft OneNote.
A:
I find Google Notebook very good for drive-by code snippeting, and Google Bookmarks, especially when used with the Google Toolbar, for web pages.
The benefit of these tools is that they are available from any PC on the web, though good use of semantic organisation via labels is recommended.
A:
I just have an emacs instance running on my home machine, under screen. Wherever I am (and have network) I can connect to it remotely. I stick all useful URLs, birthday present ideas, future dates, code snippets, ideas for docs, etc. in there.
I rarely have doodles/diagrams I need to capture, I tend to draw them in ascii in my file if needed.
I must admit I'm a bit stuck if I have no network/wifi somewhere, but that's rarely the case.
A:
Here's my response to a similar question:
The combination of OneNote with a tablet PC is awesome! I was a bit of a skeptic at first. I used the trial version and then forgot about it. A year later I had an unruly collection of files, project related emails, notebooks and scraps of paper all scattered throughout my life. I went back to OneNote and all my problems went away. Some highlights:
Everything is searchable. The character recognition is good enough that my chicken-scratch meeting notes can be searched. Text within images is searchable.
OneNote syncs with Outlook so finding meeting notes is a breeze.
I now embed all files into OneNote - pdfs, spreadsheets, word docs, images, web clippings.
OneNote is constantly saving all changes so, combined with a scheduled automated backup, everything is in one place and is safe.
There are some built-in collaboration tools I have yet to try but that look useful.
It is SO worth the price. It allows you to get started on a project and avoid all that time spent deciding how to organize things.
A:
Zotero is a nice plugin for Firefox.
A:
SnagIt
captures everything you could want, and lets you annotate it.
A:
I prefer to use the good old URL for Delicious.
Apart from that, I use the Scrapbook extension in Firefox when I want to save something to disk. It's possible to tag the page, edit it and remove those stupid ads before saving it.
I also have a wiki on a stick that I carry around on a USB key for code snippets that should go to other clients when I'm travelling around.
Mostly, my code snippets are embedded into projects I carry on the same USB key, which allows me to demonstrate some technologies right off to the client and get his advice based on a demonstration, not a listing of code...
A:
For screen shots, I use a mix between ScrapBook and ScreenGrab. They are both firefox plugins that are pretty amazing when you need to get a screenshot of a page for editing. Works great for consulting.
https://addons.mozilla.org/en-US/firefox/addon/427
https://addons.mozilla.org/en-US/firefox/addon/1146
A:
Delicious Bookmarks extension for Firefox
A:
It's a little primitive, but I've been using TiddlyWiki (a self-contained, single-file wiki) http://www.tiddlywiki.com/ which works well for basic text and markup. I combine it with a plugin to sync it with Outlook's notes (http://syncoutlooknotes.tiddlyspot.com/#SyncOutlookNotes) so that I can then sync it to my BlackBerry using the standard Outlook-BlackBerry sync mechanism. This has the significant advantage that I can look at my notes and even write new notes when I'm out and about, away from my laptop, or just don't feel like lugging the laptop around to a meeting that I don't really need it for.
I'd prefer using something more advanced like OneNote, but being able to take my notes with me on the little BlackBerry has turned out to be a significant advantage.
A:
Google Notebook is a very convenient tool. You can clip and save any parts of web pages without leaving your browser tab. The Notebook plug-in automatically saves them as separate notes in your notebooks and keeps the links back to the original web pages. You can organize your clippings later by moving them between your notebooks and/or tagging them. Very good for code snippets and references.
|
What do you use to capture webpages, diagram/pictures and code snippets for later reference?
|
What do you use to capture webpages, diagram/pictures and code snippets for later reference?
|
[
"Evernote http://www.evernote.com and delicious http://www.delicious.com\n",
"\nEvernote\nNotepad2's clipboard feature (Notepad2.exe /c as a link in Launchy)\nWindows Clippings or PrintKey\nFirefox extension Page Saver\nDelicious \n\n",
"Microsoft OneNote.\n",
"I find google notebook is very good for drive by code snippeting and google bookmarks especially as when used with the google toolbar, for web pages.\nThe benefit of these tools are that they are available from any pc on the web, though a good use of semantic organisation using labels is recommended.\n",
"I just have an emacs instance running on my home machine, under screen. Whereever I am (and have network) I can connect to it remotely. I stick all useful urls, birthday present ideas, future dates, code snippets, ideas for docs etcetc in there.\nI rarely have doodles/diagrams I need to capture, I tend to draw them in ascii in my file if needed.\nI must admit I'm a bit stuck if I have no network/wifi somewhere, but that's rarely the case.\n",
"Here's my response to a similar question:\nThe combination of OneNote with a tablet PC is awesome! I was a bit of a skeptic at first. I used the trial version and then forgot about it. A year later I had an unruly collection of files, project related emails, notebooks and scraps of paper all scattered throughout my life. I went back to OneNote and all my problems went away. Some highlights:\n\nEverything is searchable. The character recognition is good enough that my chicken-scratch meeting notes can be searched. Text within images is searchable.\nOneNote syncs with Outlook so finding meeting notes is a breeze.\nI now embed all files into OneNote - pdfs, spreadsheets, word docs, images, web clippings.\nOneNote is constantly saving all changes so, combined with a scheduled automated backup, everything is in one place and is safe.\nThere are some built-in collaboration tools I have yet to try but that look useful.\n\nIt is SO worth the price. It allows you to get started on a project and avoid all that time spent deciding how to organize things.\n",
"Zotero, is a nice plugin for Firefox.\n",
"SnagIt \ncaptures everything you could want, and lets you annotate it.\n",
"I prefer to use the good old url for delicious\nApart from that i use the Scrapbook extension in firefox when i want to save something on the disk. It's possible to tag the page, edit it and remove those stupids ads before saving it.\nI also have a Wiki on a stick that i carry around on a usbkey for code snippets that should go to other clients when i'm travelling around\nMostly, my code snippets are embedded into projects i carry on the same usb key, which allows me to demonstrate some technologies right off to the client and get his advice based on a demonstration, not a listing of code...\n",
"For screen shots, I use a mix between ScrapBook and ScreenGrab. They are both firefox plugins that are pretty amazing when you need to get a screenshot of a page for editing. Works great for consulting.\nhttps://addons.mozilla.org/en-US/firefox/addon/427\nhttps://addons.mozilla.org/en-US/firefox/addon/1146\n",
"Delicious Bookmarks extension for Firefox\n",
"It's a little primitive, but I've been using tiddlywiki (self-contained, single-file wiki) http://www.tiddlywiki.com/ which works good for basic text and markup. I combine it with a plugin to sync it with Outlook's notes (http://syncoutlooknotes.tiddlyspot.com/#SyncOutlookNotes) so that I can then sync it to my blackberry using the standard outlook-blackberry sync mechanism. This has the significant advantage that I can look at my notes and even write new notes when I'm out and about, away from my laptop, or just don't feel like lugging the laptop around to a meeting that I don't really need it for.\nI'd prefer using something more advanced like Onenote, but being able to take my notes with my in the little blackberry has turned out to be a significant advantage.\n",
"Google Notebook is very convenient tool. You can clip and save any parts of web pages without leaving your browser tab. The Notebook plug-in automatically saves them as separate notes in your notebooks and keep the links back to the original web pages. You can organize your clippings later by moving them between your notebooks and/or tagging them. Very good for code snippets and references.\n"
] |
[
7,
3,
3,
2,
2,
2,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"code_snippets"
] |
stackoverflow_0000053139_code_snippets.txt
|
Q:
How do you minimize the number of threads used in a tcp server application?
I am looking for any strategies people use when implementing server applications that service client TCP (or UDP) requests: design patterns, implementation techniques, best practices, etc.
Let's assume for the purposes of this question that the requests are relatively long-lived (several minutes) and that the traffic is time sensitive, so no delays are acceptable in responding to messages. Also, we are both servicing requests from clients and making our own connections to other servers.
My platform is .NET, but since the underlying technology is the same regardless of platform, I'm interested to see answers for any language.
A:
The modern approach is to make use of the operating system to multiplex many network sockets for you, freeing your application to process only the connections with active traffic.
Whenever you open a socket, you associate it with a selector. You use a single thread to poll that selector. Whenever data arrives, the selector will indicate which socket is active; you hand that operation off to a child thread and continue polling.
This way you only need a thread for each concurrent operation. Sockets which are open but idle will not tie up a thread.
Using the select() and poll() methods
Building Highly Scalable Servers with Java NIO
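A minimal sketch of that selector loop in Java NIO (the port and buffer size are arbitrary); one thread multiplexes every connection, and ready sockets would be handed to worker threads where the comment indicates:
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorLoop {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(9000));  // arbitrary port
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();  // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) < 0) {
                        client.close();  // remote side closed the connection
                    }
                    // else: hand the filled buffer off to a worker thread here
                }
            }
        }
    }
}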
A:
A more sophisticated approach would be to use I/O completion ports (Windows).
With I/O completion ports you leave polling to the operating system, which lets it potentially apply a very high level of optimization with NIC driver support.
Basically, you have an OS-managed queue of network operations, and you provide a callback function which is called when an operation completes. A bit like (hard-drive) DMA, but for the network.
Len Holgate wrote an excellent series on I/O completion ports a few years ago on CodeProject:
http://www.codeproject.com/KB/IP/jbsocketserver2.aspx
I also found an article on I/O completion ports for .NET (though I haven't read it):
http://www.codeproject.com/KB/cs/managediocp.aspx
I would also say that using completion ports is easy compared to trying to write a scalable alternative. The problem is that they are only available on NT-based Windows (2000, XP, Vista).
A:
If you were using C++ and the Win32 directly then I'd suggest that you read up about overlapped I/O and I/O Completion ports. I have a free C++, IOCP, client/server framework with complete source code, see here for more details.
Since you're using .NET you should be looking at using the asynchronous socket methods so that you don't need to have a thread for every connection; there are several links from this blog posting of mine that may be useful starting points: http://www.lenholgate.com/blog/2005/07/disappointing-net-sockets-article-in-msdn-magazine-this-month.html (some of the best links are in the comments to the original posting!)
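A hedged sketch of that asynchronous pattern (buffer and error handling are simplified): each completed receive re-arms itself, so no thread is parked per connection.
using System;
using System.Net.Sockets;

class AsyncReceiver
{
    private readonly byte[] buffer = new byte[4096];

    public void StartReceive(Socket socket)
    {
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, socket);
    }

    private void OnReceive(IAsyncResult ar)
    {
        Socket socket = (Socket)ar.AsyncState;
        int read = socket.EndReceive(ar);
        if (read > 0)
        {
            // Process buffer[0..read) here, then re-arm the receive.
            StartReceive(socket);
        }
        else
        {
            socket.Close();  // zero bytes means the remote side closed
        }
    }
}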
A:
G'day,
I'd start by looking at the metaphor you want to use for your thread framework.
Maybe "leader follower" where a thread is listening for incoming requests and when a new request comes in it does the work and the next thread in the pool starts listening for incoming requests.
Or thread pool where the same thread is always listening for incoming requests and then passing the requests over to the next available thread in the thread pool.
You might like to visit the Reactor section of the ACE components to get some ideas.
HTH.
cheers,
Rob
|
How do you minimize the number of threads used in a tcp server application?
|
I am looking for any strategies people use when implementing server applications that service client TCP (or UDP) requests: design patterns, implementation techniques, best practices, etc.
Let's assume for the purposes of this question that the requests are relatively long-lived (several minutes) and that the traffic is time sensitive, so no delays are acceptable in responding to messages. Also, we are both servicing requests from clients and making our own connections to other servers.
My platform is .NET, but since the underlying technology is the same regardless of platform, I'm interested to see answers for any language.
|
[
"The modern approach is to make use of the operating system to multiplex many network sockets for you, freeing your application to only processing active connections with traffic.\nWhenever you open a socket it's associated it with a selector. You use a single thread to poll that selector. Whenever data arrives, the selector will indicate the socket which is active, you hand off that operation to a child thread and continue polling.\nThis way you only need a thread for each concurrent operation. Sockets which are open but idle will not tie up a thread.\n\nUsing the select() and poll() methods\nBuilding Highly Scalable Servers with Java NIO\n\n",
"A more sophosticated aproach would be to use IO Completion ports. (Windows)\nWith IO Completion ports you leave to the operating system to manage polling, which lets it potentially use very high level of optimization with NIC driver support.\nBasically, you have a queue of network operations which is OS managed, and provide a callback function which is called when the operation completes. A bit like (Hard-drive) DMA but for network.\nLen Holgate wrote an eccelent series on IO completion ports a few years ago on Codeproject:\nhttp://www.codeproject.com/KB/IP/jbsocketserver2.aspx\nAnd \nI found an article on IO completion ports for .net (haven't read it though)\nhttp://www.codeproject.com/KB/cs/managediocp.aspx\nI would also say that it is easy to use completion ports compared to try and write a scaleable alternative. The problem is that they are only available on NT (2000, XP, Vista)\n",
"If you were using C++ and the Win32 directly then I'd suggest that you read up about overlapped I/O and I/O Completion ports. I have a free C++, IOCP, client/server framework with complete source code, see here for more details.\nSince you're using .Net you should be looking at using the asynchronous socket methods so that you don't need have a thread for every connection; there are several links from this blog posting of mine that may be useful starting points: http://www.lenholgate.com/blog/2005/07/disappointing-net-sockets-article-in-msdn-magazine-this-month.html (some of the best links are in the comments to the original posting!)\n",
"G'day,\nI'd start by looking at the metaphor you want to use for your thread framework.\nMaybe \"leader follower\" where a thread is listening for incoming requests and when a new request comes in it does the work and the next thread in the pool starts listening for incoming requests.\nOr thread pool where the same thread is always listening for incoming requests and then passing the requests over to the next available thread in the thread pool.\nYou might like to visit the Reactor section of the Ace Components to get some ideas.\nHTH.\ncheers,\nRob\n"
] |
[
6,
4,
2,
0
] |
[] |
[] |
[
"multithreading",
"sockets",
"tcp",
"udp"
] |
stackoverflow_0000032198_multithreading_sockets_tcp_udp.txt
|
Q:
Intermittent error when attempting to control another database
I have the following code:
Dim obj As New Access.Application
obj.OpenCurrentDatabase (CurrentProject.Path & "\Working.mdb")
obj.Run "Routine"
obj.CloseCurrentDatabase
Set obj = Nothing
The problem I'm experiencing is a pop-up that tells me Access can't set the focus on the other database. As you can see from the code, I want to run a subroutine in another mdb. Any other way to achieve this will be appreciated.
I'm working with MS Access 2003.
This is an intermittent error. As this is production code that will be run only once a month, it's extremely difficult to reproduce, and I can't give you the exact text and number at this time. It is the second month this happened.
I suspect this may occur when someone is working with this or the other database.
The dataflow is to update all 'projects' once a month in one database and then make this information available in the other database.
Maybe, it's because of the first line in the 'Routines' code:
If vbNo = MsgBox("Do you want to update?", vbYesNo, "Update") Then
Exit Function
End If
I'll make another subroutine without the MsgBox.
I've been able to reproduce this behaviour. It happens when the focus has to shift to the called database, but the user sets the focus ([ALT]+[TAB]) on the first database. The 'solution' was to educate the user.
I've tried this in our development database and it works. This doesn't mean anything, as the other code also works fine in development.
A:
I guess this error message is linked to the state of one of your databases. You are using Jet connections and Access objects here, and you might not be able, for multiple reasons (multi-user environment, inability to delete the LDB lock file, etc.), to properly close your active database and open another one. So, in my opinion, the solution is to forget the Jet engine and use another connection to update the data in the "other" database.
When you say "The dataflow is to update all 'projects' once a month in one database and then make this information available in the other database", I assume that the role of your "Routine" is to update some data, either via SQL instructions or equivalent recordset updates.
Why don't you try to make the corresponding updates by opening a connection to your other database and (1) sending the corresponding SQL instructions or (2) opening recordsets and making the requested updates?
One idea would be for example:
Dim cn As ADODB.Connection
Dim qr As String
Dim rs As ADODB.Recordset

'qr can be "UPDATE Table_Blablabla SET ... WHERE ..."
'rs can be opened from "SELECT * FROM Table_Blablabla INNER JOIN Table_Blobloblo ..."

Set cn = New ADODB.Connection
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & CurrentProject.Path & "\Working.mdb"

'Send any SQL instruction here (with a Command object and its Execute method),
'or open and update any recordset linked to your other database, then:

cn.Close

This can also be done via an ODBC connection (and DAO recordsets), so you can choose your favorite objects.
A:
If you would like another means of running the function, try the following:
Dim obj As New Access.Application
obj.OpenCurrentDatabase (CurrentProject.Path & "\Working.mdb")
obj.DoCmd.RunMacro "MyMacro"
obj.CloseCurrentDatabase
Set obj = Nothing
Where 'MyMacro' has an action of 'RunCode' with the Function name you would prefer to execute in Working.mdb
A:
I've been able to reproduce the error in 'development'.
"This action cannot be completed because the other application is busy. Choose 'Switch To' to activate ...."
I really can't see the rest of the message, as it is blinking very fast. I guess this error is due to 'switching' between the two databases. I hope that, by educating the user, this will stop.
Philippe, your answer is, of course, correct. I'd have chosen that path if I hadn't developed the 'routine' beforehand.
"I've been able to reproduce this behaviour. It happens when the focus has to shift to the called database, but the user sets the focus ([ALT]+[TAB]) on the first database. The 'solution' was to educate the user." As it is impossible to prevent the user to switch application in Windows, I'd like to close the subject.
|
Intermittent error when attempting to control another database
|
I have the following code:
Dim obj As New Access.Application
obj.OpenCurrentDatabase (CurrentProject.Path & "\Working.mdb")
obj.Run "Routine"
obj.CloseCurrentDatabase
Set obj = Nothing
The problem I'm experimenting is a pop-up that tells me Access can't set the focus on the other database. As you can see from the code, I want to run a Subroutine in another mdb. Any other way to achieve this will be appreciated.
I'm working with MS Access 2003.
This is an intermittent error. As this is production code that will be run only once a month, it's extremely difficult to reproduce, and I can't give you the exact text and number at this time. It is the second month this happened.
I suspect this may occur when someone is working with this or the other database.
The dataflow is to update all 'projects' once a month in one database and then make this information available in the other database.
Maybe, it's because of the first line in the 'Routines' code:
If vbNo = MsgBox("Do you want to update?", vbYesNo, "Update") Then
Exit Function
End If
I'll make another subroutine without the MsgBox.
I've been able to reproduce this behaviour. It happens when the focus has to shift to the called database, but the user sets the focus ([ALT]+[TAB]) on the first database. The 'solution' was to educate the user.
This is an intermittent error. As this is production code that will be run only once a month, it's extremely difficult to reproduce, and I can't give you the exact text and number at this time. It is the second month this happened.
I suspect this may occur when someone is working with this or the other database.
The dataflow is to update all 'projects' once a month in one database and then make this information available in the other database.
Maybe, it's because of the first line in the 'Routines' code:
If vbNo = MsgBox("Do you want to update?", vbYesNo, "Update") Then
Exit Function
End If
I'll make another subroutine without the MsgBox.
I've tried this in our development database and it works. This doesn't mean anything as the other code also workes fine in development.
|
[
"I guess this error message is linked to the state of one of your databases. You are using here Jet connections and Access objects, and you might not be able, for multiple reasons (multi-user environment, unability to delete LDB Lock file, etc), to properly close your active database and open another one. So, according to me, the solution is to forget the Jet engine and to use another connexion to update the data in the \"other\" database.\nWhen you say \"The dataflow is to update all 'projects' once a month in one database and then make this information available in the other database\", I assume that the role of your \"Routine\" is to update some data, either via SQL instructions or equivalent recordset updates. \nWhy don't you try to make the corresponding updates by opening a connexion to your other database and (1) send the corresponding SQL instructions or (2) opening recordset and making requested updates?\nOne idea would be for example:\nDim cn as ADODB.connexion, \n qr as string, \n rs as ADODB.recordset\n\n'qr can be \"Update Table_Blablabla Set ... Where ...\n'rs can be \"SELECT * From Table_Blablabla INNER JOIN Table_Blobloblo \n\nset cn = New ADODB.connexion\ncn.open\n\nYou can here send any SQL instruction (with command object and execute method) \nor open and update any recordset linked to your other database, then\n\ncn.close\n\nThis can also be done via an ODBC connexion (and DAO.recordsets), so you can choose your favorite objects.\n",
"If you would like another means of running the function, try the following:\nDim obj As New Access.Application\nobj.OpenCurrentDatabase (CurrentProject.Path & \"\\Working.mdb\")\n\nobj.DoCmd.RunMacro \"MyMacro\"\nobj.CloseCurrentDatabase\nSet obj = Nothing\n\nWhere 'MyMacro' has an action of 'RunCode' with the Function name you would prefer to execute in Working.mdb\n",
"I've been able to reproduce the error in 'development'. \n\"This action cannot be completed because the other application is busy. Choose 'Switch To' to activate ....\"\nI really can't see the rest of the message, as it is blinking very fast. I guess this error is due to 'switching' between the two databases. I hope that, by educating the user, this will stop.\nPhilippe, your answer is, of course, correct. I'd have chosen that path if I hadn't developed the 'routine' beforehand.\n\"I've been able to reproduce this behaviour. It happens when the focus has to shift to the called database, but the user sets the focus ([ALT]+[TAB]) on the first database. The 'solution' was to educate the user.\" As it is impossible to prevent the user to switch application in Windows, I'd like to close the subject.\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"ms_access",
"vba"
] |
stackoverflow_0000070417_ms_access_vba.txt
|
Q:
How to skip sys.exitfunc when unhandled exceptions occur
As you can see, even after the program should have died it speaks from the grave. Is there a way to "deregister" the exitfunction in case of exceptions?
import atexit
def helloworld():
print("Hello World!")
atexit.register(helloworld)
raise Exception("Good bye cruel world!")
outputs
Traceback (most recent call last):
File "test.py", line 8, in <module>
raise Exception("Good bye cruel world!")
Exception: Good bye cruel world!
Hello World!
A:
I don't really know why you want to do that, but you can install an excepthook that will be called by Python whenever an uncaught exception is raised, and in it clear the list of registered functions in the atexit module.
Something like this:
import sys
import atexit
def clear_atexit_excepthook(exctype, value, traceback):
atexit._exithandlers[:] = []
sys.__excepthook__(exctype, value, traceback)
def helloworld():
print "Hello world!"
sys.excepthook = clear_atexit_excepthook
atexit.register(helloworld)
raise Exception("Good bye cruel world!")
Beware that it may behave incorrectly if the exception is raised from an atexit registered function (but then the behaviour would have been strange even if this hook was not used).
A:
In addition to calling os._exit() to avoid the registered exit handler you also need to catch the unhandled exception:
import atexit
import os
def helloworld():
print "Hello World!"
atexit.register(helloworld)
try:
raise Exception("Good bye cruel world!")
except Exception, e:
print 'caught unhandled exception', str(e)
os._exit(1)
|
How to skip sys.exitfunc when unhandled exceptions occur
|
As you can see, even after the program should have died it speaks from the grave. Is there a way to "deregister" the exitfunction in case of exceptions?
import atexit
def helloworld():
print("Hello World!")
atexit.register(helloworld)
raise Exception("Good bye cruel world!")
outputs
Traceback (most recent call last):
File "test.py", line 8, in <module>
raise Exception("Good bye cruel world!")
Exception: Good bye cruel world!
Hello World!
|
[
"I don't really know why you want to do that, but you can install an excepthook that will be called by Python whenever an uncatched exception is raised, and in it clear the array of registered function in the atexit module.\nSomething like that :\nimport sys\nimport atexit\n\ndef clear_atexit_excepthook(exctype, value, traceback):\n atexit._exithandlers[:] = []\n sys.__excepthook__(exctype, value, traceback)\n\ndef helloworld():\n print \"Hello world!\"\n\nsys.excepthook = clear_atexit_excepthook\natexit.register(helloworld)\n\nraise Exception(\"Good bye cruel world!\")\n\nBeware that it may behave incorrectly if the exception is raised from an atexit registered function (but then the behaviour would have been strange even if this hook was not used).\n",
"In addition to calling os._exit() to avoid the registered exit handler you also need to catch the unhandled exception:\nimport atexit\nimport os\n\ndef helloworld():\n print \"Hello World!\"\n\natexit.register(helloworld) \n\ntry:\n raise Exception(\"Good bye cruel world!\")\n\nexcept Exception, e:\n print 'caught unhandled exception', str(e)\n\n os._exit(1)\n\n"
] |
[
7,
0
] |
[
"If you call\nimport os\nos._exit(0)\n\nthe exit handlers will not be called, yours or those registered by other modules in the application.\n"
] |
[
-1
] |
[
"atexit",
"exception",
"python"
] |
stackoverflow_0000080993_atexit_exception_python.txt
|
Q:
What happens when the stylus "lifts" on a tablet PC?
I am working on a legacy project in VC++/Win32/MFC. Recently it became a requirement that the application work on a tablet pc, and this ushered in a host of new issues.
I have been able to work with, and around these issues, but am left with one wherein I could use some expert suggestions.
I have a particular bug that is induced by the "lift" of the stylus off of the active surface. Basically the mouse cursor disappears and then reappears when you "press" it back onto the screen.
It makes sense that this is unaccounted for in the application. you can't lift the cursor on a desktop pc. So what I am looking for is a good overview on what happens (in terms of windows messages, etc.) when the lift occurs. Does this translate to just focus changes and mouseover events? My bug seems to also involve cursor changes (may not be lift related though). Certainly the unexpected "lift" is breaking the state of the application's tool processing.
So the tangible questions are:
What happens when a stylus "lift" occurs? A press?
What API calls can be used to detect this? Does it just translate into standard messages with flags/values set?
Whats a good way to test/emulate this when your development pc is a desktop? Am I just flying blind here? (I only have periodic access to a tablet pc)
What represents correct behavior or best practice for tablet stylus awareness?
Thanks for your consideration,
ee
A:
As a tablet user I can answer a few of your questions.
First:
You cannot very easily keep a "keyboard focus" on a window when the stylus has to trail out of the focused window to push a key on the virtual keyboard.
Most of the virtual keyboards I've used (The windows tablet input panel and one under ubuntu) allow the program they are typing in to keep "keyboard focus."
What happens when a stylus "lift" occurs? A press?
Under Windows, the pressure value drops, but outside of that, there is no event. (I don't know about linux.)
What API calls can be used to detect this? Does it just translate into standard messages with flags/values set?
As mentioned above, if you can get the pressure value, you can use that.
Whats a good way to test/emulate this when your development pc is a desktop? Am I just flying blind here? (I only have periodic access to a tablet pc)
When the stylus is placed down elsewhere, the global coordinates of the pointer change, so, you can emulate the sudden pointer move with anything that allows you to change the global pointer values. (The Robot class in Java makes this fairly easy.)
What represents correct behavior or best practice for tablet stylus awareness?
I'd recommend you read what Microsoft has to say, the MSDN website has a number of excellent articles. (http://msdn.microsoft.com/en-us/library/ms704849(VS.85).aspx)
I'll point out that the size of the buttons on your applications makes a HUGE difference.
Hope this was of help.
A:
As I understand it, there is no "lift" event -- the only event happens when the stylus is brought back to the screen later. Of course, this depends on your specific driver and so on.
Worse, the bug you describe might be reproducible with just a typical mouse. Try moving the mouse as fast as you can -- it will almost certainly jump several pixels at once. Or even dozens or hundreds, if you have the mouse settings configured for the highest pointer speed. One update, the mouse might be at 100,100. The very next update, it could be at 200,300.
A:
Under Windows, the pressure value drops, but outside of that, there is no event. (I don't know about linux.)
Under Linux you'll get "proximity events".
Most likely the equivalent WT_PROXIMITY events are available on Windows (please refer to: http://www.wacomeng.com/devsupport/ibmpc/wacomwindevfaq.html )
A:
@Greg - A clarification: this is a laptop PC with an integrated tablet and stylus built in. The device has no dedicated keyboard (there is a virtual one on the touchscreen) and it is not a Wacom input device. Sorry for the confusion.
It appears that there is an SDK for Microsoft Windows XP Tablet PC Edition that may have the ability to get special details such as pressure. However, I know that there has to be some level of standard compatibility with existing non-tablet-aware applications. I guess I can try to get Spy++ installed on the tablet and try to filter down to specific messages/events.
|
What happens when the stylus "lifts" on a tablet PC?
|
I am working on a legacy project in VC++/Win32/MFC. Recently it became a requirement that the application work on a tablet pc, and this ushered in a host of new issues.
I have been able to work with, and around these issues, but am left with one wherein I could use some expert suggestions.
I have a particular bug that is induced by the "lift" of the stylus off of the active surface. Basically the mouse cursor disappears and then reappears when you "press" it back onto the screen.
It makes sense that this is unaccounted for in the application. you can't lift the cursor on a desktop pc. So what I am looking for is a good overview on what happens (in terms of windows messages, etc.) when the lift occurs. Does this translate to just focus changes and mouseover events? My bug seems to also involve cursor changes (may not be lift related though). Certainly the unexpected "lift" is breaking the state of the application's tool processing.
So the tangible questions are:
What happens when a stylus "lift" occurs? A press?
What API calls can be used to detect this? Does it just translate into standard messages with flags/values set?
Whats a good way to test/emulate this when your development pc is a desktop? Am I just flying blind here? (I only have periodic access to a tablet pc)
What represents correct behavior or best practice for tablet stylus awareness?
Thanks for your consideration,
ee
|
[
"As a tablet user I can answer a few of your questions.\nFirst:\n\nYou cannot very easily keep a \"keyboard focus\" on a window when the stylus has to trail out of the focused window to push a key on the virtual keyboard.\n\nMost of the virtual keyboards I've used (The windows tablet input panel and one under ubuntu) allow the program they are typing in to keep \"keyboard focus.\"\n\nWhat happens when a stylus \"lift\" occurs? A press?\n\nUnder Windows, the pressure value drops, but outside of that, there is no event. (I don't know about linux.)\n\nWhat API calls can be used to detect this? Does it just translate into standard messages with flags/values set?\n\nAs mentioned above, if you can get the pressure value, you can use that.\n\nWhats a good way to test/emulate this when your development pc is a desktop? Am I just flying blind here? (I only have periodic access to a tablet pc)\n\nWhen the stylus is placed down elsewhere, the global coordinates of the pointer change, so, you can emulate the sudden pointer move with anything that allows you to change the global pointer values. (The Robot class in Java makes this fairly easy.)\n\nWhat represents correct behavior or best practice for tablet stylus awareness?\n\nI'd recommend you read what Microsoft has to say, the MSDN website has a number of excellent articles. (http://msdn.microsoft.com/en-us/library/ms704849(VS.85).aspx)\nI'll point out that the size of the buttons on your applications makes a HUGE difference.\nHope this was of help.\n",
"As I understand it, there is no \"lift\" event -- the only event happens when the stylus is brought back to the screen later. Of course, this depends on your specific driver and so on.\nWorse, the bug you describe might be reproducible with just a typical mouse. Try moving the mouse as fast as you can -- it will almost certainly jump several pixels at once. Or even dozens or hundreds, if you have the mouse settings configured for the highest pointer speed. One update, the mouse might be at 100,100. The very next update, it could be at 200,300.\n",
"\nUnder Windows, the pressure value drops, but outside of that, there is no event. (I don't know about linux.)\n\nUnder linux you`ll get \"ProximityEvents\" \nMost likely these events WT_PROXIMITY are avaliable in windows (please refer to: http://www.wacomeng.com/devsupport/ibmpc/wacomwindevfaq.html )\n",
"@Greg - A clarification, this is a laptop pc with integrated tablet and stylus built in. the device has no dedicated keyboard (it is a virtual one on the touchscreen) and is not a wacom input device. Sorry for the confusion.\nIt appears that there is an SDK for the Microsoft Windows XP Tablet PC Edition that may have the ability to get special details such as pressure. However, I know that there has to be some level of standard compatibility with existing non-tablet-aware applications. I guess I can try to get Spy++ installed on the tablet and try and filter down to specific messages/events. \n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"c++",
"events",
"stylus_pen",
"tablet_pc",
"windows"
] |
stackoverflow_0000080518_c++_events_stylus_pen_tablet_pc_windows.txt
|
Q:
Is it possible to kill a Java Virtual Machine from another Virtual Machine?
I have a Java application that launches another java application. The launcher has a watchdog timer and receives periodic notifications from the second VM. However, if no notifications are received then the second virtual machine should be killed and the launcher will perform some additional clean-up activities.
The question is, is there any way to do this using only Java? So far I have had to use some native methods to perform this operation, and it is somewhat ugly.
Thanks!
A:
I may be missing something but can't you call the destroy() method on the Process object returned by Runtime.exec()?
A:
You can use java.lang.Process to do what you want. Once you have created the nested process and have a reference to the Process instance, you can get references to its standard out and err streams. You can periodically monitor those, and call .destroy() if you want to close the process. The whole thing might look something like this:
Process nestedProcess = new ProcessBuilder("java", "mysubprocess").start();
InputStream nestedStdOut = nestedProcess.getInputStream(); //kinda backwards, I know
InputStream nestedStdErr = nestedProcess.getErrorStream();
while (true) {
/*
TODO: read from the std out or std err (or get notifications some other way)
Then put the real "kill-me" logic here instead of if (false)
*/
if (false) {
nestedProcess.destroy();
//perform post-destruction cleanup here
return;
}
    Thread.sleep(1000L); //wait for a bit
}
Hope this helps,
Sean
A:
You could also publish a service (via burlap, hessian, etc) on the second JVM that calls System.exit() and consume it from the watchdog JVM. If you only want to shut the second JVM down when it stops sending those periodic notifications, it might not be in a state to respond to the service call.
Calling shell commands with java.lang.Runtime.exec() is probably your best bet.
A:
The usual way to do this is to call Process.destroy()... however, it is an incomplete solution: when using the Sun JVM on *nix, destroy() maps onto a SIGTERM, which is not guaranteed to terminate the process (for that you need SIGKILL as well). The net result is that you can't do real process management using Java.
There are some open bugs about this issue; see:
link text
A:
java.lang.Process has a waitFor() method to wait for a process to die, and a destroy() method to kill the subprocess.
A:
OK the twist of the gist is as follows:
I was using the Process API to close the second virtual machine, but it wouldn't work.
The reason is that my second application is an Eclipse RCP Application, and I launched it using the eclipse.exe launcher included.
However, that means that the Process API destroy() method will target the eclipse.exe process. Killing this process leaves the Java Process unscathed. So, one of my colleagues here wrote a small application that will kill the right application.
So one of the solutions to use the Process API (and remove redundant middle steps) is to do away with the Eclipse launcher, having my first virtual machine duplicate all its functionality.
I guess I will have to get to work.
A:
You should be able to do that with java.lang.Runtime.exec and shell commands.
A:
You can have the Java code detect the platform at runtime and fire off the platform's kill-process command. This is really a refinement of your current solution.
There's also Process.destroy(), if you're using the ProcessBuilder API
A:
Not exactly process management, but you could start an rmi server in the java virtual machine you are launching, and bind a remote instance with a method that does whatever cleanup required and calls System.exit(). The first vm could then call that remote method to shutdown the second vm.
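A minimal sketch of that idea (the service name and port are arbitrary): the watched JVM exports a Shutdown service at startup, and the launcher looks it up and calls it when the watchdog timer fires.
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

interface Shutdown extends Remote {
    void kill() throws RemoteException;
}

class ShutdownImpl implements Shutdown {
    public void kill() {
        // Do any cleanup here, then terminate this JVM.
        System.exit(0);
    }

    // Run inside the second (watched) JVM at startup.
    static void install() throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("shutdown", UnicastRemoteObject.exportObject(new ShutdownImpl(), 0));
    }
}

class Watchdog {
    // Called by the launcher when no notification arrives in time.
    static void killChild() throws Exception {
        Shutdown remote = (Shutdown) LocateRegistry.getRegistry("localhost", 1099).lookup("shutdown");
        remote.kill();  // may throw as the remote JVM dies mid-call; treat that as success
    }
}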
|
Is it possible to kill a Java Virtual Machine from another Virtual Machine?
|
I have a Java application that launches another java application. The launcher has a watchdog timer and receives periodic notifications from the second VM. However, if no notifications are received then the second virtual machine should be killed and the launcher will perform some additional clean-up activities.
The question is, is there any way to do this using only java? so far I have to use some native methods to perform this operation and it is somehow ugly.
Thanks!
|
[
"I may be missing something but can't you call the destroy() method on the Process object returned by Runtime.exec()?\n",
"You can use java.lang.Process to do what you want. Once you have created the nested process and have a reference to the Process instance, you can get references to its standard out and err streams. You can periodically monitor those, and call .destroy() if you want to close the process. The whole thing might look something like this:\nProcess nestedProcess = new ProcessBuilder(\"java mysubprocess\").start();\nInputStream nestedStdOut = nestedProcess.getInputStream(); //kinda backwards, I know\nInputStream nestedStdErr = nestedProcess.getErrorStream();\nwhile (true) {\n /*\n TODO: read from the std out or std err (or get notifications some other way)\n Then put the real \"kill-me\" logic here instead of if (false)\n */\n if (false) {\n nestedProcess.destroy();\n //perform post-destruction cleanup here\n return;\n }\n\n Thread.currentThread().sleep(1000L); //wait for a bit\n}\n\nHope this helps,\nSean\n",
"You could also publish a service (via burlap, hessian, etc) on the second JVM that calls System.exit() and consume it from the watchdog JVM. If you only want to shut the second JVM down when it stops sending those periodic notifications, it might not be in a state to respond to the service call.\nCalling shell commands with java.lang.Runtime.exec() is probably your best bet.\n",
"The usual way to do this is to call Process.destroy()... however it is an incomplete solution since when using the sun JVM on *nix destroy maps onto a SIGTERM which is not guaranteed to terminate the process (for that you need SIGKILL as well). The net result is that you can't do real process management using Java.\nThere are some open bugs about this issue see:\nlink text\n",
"java.lang.Process has a waitFor() method to wait for a process to die, and a destroy() method to kill the subprocess.\n",
"OK the twist of the gist is as follows:\nI was using the Process API to close the second virtual machine, but it wouldn't work.\nThe reason is that my second application is an Eclipse RCP Application, and I launched it using the eclipse.exe launcher included.\nHowever, that means that the Process API destroy() method will target the eclipse.exe process. Killing this process leaves the Java Process unscathed. So, one of my colleagues here wrote a small application that will kill the right application.\nSo one of the solutions to use the Process API (and remove redundant middle steps) is to get away with the Eclipse launcher, having my first virtual machine duplicate all its functionality.\nI guess I will have to get to work.\n",
"You should be able to do that java.lang.Runtime.exec and shell commands.\n",
"You can have the java code detect the platform at runtime and fire off the platform's kill process command. This is really an refinement on your current solution.\nThere's also Process.destroy(), if you're using the ProcessBuilder API\n",
"Not exactly process management, but you could start an rmi server in the java virtual machine you are launching, and bind a remote instance with a method that does whatever cleanup required and calls System.exit(). The first vm could then call that remote method to shutdown the second vm.\n"
] |
[
6,
3,
2,
2,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"java",
"process_management"
] |
stackoverflow_0000063758_java_process_management.txt
|
Q:
Creating/modifying images in JavaScript
Is it possible to dynamically create and modify images on a per pixel level in JavaScript (on client side)? Or has this to be done with server based languaged, such as PHP?
My use case is as follows:
The user opens webpage and loads locally stored image
A preview of the image is displayed
The user can modify the image with a set of sliders (pixel level operations)
In the end he can download the image to his local HDD
When searching in the web I just found posts about using IE's filtering method, but didn't find anything about image editing functions in JavaScript.
A:
Some browsers support the canvas:
http://developer.mozilla.org/En/Drawing_Graphics_with_Canvas
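A minimal sketch of per-pixel editing on browsers that support canvas ('img' is assumed to be an already-loaded, same-origin image element; the brightness tweak stands in for the question's slider operations):
var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
var ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);

// getImageData exposes the raw RGBA bytes (requires a same-origin image).
var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
for (var i = 0; i < data.data.length; i += 4) {
    data.data[i]     = Math.min(255, data.data[i] + 20);     // red
    data.data[i + 1] = Math.min(255, data.data[i + 1] + 20); // green
    data.data[i + 2] = Math.min(255, data.data[i + 2] + 20); // blue
    // data.data[i + 3] is alpha, left unchanged
}
ctx.putImageData(data, 0, 0);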
A:
This has to be done on the server side. One thing you might look at doing is allowing all the editing to happen client side, and then at the end POST the final image (via AJAX) to the server so it can return it to you with the correct MIME type, correctly packed.
A:
You may want to check out Processing.js. John Resig of jQuery fame wrote it. It supports pixel processing, unfortunately only Firefox 3 can handle it sufficiently.
A:
Also look at data URIs (though IE versions below 8 don't support them, unfortunately!)
A:
You can imagine a set of JS tools that allow the user to define what kind of transformation he wants to do, but the final work of transformation MUST be done on the server side. JS on the client side is unable to create a file, for security reasons.
A:
Try Allicorn's Image Retargetter - it sounds like that's what you're looking for.
A:
Local image manipulation in JavaScript should be possible - have a look at Defender of the Favicon. ;-) The question is how to get the original image from the file system into your page (I don't know of any other way than doing an HTTP upload to the server first).
|
Creating/modifying images in JavaScript
|
Is it possible to dynamically create and modify images on a per pixel level in JavaScript (on client side)? Or has this to be done with server based languaged, such as PHP?
My use case is as follows:
The user opens webpage and loads locally stored image
A preview of the image is displayed
The user can modify the image with a set of sliders (pixel level operations)
In the end he can download the image to his local HDD
When searching in the web I just found posts about using IE's filtering method, but didn't find anything about image editing functions in JavaScript.
|
[
"Some browsers support the canvas:\nhttp://developer.mozilla.org/En/Drawing_Graphics_with_Canvas\n",
"This has to be done on the server side. One thing you might look at doing is allowing all the editing to go on client side, and then in the end POST the final image (via AJAX) to the server to allow it to return it to you as the correct MIME type, and correctly packed. \n",
"You may want to check out Processing.js. John Resig of jQuery fame wrote it. It supports pixel processing, unfortunately only Firefox 3 can handle it sufficiently.\n",
"Also look at data URIs (though IE versions below 8 don't support them, unfortunately!)\n",
"You can imagine a set of JS tools that will allow the user to define what kind of transformation he wants to do, but the final work of transformation MUST be done on a server side. JS on the client side is unable to create a file, for security reason.\n",
"Try Allicorn's Image Retargetter - it sounds like that's what you're looking for.\n",
"Local image manipulation in JavaScript should be possible - have a look at Defender of the Favicon. ;-) The question is how to get the original image from the file system into your page (I don't know of any other way than doing a HTTP upload to the server first).\n"
] |
[
8,
2,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"image",
"javascript"
] |
stackoverflow_0000080980_image_javascript.txt
|
Q:
Adding my own application events in Control Panel -> Sounds
I have just read this question and I really loved this answer to the question. Naturally, an interesting question popped in my head...
How to add my own events (of my own applications) in the Control Panel -> Sounds and Audio Devices -> Sounds -> Program Events?
And another related question, that I suppose should be answered here as well is...
How do I play those sounds specified in the Control Panel, when the event in my application occurs?
A:
A bit of quality time with Google led me to a CodeProject article called "Creating Your Own Sound Alerts". It seems the secret sauce is all underneath the HKEY_CURRENT_USER\AppEvents registry key.
From the article:
Ok, it was very easy to create new Sound Alert Scheme. Now let us move to add our own Sound Alert Type in the sounds. For that follow these steps.
Create a new Key under HKEY_CURRENT_USER\AppEvents\Schemes\App.Default and name that XYZAlert
Create another key under the key XYZAlert (the key you have created in above step) and name that .default
Set the default value of the .default key to path of some .wav file. eg. C:\abc\abc.wav
Create another key under XYZAlert and name that to .current and also set the path to some wav file, or leave that blank.
Now Create another key under HKEY_CURRENT_USER\AppEvents\EventLabels and name that XYZAlert
Set the default value of this key to anything like "XYZ Alert Here."
That's finish. Now go to your control panel and start the sounds applet. You will see the new sound alert type with name XYZ Alert.
Note that you also have to play the sounds using the "PlaySound" native call.
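A hedged C# sketch of both halves - registering the event and then playing it via PlaySound. Two caveats: on most Windows versions the per-application scheme keys live under Schemes\Apps\.Default (the quoted steps spell it App.Default), and the XYZAlert name, label and .wav path below are just the article's placeholders:
using Microsoft.Win32;
using System;
using System.Runtime.InteropServices;

class XyzAlert
{
    [DllImport("winmm.dll", CharSet = CharSet.Auto)]
    static extern bool PlaySound(string pszSound, IntPtr hmod, uint fdwSound);

    const uint SND_ASYNC = 0x0001;     // return immediately, play in the background
    const uint SND_ALIAS = 0x00010000; // pszSound names a registered event alias

    public static void Register()
    {
        using (var label = Registry.CurrentUser.CreateSubKey(@"AppEvents\EventLabels\XYZAlert"))
            label.SetValue("", "XYZ Alert Here.");
        using (var sound = Registry.CurrentUser.CreateSubKey(@"AppEvents\Schemes\Apps\.Default\XYZAlert\.Current"))
            sound.SetValue("", @"C:\abc\abc.wav");
    }

    public static void Fire()
    {
        // Plays whatever the user picked for XYZAlert in the Sounds applet
        PlaySound("XYZAlert", IntPtr.Zero, SND_ALIAS | SND_ASYNC);
    }
}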
|
Adding my own application events in Control Panel -> Sounds
|
I have just read this question and I really loved this answer to the question. Naturally, an interesting question popped in my head...
How to add my own events (of my own applications) in the Control Panel -> Sounds and Audio Devices -> Sounds -> Program Events?
And another related question, that I suppose should be answered here as well is...
How do I play those sounds specified in the Control Panel, when the event in my application occurs?
|
[
"A bit of quality time with Google led me to a CodeProject article called \"Creating Your Own Sound Alerts\". It seems the secret sauce is all underneath the HKEY_CURRENT_USER\\AppEvents registry key.\nFrom the article:\n\nOk, it was very easy to create new Sound Alert Scheme. Now let us move to add our own Sound Alert Type in the sounds. For that follow these steps.\n\nCreate a new Key under HKEY_CURRENT_USER\\AppEvents\\Schemes\\App.Default and name that XYZAlert\nCreate another key under the key XYZAlert (the key you have created in above step) and name that .default\nSet the default value of the .default key to path of some .wav file. eg. C:\\abc\\abc.wav\nCreate another key under XYZAlert and name that to .current and also set the path to some wav file, or leave that blank.\nNow Create another key under HKEY_CURRENT_USER\\AppEvents\\EventLabels and name that XYZAlert\nSet the default value of this key to anything like \"XYZ Alert Here.\"\n\nThat's finish. Now go to your control panel and start the sounds applet. You will see the new sound alert type with name XYZ Alert.\n\nNote that you also have to play the sounds using the \"PlaySound\" native call.\n"
] |
[
6
] |
[] |
[] |
[
"audio",
"events",
"language_agnostic",
"windows"
] |
stackoverflow_0000080918_audio_events_language_agnostic_windows.txt
|
Q:
Oracle Client Upgrade from 9 to 10
Last Friday where I work, an Oracle client on our IIS server was upgraded from version 9 to version 10. Now that it's on version 10, we are seeing a lot of connections being opened to the database. It is opening up so many connections that we cannot log onto the database using tools like PL/SQL Developer or Toad. We never had an issue like this when the Oracle client was at version 9. Because of the number of clients that exist on this particular box, I don't think it will be possible to revert back to the Oracle 9 client.
Is anyone aware of this problem or know of any possible work arounds?
Any help is greatly appreciated
A:
Which connection library are you using? OO4O, ODP, Other?
I'm working from memories of old issues here, so the details are a little fuzzy. With OO4O there are two different ways to initialize the library. One tries to re-use connections more than the other.
In ODP the default is to use connection pooling. Sometimes this leads to extra connections, in case they're needed again. There are some issues with pooled connections that lead me to turn them off. (PL/SQL procedures can hang if called on a dead connection)
If you get more information I'll try to get clarification
Let us know what you find and good luck
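For the ODP case, pooling is controlled from the connection string, so switching it off is a one-attribute change. A hypothetical C# fragment (the credentials and TNS alias are placeholders):
using Oracle.DataAccess.Client;

// Pooling=false makes each OracleConnection a real, short-lived session
string cs = "User Id=scott;Password=tiger;Data Source=orcl;Pooling=false";
using (OracleConnection con = new OracleConnection(cs))
{
    con.Open();
    // ... run your commands; Close/Dispose now ends the session outright
}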
A:
Thanks very much for your response, it was very useful to us.
We sent off our issue to Oracle and got the following back
============
This is a known issue discussed in
Note:417092.1
Database Connections Are Left Open By Oracle Objects for OLE (OO4O)
Your question:
"Does 10g client interface allow the ASP code/class functions the same way as 9i client?"
The workaround for this issue is to implement a loop to remove all the parameters. For example -
for i = 1 to OraDatabase.Parameters.Count
OraDatabase.Parameters.Remove(0)
next
Bug 5918934 OO4O Leaves Sessions Behind If OraParameters Are Not Removed
was logged for this behavior, and has been deemed "not feasible to fix" due to architecture changes required to resolve memory issues.
We did have a loop implemented within our code to remove parameters but on looking at it again, it looks like it is not removing all the parameters.
We are currently investigating this.
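A defensive variant that cannot leave anything behind, whatever the indexing behaviour, is to loop on the live count instead of a fixed range - a hypothetical VBScript sketch:
' Keep removing the first parameter until the collection is empty
Do While OraDatabase.Parameters.Count > 0
    OraDatabase.Parameters.Remove(0)
Loop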
I will write back to this post once we have identified a solution
Thanks
Damien
|
Oracle Client Upgrade from 9 to 10
|
Last Friday where I work, an Oracle client on our IIS server was upgraded from version 9 to version 10. Now that it's on version 10, we are seeing a lot of connections being opened to the database. It is opening up so many connections that we cannot log onto the database using tools like PL/SQL Developer or Toad. We never had an issue like this when the Oracle client was at version 9. Because of the number of clients that exist on this particular box, I don't think it will be possible to revert back to the Oracle 9 client.
Is anyone aware of this problem or know of any possible work arounds?
Any help is greatly appreciated
|
[
"Which connection library are you using? OO4O, ODP, Other? \nI'm working from memories of old issues here, so the details are a little fuzzy. With OO4O there are two different ways to initialize the library. One tries to re-use connections more than the other. \nIn ODP the default is to use connection pooling. Sometimes this leads to extra connections, in case they're needed again. There are some issues with pooled connections that lead me to turn them off. (PL/SQL procedures can hang if called on a dead connection)\nIf you get more information I'll try to get clarification\nLet us know what you find and good luck\n",
"Thanks very much for your response, it was very useful to us.\nWe sent off our issue to Oracle and got the following back\n============\nThis is a known issue discussed in \nNote:417092.1\nDatabase Connections Are Left Open By Oracle Objects for OLE (OO4O)\nYour question:\n\"Does 10g client interface allow the ASP code/class functions the same way as 9i client?\" \nThe workaround for this issue is to implement a loop to remove all the parameters. For example -\nfor i = 1 to OraDatabase.Parameters.Count \nOraDatabase.Parameters.Remove(0) \nnext \nBug 5918934 OO4O Leaves Sessions Behind If OraParameters Are Not Removed\nwas logged for this behavior, and has been deemed \"not feasible to fix\" due to architecture changes required to resolve memory issues.\nWe did have a loop implemented within our code to remove parameters but on looking at it again, it looks like it is not removing all the parameters.\nWe are currently investigating this.\nI will write back to this post once we have identified a solution\nThnaks\nDamien\n"
] |
[
1,
1
] |
[] |
[] |
[
"oracle"
] |
stackoverflow_0000070721_oracle.txt
|
Q:
Windows IDE / editor for a beginner
I'm teaching (or trying to teach) computer programming to a grad-student. Her previous experience amounts to little more than writing spreadsheet formulae. Which IDE or text editor should I recommend?
Please bear in mind that:
I only meet my student about once a week.
She uses Windows and I use Linux.
She doesn't have a community of users on hand.
She doesn't have much money to spend.
Edit: The languages she's learning at the moment are Perl and R. (Sorry ... for forgetting to mention them earlier.)
Edit: Thanks for all your answers!
The most highly recommended editors are jEdit and Notepad++.
If I can find a way to give my student adequate support for Notepad++ (e.g. by running it under Wine) or if I think that she can manage without support from me, then I'll recommend that. If not, I'll go for jEdit.
Apologies, once again, to those who saw the question before I got around to listing the languages that I'm teaching.
A:
The Visual Studio Express products are all free. Unless the fact that you're using Linux changes things :)
A:
Start off simple. Do not scare her with an IDE! They are overwhelming at first and are not core to developing software. I learnt rudimentary Java with Crimson Editor.
If I started again I'd probably go for Notepad++.
A:
Eclipse might be a good option (if a little overwhelming at first).
You obviously need to look at a cross-platform IDE. Eclipse is one of the best in this regard, as well as having support for many languages. It also comes with a good set of tutorials.
A:
Since you didn't mention what programming language (guess it doesn't matter) you were teaching, I'll stick to something that supports multiple programming languages and multiple platforms. Given your situation, I would use jEdit (http://www.jedit.org).
jEdit is a programmer's text editor with hundreds of plugins, auto indent, and syntax highlighting for more than 130 languages, and since it's written in Java, it runs beautifully on Linux, Windows or Mac. Hope this helps.
A:
I have used Notepad++ a lot for various editing tasks, and I find it quite useful and competent.
A:
The best, most documented, IDE that is free in my opinion is Visual Studio Express. There are tons of blogs, howtos, videos, training, etc. You can find more information about them here:
http://www.microsoft.com/Express/
Also, if you are a student, Microsoft provides an entire stack of software free to students just for this purpose. This is through a program called DreamSpark. Included is an operating system, the professional version of the IDE, SQL Server, XNA Game studio and Expression. Any student can get this. More information is here:
https://downloads.channel8.msdn.com/
Hope that helps.
A:
Depends on the programming language. For C/C++ and anything .NET, Visual Studio is the way to go. The Express edition is free.
A:
Eclipse, or jEdit if Eclipse is too complicated. jEdit is cross-platform, free and supports a number of different languages.
A:
Crimson Editor is also very nice; it's similar to Edit Plus. Syntax highlighting, tabs, etc.
A:
Notepad++ for editing is awesome to me: it's Windows-only, but maybe you can use it with Wine under Linux. But if you want something more like an IDE, then Eclipse or NetBeans (both Java-based) can be very useful, although they are very resource-hungry on old PCs.
A:
My suggestion is TextPad. You can teach her JavaScript; all the basic, and some advanced, concepts are there. It's fun for the student to see the output in a browser, and you can even teach a little HTML if the mood strikes.
A:
Komodo Edit from ActiveState is free, open source, and available for Windows and Linux. Very nice features.
Otherwise, Emacs as it is available on both platforms and can be configured for CUA controls.
The Cream version of VIM is also a good option.
A:
It really depends on the language you are teaching her.
EditPlus is a good simple editor. Free trial version and pretty cheap license.
A:
Dev-C++ as a non-MS alternative.
Quote: "Bloodshed Dev-C++ is a full-featured Integrated Development Environment (IDE) for the C/C++ programming language. It uses Mingw port of GCC (GNU Compiler Collection) as it's compiler. Dev-C++ can also be used in combination with Cygwin or any other GCC based compiler."
A:
Code::Blocks is also another good one, free and cross-platform, unless you need something for VB / C# or other .NET languages, as it is mostly C/C++. For the .NET languages on Linux I would recommend MonoDevelop.
A:
Aptana is very handy for web-oriented programming.
http://www.aptana.com
A:
That depends at least in part on the programming language you intend to teach her. That said, you might want to take a look at Eclipse. Though it started primarily as a Java IDE, it's been extended via plugins to support many others (including C/C++, Flex, Haskell, and ColdFusion, to name a few), and can fairly easily be adapted to a new language if support isn't already out there.
Add to that the fact that the IDE is cross-platform so you can both use the same tool on your platforms of choice, and it looks like this might be a good fit.
A:
I'd recommend SciTE, as it's both available for *nix and Windows and free (as in beer). It supports pretty much anything you'd expect from a decent editor and, if she goes on to use it, quite customizable. It also isn't too complex, so it should be easy for her to get going with it.
A:
+1 to the Notepad++ suggestion - Anything I do that's not .Net-related I do in that.
A:
For Java, BlueJ is an excellent teaching IDE. It doesn't confuse the new student with a lot of advanced functionality (stuff they won't use for years to come). Eclipse is a great IDE, but there is a LOT of stuff there they could drown in. The same is true for Visual Studio, but I don't know of a simpler IDE for .NET languages.
You may also consider Ruby with Scite as a teaching option. The IDE isn't that fancy, but along with the ease-of-startup of learning Ruby this could work very well. Ruby certainly has some advantages over Java/C#/C++ for the beginning student (mostly in that you don't have to create a full class with a main method just to get a program running).
A:
For the easy to teach Component Pascal language (a successor to Niklaus Wirth's Pascal and Oberon) try the free, open source BlackBox IDE and the book Computing Fundamentals by Stan Warford.
Regards,
tamberg
A:
If you are writing software targeted at a Windows platform then Visual Studio is more or less the standard IDE. Since you are teaching a graduate student, I would recommend getting the academic license for the professional edition if they are going to be writing a lot of software; otherwise the express editions should be enough for learning purposes.
In terms of text editors, the one that I currently use the most is Notepad++, which is free, open source, and supports a wide variety of features that are useful for software development. There are also a number of useful plug-ins available for it.
A:
I can't believe nobody has mentioned vi. I'll argue that the less your tool does for you in the beginning the better coder you'll be in the end. For a newbie, give them syntax highlighting and some helpers for dealing with blocks and lines. Something like vi is great, emacs is also fine, or if you absolutely must be on Windows, something like notepad++ or jedit will be decent. The main point is to learn to program before you learn to let your IDE insert code that you don't understand for you.
A:
MultiEdit
Extremely powerful (and extensible, on the level of Emacs) text editor with many IDE features (integration with compilers/debuggers etc). Beats all the other suggested editors on every aspect.
Much easier to learn and use than editors with UNIX/terminal roots like vi or Emacs.
Not free (not too expensive though), and requires some learning to use effectively.
A:
Another full blown IDE is SharpDevelop. It's OpenSource.
http://www.icsharpcode.net/OpenSource/SD/
A:
Zeus - http://www.zeusedit.com
A:
I have to mention PSPad.
It is a very good, feature-rich free editor. I have used UltraEdit and finally found a free alternative in PSPad
|
Windows IDE / editor for a beginner
|
I'm teaching (or trying to teach) computer programming to a grad-student. Her previous experience amounts to little more than writing spreadsheet formulae. Which IDE or text editor should I recommend?
Please bear in mind that:
I only meet my student about once a week.
She uses Windows and I use Linux.
She doesn't have a community of users on hand.
She doesn't have much money to spend.
Edit: The languages she's learning at the moment are Perl and R. (Sorry ... for forgetting to mention them earlier.)
Edit: Thanks for all your answers!
The most highly recommended editors are jEdit and Notepad++.
If I can find a way to give my student adequate support for Notepad++ (e.g. by running it under Wine) or if I think that she can manage without support from me, then I'll recommend that. If not, I'll go for jEdit.
Apologies, once again, to those who saw the question before I got around to listing the languages that I'm teaching.
|
[
"The Visual Studio Express products are all free. Unless the fact that you're using Linux changes things :)\n",
"Start off simple. Do not not scare her with an IDE! They are overwhelming at first and are not core to developing software. I learnt rudimentary Java with Crimson Editor. \nIf I started again I'd probably go for Notepad++.\n",
"Eclipse might be a good option (if a little overwhelming at first).\nYou obviously need to look at a cross-platform IDE. Eclipse is one of the best in this regard, as well as having support for many languages. It also comes with a good set of tutorials.\n",
"Since you didn't mention what programming language (guess it doesn't matter) you were teaching, I'll stick to something that supports multiple programming languages and multiple platforms. Given your situation, I would use jEdit (http://www.jedit.org).\njEdit is a programmer's text editor with hundreds of plugins, auto indent, and syntax highlighting for more than 130 languages and since it's written in Java, it runs beautifully on Linux, Windows or MAC. Hope this helps.\n",
"I have used Notepad++]1 a lot for various editing tasks, and I find it quite useful and competent.\n",
"The best, most documented, IDE that is free in my opinion is Visual Studio Express. There are tons of blogs, howtos, videos, training, etc. You can find more information about them here:\nhttp://www.microsoft.com/Express/\nAlso, if you are a student, Microsoft provides an entire stack of software free to students just for this purpose. This is through a program called DreamSpark. Included is an operating system, the professional version of the IDE, SQL Server, XNA Game studio and Expression. Any student can get this. More information is here:\nhttps://downloads.channel8.msdn.com/\nHope that helps.\n",
"Depends on the programming language. FoR C/C++ and anything .net Visual Studio is the way to go. The Express edition is free.\n",
"Eclipse or Jedit, if Eclipse is too complicated. jEdit is cross platform, free and supports a number of different languages.\n",
"Crimson Editor is also very nice; it's similar to Edit Plus. Syntax highlighting, tabs, etc.\n",
"Notepad++ for editing is awesome to me: it's Windows only, but maybe you can use it with Wine under Linux. But if you want someting more like an IDE, then Eclipse, or NetBean (both use java) can be very useful, although they are very resource expensive on old PC.\n",
"My suggestion is Textpad. You can teach her javascript, all the basic, and some advanced concepts are there. It's fun for the student see the output in a browser, and you can even teach a little HTML if the mood strikes.\n",
"Komodo Edit from active vision is free, open source, and available for Windows and Linux. Very nice features. \nOtherwise, Emacs as it is available on both platforms and can be configured for CUA controls.\nThe Cream version of VIM is also a good option.\n",
"It really depends on the language you are teaching her.\nEditPlus is a good simple editor. Free trial version and pretty cheap license.\n",
"Dev-C++ as a non-MS alternative.\nQuote: \"Bloodshed Dev-C++ is a full-featured Integrated Development Environment (IDE) for the C/C++ programming language. It uses Mingw port of GCC (GNU Compiler Collection) as it's compiler. Dev-C++ can also be used in combination with Cygwin or any other GCC based compiler.\"\n",
"Code::Blocks is also another good one, free and cross platform. Unless you need something for using VB / C# or other .NET languages as it is mostly C/C++. For the .NET languages on linux I would recommed MonoDevelop\n",
"Aptana is very handy for web-oriented programming.\nhttp://www.aptana.com\n",
"That depends at least in part on the programming language you intend to teach her. That said, you might want to take a look at Eclipse. Though it started primarily as a Java IDE, it's been extended via plugins to support many others (including C/C++, Flex, Haskell, and ColdFusion, to name a few), and can fairly easily be adapted to a new language if support isn't already out there.\nAdd to that the fact that the IDE is cross-platform so you can both use the same tool on your platforms of choice, and it looks like this might be a good fit.\n",
"I'd recommend SciTE, as it's both available for *nix and Windows and free (as in beer). It supports pretty much anything you'd expect from a decent editor and, if she goes on to use it, quite customizable. It also isn't too complex, so it should be easy for her to get going with it.\n",
"+1 to the Notepad++ suggestion - Anything I do that's not .Net-related I do in that.\n",
"For Java, BlueJ is an excellent teaching IDE. It doesn't confuse the new student with a lot of advanced functionality (stuff they won't use for years to come). Eclipse is a great IDE, but there is a LOT of stuff there they could drown in. The same is true for Visual Studio, but I don't know of a simpler IDE for .NET languages.\nYou may also consider Ruby with Scite as a teaching option. The IDE isn't that fancy, but along with the ease-of-startup of learning Ruby this could work very well. Ruby certainly has some advantages over Java/C#/C++ for the beginning student (mostly in that you don't have to create a full class with a main method just to get a program running).\n",
"For the easy to teach Component Pascal language (a successor to Niklaus Wirth's Pascal and Oberon) try the free, open source BlackBox IDE and the book Computing Fundamentals by Stan Warford.\nRegards,\ntamberg\n",
"If you are writing software targeted at a Windows platform then Visual Studio is more or less the standard IDE. Since you are teaching a graduate student I would recommend getting the academic license for the professional edition if they are going to be writing a lot of software, otherwise the express editions should be enough for leaning purposes.\nIn terms of text editors, the one that I currently use the most is Notepad++ which is free, open source, and support a wide variety of features that are useful to software development. There are also also a number of useful plug-ins available for it as well.\n",
"I can't believe nobody has mentioned vi. I'll argue that the less your tool does for you in the beginning the better coder you'll be in the end. For a newbie, give them syntax highlighting and some helpers for dealing with blocks and lines. Something like vi is great, emacs is also fine, or if you absolutely must be on Windows, something like notepad++ or jedit will be decent. The main point is to learn to program before you learn to let your IDE insert code that you don't understand for you.\n",
"MultiEdit\nExtremely powerfull (and extensible on emacs level) text editor with many IDE features (integration with compilers/debuggers etc). Beats all other suggested editors on every aspect. \nMuch easier to learn and use than editors with UNIX/terminal roots like vi or Emacs. \nNot free (not too expensive though), and requires some learning to use effectively.\n",
"Another full blown IDE is SharpDevelop. It's OpenSource.\nhttp://www.icsharpcode.net/OpenSource/SD/\n",
"Zeus - http://www.zeusedit.com\n",
"I have to mention PSPad. \nIt is very good, feature rich free editor. I have used UtraEdit and finally found free alternative in PSPad\n"
] |
[
8,
4,
2,
2,
2,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"editor",
"ide",
"windows"
] |
stackoverflow_0000071786_editor_ide_windows.txt
|
Q:
SQL Server 2000 - Debugging Deadlocks
I'm looking for suggestions on how to debug and chase down deadlock issues in an SQL Server 2000 database. I've had it recommended to me to use trace flags 1204 and 3605, which I have found give me the following:
1204 - this trace flag returns the type of locks participating in the deadlock and the current command affected.
3605 - this trace flag sends trace output to the error log.
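For reference, a typical way to turn these flags on without a server restart (the -1 applies them to all connections; adding -T1204 and -T3605 as startup parameters makes them survive restarts):
DBCC TRACEON (1204, 3605, -1)
-- verify which flags are active
DBCC TRACESTATUS (-1)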
The specific stored procedures, tables and indexes still need to be uncovered, so the goal is to use these trace flags to do so. And then from there I'll know better what indexes need tuning, locking hints for tuning queries, and potential sproc bugs to fix.
Any other suggestions or happy-ending war stories on chasing down this all too common problem?
A:
Here is the bible on deadlock troubleshooting: http://blogs.msdn.com/bartd/archive/2006/09/09/Deadlock-Troubleshooting_2C00_-Part-1.aspx
A:
Very useful script for analyzing deadlocks: http://www.sommarskog.se/sqlutil/aba_lockinfo.html
A:
Once you know the sql statements involved, a code analysis will surely help. Some thumb rules are checking for the order in which the rows are accessed and checking for the isolation level used for the SQL statements. A profiler trace can help a lot.
Most of the time, it is because of a reader trying to get a shared lock on a resource locked exclusively by a writer.
|
SQL Server 2000 - Debugging Deadlocks
|
I'm looking for suggestions on how to debug and chase down deadlock issues in an SQL Server 2000 database. I've had it recommended to me to use trace flags 1204 and 3605, which I have found give me the following:
1204 - this trace flag returns the type of locks participating in the deadlock and the current command affected.
3605 - this trace flag sends trace output to the error log.
The specific stored procedures, tables and indexes still need to be uncovered, so the goal is to use these trace flags to do so. And then from there I'll know better what indexes need tuning, locking hints for tuning queries, and potential sproc bugs to fix.
Any other suggestions or happy-ending war stories on chasing down this all too common problem?
|
[
"Here is the bible on deadlock troubleshooting: http://blogs.msdn.com/bartd/archive/2006/09/09/Deadlock-Troubleshooting_2C00_-Part-1.aspx\n",
"Very useful script for analyzing deadlocks: http://www.sommarskog.se/sqlutil/aba_lockinfo.html\n",
"Once you know the sql statements involved, a code analysis will surely help. Some thumb rules are checking for the order in which the rows are accessed and checking for the isolation level used for the SQL statements. A profiler trace can help a lot.\nMost of the time, it is because of a reader trying to get a shared lock on a resource locked exclusively by a writer.\n"
] |
[
3,
3,
2
] |
[] |
[] |
[
"deadlock",
"debugging",
"sql_server",
"sql_server_2000",
"tsql"
] |
stackoverflow_0000052566_deadlock_debugging_sql_server_sql_server_2000_tsql.txt
|
Q:
Lighttpd and WebDAV for serving a Subversion repo
I've configured (at least I've tried to configure) Lighty to enable the WebDAV plugin when I go to a certain URL. I don't get any errors, so it seems to be working. How, then, do I configure it to serve my subversion repositories (of which I have many)?
A:
I don't think that's possible right now, since mod_dav_svn is an apache module and AFAIK there is no lighttpd module available.
|
Lighttpd and WebDAV for serving a Subversion repo
|
I've configured (at least I've tried to configure) Lighty to enable the WebDAV plugin when I go to a certain URL. I don't get any errors, so it seems to be working. How, then, do I configure it to serve my subversion repositories (of which I have many)?
|
[
"I don't think that's possible right now, since mod_dav_svn is an apache module and AFAIK there is no lighttpd module available.\n"
] |
[
3
] |
[] |
[] |
[
"lighttpd",
"svn",
"webdav"
] |
stackoverflow_0000081212_lighttpd_svn_webdav.txt
|
Q:
How to convert Typed DataSet Scheme when one of the types was changed?
I have a typed (not connected) dataset, and many records (binary serialized) created with this dataset.
I've added a property to one of the types, and I want to convert the old records with the new data set.
I know how to load them: by providing a custom binder for the BinaryFormatter with the old schema dll.
The question is how I can convert objects of the old type to objects of the new type - both types have the same name, but the new one has one more property.
A:
If the only difference between the existing dataset and the new one is an added field then you can "upgrade" them by writing out the old ones to XML and then reading that into the new ones. The value of the added field will be DBNull.
MyDataSet myDS = new MyDataSet();
MyDataSet.MyTableRow row1 = myDS.MyTable.NewMyTableRow();
row1.Name = "Brownie";
myDS.MyTable.Rows.Add(row1);
MyNewDataSet myNewDS = new MyNewDataSet();
using(MemoryStream ms = new MemoryStream()){
    myDS.WriteXml(ms);   // dump the old dataset as XML
    ms.Position = 0;     // rewind the stream before reading
    myNewDS.ReadXml(ms); // matching columns load; the added field stays DBNull
}
A:
Can you make the new class inherit from the old one? If so, maybe you can simply deserialize into the new one through casting.
If not, another possible solution is to implement a batch operation where you include a reference to the old class and new class in different namespaces, hydrate the old object, perform a deep copy into an object of the new class, and serialize the new object.
|
How to convert Typed DataSet Scheme when one of the types was changed?
|
I have a typed (not connected) dataset, and many records (binary serialized) created with this dataset.
I've added a property to one of the types, and I want to convert the old records with the new data set.
I know how to load them: by providing a custom binder for the BinaryFormatter with the old schema dll.
The question is how I can convert objects of the old type to objects of the new type - both types have the same name, but the new one has one more property.
|
[
"If the only difference between the existing dataset and the new one is an added field then you can \"upgrade\" them by writing out the old ones to XML and then reading that into the new ones. The value of the added field will be DBNull.\nMyDataSet myDS = new MyDataSet();\nMyDataSet.MyTableRow row1 = myDS.MyTable.NewMyTableRow();\nrow1.Name = \"Brownie\";\nmyDS.MyTable.Rows.Add(row1);\n\nMyNewDataSet myNewDS = new MyNewDataSet();\n\nusing(MemoryStream ms = new MemoryStream()){\n myDS.WriteXml(ms);\n ms.Position = 0;\n myNewDS.ReadXml(ms);\n}\n\n",
"Can you make the new class inherit from the old one? If so, maybe you can simply deserialize into the new one through casting.\nIf not, another possible solution is to implement a batch operation where you include a reference to the old class and new class in different namespaces, hydrate the old object, perform a deep copy into an object of the new class, and serialize the new object.\n"
] |
[
2,
0
] |
[] |
[] |
[
"ado.net",
"c#",
"dataset"
] |
stackoverflow_0000080766_ado.net_c#_dataset.txt
|
Q:
DataGridView column of type DataGridViewCheckBoxCell is constantly readonly/disabled
I am using a .NET Windows Forms DataGridView and I need to edit a DataBound column (that binds on a boolean DataTable column). For this I specify the cell template like this:
DataGridViewColumn column = new DataGridViewColumn(new DataGridViewCheckBoxCell());
You see that I need a CheckBox cell template.
The problem I face is that this column is constantly readonly/disabled, as if it were of TextBox type. It doesn't show a checkbox at all.
Any thoughts on how to work with editable checkbox columns for DataGridView?
Update: For windows forms, please.
Thanks.
A:
Well, after more than 4 hours of debugging, I have found that the DataGridView row height was too small for the checkbox to be painted, so it was not displayed at all. I have found this after an accidental row height resizing.
As a solution, you can set the AutoSizeRowsMode to AllCells.
richDataGrid.AutoSizeRowsMode = System.Windows.Forms.DataGridViewAutoSizeRowsMode.AllCells;
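As an aside, the usual way to get a bound, editable checkbox column is the dedicated column type rather than a raw DataGridViewColumn with a checkbox cell template. A short sketch, where the DataPropertyName stands in for your boolean DataTable column:
DataGridViewCheckBoxColumn column = new DataGridViewCheckBoxColumn();
column.DataPropertyName = "IsActive"; // hypothetical boolean column name
column.HeaderText = "Active";
richDataGrid.Columns.Add(column);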
A:
Instead of trying to create the column in code, click on the tiny arrow in a box at the top right of the DataGridView control, and select "Edit Columns..." from the menu that appears. In the dialog box, click the Add button, then choose the "Databound column" option and pick the boolean column you're binding to.
A:
Create a TemplateField and bound the id to it, something like this:
<asp:TemplateField HeaderText="Whatever" SortExpression="fieldname" ItemStyle-HorizontalAlign="Center">
    <ItemTemplate>
        <asp:CheckBox runat="server" ID="rowCheck" key='<%# Eval("id") %>' />
    </ItemTemplate>
</asp:TemplateField>
|
DataGridView column of type DataGridViewCheckBoxCell is constantly readonly/disabled
|
I am using a .NET Windows Forms DataGridView and I need to edit a DataBound column (that binds on a boolean DataTable column). For this I specify the cell template like this:
DataGridViewColumn column = new DataGridViewColumn(new DataGridViewCheckBoxCell());
You see that I need a CheckBox cell template.
The problem I face is that this column is constantly readonly/disabled, as if it were of TextBox type. It doesn't show a checkbox at all.
Any thoughts on how to work with editable checkbox columns for DataGridView?
Update: For windows forms, please.
Thanks.
|
[
"Well, after more than 4 hours of debugging, I have found that the DataGridView row height was too small for the checkbox to be painted, so it was not displayed at all. I have found this after an accidental row height resizing.\nAs a solution, you can set the AutoSizeRowsMode to AllCells.\nrichDataGrid.AutoSizeRowsMode = System.Windows.Forms.DataGridViewAutoSizeRowsMode.AllCells;\n",
"Instead of trying to create the column in code, click on the tiny arrow in a box at the top right of the DataGridView control, and select \"Edit Columns...\" from the menu that appears. In the dialog box, click the Add button, then choose the \"Databound column\" option and pick the boolean column you're binding to.\n",
"Create a TemplateField and bound the id to it, something like this:\n<asp:TemplateField HeaderText=\"Whatever\" SortExpression=\"fieldname\" ItemStyle-HorizontalAlign=\"Center\">\n <ItemTemplate>\n <asp:CheckBox runat=\"server\" ID=\"rowCheck\" key='<%# Eval(\"id\") %>' />\n </ItemTemplate>\n</asp:TemplateField>\n\n"
] |
[
6,
1,
0
] |
[] |
[] |
[
"datagridview",
"datagridviewcheckboxcell",
"winforms"
] |
stackoverflow_0000071226_datagridview_datagridviewcheckboxcell_winforms.txt
|
Q:
Dynamic contact information data/design pattern: Is this in any way feasible?
I'm currently working on a web business application that has many entities (people, organizations) with lots of contact information, i.e. multiple postal addresses, email addresses, phone numbers etc.
At the moment the database schema is such that the persons table has postal address and phone number columns, as does the organizations table. This is not a good way to handle this.
I've read the c2 Wiki on this and there's some good discussion regarding contact and address models (http://c2.com/cgi-bin/wiki?ContactAndAddressModels) and whether or not physical postal addresses are archaic (http://c2.com/cgi-bin/wiki?ArePhysicalPostalAddressesArchaic). These two discussions really opened my eyes to the scope of this problem.
I'm thinking about separating the contact information fields into separate table(s). But what's the best way to do this? At the moment the application mainly handles Finnish addresses, but it's on the horizon that it will also need to handle international addresses.
I could define an "addresses" -table, a "phone numbers" -table, an "email addresses" -table and so on and these would be linked to people and organizations. But this just feels too much like the previous solution: it's inevitable that the predefined database schema isn't sufficient.
What I'm proposing is to create a contact information schema/program logic that is dynamic:
There are no predefined contact information fields/field sets
Users can define new contact information types and required fields at any time like
Finnish postal address
Swedish postal address
... postal address
Phone number
Email address
ICQ-number
Is this feasible? Has anyone done anything like this?
There could be a table that defines contact information types:
contact information types
Id: Identifier
Name: "Finnish postal address"
Description: "Use this contact information type for finnish postal addresses"
Then there could be a table that defines what fields are used per contact information type:
contact information type fields
Id: Identifier
Contact_information_type_id: References the previous table
Field title: "Address line 1"
Field description: "Use this line for postal addresses' first line"
Field type: String/Integer/etc.
Field format: Regular expression for validating field data
Field order: In which order should this field appear when displaying/using this contact information type
Then we'd have a "contact information table" that just is used to map contact information fields together:
contact information
Id: Identifier
Contact_information_type_id: References the contact information type table
Then we'd have a "contact information of person" -table mapping different contact information to persons:
contact information of person
Id: Identifier
Contact_information_id: References the contact information table
Person id: References the person
Then we'd need tables per contact information field type like:
contact information integer fields
Id: Identifier
Contact_information_id: References the contact information table
Value: The value of this field
and so on for strings etc...
Finally, when displaying different contact information of a given person, this would happen through the person's contact information -table, which looks up what fields are used to form this contact information from the contact information type fields -table through the contact information -table. After determining what fields are used, all the necessary tables would be joined together.
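To make that concrete, here is a minimal, dialect-neutral DDL sketch of the tables described above (string fields only, the integer variant is analogous; the person table is assumed to exist):
CREATE TABLE contact_information_type (
    id INTEGER PRIMARY KEY,
    name VARCHAR(100),
    description VARCHAR(500)
);

CREATE TABLE contact_information_type_field (
    id INTEGER PRIMARY KEY,
    contact_information_type_id INTEGER REFERENCES contact_information_type (id),
    title VARCHAR(100),
    description VARCHAR(500),
    field_type VARCHAR(20),    -- 'string', 'integer', ...
    field_format VARCHAR(200), -- validation regexp
    field_order INTEGER
);

CREATE TABLE contact_information (
    id INTEGER PRIMARY KEY,
    contact_information_type_id INTEGER REFERENCES contact_information_type (id)
);

CREATE TABLE contact_information_of_person (
    id INTEGER PRIMARY KEY,
    contact_information_id INTEGER REFERENCES contact_information (id),
    person_id INTEGER REFERENCES person (id)
);

CREATE TABLE contact_information_string_field (
    id INTEGER PRIMARY KEY,
    contact_information_id INTEGER REFERENCES contact_information (id),
    type_field_id INTEGER REFERENCES contact_information_type_field (id),
    value VARCHAR(500)
);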
I'm having doubts about the feasibility of this in SQL. Any thoughts?
In Java I could probably program some logic to determine what tables are needed to form a contact information entity, and then I could use some sort of dynamic beans to represent this data in Java. But that's a bit foggy to me too. Any thoughts on this?
A:
It is starting to sound like you have a perfectly good hammer (i.e. your SQL database) and you are trying to make another hammer with it (a meta-language to define SQL schemas).
Before you go down this path, there are many products on the market that aim to store customer details in an SQL database. It might be best to just purchase one off the shelf and integrate with it. Then all the concerns you have are addressed by someone else and you can focus on your specific business case.
Edit: One example of a package that allows you to add custom contact fields is SugarCRM - it is a commercial product where you buy access to the source on purchase. I'm sure there are many more but this is the only one that comes to mind at present.
A:
Your design is feasible, and I'm as big a fan of normalization as the next guy, but you really have to find a balance somewhere. So to begin, I think you're right that having fields like address1, address2, address3, etc... is bad practice. And if you are planning on handling many different types of mailing addresses from different countries, it might make sense to abstract out various address types.
Think about the data you're going to want to get out of the system - for example, will someone be asking for all the customers in a certain state or province? In that case your design will be pretty painful.
Another thing to keep in mind is that database schema changes, though they can sometimes be painful, are not the worst thing in the world. Follow that path to its logical extreme and you'll end up with one gigantic table with fields like "key" and "value" and thousands of self-joins in every query.
Good luck finding the right balance!
A:
This is not a very informative post; have you had a look at how the vCard people handle the same issues? Also, be careful of overengineering, you might end up with N3.
A:
First: Speaking pragmatically, it depends what you want to do with the data. In my experience, 99% of all address data is only ever used as a string to be printed on a letter. If that is the case for you, then you should stop worrying and just store it as a string. Of course, if you're doing deeper work with it then it's not going to be so easy.
Apart from that...
I like the way you're thinking. I have done similar things (although not with addresses) to handle dynamic schemas. The problem I run into is (as you've identified) that the SQL to extract the stuff gets complex. Another problem is that this flexibility can lead to spaghetti data, in exactly the same way you can get spaghetti code. I.e. the meaning of what's in your tables can become obscured because you can only understand it by looking at the code which accesses it.
So, what you have to decide is where you are prepared to accept complexity, and what kind of complexity you can best handle. If you don't mind complex SQL, then go ahead and build your dynamic schema. If you do mind complex SQL, then either build the static tables (with one table per address type), or accept that you won't have such an elegant data structure.
So, short answer: you have to call it.
|
Dynamic contact information data/design pattern: Is this in any way feasible?
|
I'm currently working on a web business application that has many entities (people, organizations) with lots of contact information, i.e. multiple postal addresses, email addresses, phone numbers etc.
At the moment the database schema is such that the persons table has postal address and phone number columns, as does the organizations table. This is not a good way to handle this.
I've read the c2 Wiki on this and there's some good discussion regarding contact and address models (http://c2.com/cgi-bin/wiki?ContactAndAddressModels) and whether or not physical postal addresses are archaic (http://c2.com/cgi-bin/wiki?ArePhysicalPostalAddressesArchaic). These two discussions really opened my eyes to the scope of this problem.
I'm thinking about separating the contact information fields into separate table(s). But what's the best way to do this? At the moment the application mainly handles Finnish addresses, but it's on the horizon that it will also need to handle international addresses.
I could define an "addresses" -table, a "phone numbers" -table, an "email addresses" -table and so on and these would be linked to people and organizations. But this just feels too much like the previous solution: it's inevitable that the predefined database schema isn't sufficient.
What I'm proposing is to create a contact information schema/program logic that is dynamic:
There are no predefined contact information fields/field sets
Users can define new contact information types and required fields at any time like
Finnish postal address
Swedish postal address
... postal address
Phone number
Email address
ICQ-number
Is this feasible? Has anyone done anything like this?
There could be a table that defines contact information types:
contact information types
Id: Identifier
Name: "Finnish postal address"
Description: "Use this contact information type for finnish postal addresses"
Then there could be a table that defines what fields are used per contact information type:
contact information type fields
Id: Identifier
Contact_information_type_id: References the previous table
Field title: "Address line 1"
Field description: "Use this line for postal addresses' first line"
Field type: String/Integer/etc.
Field format: Regular expression for validating field data
Field order: In which order should this field appear when displaying/using this contact information type
Then we'd have a "contact information table" that just is used to map contact information fields together:
contact information
Id: Identifier
Contact_information_type_id: References the contact information type table
Then we'd have a "contact information of person" -table mapping different contact information to persons:
contact information of person
Id: Identifier
Contact_information_id: References the contact information table
Person id: References the person
Then we'd need tables per contact information field type like:
contact information integer fields
Id: Identifier
Contact_information_id: References the contact information table
Value: The value of this field
and so on for strings etc...
Finally, when displaying different contact information of a given person, this would happen through the person's contact information -table, which looks up what fields are used to form this contact information from the contact information type fields -table through the contact information -table. After determining what fields are used, all the necessary tables would be joined together.
I'm having doubts about the feasibility of this in SQL. Any thoughts?
In Java I could probably program some logic to determine what tables are needed to form a contact information entity, and then I could use some sort of dynamic beans to represent this data in Java. But that's a bit foggy to me too. Any thoughts on this?
|
[
"It is starting to sound like you have a perfectly good hammer (i.e your SQL database) and you are trying to make another hammer with it (a meta-language to define SQL schemas).\nBefore you go down this path, there are many products on the market that aim to store customer details in an SQL database. It might be best to just purchase one off the shelf and integrate with it. Then all the concerns you have are addressed by someone else and you can focus on your specific business case.\nEdit: One example of a package that allows you to add custom contact fields is SugarCRM - it is a commercial product where you buy access to the source on purchase. I'm sure there are many more but this is the only one that comes to mind at present.\n",
"Your design is feasible, and I'm as big a fan of normalization as the next guy, but you really have to find a balance somewhere. So to begin, I think you're right that having fields like address1, address2, address3, etc... is bad practice. And if you are planning on handling many different types of mailing addresses from different countries, it might make sense to abstract out various address types. \nThink about the data you're going to want to get out of the system - for example, will someone be asking for all the customers in a certain state or province? In that case your design will be pretty painful.\nAnother thing to keep in mind is that database schema changes, though they can sometimes be painful, are not the worst thing in the world. Follow that path to it's logical extreme and you'll end up with one gigantic table with fields like \"key\" and \"value\" and thousands of self-joins in every query. \nGood luck finding the right balance!\n",
"This is not a very informative post; have you had a look at how the vCard people handle the same issues? Also, be careful of overengineering, you might end up with N3.\n",
"First: Speaking pragmatically, it depends what you want to do with the data. In my experience, 99% of all address data is only ever used as a string to be printed on a letter. If that is the case for you, then you should stop worrying and just store it as a string. Of course, if you're doing deeper work with it then it's not going to be so easy. \nApart from that... \nI like the way you're thinking. I have done similar things (although not with addresses) to handle dynamic schemas. The problem I run into is (as you've identified) that the SQL to extract the stuff gets complex. Another problem is that this flexibility can lead to spaghetti data, in exactly the same way you can get spaghetti code. I.e. the meaning of what's in your tables can become obscured because you can only understand it by looking at the code which accesses it. \nSo, what you have to decide is where you are prepared to accept complexity, and what kind of complexity you can best handle. If you don't mind complex SQL, then go ahead and build your dynamic schema. If you do mind complex SQL, then either build the static tables (with one table per address type), or accept that you won't have such an elegant data structure. \nSo, short answer: you have to call it. \n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"design_patterns",
"java",
"sql"
] |
stackoverflow_0000080876_design_patterns_java_sql.txt
|
Q:
How can I block mp3 crawlers from my website under Apache?
Is there some way to block access from a referrer using a .htaccess file or similar? My bandwidth is being eaten up by people referred from http://www.dizzler.com, which is a Flash-based site that allows you to browse a library of crawled, publicly available mp3s.
Edit: Dizzler was still getting in (it probably wasn't sending a referrer in all cases) so instead I moved all my mp3s to a new folder, disabled directory browsing, and created a robots.txt file to (hopefully) keep it from being indexed again. Accepted answer changed to reflect the futility of my previous attempt :P
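(For reference, the robots.txt for that setup can be as small as this - the /music/ folder name is made up:)
User-agent: *
Disallow: /music/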
A:
That's like saying you want to stop spam-bots from harvesting emails on your publicly visible page - it's very tough to tell the difference between users and bots without forcing your viewers to log in to confirm their identity.
You could use robots.txt to disallow the spiders that actually follow those rules, but that's on their side, not your server's. There's a page that explains how to catch the ones that break the rules and explicitly ban them : Using Apache to stop bad robots [evolt.org]
If you want an easy way to stop dizzler in particular, you should be able to pop open the .htaccess in your mp3 directory and add the following (note that <Directory> sections are only valid in the main server config, so in a .htaccess file you use the bare directives):
Order Allow,Deny
Allow from all
Deny from 66.232.150.219
A:
From this site: (put this in your .htaccess file)
RewriteEngine on
RewriteCond %{HTTP_REFERER} ^http://(www\.)?dizzler\.com [NC]
RewriteRule .* - [F]
A:
You could use something like
SetEnvIfNoCase Referer dizzler.com spammer=yes
Order allow,deny
allow from all
deny from env=spammer
Source: http://codex.wordpress.org/Combating_Comment_Spam/Denying_Access
A:
It's not a very elegant solution, but you could block the site's crawler bot, then rename your mp3 files to break the links already on the site.
|
How can I block mp3 crawlers from my website under Apache?
|
Is there some way to block access from a referrer using a .htaccess file or similar? My bandwidth is being eaten up by people referred from http://www.dizzler.com, which is a Flash-based site that allows you to browse a library of crawled, publicly available mp3s.
Edit: Dizzler was still getting in (it probably wasn't sending a referrer in all cases) so instead I moved all my mp3s to a new folder, disabled directory browsing, and created a robots.txt file to (hopefully) keep it from being indexed again. Accepted answer changed to reflect the futility of my previous attempt :P
|
[
"That's like saying you want to stop spam-bots from harvesting emails on your publicly visible page - it's very tough to tell the difference between users and bots without forcing your viewers to log in to confirm their identity.\nYou could use robots.txt to disallow the spiders that actually follow those rules, but that's on their side, not your server's. There's a page that explains how to catch the ones that break the rules and explicitly ban them : Using Apache to stop bad robots [evolt.org]\nIf you want an easy way to stop dizzler in particular using the .htaccess, you should be able to pop it open and add:\n<Directory /directoryName/subDirectory>\nOrder Allow,Deny\nAllow from all\nDeny from 66.232.150.219\n</Directory>\n\n",
"From this site: (put this in your .htaccess file)\nRewriteEngine on\nRewriteCond %{HTTP_REFERER} ^http://((www\\.)?dizzler\\.com [NC]\nRewriteRule .* - [F]\n\n",
"You could use something like\nSetEnvIfNoCase Referer dizzler.com spammer=yes\n\nOrder allow,deny\nallow from all\ndeny from env=spammer\n\nSource: http://codex.wordpress.org/Combating_Comment_Spam/Denying_Access\n",
"It's not a very elegant solution, but you could block the site's crawler bot, then rename your mp3 files to break the links already on the site.\n"
] |
[
3,
2,
2,
0
] |
[] |
[] |
[
"apache",
"bandwidth"
] |
stackoverflow_0000081238_apache_bandwidth.txt
|
Q:
How to use form values from an unbound form
I have a web report that uses a Django form (new forms) for fields that control the query used to generate the report (start date, end date, ...). The issue I'm having is that the page should work using the form's initial values (unbound), but I can't access the cleaned_data field unless I call is_valid(). But is_valid() always fails on unbound forms.
It seems like Django's forms were designed with the use case of editing data such that an unbound form isn't really useful for anything other than displaying HTML.
For example, if I have:
if request.method == 'GET':
    form = MyForm()
else:
    form = MyForm(request.POST)
if form.is_valid():
    do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
is_valid() will fail if this is a GET (since it's unbound), and if I do:
if request.method == 'GET':
    form = MyForm()
    do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
else:
    form = MyForm(request.POST)
    if form.is_valid():
        do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
the first call to do_query triggers exceptions on form.cleaned_data, which is not a valid field because is_valid() has not been called. It seems like I have to do something like:
if request.method == 'GET':
    form = MyForm()
    do_query(form['start_date'].field.initial, form['end_date'].field.initial)
else:
    form = MyForm(request.POST)
    if form.is_valid():
        do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
that is, there isn't a common interface for retrieving the form's values between a bound form and an unbound one.
Does anyone see a cleaner way to do this?
A:
If you add this method to your form class:
def get_cleaned_or_initial(self, fieldname):
    if hasattr(self, 'cleaned_data'):
        return self.cleaned_data.get(fieldname)
    else:
        return self[fieldname].field.initial
you could then re-write your code as:
if request.method == 'GET':
    form = MyForm()
else:
    form = MyForm(request.POST)
    form.is_valid()
do_query(form.get_cleaned_or_initial('start_date'), form.get_cleaned_or_initial('end_date'))
A:
Unbound means there is no data associated with the form (neither initial nor provided later), so validation may fail. As mentioned in other answers (and in your own conclusion), you have to provide initial values and check for both bound data and initial values.
The use case for forms is form processing and validation, so you must have some data to validate before you access cleaned_data.
A:
You can pass a dictionary of initial values to your form:
if request.method == "GET":
# calculate my_start_date and my_end_date here...
form = MyForm( { 'start_date': my_start_date, 'end_date': my_end_date} )
...
See the official forms API documentation, where they demonstrate this.
edit: Based on answers from other users, maybe this is the cleanest solution:
if request.method == "GET":
form = MyForm()
form['start_date'] = form['start_date'].field.initial
form['end_date'] = form['end_date'].field.initial
else:
form = MyForm(request.method.POST)
if form.is_valid():
do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
I haven't tried this though; can someone confirm that this works? I think this is better than creating a new method, because this approach doesn't require other code (possibly not written by you) to know about your new 'magic' accessor.
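A variant that sidesteps both the magic accessor and the field assignment above is to bind the defaults as data on GET, so the same is_valid()/cleaned_data path serves both request types. The dates here are illustrative:
def report(request):
    defaults = {'start_date': '2008-01-01', 'end_date': '2008-12-31'}
    data = request.POST if request.method == 'POST' else defaults
    form = MyForm(data)  # bound either way, so is_valid() can run
    if form.is_valid():
        do_query(form.cleaned_data['start_date'],
                 form.cleaned_data['end_date'])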
Q:
How to send SOAP requests in ISO-8859-1 with Flex?
Flex uses UTF-8 by default. I have not found a way to specify a different encoding/charset on the ActionScript WebService class.
A:
Ummm, look here:
http://www.adobe.com/devnet/flex/articles/struts_06.html
I think that sample implies that declaring your mxml file as iso-8859-1 might do the trick, but I really don't think so.
I might be wrong but as far as I know the Flash player only handles UTF-8 encoding. I've searched for a link to an official page saying so, but couldn't find it.
If that's the case you either:
a) update the webservice to handle UTF-8 encoding
b) if that's not possible, proxy your call to your own webservice that accepts UTF-8 and then call the actual one.
You might want to give the old System.useCodepage=true trick a go, BUT that didn't work when the user was on Linux or Mac, USE WITH CARE!
A:
There is also a way to specify an encoding to the Flex compiler, but that does not seem to work.
Right now the only solution I have found is to re-encode the incoming requests on the server side.
I am surprised this limitation is not spelled out in black and white in the Flex reference documentation.
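If the re-encoding happens in your own server code, the idea is just a decode/encode pair; a minimal Python sketch (assuming the body arrives as raw UTF-8 bytes from the Flash player):
def reencode_body(raw_bytes):
    # The player sends UTF-8; decode it, then re-encode as ISO-8859-1
    # for the downstream service. Characters with no Latin-1 equivalent
    # are replaced instead of raising an error.
    text = raw_bytes.decode('utf-8')
    return text.encode('iso-8859-1', 'replace')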
Q:
Windows Vista Programmatically remap user directories
I re-image one of my machines regularly, and I have a script that I run after the OS install completes to configure the machine so that it works how I like.
I happen to have my data on another drive...and I'd like to add code to my script to change the location of the Documents directory from "C:\Users\bryansh\Documents" to "D:\Users\bryansh\Documents".
Does anybody have any insight, before I fire up regmon and really roll up my sleeves?
A:
I use reparse points http://www.hanselman.com/blog/MoreOnVistaReparsePoints.aspx to redirect My Documents.
A:
SHSetFolderPath Function should help, since this article mentions its use for folder redirection by the Group Policy API.
Q:
How to find header dependencies for large scale projects on linux
I'm working on a very large scale project where the compilation time is very long. What tools can I use (preferably open source) on Linux to find the most heavily included files and optimize their usage?
Just to be clearer, I need a tool which will, given the dependencies, show me which headers are the most included. By the way, we do use distributed compiling.
A:
Check out makedepend.
A:
The answers here will give you tools which track #include dependencies. But there's no mention of optimization and such.
Aside: The book "Large Scale C++ Software Design" should help.
A:
Using the Unix philosophy of "gluing together many small tools" I'd suggest writing a short script that calls gcc with the -M (or -MM) and -MF (OUTFILE) options (As detailed here). That will generate the dependency lists for the make tool, which you can then parse easily (relative to parsing the source files directly) and extract out the required information.
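As a hedged illustration of the parsing step, a short Python tally over the generated .d files could look like this (it assumes each file was produced with something like gcc -MM foo.c -MF foo.d):
import collections, glob

counts = collections.Counter()
for depfile in glob.glob('*.d'):
    with open(depfile) as f:
        # Make rules use backslash-newline continuations; join them,
        # then count everything listed after the "target:" colon.
        text = f.read().replace('\\\n', ' ')
    for rule in text.splitlines():
        if ':' not in rule:
            continue
        for name in rule.split(':', 1)[1].split():
            if not name.endswith(('.c', '.cpp')):  # skip the source file itself
                counts[name] += 1

for header, n in counts.most_common(20):
    print(n, header)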
A:
Tools like doxygen (used with the graphviz options) can generate dependency graphs for include files... I don't know if they'd provide enough overview for what you're trying to do, but it could be worth trying.
A:
From the root level of the source tree and do the following (\t is the tab character):
find . -exec grep '[ \t]*#include[ \t][ \t]*["<][^">]*[">]' {} ';'
 | sed 's/^[ \t]*#include[ \t][ \t]*["<]//'
 | sed 's/[">].*$//'
 | sort
 | uniq -c
 | sort -r -k1 -n
Line 1 gets all the include lines.
Line 2 strips off everything before the actual filename.
Line 3 strips off the end of the line, leaving only the filename.
Lines 4 and 5 count each unique line.
Line 6 sorts by line count in reverse order.
A:
If you wish to know which files are included most of all, use this bash command:
find . -name '*.cpp' -exec egrep '^[[:space:]]*#include[[:space:]]+["<][[:alpha:][:digit:]_.]+[">]' {} \;
 | sort | uniq -c | sort -k 1rn,1
 | head -20
It will display the top 20 files ranked by the number of times they were included.
Explanation: the 1st line finds all *.cpp files and extracts the lines with an "#include" directive, the 2nd line counts how many times each file was included, and the 3rd line takes the 20 most included files.
A:
Use ccache. It will hash the inputs to a compilation, and cache the results, which will drastically increase the speed of these sorts of compiles.
If you wanted to detect the multiple includes, so that you could remove them, you could use makedepend as Iulian Șerbănoiu suggests:
makedepend -m *.c -f - > /dev/null
will give a warning for each multiple include.
A:
The bash scripts found on this page aren't a good solution. They work only on simple projects. In a large project, like the one described in the question, the C preprocessor (#if, #else, ...) is often used. Only more capable software, like makedepend or scons, can give good information. gcc -E can help, but on a large project analyzing its output is a waste of time.
A:
IIRC gcc could create dependency files.
A:
You might want to look at distributed compiling, see for example distcc
A:
This is not exactly what you are searching for, and it might not be easy to set up, but maybe you could have a look at lxr: lxr.linux.no is a browseable kernel tree.
In the search box, if you enter a filename, it will give you where it is included.
But this is still guessing, and it does not track chained dependencies.
Maybe
strace -e trace=open -o outfile make
grep 'some handy regex to match header' outfile
Q:
Column Info only Returned with FMTONLY set to OFF
I have a query that is dynamically built after looking up a field list and table name. I execute this dynamic query inside a stored proc. The query is built without a where clause when the two proc parameters are zero, and built with a where clause when not.
When I execute the proc with
SET FMTONLY ON
exec [cpExportRecordType_ListByExportAgentID] null, null
It returns no column information. I have just now replaced building the query without a where clause with executing the same query directly, and now I get column information. I would love to know what causes this, anyone?
A:
Perhaps it is related to the fact that the passed parameters are NULL;
check how your query is built, because perhaps it behaves in a different way than expected when you pass NULL.
Does your proc return the expected results when you call:
SET FMTONLY OFF exec [cpExportRecordType_ListByExportAgentID] null, null
?
Other possibility:
I understand that you build your query dynamically by first running other queries to get the column names.
Perhaps the query that would normally give you the column names returns no data but only column information (SET FMTONLY ON), so you do not have data to build your dynamic query.
A:
kristof:
so you do not have data to build your dynamic query.
With null parameters my dynamic query was a pure string literal, independent of data. Changing it to a static query solved the problem.
Q:
In Rails, What's the Best Way to Get Autocomplete that Shows Names but Uses IDs?
I want to have a text box that the user can type in that shows an Ajax-populated list of my model's names, and then when the user selects one I want the HTML to save the model's ID, and use that when the form is submitted.
I've been poking at the auto_complete plugin that got excised in Rails 2, but it seems to have no inkling that this might be useful. There's a Railscast episode that covers using that plugin, but it doesn't touch on this topic. The comments point out that it could be an issue, and point to model_auto_completer as a possible solution, which seems to work if the viewed items are simple strings, but the inserted text includes lots of junk spaces if (as I would like to do) you include a picture into the list items, despite what the documentation says.
I could probably hack model_auto_completer into shape, and I may still end up doing so, but I am eager to find out if there are better options out there.
A:
I rolled my own. The process is a little convoluted, but...
I just made a text_field on the form with an observer. When you start typing into the text field, the observer sends the search string and the controller returns a list of objects (maximum of 10).
The objects are then sent to render via a partial which fills out the dynamic autocomplete search results. The partial actually populates link_to_remote lines that post back to the controller again. The link_to_remote sends the id of the user selection and then some RJS cleans up the search, fills in the name in the text field, and then places the selected id into a hidden form field.
Phew... I couldn't find a plugin to do this at the time, so I rolled my own, I hope all that makes sense.
A:
I've got a hackneyed fix for the junk spaces from the image. I added a :after_update_element => "trimSelectedItem" to the options hash of the model_auto_completer (that's the first hash of the three given). My trimSelectedItem then finds the appropriate sub-element and uses the contents of that for the element value:
function trimSelectedItem(element, value, hiddenField, modelID) {
var span = value.down('span.display-text')
console.log(span)
var text = span.innerText || span.textContent
console.log(text)
element.value = text
}
However, this then runs afoul of the :allow_free_text option, which by default changes the text back as soon as the text box loses focus if the text inside is not a "valid" item from the list. So I had to turn that off, too, by passing :allow_free_text => true into the options hash (again, the first hash). I'd really rather it remained on, though.
So my current call to create the autocompleter is:
<%= model_auto_completer(
"line_items_info[][name]", "",
"line_items_info[][id]", "",
{:url => formatted_products_path(:js),
:after_update_element => "trimSelectedItem",
:allow_free_text => true},
{:class => 'product-selector'},
{:method => 'GET', :param_name => 'q'}) %>
And the products/index.js.erb is:
<ul class='products'>
<%- for product in @products -%>
<li id="<%= dom_id(product) %>">
<%= image_tag image_product_path(product), :alt => "" %>
<span class='display-text'><%=h product.name %></span>
</li>
<%- end -%>
</ul>
Q:
DataGridView: How can I do multiline data entry in a usable way?
With the DataGridView it is possible to display cells containing some long text. The grid just increases the row height to display all the text, taking care of word wrap and linefeeds.
Data entry is possible as well. Control+Return inserts a line feed.
But: if the cell only has one line of text initially, the row height is just the height of one line. When I enter text, I always see only one line. Ctrl+Return scrolls the text up, and I can enter a new line. But the last line is not visible any more, only the line I just enter.
How can I tell the DataGridView to increase the line height automatically while I enter text?
A:
There's an entry from a DataGridView Program Manager here that should be a good place to start.
Q:
private IP address ranges
What are the private IP address ranges?
A:
You will find the answers to this in RFC 1918. Though, I have listed them below for you.
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
It is a common misconception that 169.254.0.0/16 is a private IP address block. This is not true. It is link-local: it is only meant to be used on a single network segment, and it isn't part of RFC 1918. Additional information about special-use IPv4 addresses can be found in RFC 3330.
On the other hand IPv6 doesn't have an equivalent to RFC1918, but any sort of site-local work should be done in fc00::/7. This is further touched on in RFC 4193.
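If you need to test addresses in code, Python's standard library (3.3+) already knows these ranges. Note that is_private is broader than RFC 1918 alone (it also covers loopback and other reserved blocks), so check the docs if you need strict RFC 1918 semantics:
import ipaddress

for addr in ['10.1.2.3', '172.20.0.1', '192.168.1.1', '8.8.8.8', '169.254.1.1']:
    ip = ipaddress.ip_address(addr)
    # is_link_local distinguishes the 169.254.0.0/16 case explicitly.
    print(addr, ip.is_private, ip.is_link_local)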
A:
Also, 169.254.0.0 - 169.254.255.255 are reserved for automatic private IP addressing. Refer to the Link-local address Wikipedia article.
Q:
Perl Regex Match and Removal
I have a string which starts with //# and goes up to the newline character. I have figured out a regex for this, which is ..#([^\n]*).
My question is: how do you remove this line from a file when it matches?
A:
Your regex is badly chosen on several points:
Instead of matching two slashes specifically, you use .. to match two characters that can be anything at all, presumably because you don’t know how to match slashes when you’re also using them as delimiters. (Actually, dots match almost anything, as we’ll see in #4.)
Within a slash-delimited regex literal, //, you can match slashes simply by protecting them with backslashes, eg. /\/\//. The nicer variant, however, is to use the longer form of regex literal, m//, where you can choose the delimiter, eg. m!!. Since you use something other than slashes for delimitation, you can then write them without escaping them: m!//!. See perldoc perlop.
It’s not anchored to the start of the string so it will match anywhere. Use the ^ start-of-string assertion in front.
You wrote [^\n] to match “any character except newline” when there is a much simpler way to write that, which is just the . wildcard. It does exactly that – match any character except newline.
You are using parentheses to group a part of the match, but the group is neither quantified (you are not specifying that it can match any other number of times than exactly once) nor are you interested in keeping it. So the parentheses are superfluous.
Altogether, that makes it m!^//#.*!. But putting an uncaptured .* (or anything with a * quantifier) at the end of a regex is meaningless, since it never changes whether a string will match or not: the * is happy to match nothing at all.
So that leaves you with m!^//#!.
As for removing the line from the file, as everyone else explained, read it in line by line and print all the lines you want to keep back to another file. If you are not doing this within a larger program, use perl’s command line switches to do it easily:
perl -ni.bak -e'print unless m!^//#!' somefile.txt
Here, the -n switch makes perl put a loop around the code you provide which will read all the files you pass on the command line in sequence. The -i switch (for “in-place”) says to collect the output from your script and overwrite the original contents of each file with it. The .bak parameter to the -i option tells perl to keep a backup of the original file in a file named after the original file name with .bak appended. For all of these bits, see perldoc perlrun.
If you want to do this within the context of a larger program, the easiest way to do it safely is to open the file twice, once for reading, and separately, with IO::AtomicFile, another time for writing. IO::AtomicFile will replace the original file only if it’s successfully closed.
A:
To filter out all the lines in a file that match a certain regex:
perl -n -i.orig -e 'print unless /^#/' file1 file2 file3
The '.orig' after the -i switch creates a backup of the file with the given extension (.orig). You can skip it if you don't need a backup (just use -i).
The -n switch causes perl to execute your instructions (-e ' ... ') for each line in the file. The line is stored in $_ (which is also the default argument for many instructions, in this case: print and regex matching).
Finally, the argument to the -e switch says "print the line unless it matches a # character at the start of the line."
PS. There is also a -p switch which behaves like -n, except the lines are always printed (good for searching and replacing)
A:
As others have pointed out, if the end goal is only to remove lines starting with //#, for performance reasons you are probably better off using grep or sed:
grep -v '^\/\/#' filename.txt > filename.stripped.txt
sed '/^\/\/#/d' filename.txt > filename.stripped.txt
or
sed -i '/^\/\/#/d' filename.txt
if you prefer in-place editing.
Note that in perl your regex would be
m{^//#}
which matches two slashes followed by a # at the start of the string.
Note that you avoid "backslashitis" by using the match operator m{pattern} instead of the more familiar /pattern/. Train yourself on this syntax early since it's a simple way to avoid excessive escaping. You could write m{^//#} just as effectively as m%^//#% or m#^//\##, depending on what you want to match. Strive for clarity - regular expressions are hard enough to decipher without a prickly forest of avoidable backslashes killing readability. Seriously, m/^\/\/#/ looks like an alligator with a chipped tooth and a filling or a tiny ASCII painting of the Alps.
One problem that might come up in your script is if the entire file is slurped up into a string, newlines and all. To defend against that case, use the /m (multiline) modifier on the regex:
m{^//#}m
This allows ^ to match at the beginning of the string and after a newline. You would think there was a way to strip or match the lines matching m{^//#.*$} using the regex modifiers /g, /m, and /s in the case where you've slurped the file into a string but you don't want to make a copy of it (begging the question of why it was slurped into a string in the first place.) It should be possible, but it's late and I'm not seeing the answer. However, one 'simple' way of doing it is:
my $cooked = join qq{\n}, (grep { ! m{^//} } (split m{\n}, $raw));
even though that creates a copy instead of an in-place edit on the original string $raw.
A:
You really don't need perl for this.
sed '/^\/\/#/d' inputfile > outputfile
I <3 sed.
A:
Read the file line by line and only write those lines to a new file that don't match the regex.
You cannot just remove a line.
A:
Does it start at the beginning of a line or can it appear anywhere? If the former, s/old/new is what you want. If the latter, I'll have to figure that out. I suspect that back references could be used somehow.
A:
I don't think your regex is correct.
First you need to start with ^ or else it will match this pattern anywhere on the line.
Second, the .. should be \/\/ or else it will match any two characters.
^\/\/#[^\n]* is probably what you want.
Then do what EricSchaefer says and read the file line by line only writing lines that don't match.
--
bmb
A:
Try the following:
perl -ne 'print unless m{^//#}' input.txt > output.txt
If you are using windows you need double quotes instead of single quotes.
You can do the same with grep
grep -v -e '^//#' input.txt > output.txt
A:
Iterate over each line in the file, and skip the line if it matches the pattern:
use FileHandle;

my $fh = new FileHandle 'filename'
    or die "Failed to open file - $!";
while (my $line = $fh->getline) {
next if $line =~ m{^//#};
print $line;
}
close $fh;
This will print all lines from the file, except the line that starts with '//#'.
Q:
Pattern for saving and writing to different file formats
Is there a pattern that is good to use when saving and loading different file formats?
For example, I have a complicated class hierarchy for the document, but I want to support a few different file formats.
I thought about the Strategy pattern, but I'm not convinced because of the need to access every part of the object in order to save and load it.
A:
You could use the Visitor pattern; it allows you to iterate over your hierarchy, doing different operations depending on the node the Visitor is currently processing.
Bad news: you probably need to add at least a virtual method at the top of the hierarchy, and maybe redefine it in some derived classes, and the visitor still accesses the data of the nodes, but you decouple the file format, as different visitor implementations can write the gathered data in different ways.
Take a look also at the Memento pattern if hiding the class hierarchy data is a must. This article could also be helpful.
Edit: Link to the original Memento pattern article using google cache
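To make the visitor idea concrete, here is a minimal sketch in Python (the node and writer names are made up for illustration; the real hierarchy would be your document classes):
class Heading:
    def __init__(self, text):
        self.text = text
    def accept(self, visitor):
        visitor.visit_heading(self)

class Paragraph:
    def __init__(self, text):
        self.text = text
    def accept(self, visitor):
        visitor.visit_paragraph(self)

class HtmlWriter:
    # One visitor per file format; the node classes never change.
    def visit_heading(self, node):
        print('<h1>%s</h1>' % node.text)
    def visit_paragraph(self, node):
        print('<p>%s</p>' % node.text)

class PlainTextWriter:
    def visit_heading(self, node):
        print(node.text.upper())
    def visit_paragraph(self, node):
        print(node.text)

document = [Heading('Title'), Paragraph('Some body text.')]
writer = HtmlWriter()  # swap in PlainTextWriter for another format
for node in document:
    node.accept(writer)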
A:
You might want to take a look at the Builder pattern. GoF page 97.
A:
How about (something based on) the Template method pattern?
One superclass knows how to rip apart the class hierarchy, but relies on its subclasses to actually do something useful with it.
Q:
File storing strategies for a web hosting website
I am going to host files that users submit. I need to grab some data from each file and then move it to some directory.
There are two points of interest in the lifetime of such a file. The first is when the data is being abstracted, and the second is when the file is archived so that it can be shared.
When data is being abstracted, I've thought of renaming the file to something unique, or appending a unique string to the filename, to keep it from overwriting other existing files.
When the file is going to be archived, I've thought of three strategies. One is to keep all files uploaded on a certain date in one folder (2006/sept/04, 2008/jan/05). Another is to keep filling a folder until it reaches some maximum number of files and then create another one (/folder001/, /folder002/, /folder003/, etc.). Another is to create subfolders once they reach some threshold, like (/j/jd/jde/jdelator); I've seen this in Unix but I'm not sure how to explain it.
The question I have is: what kind of strategies have you found useful or used?
A:
When data is being abstracted, I would choose something like filename + millisec();
It is unlikely that two calls to millisec will return the same value, and the filename is more user-friendly when accessing.
The date strategy can be convenient if you decide to remove old and unused files: you only have to get the 2006 folder and remove everything that has not been accessed in the last year, according to your log.
This can also be a good indication for your users, as they will know whether it is a fresh file or not.
The folderXYZ scheme is only a variant of this one, replacing the date with a new tag every N files.
The threshold subfolders help you keep the number of entries in your directories low, so access is faster. Note that this solution sometimes requires moving files (breaking some URLs if they are not remapped) when a particular directory grows.
Another possibility is to use a DB with a UID corresponding to the file location, and to access the file through http://server.com/UID/filename.txt .
This way, the user saves the file as "filename.txt", which is convenient for him, and you know from the URL where to find the file (using the DB to transform the UID into a location).
Note that the UID can be a checksum (MD5, SHA-1) to handle duplicates of the same file.
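As an illustration of that checksum idea, a Python sketch (the directory layout and function name are arbitrary):
import hashlib, os, shutil

def archive(path, root='archive'):
    # Name the stored copy after the SHA-1 of its contents: identical
    # uploads collapse into one file, and the two-level fan-out keeps
    # any single directory from growing too large.
    with open(path, 'rb') as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    directory = os.path.join(root, digest[:2], digest[2:4])
    os.makedirs(directory, exist_ok=True)
    target = os.path.join(directory, digest)
    if not os.path.exists(target):
        shutil.copyfile(path, target)
    return target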
A:
I'd vote with guid in a database and then use the Content-Disposition header to name it back to the original filename if necessary. One thing I would advocate is that the folders you use are stored outside of the web root; you don't want users uploading files into your application folders.
A:
I've used a relational database which tags ID's (int) to uuids that are the name of the files. This way it doesn't matter how they are on disk. It helps me obfuscate the files. Also, I can then use JOINs to "rename" the file arbitrarily. Also, I can use different file "names." It all depends on your app and where it is running.
A:
Though it depends on your application, I would suggest keeping the file repository scheme very simple for now and deciding on a more elaborate strategy later. In other words, you accept a kind of "managed chaos" for a while; structure and strategy will come later, when you have found out all the requirements and domain specifics. By keeping it simple, you can change everything easily.
Anyway, change is inevitable; the best thing you can do now is to choose some strategy and document everything.
Q:
Reading Unformatted Data
In C, using the standard Windows API, what is the best way to read an unformatted disk? Specifically, I have an MMC or SD card with data, but no file system (not FAT16, not FAT32, just raw data). If there was a simple way to open the entire card for byte by byte binary access, that would be great.
Thanks!
A:
I would go with
HANDLE drive = CreateFile(_T("\\\\.\\PhysicalDrive0"), GENERIC_READ, FILE_SHARE_WRITE, 0, OPEN_EXISTING, 0, 0);
// error handling
DWORD br = 0;
DISK_GEOMETRY dg;
DeviceIoControl(drive, IOCTL_DISK_GET_DRIVE_GEOMETRY, 0, 0, &dg, sizeof(dg), &br, 0);
//
LARGE_INTEGER pos;
pos.QuadPart = static_cast<LONGLONG>(sectorToRead) * dg.BytesPerSector;
SetFilePointerEx(drive, pos, 0, FILE_BEGIN);
const bool success = ReadFile(drive, sectorData, dg.BytesPerSector, &br, 0) && br == dg.BytesPerSector;
//
CloseHandle(drive);
Please note that in order to verify that you've successfully read a sector you must verify that the read byte count corresponds to the number of bytes you wanted to read, i.e. in my experience ReadFile() on a physical disk can return TRUE even when no bytes are read (or maybe I just have a buggy driver).
The problem that remains is to determine your drive number (0, as used in my example, refers to the disk holding C:, which is probably not what you want). I don't know how to do that, but if you only have one unformatted drive connected, it ought to be possible by opening each PhysicalDrive in order and calling DeviceIoControl() with IOCTL_DISK_GET_DRIVE_LAYOUT_EX as a command:
DRIVE_LAYOUT_INFORMATION_EX dl;
DeviceIoControl(drive, IOCTL_DISK_GET_DRIVE_LAYOUT_EX, 0, 0, &dl, sizeof(dl), &br, 0);
if(dl.PartitionStyle == PARTITION_STYLE_RAW)
{
// found correct disk
}
But that's just a guess.
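As an aside, before wiring the C code up it can be handy to sanity-check which PhysicalDrive number is the card from a quick script, for example in Python (run as administrator; reads on raw devices must be multiples of the sector size):
with open(r'\\.\PhysicalDrive1', 'rb') as disk:  # adjust the drive number
    first_sector = disk.read(512)
print(first_sector[:16])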
A:
You have to open the device file with CreateFile and then use ReadFile/ReadFileEx. Don't forget to close the file with CloseHandle.
A:
CreateFile function reference on MSDN
Scroll down to "Physical Disks and Volumes" - note that the security restrictions on Vista do not apply for volumes without a filesystem, so you'll be fine even on Vista under the conditions you have given.
Q:
SQL Server Merge Replication Schedule
We're replicating a database between London and Hong Kong using SQL Server 2005 Merge replication. The replication is set to synchronise every one minute and it works just fine. There is however the option to set the synchronisation to be "Continuous". Is there any real difference between replication every one minute and continuously?
The only reason for us doing every one minute rather than continuous in the first place was that it recovered better if the line went down for a few minutes, but this experience was all from SQL Server 2000 so it might not be applicable any more...
A:
We have been trying the continuous replication solution on SQL Server 2005 and it appeared to be less efficient than a scheduled solution: as your process is continuous, you will not get all the info related to your past replications (how many replications failed, how long the process took, why the process was stopped, how many records were updated, how many database structure modifications were replicated to subscribers, and so on), making the replication follow-up a lot more difficult.
We have also been experiencing troubles while modifying database structure (ALTER TABLE instructions) and/or making bulk updates on one of the databases with continuous replication going on.
Keep you "every minute" synchro as it is and just forget about this "continuous" option.
|
SQL Server Merge Replication Schedule
|
We're replicating a database between London and Hong Kong using SQL Server 2005 Merge replication. The replication is set to synchronise every one minute and it works just fine. There is however the option to set the synchronisation to be "Continuous". Is there any real difference between replication every one minute and continuously?
The only reason for us doing every one minute rather than continuous in the first place was that it recovered better if the line went down for a few minutes, but this experience was all from SQL Server 2000 so it might not be applicable any more...
|
[
"We have been trying the continuous replication solution on SQL SERVER 2005 and it appeared to be less efficient than a scheduled solution: as your process is continuous, you will not get all the info related to your passed replications (how many replications failed, how long did the process take, why was the process stopped, how many records were updated, how many database structure modifications were replicated to suscribers, and so on), making the replication follow-up a lot more difficult.\nWe have also been experiencing troubles while modifying database structure (ALTER TABLE instructions) and/or making bulk updates on one of the databases with continuous replication going on.\nKeep you \"every minute\" synchro as it is and just forget about this \"continuous\" option.\n"
] |
[
9
] |
[] |
[] |
[
"merge",
"replication",
"sql_server"
] |
stackoverflow_0000041273_merge_replication_sql_server.txt
|
Q:
Cross-browser JavaScript debugging
I have a few scripts on a site I recently started maintaining. I get those Object Not Found errors in IE6 (which Firefox fails to report in its Error Console?). What's the best way to debug these - any good cross-browser-compatible IDEs, or javascript debugging libraries of some sort?
A:
There's no cross-browser JS debugger that I know of (because most browsers use different JS engines).
For firefox, I'd definitely recommend firebug (http://www.getfirebug.com)
For IE, the best I've found is Microsoft Script Debugger (http://www.microsoft.com/downloads/details.aspx?familyid=2f465be0-94fd-4569-b3c4-dffdf19ccd99&displaylang=en). If you have Office installed, you may also have Microsoft Script Editor installed. To use either of these, you need to turn on script debugging in IE. (uncheck Tools -> Internet Options -> Advanced -> Disable Script debugging).
A:
You could also use Firebug Lite - which will work in IE & Opera. It's an external lib that will help you track down problems. It's sometimes more convenient than dealing with the MS Script Debugger.
A:
Firebug
It's only for firefox but it should let you figure out what's happening on IE especially once you have the script line numbers.
A:
You can use Visual Studio and enable debugging in browser
You can install FireBug plugin for Firefox, it's really good!
You can try to install IE8 beta 2 and use it in compatibility mode with built-in debugger.
Also in any line of your JS code you can write
debugger;
and this will be treated as a breakpoint for any of the debug tools you use.
Cheers!
A:
Aptana Studio provides JavaScript debugging for Firefox and IE
A:
Firebug is the best all around client-side debugger. I frequently use it to debug CSS code as well as javascript. It allows you to easily find offending areas of code. I especially like the ability to modify tag attributes in the firebug pane and see the effects immediately before committing. Very useful for anyone designing websites.
A:
You could use this tool apparently - Microsoft Script Debugger
Personally I try to go through the code and figure out what's going on - it gives you the line number where it goes wrong right?
A:
To make the Microsoft Script Debugger more user friendly (and to add javascript error messages that actually are helpful to IE), I highly recommend Companion.JS.
A:
Firebug seems to be the most useful so far. When a page is running on firebug, it can be very handy to log messages into firebug via javascript calls to console.log('your log message'); but don't execute that code in IE since the console object is only in scope when firebug is running.
For IE, other folks have mentioned the Script Debugger. Although it is not primarily for javascript debugging, it can be useful to also add the IE developer toolbar, which allows you to easily and dynamically inspect the style and other properties of your page's DOM.
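A small guard helps with that console.log pitfall - a sketch (the function name is made up) that logs only when a console object actually exists, so the same page doesn't blow up in plain IE6:
function safeLog(msg) {
    // window.console only exists when Firebug (or a similar tool) is active
    if (window.console && window.console.log) {
        window.console.log(msg);
    }
}

safeLog('checkout form loaded');  // hypothetical call site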
A:
In response to mopoke, for IE6 you definitely want to use Visual Studio for debugging if you can get it. For all intents and purposes, the MS script debugger is useless. You're better off using some form of tracing (not alerts) than using the MS script debugger. Dojo Toolkit, for instance, provides a debug console for tracing, but you can write your own by dumping messages to a secondary window or div.
The script debugger needlessly prompts you on each error in IE6 and even then doesn't give you enough state context to make it useful in a sufficiently complex JS app. Visual Studio is more tightly integrated and much friendlier. Just my experience.
|
Cross-browser JavaScript debugging
|
I have a few scripts on a site I recently started maintaining. I get those Object Not Found errors in IE6 (which Firefox fails to report in its Error Console?). What's the best way to debug these - any good cross-browser-compatible IDEs, or javascript debugging libraries of some sort?
|
[
"There's no cross-browser JS debugger that I know of (because most browsers use different JS engines).\nFor firefox, I'd definitely recommend firebug (http://www.getfirebug.com)\nFor IE, the best I've found is Microsoft Script Debugger (http://www.microsoft.com/downloads/details.aspx?familyid=2f465be0-94fd-4569-b3c4-dffdf19ccd99&displaylang=en). If you have Office installed, you may also have Microsoft Script Editor installed. To use either of these, you need to turn on script debugging in IE. (uncheck Tools -> Internet Options -> Advanced -> Disable Script debugging).\n",
"You could also use Firebug Lite - which will work in IE & Opera. It's an external lib that will help you track down problems. It's sometimes more convenient than dealing with the MS Script Debugger.\n",
"Firebug\nIt's only for firefox but it should let you figure out what's happening on IE especially once you have the script line numbers.\n",
"\nYou can use Visual Studio and enable debugging in browser\nYou can install FireBug plugin for Firefox, it's really good!\nYou can try to install IE8 beta 2 and use it in compatibility mode with built-in debugger.\n\nAlso in any line of your JS code you can write \ndebugger;\n\nand this will be threated as breakpoint for any of the debug tools you use.\nCheers!\n",
"Aptana Studio provides JavaScript debugging for Firefox and IE\n",
"Firebug is the best all around client-side debugger. I frequently use it to debug CSS code as well as javascript. It allows you to easily find offending areas of code. I especially like the ability to modify tag attributes in the firebug pane and see the effects immediately before committing. Very useful for anyone designing websites.\n",
"You could use this tool apparently - Microsoft Script Debugger\nPersonally I try to go through the code and figure out what's going on - it gives you the line number where it goes wrong right? \n",
"To make the Microsoft Script Debugger more user friendly (and to add javascript error messages that actually are helpful to IE), I highly recommend Companion.JS.\n",
"Firebug seems to be the most useful so far. When a page is running on firebug, it can be very handy to log messages into firebug via javascript calls to console.log('your log message'); but don't execute that code in IE since the console object is only in scope when firebug is running.\nFor IE, other folks have mentioned the Script Debugger. Although it is not primarily for javascript debugging, it can be useful to also add the IE developer toolbar, which allows you to easily and dynamically inspect the style and other properties of your page's DOM.\n",
"In response to mopoke, for IE6 you definitely want to use Visual Studio for debugging if you can get it. For all intents and purposes, the MS script debugger is useless. You're better off using some form of tracing (not alerts) than using the MS script debugger. Dojo Toolkit, for instance, provides a debug console for tracing, but you can write your own by dumping messages to a secondary window or div. \nThe script debugger needlessly prompts you on each error in IE6 and even then doesn't give you enough state context to make it useful in a sufficiently complex JS app. Visual Studio is more tightly integrated and much friendlier. Just my experience. \n"
] |
[
3,
2,
1,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"javascript"
] |
stackoverflow_0000078909_javascript.txt
|
Q:
Asynchronous Programming in Python Twisted
I'm having trouble developing a reverse proxy in Twisted. It works, but it seems overly complex and convoluted. So much of it feels like voodoo.
Are there any simple, solid examples of asynchronous program structure on the web or in books? A sort of best practices guide? When I complete my program I'd like to be able to still see the structure in some way, not be looking at a bowl of spaghetti.
A:
Twisted contains a large number of examples. One in particular, the "evolution of Finger" tutorial, contains a thorough explanation of how an asynchronous program grows from a very small kernel up to a complex system with lots of moving parts. Another one that might be of interest to you is the tutorial about simply writing servers.
The key thing to keep in mind about Twisted, or even other asynchronous networking libraries (such as asyncore, MINA, or ACE), is that your code only gets invoked when something happens. The part that I've most often heard described as "voodoo" is the management of callbacks: for example, Deferred. If you're used to writing code that runs in a straight line, and only calls functions which return immediately with results, the idea of waiting for something to call you back might be confusing. But there's nothing magical, no "voodoo" about callbacks. At the lowest level, the reactor is just sitting around and waiting for one of a small number of things to happen:
Data arrives on a connection (it will call dataReceived on a Protocol)
Time has passed (it will call a function registered with callLater).
A connection has been accepted (it will call buildProtocol on a factory registered with a listenXXX or connectXXX function).
A connection has been dropped (it will call connectionLost on the appropriate Protocol)
Every asynchronous program starts by hooking up a few of these events and then kicking off the reactor to wait for them to happen. Of course, events that happen lead to more events that get hooked up or disconnected, and so your program goes on its merry way. Beyond that, there's nothing special about asynchronous program structure: event handlers and callbacks are just objects, and your code is run in the usual way.
Here's a simple "event-driven engine" that shows you just how simple this process is.
# Engine
import time
class SimplestReactor(object):
def __init__(self):
self.events = []
self.stopped = False
def do(self, something):
self.events.append(something)
def run(self):
while not self.stopped:
time.sleep(0.1)
if self.events:
thisTurn = self.events.pop(0)
thisTurn()
def stop(self):
self.stopped = True
reactor = SimplestReactor()
# Application
def thing1():
print 'Doing thing 1'
reactor.do(thing2)
reactor.do(thing3)
def thing2():
print 'Doing thing 2'
def thing3():
print 'Doing thing 3: and stopping'
reactor.stop()
reactor.do(thing1)
print 'Running'
reactor.run()
print 'Done!'
At the core of libraries like Twisted, the function in the main loop is not sleep, but an operating system call like select() or poll(), as exposed by a module like the Python select module. I say "like" select, because this is an API that varies a lot between platforms, and almost every GUI toolkit has its own version. Twisted currently provides an abstract interface to 14 different variations on this theme. The common thing that such an API provides is a way to say "Here is a list of events that I'm waiting for. Go to sleep until one of them happens, then wake up and tell me which one of them it was."
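For comparison with the toy engine above, a minimal real-Twisted sketch (not from the tutorials; the two-second delay is arbitrary) that registers one timed event and runs the reactor:
from twisted.internet import reactor

def say_hello():
    print 'hello from inside the event loop'
    reactor.stop()                 # nothing left to wait for, so shut down

reactor.callLater(2, say_hello)    # ask the reactor to call us in ~2 seconds
print 'starting reactor'
reactor.run()                      # blocks here, dispatching events
print 'done'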
|
Asynchronous Programming in Python Twisted
|
I'm having trouble developing a reverse proxy in Twisted. It works, but it seems overly complex and convoluted. So much of it feels like voodoo.
Are there any simple, solid examples of asynchronous program structure on the web or in books? A sort of best practices guide? When I complete my program I'd like to be able to still see the structure in some way, not be looking at a bowl of spaghetti.
|
[
"Twisted contains a large number of examples. One in particular, the \"evolution of Finger\" tutorial, contains a thorough explanation of how an asynchronous program grows from a very small kernel up to a complex system with lots of moving parts. Another one that might be of interest to you is the tutorial about simply writing servers.\nThe key thing to keep in mind about Twisted, or even other asynchronous networking libraries (such as asyncore, MINA, or ACE), is that your code only gets invoked when something happens. The part that I've heard most often sound like \"voodoo\" is the management of callbacks: for example, Deferred. If you're used to writing code that runs in a straight line, and only calls functions which return immediately with results, the idea of waiting for something to call you back might be confusing. But there's nothing magical, no \"voodoo\" about callbacks. At the lowest level, the reactor is just sitting around and waiting for one of a small number of things to happen:\n\nData arrives on a connection (it will call dataReceived on a Protocol)\nTime has passed (it will call a function registered with callLater).\nA connection has been accepted (it will call buildProtocol on a factory registered with a listenXXX or connectXXX function).\nA connection has been dropped (it will call connectionLost on the appropriate Protocol)\n\nEvery asynchronous program starts by hooking up a few of these events and then kicking off the reactor to wait for them to happen. Of course, events that happen lead to more events that get hooked up or disconnected, and so your program goes on its merry way. Beyond that, there's nothing special about asynchronous program structure that are interesting or special; event handlers and callbacks are just objects, and your code is run in the usual way.\nHere's a simple \"event-driven engine\" that shows you just how simple this process is.\n# Engine\nimport time\nclass SimplestReactor(object):\n def __init__(self):\n self.events = []\n self.stopped = False\n\n def do(self, something):\n self.events.append(something)\n\n def run(self):\n while not self.stopped:\n time.sleep(0.1)\n if self.events:\n thisTurn = self.events.pop(0)\n thisTurn()\n\n def stop(self):\n self.stopped = True\n\nreactor = SimplestReactor()\n\n# Application \ndef thing1():\n print 'Doing thing 1'\n reactor.do(thing2)\n reactor.do(thing3)\n\ndef thing2():\n print 'Doing thing 2'\n\ndef thing3():\n print 'Doing thing 3: and stopping'\n reactor.stop()\n\nreactor.do(thing1)\nprint 'Running'\nreactor.run()\nprint 'Done!'\n\nAt the core of libraries like Twisted, the function in the main loop is not sleep, but an operating system call like select() or poll(), as exposed by a module like the Python select module. I say \"like\" select, because this is an API that varies a lot between platforms, and almost every GUI toolkit has its own version. Twisted currently provides an abstract interface to 14 different variations on this theme. The common thing that such an API provides is provide a way to say \"Here are a list of events that I'm waiting for. Go to sleep until one of them happens, then wake up and tell me which one of them it was.\"\n"
] |
[
65
] |
[] |
[] |
[
"asynchronous",
"python",
"twisted"
] |
stackoverflow_0000080617_asynchronous_python_twisted.txt
|
Q:
How do F# units of measure work?
Has anyone had a chance to dig into how F# Units of Measure work? Is it just type-based chicanery, or are there CLR types hiding underneath that could (potentially) be used from other .net languages? Will it work for any numerical unit, or is it limited to floating point values (which is what all the examples use)?
A:
The best (and I think official) place to find out about this is on Andrew Kennedy's blog.
Here are the (current) relevant posts.
Units of Measure in F#: Part One, Introducing Units
Units of Measure in F#: Part Two, Unit Conversions
Units of Measure in F#: Part Three, Generic Units
Units of Measure in F#: Part Four, Parameterized Types
As I said in the post that your answerer referred to, this is most definitely something that you CAN'T do in C# (though I wish you could).
A:
According to a response on the next related blog post, they are a purely static mechanism in the F# compiler. So there is no CLR representation of the units data.
It's not entirely clear whether it currently works with non-float types, but from the perspective of the type system it is theoretically possible.
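For a flavour of the syntax, a minimal sketch (units and values invented for illustration):
[<Measure>] type m    // metres
[<Measure>] type s    // seconds

let distance = 100.0<m>
let time     = 9.58<s>
let speed    = distance / time    // inferred as float<m/s>

// let oops = distance + time    // would not compile: m and s don't mix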
|
How do F# units of measure work?
|
Has anyone had a chance to dig into how F# Units of Measure work? Is it just type-based chicanery, or are there CLR types hiding underneath that could (potentially) be used from other .net languages? Will it work for any numerical unit, or is it limited to floating point values (which is what all the examples use)?
|
[
"The best (and I think official) place to find out about this is on Andrew Kennedy's blog.\nHere are the (current) relevant posts.\n\nUnits of Measure in F#: Part One, Introducing Units\nUnits of Measure in F#: Part Two, Unit Conversions\nUnits of Measure in F#: Part Three, Generic Units\nUnits of Measure in F#: Part Four, Parameterized Types\n\nAs I said in the post that your answerer referred to, this is most definitely something that you CAN'T do in C# (though I wish you could).\n",
"According to a response on the next related blog post, they are a purely static mechanism in the F# compiler. So there is no CLR representation of the units data.\nIts not entirely clear whether it currently works with non-float types, but from the perspective of the type system it is theoretically possible. \n"
] |
[
17,
13
] |
[] |
[] |
[
".net",
"f#",
"functional_programming",
"units_of_measurement"
] |
stackoverflow_0000040845_.net_f#_functional_programming_units_of_measurement.txt
|
Q:
Mono's DateTime Serialization
If you use Mono Remoting on Linux, what's your work-around for the DateTime marshalling incompatibility between Mono and .NET Remoting?
I'm using WinForms on Windows on the .NET 2.0 runtime, and Remoting on Linux using Mono. I cannot yet use the Mono runtime on both ends, as Mono's DataGridView isn't working yet.
[UPDATE]
I used Mono 1.9 when the question was posted. I'm using Mono 2.4 now, and its DateTime is now compatible with .NET. Kudos to Miguel de Icaza, his team and Novell
A:
I think a much better solution would be refactoring the code, so instead of the (yet under-supported) remoting, use web services. XML serialization of most basic data types is, IIRC, fully supported, and in certain circumstances it fits the architecture much better (especially server-client architectures).
A:
File a bug with a test case.
|
Mono's DateTime Serialization
|
If you use Mono Remoting on Linux, what's your work-around for the DateTime marshalling incompatibility between Mono and .NET Remoting?
I'm using WinForms on Windows on the .NET 2.0 runtime, and Remoting on Linux using Mono. I cannot yet use the Mono runtime on both ends, as Mono's DataGridView isn't working yet.
[UPDATE]
I used Mono 1.9 when the question was posted. I'm using Mono 2.4 now, and its DateTime is now compatible with .NET. Kudos to Miguel de Icaza, his team and Novell
|
[
"I think a much better solution would be refactoring the code, so instead of the (yet under-supported) remoting, use web services. XML serialization of most basic data types are IIRC fully supported; and in certain circumstances, fits the architecture much better (especially server-client architectures).\n",
"File a bug with a test case.\n"
] |
[
2,
1
] |
[] |
[] |
[
".net",
"datetime",
"mono",
"remoting"
] |
stackoverflow_0000070487_.net_datetime_mono_remoting.txt
|
Q:
Perl Sys::Syslog on Solaris
Has anyone got Sys::Syslog to work on Solaris? (I'm running Sys::Syslog 0.05 on Perl v5.8.4 on SunOS 5.10 on SPARC). Here's what doesn't work for me:
openlog "myprog", "pid", "user" or die;
syslog "crit", "%s", "Test from $0" or die;
closelog() or warn "Can't close: $!";
system "tail /var/adm/messages";
Whatever I do, the closelog returns an error and nothing ever gets logged anywhere.
A:
By default, Sys::Syslog is going to try to connect with one of the following socket types:
[ 'tcp', 'udp', 'unix', 'stream' ]
On Solaris, though, you'll need to use an inet socket. Call:
setlogsock('inet', $hostname);
and things should start working.
A:
In general you can answer "does module $x work on platform $y" questions by looking at the CPAN testers matrix, like here.
A:
setlogsock('inet') didn't do it for me (it looks for host "syslog") but building and installing Sys::Syslog from CPAN did. The Sys::Syslog that comes with Solaris 10 is ancient.
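Putting the two suggestions together, a sketch (the 'localhost' target is an assumption - point it at whichever host runs your syslogd):
use Sys::Syslog qw(:DEFAULT setlogsock);

setlogsock('inet', 'localhost');    # force an inet socket before openlog
openlog('myprog', 'pid', 'user') or die "openlog: $!";
syslog('crit', '%s', "Test from $0");
closelog() or warn "closelog: $!";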
|
Perl Sys::Syslog on Solaris
|
Has anyone got Sys::Syslog to work on Solaris? (I'm running Sys::Syslog 0.05 on Perl v5.8.4 on SunOS 5.10 on SPARC). Here's what doesn't work for me:
openlog "myprog", "pid", "user" or die;
syslog "crit", "%s", "Test from $0" or die;
closelog() or warn "Can't close: $!";
system "tail /var/adm/messages";
Whatever I do, the closelog returns an error and nothing ever gets logged anywhere.
|
[
"By default, Sys::Syslog is going to try to connect with one of the following socket types:\n[ 'tcp', 'udp', 'unix', 'stream' ]\n\nOn Solaris, though, you'll need to use an inet socket. Call:\nsetlogsock('inet', $hostname);\n\nand things should start working.\n",
"In general you can answer \"does module $x work on platform $y\" questions by looking at the CPAN testers matrix, like here.\n",
"setlogsock('inet') didn't do it for me (it looks for host \"syslog\") but building and installing Sys::Syslog from CPAN did. The Sys::Syslog that comes with Solaris 10 is ancient.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"perl",
"solaris",
"syslog"
] |
stackoverflow_0000067760_perl_solaris_syslog.txt
|
Q:
Spatial Data Structures in C
I do work in theoretical chemistry on a high performance cluster, often involving molecular dynamics simulations. One of the problems my work addresses involves a static field of N-dimensional (typically N = 2-5) hyper-spheres that a test particle may collide with. I'm looking to optimize (read: overhaul) the data structure I use for representing the field of spheres so I can do rapid collision detection. Currently I use a dead simple array of pointers to an N-membered struct (doubles for each coordinate of the center) and a nearest-neighbor list. I've heard of oct- and quad-trees but haven't found a clear explanation of how they work, how to efficiently implement one, or how to then do fast collision detection with one. Given the size of my simulations, memory is (almost) no object, but cycles are.
A:
How best to approach this for your problem depends on several factors that you have not described:
- Will the same hypersphere arrangement be used for many particle collision calculations?
- Are the hyperspheres uniform size?
- What is the movement of the particle (e.g. straight line/curve) and is that movement affected by the spheres?
- Do you consider the particle to have zero volume?
I assume that the particle does not have simple straight line movement as that would be the relatively fast calculation of finding the closest point between a line and a point, which is likely going to be about the same speed as finding which of the boxes the line intersects with (to determine where in the n-tree to examine).
If your hypersphere positions are fixed for a lot of particle collisions then computing a voronoi decomposition/Dirichlet tessellation would give you a fast way of later finding exactly which sphere is closest to your particle for any given point in the space.
However to answer your original question about octrees/quadtrees/2^n-trees, in n dimensions you start with a (hyper)-cube that contains the area of space that you are interested in. This will be subdivided into 2^n hypercubes if you deem the contents to be too complicated. This continues recursively until you have only simple elements (e.g. one hypersphere centroid) in the leaf nodes.
Now that the n-tree is built you use it for collision detection by taking the path of your particle and intersecting it with the outer hypercube. The intersection position will tell you which hypercube in the next level down of the tree to visit next, and you determine the position of intersection with all 2^n hypercubes at that level, following downwards until you reach a leaf node. Once you reach the leaf you can examine interactions between your particle path and the hypersphere stored at that leaf. If you have collision you have finished, otherwise you have to find the exit point of the particle path from the current hypercube leaf and determine which hypercube it moves to next. Continue until you find a collision or entirely leave the overall bounding hypercube.
Efficiently finding the neighbouring hypercube when exiting a hypercube is one of the most challenging parts of this approach. For 2^n trees Samet's approaches {1, 2} can be adapted. For kd-trees (binary trees) an approach is suggested in {3} section 4.3.3.
Efficient implementation can be as simple as storing a list of 8 pointers from each hypercube to its children hypercubes, and marking the hypercube in a special way if it is a leaf (e.g. make all pointers NULL).
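A bare-bones node layout along those lines might look like this (a sketch; Sphere and N_DIMS are stand-ins for whatever the simulation already defines):
#define N_DIMS     3
#define N_CHILDREN (1 << N_DIMS)    /* 2^n children per internal node */

typedef struct Sphere {
    double centre[N_DIMS];
    double radius;
} Sphere;

typedef struct Node {
    double        min[N_DIMS];        /* bounds of this hypercube */
    double        max[N_DIMS];
    struct Node  *child[N_CHILDREN];  /* all NULL marks a leaf */
    const Sphere *payload;            /* occupant of a leaf, or NULL */
} Node;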
A description of dividing space to create a quadtree (which you can generalise to n-tree) can be found in Klinger & Dyer {4}
As others have mentioned kd-trees may be more suited than 2^n-trees as extension to an arbitrary number of dimensions is more straightforward, however they will result in a deeper tree. It is also easier to adapt the split positions to match the geometry of your
hyperspheres with a kd-tree. The description above of collision detection in a 2^n tree is equally applicable to a kd-tree.
{1} Connected Component Labeling, Hanan Samet, Using Quadtrees Journal of the ACM Volume 28 , Issue 3 (July 1981)
{2} Neighbor finding in images represented by octrees, Hanan Samet, Computer Vision, Graphics, and Image Processing Volume 46 , Issue 3 (June 1989)
{3} Convex hull generation, connected component labelling, and minimum distance
calculation for set-theoretically defined models, Dan Pidcock, 2000
{4} Experiments in picture representation using regular decomposition, Klinger, A., and Dyer, C.R. E, Comptr. Graphics and Image Processing 5 (1976), 68-105.
A:
It sounds like you'd want to implement a kd-tree, which would allow you to more quickly search the N-dimensional space. There's some more information and links to implementations at the Stony Brook Algorithm Repository.
A:
Since your field is static (by which I'm assuming you mean that the hyper-spheres don't move), the fastest solution I know of is a kd-tree.
You can either make your own, or use someone else's, like this one:
http://libkdtree.alioth.debian.org/
A:
A Quad tree is a 2 dimensional tree, in which at each level a node has 4 children, each of which covers 1/4 of the area of the parent node.
An Oct tree is a 3 dimensional tree, in which at each level a node has 8 children, each of which contains 1/8th of the volume of the parent node. Here is a picture to help you visualize it: http://en.wikipedia.org/wiki/Octree
If you're doing N dimensional intersection tests, you could generalize this to an N tree.
Intersection algorithms work by starting at the top of the tree and recursively traversing into any child nodes that intersect the object being tested, at some point you get to leaf nodes, which contain the actual objects.
A:
An octree will work as long as you can specify the spheres by their centres - it hierarchically bins points into cubic regions with eight children. Working out neighbours in an octree data structure will require you to do sphere-intersecting-cube calculations (to some extent easier than they look) to work out which cubic regions in an octree are within the sphere.
Finding the nearest neighbours means walking back up the tree until you get a node with more than one populated child and all surrounding nodes included (this ensures the query gets all sides).
From memory, this is the (somewhat naive) basic algorithm for sphere-cube intersection:
i. Is the centre within the cube (this gets the eponymous situation)
ii. Are any of the corners of the cube within radius r of the centre (corners within the sphere)
iii. For each surface of the cube (you can eliminate some of the surfaces by working out which side of the surface the centre lies on) work out (this is all first-year vector arithmetic):
a. A normal of the surface that goes to the centre of the sphere
b. The distance from the centre of the sphere to the intersection of the normal with the plane of the surface (where the chord intersects the plane of the cube's surface)
c. Whether the intersection point lies within the side of the cube (one condition for the chord intersecting the cube)
iv. Calculate the size of the chord (Sin of Cos^-1 of ratio of normal length to radius of sphere)
v. If the nearest point on the line is less than the distance of the chord and the point lies between the ends of the line the chord intersects one of the edges of the cube (chord intersects cube surface somewhere along one of the edges).
Slightly dimly remembered but this is something I did for a situation involving spherical regions using an octree data structure (many years ago). You may also wish to check out KD-trees as some of the other posters suggest but your initial question sounds very similar to what I did.
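As a compact alternative to the step-by-step test above, the standard sphere/axis-aligned-box check (often attributed to Arvo) accumulates the squared distance from the centre to the box, one axis at a time - a sketch, not the poster's original routine:
/* nonzero when the sphere (centre c, radius r) overlaps the box [lo, hi] */
int sphere_box_overlap(const double *c, double r,
                       const double *lo, const double *hi, int dims)
{
    double d2 = 0.0;
    int i;
    for (i = 0; i < dims; ++i) {
        if (c[i] < lo[i]) {           /* centre is below the box on axis i */
            double d = lo[i] - c[i];
            d2 += d * d;
        } else if (c[i] > hi[i]) {    /* centre is above the box on axis i */
            double d = c[i] - hi[i];
            d2 += d * d;
        }                             /* else the box spans c[i]: gap is 0 */
    }
    return d2 <= r * r;
}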
|
Spatial Data Structures in C
|
I do work in theoretical chemistry on a high performance cluster, often involving molecular dynamics simulations. One of the problems my work addresses involves a static field of N-dimensional (typically N = 2-5) hyper-spheres that a test particle may collide with. I'm looking to optimize (read: overhaul) the data structure I use for representing the field of spheres so I can do rapid collision detection. Currently I use a dead simple array of pointers to an N-membered struct (doubles for each coordinate of the center) and a nearest-neighbor list. I've heard of oct- and quad-trees but haven't found a clear explanation of how they work, how to efficiently implement one, or how to then do fast collision detection with one. Given the size of my simulations, memory is (almost) no object, but cycles are.
|
[
"How best to approach this for your problem depends on several factors that you have not described:\n- Will the same hypersphere arrangement be used for many particle collision calculations?\n- Are the hyperspheres uniform size?\n- What is the movement of the particle (e.g. straight line/curve) and is that movement affected by the spheres?\n- Do you consider the particle to have zero volume?\nI assume that the particle does not have simple straight line movement as that would be the relatively fast calculation of finding the closest point between a line and a point, which is likely going to be about the same speed as finding which of the boxes the line intersects with (to determine where in the n-tree to examine).\nIf your hypersphere positions are fixed for a lot of particle collisions then computing a voronoi decomposition/Dirichlet tessellation would give you a fast way of later finding exactly which sphere is closest to your particle for any given point in the space.\nHowever to answer your original question about octrees/quadtrees/2^n-trees, in n dimensions you start with a (hyper)-cube that contains the area of space that you are interested in. This will be subdivided into 2^n hypercubes if you deem the contents to be too complicated. This continues recursively until you have only simple elements (e.g. one hypersphere centroid) in the leaf nodes.\nNow that the n-tree is built you use it for collision detection by taking the path of your particle and intersecting it with the outer hypercube. The intersection position will tell you which hypercube in the next level down of the tree to visit next, and you determine the position of intersection with all 2^n hypercubes at that level, following downwards until you reach a leaf node. Once you reach the leaf you can examine interactions between your particle path and the hypersphere stored at that leaf. If you have collision you have finished, otherwise you have to find the exit point of the particle path from the current hypercube leaf and determine which hypercube it moves to next. Continue until you find a collision or entirely leave the overall bounding hypercube.\nEfficiently finding the neighbouring hypercube when exiting a hypercube is one of the most challenging parts of this approach. For 2^n trees Samet's approaches {1, 2} can be adapted. For kd-trees (binary trees) an approach is suggested in {3} section 4.3.3.\nEfficient implementation can be as simple as storing a list of 8 pointers from each hypercube to its children hypercubes, and marking the hypercube in a special way if it is a leaf (e.g. make all pointers NULL).\nA description of dividing space to create a quadtree (which you can generalise to n-tree) can be found in Klinger & Dyer {4}\nAs others have mentioned kd-trees may be more suited than 2^n-trees as extension to an arbitrary number of dimensions is more straightforward, however they will result in a deeper tree. It is also easier to adapt the split positions to match the geometry of your \nhyperspheres with a kd-tree. 
The description above of collision detection in a 2^n tree is equally applicable to a kd-tree.\n{1} Connected Component Labeling, Hanan Samet, Using Quadtrees Journal of the ACM Volume 28 , Issue 3 (July 1981)\n{2} Neighbor finding in images represented by octrees, Hanan Samet, Computer Vision, Graphics, and Image Processing Volume 46 , Issue 3 (June 1989)\n{3} Convex hull generation, connected component labelling, and minimum distance\ncalculation for set-theoretically defined models, Dan Pidcock, 2000\n{4} Experiments in picture representation using regular decomposition, Klinger, A., and Dyer, C.R. E, Comptr. Graphics and Image Processing 5 (1976), 68-105.\n",
"It sounds like you'd want to implement a kd-tree, which would allow you to more quickly search the N-dimensional space. There's some more information and links to implementations at the Stony Brook Algorithm Repository.\n",
"Since your field is static (by which I'm assuming you mean that the hyper spheres don't move), then the fastest solution I know of is a Kdtree.\nYou can either make your own, or use someone else's, like this one:\nhttp://libkdtree.alioth.debian.org/\n",
"A Quad tree is a 2 dimensional tree, in which at each level a node has 4 children, each of which covers 1/4 of the area of the parent node.\nAn Oct tree is a 3 dimensional tree, in which at each level a node has 8 children, each of which contains 1/8th of the volume of the parent node. Here is picture to help you visualize it: http://en.wikipedia.org/wiki/Octree\nIf you're doing N dimensional intersection tests, you could generalize this to an N tree.\nIntersection algorithms work by starting at the top of the tree and recursively traversing into any child nodes that intersect the object being tested, at some point you get to leaf nodes, which contain the actual objects.\n",
"An octree will work as long as you can specify the spheres by their centres - it hierarchically bins points into cubic regions with eight children. Working out neighbours in an octree data structure will require you to do sphere-intersecting-cube calculations (to some extent easier than they look) to work out which cubic regions in an octree are within the sphere.\nFinding the nearest neighbours means walking back up the tree until you get a node with more than one populated child and all surrounding nodes included (this ensures the query gets all sides).\nFrom memory, this is the (somewhat naive) basic algorithm for sphere-cube intersection:\ni. Is the centre within the cube (this gets the eponymous situation)\nii. Are any of the corners of the cube within radius r of the centre (corners within the sphere)\niii. For each surface of the cube (you can eliminate some of the surfaces by working out which side of the surface the centre lies on) work out (this is all first-year vector arithmetic):\na. A normal of the surface that goes to the centre of the sphere\nb. The distance from the centre of the sphere to the intersection of the normal with the plane of the surface (chord intersets plane the surface of the cube)\nc. Intersection of the plane lies within the side of the cube (one condition of chord intersection to the cube)\niv. Calculate the size of the chord (Sin of Cos^-1 of ratio of normal length to radius of sphere)\nv. If the nearest point on the line is less than the distance of the chord and the point lies between the ends of the line the chord intersects one of the edges of the cube (chord intersects cube surface somewhere along one of the edges).\nSlightly dimly remembered but this is something I did for a situation involving spherical regions using an octee data structure (many years ago). You may also wish to check out KD-trees as some of the other posters suggest but your initial question sounds very similar to what I did.\n"
] |
[
4,
1,
1,
0,
0
] |
[] |
[] |
[
"c",
"computational_geometry",
"data_structures",
"optimization",
"performance"
] |
stackoverflow_0000078045_c_computational_geometry_data_structures_optimization_performance.txt
|
Q:
How to avoid temporary file creation on server-side when pushing back full HTML content to clients?
In a server-side application running on Tomcat, I am generating full HTML pages (with header) based on random user-requested sites pulled down from the Internet. The client-side application uses asynchronous callbacks for requesting processing of a particular web page. Since processing can take a while, I want to inform the user about progress via polling, hence the callbacks.
On server-side, after the web page is retrieved, it is processed and an "enhanced" version is created. Then this version has to go back to the user.
Displaying the page as part of the page of the client-side application is not an option.
Currently, the server generates a temporary file and sends back a link to it. This is clearly suboptimal.
The next best solution I can come up with involves creating a caching-DB that stores the HTML content together with its md5-sums or sha1-ids and then sends back a link to a servlet, with the hash-ID as an argument. The servlet then requests the site from the caching-DB.
Is there any better solution? If not, which DB-backend would you propose? I'm thinking of SQLite. Part of the problem to be solved is: how do I push a page <html> to </html> back to client side?
A:
If true persistence isn't required, how about using something more ephemeral, like memcached, instead of SQL? Calling semantics are pretty clean and easy - and of course you can expire the data manually, via TTL, or at restart.
A:
Instead of creating a temporary file, filling it up, and then sending a link, you can create a memory buffer, fill it up, and then send that as the response (serve it with mime-type 'text/html'). If you don't want to send page-buffers immediately, you can save them for later in the user's session. If you're worried about taking up too much memory that way, you may want to keep only a certain number of page-buffers around in memory, and write the rest to disk for later retrieval. Using a DB sounds like overkill (after all, there's no relational information involved) - but it would solve the caching problem nicely.
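A rough sketch of that memory-buffer approach (the class name and session attribute are invented for illustration) - a servlet that serves the enhanced page straight from the user's session instead of linking to a temporary file:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EnhancedPageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // the processing step is assumed to have stashed the page here
        String html = (String) req.getSession().getAttribute("enhancedPage");
        if (html == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType("text/html");
        resp.setCharacterEncoding("UTF-8");
        resp.getWriter().write(html);
    }
}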
|
How to avoid temporary file creation on server-side when pushing back full HTML content to clients?
|
In a server-side application running on Tomcat, I am generating full HTML pages (with header) based on random user-requested sites pulled down from the Internet. The client-side application uses asynchronous callbacks for requesting processing of a particular web page. Since processing can take a while, I want to inform the user about progress via polling, hence the callbacks.
On server-side, after the web page is retrieved, it is processed and an "enhanced" version is created. Then this version has to go back to the user.
Displaying the page as part of the page of the client-side application is not an option.
Currently, the server generates a temporary file and sends back a link to it. This is clearly suboptimal.
The next best solution I can come up with involves creating a caching-DB that stores the HTML content together with its md5-sums or sha1-ids and then sends back a link to a servlet, with the hash-ID as an argument. The servlet then requests the site from the caching-DB.
Is there any better solution? If not, which DB-backend would you propose? I'm thinking of SQLite. Part of the problem to be solved is: how do I push a page <html> to </html> back to client side?
|
[
"If true persistence isn't required how about using something more temporal like memcached instead of SQL? Calling semantics are pretty clean and easy - and of course you can expire the data manually, ttl, or @ restart.\n",
"Instead of creating a temporary file, filling it up, and then sending a link, you can create a memory buffer, fill it up, and then send that as the response (serve it with mime-type 'text/html'). If you don't want to send page-buffers immediately, you can save them for later in the user's session. If you're worried of taking up too much memory that way, you may want to keep only a certain number of page-buffers around in memory, and write the rest to disk for later retrieval. Using a DB sounds like overkill (after all, there's no relational information involved) - but it would solve the caching problem nicely.\n"
] |
[
1,
1
] |
[] |
[] |
[
"java",
"rpc",
"servlets",
"tomcat"
] |
stackoverflow_0000081449_java_rpc_servlets_tomcat.txt
|
Q:
How do I convert from a location (address) String to a YGeoPoint in Yahoo Maps API?
I have a list of addresses from a Database for which I'd like to put markers on a Yahoo Map. The addMarker() method on YMap takes a YGeoPoint, which requires a latitude and longitude. However, Yahoo Maps must know how to convert from addresses because drawZoomAndCenter(LocationType,ZoomLevel) can take an address. I could convert by using drawZoomAndCenter() then getCenterLatLon() but is there a better way, which doesn't require a draw?
A:
You can ask the map object to do the geoCoding, and catch the callback:
<script type="text/javascript">
var map = new YMap(document.getElementById('map'));
map.drawZoomAndCenter("Algeria", 17);
map.geoCodeAddress("Cambridge, UK");
YEvent.Capture(map, EventsList.onEndGeoCode, function(geoCode) {
if (geoCode.success)
map.addOverlay(new YMarker(geoCode.GeoPoint));
});
</script>
One thing to beware of -- in this example the drawZoomAndCenter call will itself make a geoCoding request, so you'll get the callback from that too. You might want to filter that out, or set the map's centre based on a GeoPoint.
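One way to filter it, assuming the result object reports the address it resolved (a sketch - check the field name against the API docs):
YEvent.Capture(map, EventsList.onEndGeoCode, function(geoCode) {
    // ignore the callback triggered by drawZoomAndCenter("Algeria", ...)
    if (!geoCode.success || geoCode.Address != "Cambridge, UK")
        return;
    map.addOverlay(new YMarker(geoCode.GeoPoint));
});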
|
How do I convert from a location (address) String to a YGeoPoint in Yahoo Maps API?
|
I have a list of addresses from a Database for which I'd like to put markers on a Yahoo Map. The addMarker() method on YMap takes a YGeoPoint, which requires a latitude and longitude. However, Yahoo Maps must know how to convert from addresses because drawZoomAndCenter(LocationType,ZoomLevel) can take an address. I could convert by using drawZoomAndCenter() then getCenterLatLon() but is there a better way, which doesn't require a draw?
|
[
"You can ask the map object to do the geoCoding, and catch the callback:\n<script type=\"text/javascript\"> \nvar map = new YMap(document.getElementById('map'));\nmap.drawZoomAndCenter(\"Algeria\", 17);\n\nmap.geoCodeAddress(\"Cambridge, UK\");\n\nYEvent.Capture(map, EventsList.onEndGeoCode, function(geoCode) {\n if (geoCode.success)\n map.addOverlay(new YMarker(geoCode.GeoPoint));\n});\n</script>\n\nOne thing to beware of -- in this example the drawAndZoom call will itself make a geoCoding request, so you'll get the callback from that too. You might want to filter that out, or set the map's centre based on a GeoPoint.\n"
] |
[
1
] |
[
"If you're working with U.S. addresses, you can use geocoder.us, which has APIs.\nAlso, Google Maps Hacks has a hack, \"Hack 62. Find the Latitude and Longitude of a Street Address\", for that.\n"
] |
[
-1
] |
[
"api",
"yahoo_maps"
] |
stackoverflow_0000073524_api_yahoo_maps.txt
|
Q:
Which to use, eruby or erb?
What's the difference between eruby and erb? What considerations would drive me to choose one or the other?
My application is generating config files for network devices (routers, load balancers, firewalls, etc.). My plan is to template the config files, using embedded ruby (via either eruby or erb) within the source files to do things like iteratively generate all the interface config blocks for a router (these blocks are all very similar, differing only in a label and an IP address). For example, I might have a config template file like this:
hostname sample-router
<%=
r = String.new;
[
["GigabitEthernet1/1", "10.5.16.1"],
["GigabitEthernet1/2", "10.5.17.1"],
["GigabitEthernet1/3", "10.5.18.1"]
].each { |tuple|
r << "interface #{tuple[0]}\n"
r << " ip address #{tuple[1]} netmask 255.255.255.0\n"
}
r.chomp
%>
logging 10.5.16.26
which, when run through an embedded ruby interpreter (either erb or eruby), produces the following output:
hostname sample-router
interface GigabitEthernet1/1
ip address 10.5.16.1 netmask 255.255.255.0
interface GigabitEthernet1/2
ip address 10.5.17.1 netmask 255.255.255.0
interface GigabitEthernet1/3
ip address 10.5.18.1 netmask 255.255.255.0
logging 10.5.16.26
A:
Doesn't really matter, they're both the same. erb is pure ruby, eruby is written in C so it's a bit faster.
erubis (a third one) is pure ruby, and faster than both the ones listed above. But I doubt the speed of that is the bottleneck for you, so just use erb. It's part of Ruby Standard Library.
A:
Eruby is an external executable, while erb is a library within Ruby. You would use the former if you wanted independent processing of your template files (e.g. quick-and-dirty PHP replacement), and the latter if you needed to process them within the context of some other Ruby script. It is more common to use ERB simply because it is more flexible, but I'll admit that I have been guilty of dabbling in eruby to execute .rhtml files for quick little utility websites.
A:
I'm doing something similar using erb, and the performance is fine for me.
As Jordi said though, it depends what context you want to run this in - if you're literally going to use templates like the one you listed, eruby would probably work better, but I'd guess you're actually going to be passing variables to the template, in which case you want erb.
Just for reference, when using erb you'll need to pass it the binding for the object you want to take variables from, something like this:
device = Device.new
device.add_interface("GigabitEthernet1/1", "10.5.16.1")
device.add_interface("GigabitEthernet1/2", "10.5.17.1")
template = File.read("/path/to/your/template.erb")
config = ERB.new(template).result(device.binding)
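One wrinkle with that snippet: Kernel#binding is private, so a bare device.binding will raise. A hypothetical Device that exposes its binding for ERB might look like this (the template would then iterate over @interfaces):
require 'erb'

class Device
  def initialize
    @interfaces = []
  end

  def add_interface(name, ip)
    @interfaces << [name, ip]
  end

  # expose the otherwise-private Kernel#binding so ERB can see @interfaces
  def get_binding
    binding
  end
end

# config = ERB.new(template).result(device.get_binding)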
|
Which to use, eruby or erb?
|
What's the difference between eruby and erb? What considerations would drive me to choose one or the other?
My application is generating config files for network devices (routers, load balancers, firewalls, etc.). My plan is to template the config files, using embedded ruby (via either eruby or erb) within the source files to do things like iteratively generate all the interface config blocks for a router (these blocks are all very similar, differing only in a label and an IP address). For example, I might have a config template file like this:
hostname sample-router
<%=
r = String.new;
[
["GigabitEthernet1/1", "10.5.16.1"],
["GigabitEthernet1/2", "10.5.17.1"],
["GigabitEthernet1/3", "10.5.18.1"]
].each { |tuple|
r << "interface #{tuple[0]}\n"
r << " ip address #{tuple[1]} netmask 255.255.255.0\n"
}
r.chomp
%>
logging 10.5.16.26
which, when run through an embedded ruby interpreter (either erb or eruby), produces the following output:
hostname sample-router
interface GigabitEthernet1/1
ip address 10.5.16.1 netmask 255.255.255.0
interface GigabitEthernet1/2
ip address 10.5.17.1 netmask 255.255.255.0
interface GigabitEthernet1/3
ip address 10.5.18.1 netmask 255.255.255.0
logging 10.5.16.26
|
[
"Doesn't really matter, they're both the same. erb is pure ruby, eruby is written in C so it's a bit faster.\nerubis (a third one) is pure ruby, and faster than both the ones listed above. But I doubt the speed of that is the bottleneck for you, so just use erb. It's part of Ruby Standard Library.\n",
"Eruby is an external executable, while erb is a library within Ruby. You would use the former if you wanted independent processing of your template files (e.g. quick-and-dirty PHP replacement), and the latter if you needed to process them within the context of some other Ruby script. It is more common to use ERB simply because it is more flexible, but I'll admit that I have been guilty of dabbling in eruby to execute .rhtml files for quick little utility websites.\n",
"I'm doing something similar using erb, and the performance is fine for me.\nAs Jordi said though, it depends what context you want to run this in - if you're literally going to use templates like the one you listed, eruby would probably work better, but I'd guess you're actually going to be passing variables to the template, in which case you want erb.\nJust for reference, when using erb you'll need to pass it the binding for the object you want to take variables from, something like this:\ndevice = Device.new\ndevice.add_interface(\"GigabitEthernet1/1\", \"10.5.16.1\")\ndevice.add_interface(\"GigabitEthernet1/2\", \"10.5.17.1\")\n\ntemplate = File.read(\"/path/to/your/template.erb\")\nconfig = ERB.new(template).result(device.binding)\n\n"
] |
[
9,
2,
0
] |
[] |
[] |
[
"erb",
"eruby",
"ruby"
] |
stackoverflow_0000074782_erb_eruby_ruby.txt
|
Q:
How can I listen to a RoutedEvent from a class that doesn't derive from FrameworkElement? Can it be done?
The question says it all basically.
I want a
class MyClass
to listen to a routed event. Can it be done?
A:
Actually I wired up the event the wrong way :|
I had
EventManager.RegisterClassHandler ( typeof ( MyClass )......
Instead of
EventManager.RegisterClassHandler ( typeof ( TheClassThatOwnedTheEvent )
So .. my bad.
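For illustration, a sketch of a corrected call (Button.ClickEvent stands in for whatever routed event MyClass cares about; the handler fires for instances of the registered type as the event routes through them):
EventManager.RegisterClassHandler(
    typeof(Button),                  // type whose instances the handler applies to
    Button.ClickEvent,               // the RoutedEvent to listen for
    new RoutedEventHandler((sender, args) =>
    {
        // forward into MyClass here
    }));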
A:
If you can create an inner class of MyClass (call it MyInnerClass) that derives from FrameworkElement while retaining the capability to access an enclosing MyClass object, your problem will be solved. You can then implement a 'getListener' method within MyClass that returns the embedded MyInnerClass that you will use to actually listen to events.
|
How can I listen to a RoutedEvent from a class that doesn't derive from FrameworkElement? Can it be done?
|
The question says it all basically.
I want a
class MyClass
to listen to a routed event. Can it be done?
|
[
"Actually I wiredup the event the wrong way :|\nI had\nEventManager.RegisterClassHandler ( typeof ( MyClass )......\n\nInstead of\nEventManager.RegisterClassHandler ( typeof ( TheClassThatOwnedTheEvent )\n\nSo .. my bad.\n",
"If you can create an inner class of MyClass (call it MyInnerClass) that derives from FrameworkElement while retaining the capability to access an enclosing MyClass object, your problem will be solved. You can then implement a 'getListener' method within MyClass that returns the embedded MyInnerClass that you will use to actually listen to events.\n"
] |
[
1,
0
] |
[] |
[] |
[
"routed_events",
"wpf"
] |
stackoverflow_0000081280_routed_events_wpf.txt
|
Q:
What IDE to use for Python?
What IDEs ("GUIs/editors") do others use for Python coding?
A:
Results
Spreadsheet version
Alternatively, in plain text: (also available as a screenshot)
Bracket Matching -. .- Line Numbering
Smart Indent -. | | .- UML Editing / Viewing
Source Control Integration -. | | | | .- Code Folding
Error Markup -. | | | | | | .- Code Templates
Integrated Python Debugging -. | | | | | | | | .- Unit Testing
Multi-Language Support -. | | | | | | | | | | .- GUI Designer (Qt, Eric, etc)
Auto Code Completion -. | | | | | | | | | | | | .- Integrated DB Support
Commercial/Free -. | | | | | | | | | | | | | | .- Refactoring
Cross Platform -. | | | | | | | | | | | | | | | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
Atom |Y |F |Y |Y*|Y |Y |Y |Y |Y |Y | |Y |Y | | | | |*many plugins
Editra |Y |F |Y |Y | | |Y |Y |Y |Y | |Y | | | | | |
Emacs |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y | | | |
Eric Ide |Y |F |Y | |Y |Y | |Y | |Y | |Y | |Y | | | |
Geany |Y |F |Y*|Y | | | |Y |Y |Y | |Y | | | | | |*very limited
Gedit |Y |F |Y¹|Y | | | |Y |Y |Y | | |Y²| | | | |¹with plugin; ²sort of
Idle |Y |F |Y | |Y | | |Y |Y | | | | | | | | |
IntelliJ |Y |CF|Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |
JEdit |Y |F | |Y | | | | |Y |Y | |Y | | | | | |
KDevelop |Y |F |Y*|Y | | |Y |Y |Y |Y | |Y | | | | | |*no type inference
Komodo |Y |CF|Y |Y |Y |Y |Y |Y |Y |Y | |Y |Y |Y | |Y | |
NetBeans* |Y |F |Y |Y |Y | |Y |Y |Y |Y |Y |Y |Y |Y | | |Y |*pre-v7.0
Notepad++ |W |F |Y |Y | |Y*|Y*|Y*|Y |Y | |Y |Y*| | | | |*with plugin
Pfaide |W |C |Y |Y | | | |Y |Y |Y | |Y |Y | | | | |
PIDA |LW|F |Y |Y | | | |Y |Y |Y | |Y | | | | | |VIM based
PTVS              |W |F |Y |Y |Y |Y |Y |Y |Y |Y | |Y | | |Y*| |Y |*WPF based
PyCharm |Y |CF|Y |Y*|Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |*JavaScript
PyDev (Eclipse) |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y | | | |
PyScripter |W |F |Y | |Y |Y | |Y |Y |Y | |Y |Y |Y | | | |
PythonWin |W |F |Y | |Y | | |Y |Y | | |Y | | | | | |
SciTE |Y |F¹| |Y | |Y | |Y |Y |Y | |Y |Y | | | | |¹Mac version is
ScriptDev |W |C |Y |Y |Y |Y | |Y |Y |Y | |Y |Y | | | | | commercial
Spyder |Y |F |Y | |Y |Y | |Y |Y |Y | | | | | | | |
Sublime Text |Y |CF|Y |Y | |Y |Y |Y |Y |Y | |Y |Y |Y*| | | |extensible w/Python,
TextMate |M |F | |Y | | |Y |Y |Y |Y | |Y |Y | | | | | *PythonTestRunner
UliPad |Y |F |Y |Y |Y | | |Y |Y | | | |Y |Y | | | |
Vim |Y |F |Y |Y |Y |Y |Y |Y |Y |Y | |Y |Y |Y | | | |
Visual Studio |W |CF|Y |Y |Y |Y |Y |Y |Y |Y |? |Y |? |? |Y |? |Y |
Visual Studio Code|Y |F |Y |Y |Y |Y |Y |Y |Y |Y |? |Y |? |? |? |? |Y |uses plugins
WingIde |Y |C |Y |Y*|Y |Y |Y |Y |Y |Y | |Y |Y |Y | | | |*support for C
Zeus |W |C | | | | |Y |Y |Y |Y | |Y |Y | | | | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
Cross Platform -' | | | | | | | | | | | | | | | |
Commercial/Free -' | | | | | | | | | | | | | | '- Refactoring
Auto Code Completion -' | | | | | | | | | | | | '- Integrated DB Support
Multi-Language Support -' | | | | | | | | | | '- GUI Designer (Qt, Eric, etc)
Integrated Python Debugging -' | | | | | | | | '- Unit Testing
Error Markup -' | | | | | | '- Code Templates
Source Control Integration -' | | | | '- Code Folding
Smart Indent -' | | '- UML Editing / Viewing
Bracket Matching -' '- Line Numbering
Acronyms used:
L - Linux
W - Windows
M - Mac
C - Commercial
F - Free
CF - Commercial with Free limited edition
? - To be confirmed
I don't mention basics like syntax highlighting as I expect these by default.
This is just a dry list reflecting your feedback and comments; I am not advocating any of these tools. I will keep updating this list as you keep posting your answers.
PS. Can you help me to add features of the above editors to the list (like auto-complete, debugging, etc.)?
We have a comprehensive wiki page for this question https://wiki.python.org/moin/IntegratedDevelopmentEnvironments
Submit edits to the spreadsheet
|
What IDE to use for Python?
|
What IDEs ("GUIs/editors") do others use for Python coding?
|
[
"\nResults\nSpreadsheet version\n\nAlternatively, in plain text: (also available as a a screenshot)\n Bracket Matching -. .- Line Numbering\n Smart Indent -. | | .- UML Editing / Viewing\n Source Control Integration -. | | | | .- Code Folding\n Error Markup -. | | | | | | .- Code Templates\n Integrated Python Debugging -. | | | | | | | | .- Unit Testing\n Multi-Language Support -. | | | | | | | | | | .- GUI Designer (Qt, Eric, etc)\n Auto Code Completion -. | | | | | | | | | | | | .- Integrated DB Support\n Commercial/Free -. | | | | | | | | | | | | | | .- Refactoring\n Cross Platform -. | | | | | | | | | | | | | | | | \n +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+\nAtom |Y |F |Y |Y*|Y |Y |Y |Y |Y |Y | |Y |Y | | | | |*many plugins\nEditra |Y |F |Y |Y | | |Y |Y |Y |Y | |Y | | | | | |\nEmacs |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y | | | |\nEric Ide |Y |F |Y | |Y |Y | |Y | |Y | |Y | |Y | | | |\nGeany |Y |F |Y*|Y | | | |Y |Y |Y | |Y | | | | | |*very limited\nGedit |Y |F |Y¹|Y | | | |Y |Y |Y | | |Y²| | | | |¹with plugin; ²sort of\nIdle |Y |F |Y | |Y | | |Y |Y | | | | | | | | |\nIntelliJ |Y |CF|Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |\nJEdit |Y |F | |Y | | | | |Y |Y | |Y | | | | | |\nKDevelop |Y |F |Y*|Y | | |Y |Y |Y |Y | |Y | | | | | |*no type inference\nKomodo |Y |CF|Y |Y |Y |Y |Y |Y |Y |Y | |Y |Y |Y | |Y | |\nNetBeans* |Y |F |Y |Y |Y | |Y |Y |Y |Y |Y |Y |Y |Y | | |Y |*pre-v7.0\nNotepad++ |W |F |Y |Y | |Y*|Y*|Y*|Y |Y | |Y |Y*| | | | |*with plugin\nPfaide |W |C |Y |Y | | | |Y |Y |Y | |Y |Y | | | | |\nPIDA |LW|F |Y |Y | | | |Y |Y |Y | |Y | | | | | |VIM based\nPTVS |W |F |Y |Y |Y |Y |Y |Y |Y |Y | |Y | | |Y*| |Y |*WPF bsed\nPyCharm |Y |CF|Y |Y*|Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |*JavaScript\nPyDev (Eclipse) |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y | | | |\nPyScripter |W |F |Y | |Y |Y | |Y |Y |Y | |Y |Y |Y | | | |\nPythonWin |W |F |Y | |Y | | |Y |Y | | |Y | | | | | |\nSciTE |Y |F¹| |Y | |Y | |Y |Y |Y | |Y |Y | | | | |¹Mac version is\nScriptDev |W |C |Y |Y |Y |Y | |Y |Y |Y | |Y |Y | | | | | commercial\nSpyder |Y |F |Y | |Y |Y | |Y |Y |Y | | | | | | | |\nSublime Text |Y |CF|Y |Y | |Y |Y |Y |Y |Y | |Y |Y |Y*| | | |extensible w/Python,\nTextMate |M |F | |Y | | |Y |Y |Y |Y | |Y |Y | | | | | *PythonTestRunner\nUliPad |Y |F |Y |Y |Y | | |Y |Y | | | |Y |Y | | | |\nVim |Y |F |Y |Y |Y |Y |Y |Y |Y |Y | |Y |Y |Y | | | |\nVisual Studio |W |CF|Y |Y |Y |Y |Y |Y |Y |Y |? |Y |? |? |Y |? |Y |\nVisual Studio Code|Y |F |Y |Y |Y |Y |Y |Y |Y |Y |? |Y |? |? |? |? |Y |uses plugins\nWingIde |Y |C |Y |Y*|Y |Y |Y |Y |Y |Y | |Y |Y |Y | | | |*support for C\nZeus |W |C | | | | |Y |Y |Y |Y | |Y |Y | | | | |\n +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+\n Cross Platform -' | | | | | | | | | | | | | | | | \n Commercial/Free -' | | | | | | | | | | | | | | '- Refactoring\n Auto Code Completion -' | | | | | | | | | | | | '- Integrated DB Support\n Multi-Language Support -' | | | | | | | | | | '- GUI Designer (Qt, Eric, etc)\n Integrated Python Debugging -' | | | | | | | | '- Unit Testing\n Error Markup -' | | | | | | '- Code Templates\n Source Control Integration -' | | | | '- Code Folding\n Smart Indent -' | | '- UML Editing / Viewing\n Bracket Matching -' '- Line Numbering\n\n\nAcronyms used:\n L - Linux\n W - Windows\n M - Mac\n C - Commercial\n F - Free\n CF - Commercial with Free limited edition\n ? 
- To be confirmed\n\nI don't mention basics like syntax highlighting as I expect these by default.\n\nThis is a just dry list reflecting your feedback and comments, I am not advocating any of these tools. I will keep updating this list as you keep posting your answers.\nPS. Can you help me to add features of the above editors to the list (like auto-complete, debugging, etc.)?\nWe have a comprehensive wiki page for this question https://wiki.python.org/moin/IntegratedDevelopmentEnvironments\nSubmit edits to the spreadsheet\n"
] |
[
1293
] |
[] |
[] |
[
"editor",
"ide",
"python"
] |
stackoverflow_0000081584_editor_ide_python.txt
|
Q:
How to fix an MFC Painting Glitch?
I'm trying to implement some drag and drop functionality for a material system being developed at my work. Part of this system includes a 'Material Library' which acts as a repository, divided into groups, of saved materials on the user's hard drive.
As part of some UI polish, I was hoping to implement a 'highlight' type feature. When dragging and dropping, windows that you can legally drop a material onto will very subtly change color to improve feedback to the user that this is a valid action.
I am changing the bar with 'Basic Materials' (Just a CWnd with a CStatic) from having a medium gray background when unhighlighted to a blue background when hovered over. It all works well, the OnDragEnter and OnDragExit messages seem robust and set a flag indicating the highlight status. Then in OnCtlColor I do this:
if (!m_bHighlighted) {
pDC->FillSolidRect(0, 0, m_SizeX, kGroupHeaderHeight, kBackgroundColour);
}
else {
pDC->FillSolidRect(0, 0, m_SizeX, kGroupHeaderHeight, kHighlightedBackgroundColour);
}
However, as you can see in the screenshot, the painting 'glitches' below the dragged object, leaving the original gray in place. It looks really ugly and basically spoils the whole effect.
Is there any way I can get around this?
A:
Remote debugging is a godsend for debugging visual issues. It's a pain to set up, but having a VM ready for remote debugging will pay off for sure.
What I like to do is set a ton of breakpoints in my paint handling, as well as in the framework paint code itself. This allows you to effectively "freeze frame" the painting without borking it up by flipping into devenv. This way you can get the true picture of who's painting in what order, and where you've got the chance to break in a fill that rect the way you need to.
A:
It almost looks like the CStatic doesn't know that it needs to repaint itself, so the background color of the draggable object is left behind. Maybe try to invalidate the CStatic, and see if that helps at all?
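For reference, a minimal sketch of that suggestion; the window class and member names are assumptions based on the question's description, not known code:
// Hypothetical: called from OnDragEnter/OnDragLeave on the header window.
void CGroupHeaderWnd::SetHighlighted(bool highlighted)
{
    m_bHighlighted = highlighted; // read later by the WM_CTLCOLOR handler
    Invalidate();                 // mark the whole client area dirty
    UpdateWindow();               // repaint now rather than on the next idle
}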
A:
Thanks for the answers guys, ajryan, you seem to always come up with help for my questions so extra thanks.
Thankfully this time the answer was fairly straightforward....
ImageList_DragShowNolock(FALSE);
m_pDragDropTargetWnd->SendMessage(WM_USER_DRAG_DROP_OBJECT_DRAG_ENTER, (WPARAM)pDragDropObject, (LPARAM)(&dragDropPoint));
ImageList_DragShowNolock(TRUE);
This turns off the drawing of the dragged image, then sends a message to the window being entered to repaint in a highlighted state, then finally redraws the drag image over the top. Seems to have done the trick.
|
How to fix an MFC Painting Glitch?
|
I'm trying to implement some drag and drop functionality for a material system being developed at my work. Part of this system includes a 'Material Library' which acts as a repository, divided into groups, of saved materials on the user's hard drive.
As part of some UI polish, I was hoping to implement a 'highlight' type feature. When dragging and dropping, windows that you can legally drop a material onto will very subtly change color to improve feedback to the user that this is a valid action.
I am changing the bar with 'Basic Materials' (Just a CWnd with a CStatic) from having a medium gray background when unhighlighted to a blue background when hovered over. It all works well, the OnDragEnter and OnDragExit messages seem robust and set a flag indicating the highlight status. Then in OnCtlColor I do this:
if (!m_bHighlighted) {
pDC->FillSolidRect(0, 0, m_SizeX, kGroupHeaderHeight, kBackgroundColour);
}
else {
pDC->FillSolidRect(0, 0, m_SizeX, kGroupHeaderHeight, kHighlightedBackgroundColour);
}
However, as you can see in the screenshot, the painting 'glitches' below the dragged object, leaving the original gray in place. It looks really ugly and basically spoils the whole effect.
Is there any way I can get around this?
|
[
"Remote debugging is a godsend for debugging visual issues. It's a pain to set up, but having a VM ready for remote debugging will pay off for sure.\nWhat I like to do is set a ton of breakpoints in my paint handling, as well as in the framework paint code itself. This allows you to effectively \"freeze frame\" the painting without borking it up by flipping into devenv. This way you can get the true picture of who's painting in what order, and where you've got the chance to break in a fill that rect the way you need to.\n",
"It almost looks like the CStatic doesn't know that it needs to repaint itself, so the background color of the draggable object is left behind. Maybe try to invalidate the CStatic, and see if that helps at all?\n",
"Thanks for the answers guys, ajryan, you seem to always come up with help for my questions so extra thanks.\nThankfully this time the answer was fairly straightforward....\nImageList_DragShowNolock(FALSE);\nm_pDragDropTargetWnd->SendMessage(WM_USER_DRAG_DROP_OBJECT_DRAG_ENTER, (WPARAM)pDragDropObject, (LPARAM)(&dragDropPoint));\nImageList_DragShowNolock(TRUE);\n\nThis turns off the drawing of the dragged image, then sends a message to the window being entered to repaint in a highlighted state, then finally redraws the drag image over the top. Seems to have done the trick.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"c++",
"mfc",
"paint"
] |
stackoverflow_0000074350_c++_mfc_paint.txt
|
Q:
override constraint from no action to cascading at runtime
I feel like I have a very basic/stupid question, yet I never saw/read/heard anything in this direction.
Say I have a table users(userId, name) and a table preferences(id, userId, language). The example is trivial but could be extended to a situation with multi-level relations and way more tables..
When my UI requests to delete a user I first want to show a warning stating that its preferences will also be deleted. If at some point the database gets extended with more tables and relationships, but the software isn't adapted accordingly (the client didn't update), a generic message should be shown.
How can I implement this? The UI cannot know about the whole data structure and should not be bothered to walk down all the relations to manually delete all the depending records.
I would think this would be with constraints.
The constraint would be no action at first so the constraint will throw an error that can be caught by the UI. After the UI receives a confirmation, the constraint should become a cascade.
Somehow I'm feeling like I'm getting this all wrong..
A:
What I would do is this:
The constraint is CASCADE
The application checks if preferences exist.
If they do, show the warning.
If no preferences exist, or the warning is accepted, delete the client.
Changing database relationships on the fly is not going to be a good idea!!
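A minimal sketch of that flow, assuming MySQL syntax and the two tables from the question:
-- Declare the relationship as cascading once, at schema time:
ALTER TABLE preferences
    ADD CONSTRAINT fk_preferences_user
    FOREIGN KEY (userId) REFERENCES users (userId)
    ON DELETE CASCADE;

-- Application side: check for dependents, warn, then delete.
SELECT COUNT(*) FROM preferences WHERE userId = ?;
-- if the count is > 0, show the warning; once confirmed:
DELETE FROM users WHERE userId = ?;  -- preference rows are removed with it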
Cheers,
RB.
A:
If you are worried about the user not realising the full impact of their delete, you might want to consider not actually deleting the data - instead you could simply set a flag on a column called say "marked_for_deletion". (the entries could then be deleted a safe time later)
The downside is that you need to remember to filter out the marked rows in other queries. This can be mitigated by creating a view on the table with the marked rows filtered out, and then always using the view in your queries.
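A rough sketch of that idea (the column and view names are made up for illustration):
-- Soft delete: flag the row instead of removing it.
ALTER TABLE users ADD COLUMN marked_for_deletion TINYINT NOT NULL DEFAULT 0;

UPDATE users SET marked_for_deletion = 1 WHERE userId = ?;

-- Hide flagged rows behind a view and query the view everywhere else.
CREATE VIEW active_users AS
    SELECT * FROM users WHERE marked_for_deletion = 0;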
|
override constraint from no action to cascading at runtime
|
I feel like I have a very basic/stupid question, yet I never saw/read/heard anything in this direction.
Say I have a table users(userId, name) and a table preferences(id, userId, language). The example is trivial but could be extended to a situation with multi-level relations and way more tables..
When my UI requests to delete a user I first want to show a warning stating that its preferences will also be deleted. If at some point the database gets extended with more tables and relationships, but the software isn't adapted accordingly (the client didn't update), a generic message should be shown.
How can I implement this? The UI cannot know about the whole data structure and should not be bothered to walk down all the relations to manually delete all the depending records.
I would think this would be with constraints.
The constraint would be no action at first so the constraint will throw an error that can be caught by the UI. After the UI receives a confirmation, the constraint should become a cascade.
Somehow I'm feeling like I'm getting this all wrong..
|
[
"What I would do is this:\n\nThe constraint is CASCADE\nThe application checks if preferences exist.\nIf they do, show the warning.\nIf no preferences exist, or the warning is accepted, delete the client.\n\nChanging database relationships on the fly is not going to be a good idea!!\nCheers,\nRB.\n",
"If you are worried about the user not realising the full impact of their delete, you might want to consider not actually deleting the data - instead you could simply set a flag on a column called say \"marked_for_deletion\". (the entries could then be deleted a safe time later)\nThe downside is that you need to remember to filter out the marked rows in other queries. This can be mitigated by creating a view on the table with the marked rows filtered out, and then always using the view in your queries.\n"
] |
[
1,
0
] |
[] |
[] |
[
"cascade",
"constraints",
"sql"
] |
stackoverflow_0000081560_cascade_constraints_sql.txt
|
Q:
Elegant method for drawing hourly bar chart from time-interval data?
I have a list of timesheet entries that show a start and stop time. This is sitting in a MySQL database. I need to create bar charts based on this data with the 24 hours of the day along the bottom and the amount of man-hours worked for each hour of the day.
For example, if Alice worked a job from 15:30 to 19:30 and Bob worked from 12:15 to 17:00, the chart would look like this:
I have a WTFey solution right now that involves a spreadsheet going out to column DY or something like that. The needed resolution is 15-minute intervals.
I'm assuming this is something best done in the database then exported for chart creation. Let me know if I'm missing any details. Thanks.
A:
Create a table with just time in it from midnight to midnight containing each minute of the day. In the data warehouse world we would call this a time dimension. Here's an example:
TIME_DIM
-id
-time_of_day
-interval_15
-interval_30
an example of the data in the table would be
id time_of_day interval_15 interval_30
1 00:00 00:00 00:00
...
30 00:23 00:15 00:00
...
100 05:44 05:30 05:30
Then all you have to do is join your table to the time dimension and then group by interval_15. For example:
SELECT b.interval_15, count(*)
FROM my_data_table a
INNER JOIN time_dim b ON a.time_field = b.time_of_day
WHERE a.date_field = now()
GROUP BY b.interval_15
A:
I came up with a pseudocode solution, hope it helps.
create an array named timetable with 24 entries
initialise timetable to zero
for each user in SQLtable
firsthour = user.firsthour
lasthour = user.lasthour
firstminutes = 4 - (rounded down integer(user.firstminutes/15))
lastminutes = rounded down integer(user.lastminutes/15)
timetable(firsthour) = timetable(firsthour) + firstminutes
timetable(lasthour) = timetable(lasthour) + lastminutes
for index=firsthour+1 to lasthour-1
timetable(index) = timetable(index) + 4
next index
next user
Now the timetable array holds the values you desire in 15 minute granularity, ie. a value of 4 = 1 hour, 5 = 1 hour 15 minutes, 14 = 3 hours 30 minutes.
A:
Here's another pseudocode solution from a different angle; a bit more intensive because it does 96 queries for every 24hr period:
results = []
for time in range(0, 24, .25):
amount = mysql("select count(*) from User_Activity_Table where time >= start_time and time <= end_time")
results.append(amount)
A:
How about this:
Use that "times" table, but with two columns, containing the 15-minute intervals. The from_times are the 15-minutely times, the to_times are a second before the next from_times. For example 12:30:00 to 12:44:59.
Now get your person work table, which I've called "activity" here, with start_time and end_time columns.
I added values for Alice and Bob as per the original question.
Here's the query from MySQL:
SELECT HOUR(times.from_time) AS 'TIME', count(*) / 4 AS 'HOURS'
FROM times
JOIN activity
ON times.from_time >= activity.start_time AND
times.to_time <= activity.end_time
GROUP BY HOUR(times.from_time)
ORDER BY HOUR(times.from_time)
which gives me this:
TIME HOURS
12 0.7500
13 1.0000
14 1.0000
15 1.5000
16 2.0000
17 1.0000
18 1.0000
19 0.7500
Looks about right...
|
Elegant method for drawing hourly bar chart from time-interval data?
|
I have a list of timesheet entries that show a start and stop time. This is sitting in a MySQL database. I need to create bar charts based on this data with the 24 hours of the day along the bottom and the amount of man-hours worked for each hour of the day.
For example, if Alice worked a job from 15:30 to 19:30 and Bob worked from 12:15 to 17:00, the chart would look like this:
I have a WTFey solution right now that involves a spreadsheet going out to column DY or something like that. The needed resolution is 15-minute intervals.
I'm assuming this is something best done in the database then exported for chart creation. Let me know if I'm missing any details. Thanks.
|
[
"Create a table with just time in it from midnight to midnight containing each minute of the day. In the data warehouse world we would call this a time dimension. Here's an example:\nTIME_DIM\n -id\n -time_of_day\n -interval_15 \n -interval_30\n\nan example of the data in the table would be\nid time_of_day interval_15 interval_30\n1 00:00 00:00 00:00\n...\n30 00:23 00:15 00:00\n...\n100 05:44 05:30 05:30\n\nThen all you have to do is join your table to the time dimension and then group by interval_15. For example:\nSELECT b.interval_15, count(*) \nFROM my_data_table a\nINNER JOIN time_dim b ON a.time_field = b.time\nWHERE a.date_field = now()\nGROUP BY b.interval_15\n\n",
"I came up with a pseudocode solution, hope it helps.\ncreate an array named timetable with 24 entries\ninitialise timetable to zero\n\nfor each user in SQLtable\n firsthour = user.firsthour\n lasthour = user.lasthour\n\n firstminutes = 4 - (rounded down integer(user.firstminutes/15))\n lastminutes = rounded down integer(user.lastminutes/15)\n\n timetable(firsthour) = timetable(firsthour) + firstminutes\n timetable(lasthour) = timetable(lasthour) + lastminutes\n\n for index=firsthour+1 to lasthour-1\n timetable(index) = timetable(index) + 4\n next index\n\nnext user\n\nNow the timetable array holds the values you desire in 15 minute granularity, ie. a value of 4 = 1 hour, 5 = 1 hour 15 minutes, 14 = 3 hours 30 minutes.\n",
"Here's another pseudocode solution from a different angle; a bit more intensive because it does 96 queries for every 24hr period:\nresults = []\nfor time in range(0, 24, .25):\n amount = mysql(\"select count(*) from User_Activity_Table where time >= start_time and time <= end_time\")\n results.append(amount)\n\n",
"How about this:\nUse that \"times\" table, but with two columns, containing the 15-minute intervals. The from_times are the 15-minutely times, the to_times are a second before the next from_times. For example 12:30:00 to 12:44:59.\nNow get your person work table, which I've called \"activity\" here, with start_time and end_time columns.\nI added values for Alice and Bob as per the original question.\nHere's the query from MySQL:\nSELECT HOUR(times.from_time) AS 'TIME', count(*) / 4 AS 'HOURS'\nFROM times\n JOIN activity\n ON times.from_time >= activity.start_time AND \n times.to_time <= activity.end_time\nGROUP BY HOUR(times.from_time)\nORDER BY HOUR(times.from_time)\n\nwhich gives me this:\nTIME HOURS\n12 0.7500\n13 1.0000\n14 1.0000\n15 1.5000\n16 2.0000\n17 1.0000\n18 1.0000\n19 0.7500\n\nLooks about right...\n"
] |
[
2,
0,
0,
0
] |
[] |
[] |
[
"charts",
"excel",
"group_by",
"sql"
] |
stackoverflow_0000079789_charts_excel_group_by_sql.txt
|
Q:
Building a query string based radiobutton values
I'd like to build a query string based on values taken from 5 groups of radio buttons.
Selecting any of the groups is optional so you could pick set A or B or both. How would I build the querystring based on this? I'm using VB.NET 1.1
The asp:Radiobuttonlist control does not like null values so I'm resorting to normal html radio buttons. My question is how do I string up the selected values into a querystring
I have something like this right now:
HTML:
<input type="radio" name="apBoat" id="Apb1" value="1" /> detail1
<input type="radio" name="apBoat" id="Apb2" value="2" /> detail2
<input type="radio" name="cBoat" id="Cb1" value="1" /> detail1
<input type="radio" name="cBoat" id="Cb2" value="2" /> detail2
VB.NET
Public Sub btnSubmit_click(ByVal sender As Object, ByVal e As System.EventArgs)
Dim queryString As String = "nextpage.aspx?"
Dim aBoat, bBoat, cBoat As String
aBoat = "apb=" & Request("apBoat")
bBoat = "bBoat=" & Request("bBoat")
cBoat = "cBoat=" & Request("cBoat")
queryString += aBoat & bBoat & cBoat
Response.Redirect(queryString)
End Sub
Is this the best way to build the query string or should I take a different approach altogether? Appreciate all the help I can get. Thanks much.
A:
The easiest way would be to use a non-server-side <form> tag with the method="get" then when the form was submitted you would automatically get the querystring you are after (and don't forget to add <label> tags and associate them with your radio buttons):
<form action="..." method="get">
<input type="radio" name="apBoat" id="Apb1" value="1" /> <label for="Apb1">detail1</label>
<input type="radio" name="apBoat" id="Apb2" value="2" /> <label for="Apb2">detail2</label>
<input type="radio" name="cBoat" id="Cb1" value="1" /> <label for="Cb1">detail1</label>
<input type="radio" name="cBoat" id="Cb2" value="2" /> <label for="Cb2">detail2</label>
</form>
A:
You could use StringBuilder instead of creating those three different strings. You can help it out by preallocating about how much memory you need to store your string. You could also use String.Format instead.
If this is all your submit button is doing why make it a .Net page at all and instead just have a GET form go to nextpage.aspx for processing?
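For illustration, the String.Format route might look roughly like this, keeping the question's field names (a sketch only, not a tested .NET 1.1 implementation):
Public Sub btnSubmit_click(ByVal sender As Object, ByVal e As System.EventArgs)
    ' Build the whole query string in one go; String.Format exists in .NET 1.1.
    Dim url As String = String.Format("nextpage.aspx?apb={0}&bBoat={1}&cBoat={2}", _
        Request("apBoat"), Request("bBoat"), Request("cBoat"))
    Response.Redirect(url)
End Sub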
|
Building a query string based radiobutton values
|
I'd like to build a query string based on values taken from 5 groups of radio buttons.
Selecting any of the groups is optional so you could pick set A or B or both. How would I build the querystring based on this? I'm using VB.NET 1.1
The asp:Radiobuttonlist control does not like null values so I'm resorting to normal html radio buttons. My question is how do I string up the selected values into a querystring
I have something like this right now:
HTML:
<input type="radio" name="apBoat" id="Apb1" value="1" /> detail1
<input type="radio" name="apBoat" id="Apb2" value="2" /> detail2
<input type="radio" name="cBoat" id="Cb1" value="1" /> detail1
<input type="radio" name="cBoat" id="Cb2" value="2" /> detail2
VB.NET
Public Sub btnSubmit_click(ByVal sender As Object, ByVal e As System.EventArgs)
Dim queryString As String = "nextpage.aspx?"
Dim aBoat, bBoat, cBoat As String
aBoat = "apb=" & Request("apBoat")
bBoat = "bBoat=" & Request("bBoat")
cBoat = "cBoat=" & Request("cBoat")
queryString += aBoat & bBoat & cBoat
Response.Redirect(queryString)
End Sub
Is this the best way to build the query string or should I take a different approach altogether? Appreciate all the help I can get. Thanks much.
|
[
"The easiest way would be to use a non-server-side <form> tag with the method=\"get\" then when the form was submitted you would automatically get the querystring you are after (and don't forget to add <label> tags and associate them with your radio buttons):\n<form action=\"...\" method=\"get\">\n <input type=\"radio\" name=\"apBoat\" id=\"Apb1\" value=\"1\" /> <label for=\"Apb1\">detail1</label>\n <input type=\"radio\" name=\"apBoat\" id=\"Apb2\" value=\"2\" /> <label for=\"Apb2\">detail2</label>\n\n <input type=\"radio\" name=\"cBoat\" id=\"Cb1\" value=\"1\" /> <label for=\"Cb1\">detail1</label>\n <input type=\"radio\" name=\"cBoat\" id=\"Cb2\" value=\"2\" /> <label for=\"Cb2\">detail2</label>\n</form>\n\n",
"You could use StringBuilder instead of creating those three different strings. You can help it out by preallocating about how much memory you need to store your string. You could also use String.Format instead.\nIf this is all your submit button is doing why make it a .Net page at all and instead just have a GET form go to nextpage.aspx for processing?\n"
] |
[
1,
0
] |
[] |
[] |
[
"asp.net",
"vb.net"
] |
stackoverflow_0000081628_asp.net_vb.net.txt
|
Q:
Ruby/Rails Collection to Collection
I have two tables joined with a join table - this is just pseudo code:
Library
Book
LibraryBooks
What I need to do is: if I have the id of a library, I want to get all the libraries that all the books of this library are in.
So if I have Library 1, and Library 1 has books A and B in it, and books A and B are in Libraries 1, 2, and 3, is there an elegant (one line) way to do this in Rails?
I was thinking:
l = Library.find(1)
allLibraries = l.books.libraries
But that doesn't seem to work. Suggestions?
A:
l = Library.find(1, :include => :books)
l.books.map { |b| b.library_ids }.flatten.uniq
Note that map(&:library_ids) is slower than map { |b| b.library_ids } in Ruby 1.8.6, and faster in 1.9.0.
I should also mention that if you used :joins instead of include there, it would find the library and related books all in the same query speeding up the database time. :joins will only work however if a library has books.
A:
Perhaps:
l.books.map {|b| b.libraries}
or
l.books.map {|b| b.libraries}.flatten.uniq
if you want it all in a flat array.
Of course, you should really define this as a method on Library, so as to uphold the noble cause of encapsulation.
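For instance, such a method might look like this (a sketch, assuming the has_many :through associations implied by the question):
class Library < ActiveRecord::Base
  has_many :library_books
  has_many :books, :through => :library_books

  # All libraries that share at least one book with this library.
  def related_libraries
    books.map { |b| b.libraries }.flatten.uniq
  end
end
Callers would then just write Library.find(1).related_libraries.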
A:
If you want a one-dimensional array of libraries returned, with duplicates removed.
l.books.map{|b| b.libraries}.flatten.uniq
A:
One problem with
l.books.map{|b| b.libraries}.flatten.uniq
is that it will generate one SQL call for each book in l. A better approach (assuming I understand your schema) might be:
LibraryBook.find(:all, :conditions => ['book_id IN (?)', l.book_ids]).map(&:library_id).uniq
|
Ruby/Rails Collection to Collection
|
I have two tables joined with a join table - this is just pseudo code:
Library
Book
LibraryBooks
What I need to do is: if I have the id of a library, I want to get all the libraries that all the books of this library are in.
So if I have Library 1, and Library 1 has books A and B in it, and books A and B are in Libraries 1, 2, and 3, is there an elegant (one line) way to do this in Rails?
I was thinking:
l = Library.find(1)
allLibraries = l.books.libraries
But that doesn't seem to work. Suggestions?
|
[
"l = Library.find(:all, :include => :books)\nl.books.map { |b| b.library_ids }.flatten.uniq\n\nNote that map(&:library_ids) is slower than map { |b| b.library_ids } in Ruby 1.8.6, and faster in 1.9.0.\nI should also mention that if you used :joins instead of include there, it would find the library and related books all in the same query speeding up the database time. :joins will only work however if a library has books. \n",
"Perhaps:\nl.books.map {|b| b.libraries}\n\nor\nl.books.map {|b| b.libraries}.flatten.uniq\n\nif you want it all in a flat array.\nOf course, you should really define this as a method on Library, so as to uphold the noble cause of encapsulation.\n",
"If you want a one-dimensional array of libraries returned, with duplicates removed.\nl.books.map{|b| b.libraries}.flatten.uniq\n\n",
"One problem with \nl.books.map{|b| b.libraries}.flatten.uniq\n\nis that it will generate one SQL call for each book in l. A better approach (assuming I understand your schema) might be:\nLibraryBook.find(:all, :conditions => ['book_id IN (?)', l.book_ids]).map(&:library_id).uniq\n\n"
] |
[
7,
3,
2,
2
] |
[] |
[] |
[
"ruby",
"ruby_on_rails"
] |
stackoverflow_0000079632_ruby_ruby_on_rails.txt
|
Q:
SQL Text Searching, AND Ordering
I have a query:
SELECT *
FROM Items
WHERE column LIKE '%foo%'
OR column LIKE '%bar%'
How do I order the results?
Let's say I have rows that match 'foo' and rows that match 'bar' but I also have a row with 'foobar'.
How do I order the returned rows so that the first results are the ones that matched more LIKEs?
A:
Case or the kind of conditional construct your RDBMS supports is a way to do it
select *, case when col like '%foo%' and col like '%bar%' then 2
else 1 end as ordcol
from items
where col like '%foo%' or col like '%bar%' order by ordcol
A:
SELECT * FROM Items WHERE column LIKE '%foo%' OR column LIKE '%bar%'
ORDER BY
(IF(column LIKE '%foo%',1,0) + IF(column LIKE '%bar%',1,0))
DESC
The syntax for if is
IF ( condition, true_value, false_value )
A:
You could use a UNION:
SELECT * FROM Items WHERE column LIKE '%foo%' AND column LIKE '%bar%'
UNION
SELECT * FROM Items WHERE column LIKE '%foo%' AND NOT (column LIKE '%bar%')
UNION
SELECT * FROM Items WHERE column LIKE '%bar%' AND NOT (column LIKE '%foo%');
But this may be bad performance-wise. Worse, I'm guessing that you want to use this to construct a search engine that gives the most meaningful results first, and then the number of words does not remain limited to 2.
In that case, you could create a score column which contains the number of matches. Something like this:
SELECT
*,
(IF(column LIKE '%bar%', 1, 0) + IF(column LIKE '%foo%', 1, 0)) AS score
FROM Items
WHERE column LIKE '%foo%' OR column LIKE '%bar%'
ORDER BY score DESC;
My SQL is a bit rusty, but something like this should be possible in at least MySQL 5.0. See also the manual for the IF function:
http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html
A:
SELECT * FROM Items
WHERE col LIKE '%foo%'
OR col LIKE '%bar%'
ORDER BY CASE WHEN col LIKE '%foo%' THEN 1
WHEN col LIKE '%bar%' THEN 2
END
A:
Which DBMS?
It can be done via a CTE or a UNION, for example, but if you are using MySQL then you can forget about it.
A:
Try this code:
SELECT * FROM Items WHERE column LIKE '%foo%' OR column LIKE '%bar%'
order by (select count(*) from Items i where i.column = Items.column) DESC
You could also group by column and count(*) then ORDER, if you don't care about the details.
A:
You might want to give this a go:
SELECT *
FROM Items
WHERE column LIKE '%foo%' OR column LIKE '%bar%'
ORDER BY CASE WHEN column LIKE '%foo%' AND column LIKE '%bar%' THEN 1 ELSE 0 END DESC
Note: this is drycoded and probably not very portable.
A:
2 Queries:
SELECT * FROM Items WHERE column LIKE '%foo%' AND column LIKE '%bar%';
SELECT * FROM Items WHERE (column LIKE '%foo%' AND column NOT LIKE '%bar%') OR (column NOT LIKE '%foo%' AND column LIKE '%bar%')
(No XOR in SQL)
A:
Not all RDBMS support IF (or DECODE in Oracle) statements. If not, you could use a subquery to define table "a" and search for all employees named JO SMITH or a combination.
SELECT
a.employee_id,
a.surname,
sum(a.counter)
FROM
(SELECT
employee_id,
surname,
1 as counter
FROM
MyTable
WHERE
surname like '%SMITH%'
UNION ALL
SELECT
employee_id,
surname,
1 as counter
FROM
MyTable
WHERE
surname like '%JO%'
) a
GROUP BY
a.employee_id,
a.surname
ORDER BY 3,1,2
Make sure you use UNION ALL, otherwise it will not work. Also you may want to use UPPER() to make your search case-insensitive.
A:
As your query is currently written, the WHERE clause will not give you any information that can be used to sort your results. I like Brian's idea; add a constant column and UNION the queries and you could even get everything in one result set. For example:
SELECT 1 as rank, * FROM Items WHERE column LIKE '%foo%' AND column LIKE '%bar%'
UNION
SELECT 2 as rank, * FROM Items WHERE column LIKE '%foo%' AND column NOT LIKE '%bar%'
UNION
SELECT 2 as rank, * FROM Items WHERE column LIKE '%bar%' AND column NOT LIKE '%foo%'
ORDER BY rank
However, this would only give you something like this:
The unordered set of all rows that match foo and match bar
followed by (the unordered set of) all rows that match foo or bar, but not both (although you could break this up into two separate groups using a different constant in the last SELECT statement).
Which might be just what you're looking for, but it wouldn't tell you which rows matched foo three times, or sort them ahead of rows that only contained one instance of foo. Also all those LIKEs can get expensive. If what you're really looking to do is sort results based on relevance (however you define that) you might be better off using a full text index. If you're using MS SQL Server, it has a built-in service that will do this, and there are also third-party products that will do the same.
EDIT: After looking at all the other answers (there were only two when I started mine - I'm obviously going to have to learn to think faster ;-) ) it's obvious that there are several ways to go about this, depending on exactly what you're trying to accomplish. I would advise you to test and compare solutions based on how they perform on your system. I'm not a performance/tuning expert, but functions tend to slow things down, especially if you're sorting on the result of a function. The LIKE operator isn't necessarily spry, either. As a developer, it seems natural to use familiar constructs like "IF" and "CASE", but queries that use more of a set-based approach usually have better performance in a RDMS. Again, YMMV, so it's best to test if you're at all concerned about performance.
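As an illustration of the full-text route on MS SQL Server, ordering by relevance might look something like this (a sketch only; it assumes a full-text index already exists on Items and that ItemId is its unique key column):
SELECT i.*, k.[RANK]
FROM Items i
JOIN CONTAINSTABLE(Items, [column], '"foo" OR "bar"') k
    ON i.ItemId = k.[KEY]
ORDER BY k.[RANK] DESC  -- higher RANK means a more relevant match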
|
SQL Text Searching, AND Ordering
|
I have a query:
SELECT *
FROM Items
WHERE column LIKE '%foo%'
OR column LIKE '%bar%'
How do I order the results?
Let's say I have rows that match 'foo' and rows that match 'bar' but I also have a row with 'foobar'.
How do I order the returned rows so that the first results are the ones that matched more LIKEs?
|
[
"Case or the kind of conditional construct your RDBMS supports is a way to do it\nselect *, case when col like '%foo%' and col like '%bar%' then 2 end \nelse 1 end as ordcol \nfrom items \nwhere col like '%foo%' or col like '%bar%' order by ordcol\n\n",
"SELECT * FROM Items WHERE column LIKE '%foo%' OR column LIKE '%bar%' \nORDER BY \n(IF(column LIKE '%foo%',1,0) + IF(column LIKE '%bar%',1,0)) \nDESC\n\nThe syntax for if is \nIF ( condition, true_value, false_value )\n",
"You could use a UNION:\nSELECT * FROM Items WHERE column LIKE '%foo%' AND column LIKE '%bar%'\nUNION\nSELECT * FROM Items WHERE column LIKE '%foo%' AND NOT (column LIKE '%bar%')\nUNION\nSELECT * FROM Items WHERE column LIKE '%bar%' AND NOT (column LIKE '%foo%');\n\nBut this may be bad performance-wise. Worse, I'm guessing that you want to use this to construct a search engine that gives the most meaningful results first, and then the number of words does not remain limited to 2.\nIn that case, you could create a score column which contains the number of matches. Something like this:\nSELECT\n *,\n (IF(column LIKE '%bar%', 1, 0) + IF(column LIKE '%foo%', 1, 0)) AS score\nFROM Items\nWHERE column LIKE '%foo%' OR column LIKE '%bar%'\nORDER BY score DESC;\n\nMy SQL is a bit rusty, but something like this should be possible in at least MySQL 5.0. See also the manual for the IF function:\nhttp://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html\n",
"SELECT * FROM Items\nWHERE col LIKE '%foo%'\n OR col LIKE '%bar%'\nORDER BY CASE WHEN col LIKE '%foo%' THEN 1\n WHEN col LIKE '%bar%' THEN 2\n END\n\n",
"Which DBMS?\nIt can be done via CTE or Union for example, but if you are using, for example, MySQL, then you can forget about it.\n",
"Try this code:\nSELECT * FROM Items WHERE column LIKE '%foo%' OR column LIKE '%bar%'\norder by (select count(*) from items i where i.column= item.column) DESC \n\nYou could also group by column and count(*) then ORDER, if you don't care about the details. \n",
"You might want to give this a go:\nSELECT *\nFROM Items\nWHERE column LIKE '%foo%' OR column LIKE '%bar%'\nORDER BY CASE WHEN column LIKE '%foo%' AND column LIKE '%bar%' THEN 1 ELSE 0 END DESC\n\nNote: this is drycoded and probably not very portable.\n",
"2 Queries: \nSELECT * FROM Items WHERE column LIKE '%foo%' AND column LIKE '%bar%';\nSELECT * FROM Items WHERE (column LIKE '%foo%' AND column NOT LIKE '%bar%') OR (column NOT LIKE '%foo%' AND LIKE '%bar%')\n(No XOR in SQL)\n",
"Not all RDBMS support IF (or DECODE in Oracle) statements. If not you could use a subquery to define table \"a\" and search for all employee's named JO SMITH or a combination.\nSELECT \n a.employee_id,\n a.surname,\n sum(a.counter)\nFROM\n\n (SELECT\n employee_id,\n surname,\n 1 as counter\n FROM\n MyTable\n WHERE\n surname like '%SMITH%'\n\n UNION ALL\n\n SELECT\n employee_id,\n surname,\n 1 as counter\n FROM\n MyTable\n WHERE\n surname like '%JO%'\n ) a\n\nGROUP BY \n a.employee_id,\n a.surname\nORDER BY 3,1,2\n\nMake sure you use UNION ALL otherwise it will not work. Also you may way to use UPPER() to make your search non-case sensitive.\n",
"As your query is currently written, the WHERE clause will not give you any information that can be used to sort your results. I like Brian's idea; add a constant column and UNION the queries and you could even get everything in one result set. For example:\nSELECT 1 as rank, * FROM Items WHERE column LIKE '%foo%' AND column LIKE '%bar%'\nUNION\nSELECT 2 as rank, * FROM Items WHERE column LIKE '%foo%' AND column NOT LIKE '%bar%'\nUNION\nSELECT 2 as rank, * FROM Items WHERE column LIKE '%bar%' AND column NOT LIKE '%foo%'\nORDER BY rank\n\nHowever, this would only give you something like this:\n\nThe unordered set of all rows that match foo and match bar\nfollowed by (the unordered set of) all rows that match foo or bar, but not both (although you could break this up into two separate groups using a different constant in the last SELECT statement).\n\nWhich might be just what you're looking for, but it wouldn't tell you which rows matched foo three times, or sort them ahead of rows that only contained one instance of foo. Also all those LIKEs can get expensive. If what you're really looking to do is sort results based on relevance (however you define that) you might be better off using a full text index. If you're using MS SQL Server, it has a built-in service that will do this, and there are also third-party products that will do the same.\nEDIT: After looking at all the other answers (there were only two when I started mine - I'm obviously going to have to learn to think faster ;-) ) it's obvious that there are several ways to go about this, depending on exactly what you're trying to accomplish. I would advise you to test and compare solutions based on how they perform on your system. I'm not a performance/tuning expert, but functions tend to slow things down, especially if you're sorting on the result of a function. The LIKE operator isn't necessarily spry, either. As a developer, it seems natural to use familiar constructs like \"IF\" and \"CASE\", but queries that use more of a set-based approach usually have better performance in a RDMS. Again, YMMV, so it's best to test if you're at all concerned about performance.\n"
] |
[
4,
2,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"search",
"sql"
] |
stackoverflow_0000079367_search_sql.txt
|
Q:
Is there a folder in both WinXP and WinVista to which all users have writing permissions?
We have a .NET app that gets installed to the Program Files folder.
The app itself writes some files and creates some directories to its app folder.
But when a normal windows user tries to use our application it crashes because that user does not have permission to write to app folder.
Is there any folder in both WinXP and WinVista to which all users have writing permissions by default? All User folder or something like that?
A:
There is no such folder.
But you can create one.
There is CSIDL_COMMON_APPDATA which in Vista maps to %ProgramData% (c:\ProgramData) and in XP maps to c:\Documents and Settings\All Users\Application Data
Feel free to create a folder there in your installer and set the ACL so that everyone can write to that folder.
Keep in mind that COMMON_APPDATA was implemented in Version 5 of the common controls library, which means it's available in Windows 2000 and later. In NT4 you can create that folder in your installation directory, and in Windows 98 and below it doesn't matter anyway, since those systems have no permission system at all.
Here is some sample InnoSetup code to create that folder:
[Dirs]
Name: {code:getDBPath}; Flags: uninsalwaysuninstall; Permissions: authusers-modify
[Code]
function getDBPath(Param: String): String;
var
Version: TWindowsVersion;
begin
Result := ExpandConstant('{app}\data');
GetWindowsVersionEx(Version);
if (Version.Major >= 5) then begin
Result := ExpandConstant('{commonappdata}\myprog');
end;
end;
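At runtime the application itself can resolve the same folder; here is a minimal Win32 sketch (Windows 2000 and later):
#include <windows.h>
#include <shlobj.h>

// Resolves e.g. C:\ProgramData on Vista, or
// C:\Documents and Settings\All Users\Application Data on XP.
BOOL GetSharedDataDir(char path[MAX_PATH])
{
    return SUCCEEDED(SHGetFolderPathA(NULL, CSIDL_COMMON_APPDATA,
                                      NULL, SHGFP_TYPE_CURRENT, path));
}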
A:
I'm not sure that there is a single path to which all non-administrator users have permission to write to.
I think the correct one would be <User>\Application Data
|
Is there a folder in both WinXP and WinVista to which all users have writing permissions?
|
We have a .NET app that gets installed to the Program Files folder.
The app itself writes some files and creates some directories to its app folder.
But when a normal windows user tries to use our application it crashes because that user does not have permission to write to app folder.
Is there any folder in both WinXP and WinVista to which all users have writing permissions by default? All User folder or something like that?
|
[
"There is no such folder.\nBut you can create one.\nThere is CSIDL_COMMON_APPDATA which in Vista maps to %ProgramData% (c:\\ProgramData) and in XP maps to c:\\Documents and Settings\\AllUsers\\Application Data\nFeel free to create a folder there in your installer and set the ACL so that everyone can write to that folder.\nKeep in mind that COMMON_APPDATA was implemented in Version 5 of the common controls library which means that it's available in Windows 2000 and later. In NT4, you can create that folder in your installation directory and in Windows 98 and below it doesn't matter anyways due to these systems not having a permission system anyways.\nHere is some sample InnoSetup code to create that folder:\n[Dirs]\nName: {code:getDBPath}; Flags: uninsalwaysuninstall; Permissions: authusers-modify\n\n[Code]\n\n\nfunction getDBPath(Param: String): String;\nvar\n Version: TWindowsVersion;\nbegin\n Result := ExpandConstant('{app}\\data');\n GetWindowsVersionEx(Version);\n if (Version.Major >= 5) then begin\n Result := ExpandConstant('{commonappdata}\\myprog');\n end;\nend;\n\n",
"I'm not sure that there is a single path to which all non-administrator users have permission to write to.\nI think the correct one would be <User>\\Application Data\n"
] |
[
2,
0
] |
[] |
[] |
[
"installation",
"windows"
] |
stackoverflow_0000081686_installation_windows.txt
|
Q:
Does the VFW (Video For Windows) API support Alpha Channel Transparency?
Does the VFW (Video For Windows) API support Alpha Channel Transparency? I want to be able to export video with Alpha channel information. How can I do this in VC6?
A:
I'm pretty sure it does; just set the pixel format to RGB32, which should give you an alpha channel to use.
Of course, finding a video compression format that fits all your needs and supports alpha channel is another problem.
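A small sketch of what RGB32 means at the API level; the fourth byte of each pixel is free to carry alpha, as long as the codec you choose preserves it (uncompressed BI_RGB does):
#include <windows.h>
#include <vfw.h>

/* Describe 32-bit frames for AVI writing; layout is B, G, R, A per pixel. */
void DescribeRgb32Frame(BITMAPINFOHEADER* bih, LONG width, LONG height)
{
    ZeroMemory(bih, sizeof(*bih));
    bih->biSize        = sizeof(BITMAPINFOHEADER);
    bih->biWidth       = width;
    bih->biHeight      = height;
    bih->biPlanes      = 1;
    bih->biBitCount    = 32;     /* 8 bits each for B, G, R plus 8 alpha bits */
    bih->biCompression = BI_RGB; /* uncompressed keeps the alpha bytes intact */
    bih->biSizeImage   = width * height * 4;
}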
|
Does the VFW (Video For Windows) API support Alpha Channel Transparency?
|
Does the VFW (Video For Windows) API support Alpha Channel Transparency? I want to be able to export video with Alpha channel information. How can I do this in VC6?
|
[
"I'm pretty sure it does; just set the pixel format to RGB32, which should give you an alpha channel to use.\nOf course, finding a video compression format that fits all your needs and supports alpha channel is another problem.\n"
] |
[
2
] |
[] |
[] |
[
"alpha",
"color_channel",
"mfc",
"vfw",
"visual_c++_6"
] |
stackoverflow_0000081791_alpha_color_channel_mfc_vfw_visual_c++_6.txt
|
Q:
HTML using Groovy MarkupBuilder, how do I elegantly mix tags and text?
When using Groovy MarkupBuilder, I have places where I need to output text into the document, or call a function which outputs text into the document. Currently, I'm using the undefined tag "text" to do the output. Is there a better way to write this code?
li {
text("${type.getAlias()} blah blah ")
function1(type.getXYZ())
if (type instanceof Class1) {
text(" implements ")
ft.getList().each {
if (it == '') return
text(it)
if (!function2(type, it)) text(", ")
}
}
}
A:
Actually, the recommended way now is to use mkp.yield, e.g.,
src.p {
mkp.yield 'Some element that has a '
strong 'child element'
mkp.yield ' which seems pretty basic.'
}
to produce
<p>Some element that has a <strong>child element</strong> which seems pretty basic.</p>
A:
Include a method:
void text(n){
builder.yield n
}
Most likely you (I) copied this code from somewhere that had a text method, but you didn't also copy the text method. Since MarkupBuilder accepts any name for the name of a tag and browsers ignore unknown markup, it just happened to work.
|
HTML using Groovy MarkupBuilder, how do I elegantly mix tags and text?
|
When using Groovy MarkupBuilder, I have places where I need to output text into the document, or call a function which outputs text into the document. Currently, I'm using the undefined tag "text" to do the output. Is there a better way to write this code?
li {
text("${type.getAlias()} blah blah ")
function1(type.getXYZ())
if (type instanceof Class1) {
text(" implements ")
ft.getList().each {
if (it == '') return
text(it)
if (!function2(type, it)) text(", ")
}
}
}
|
[
"Actually, the recommended way now is to use mkp.yield, e.g.,\nsrc.p {\n mkp.yield 'Some element that has a '\n strong 'child element'\n mkp.yield ' which seems pretty basic.'\n}\n\nto produce\n<p>Some element that has a <strong>child element</strong> which seems pretty basic.</p>\n\n",
"Include a method:\nvoid text(n){\n builder.yield n\n}\n\nMost likely you (I) copied this code from somewhere that had a text method, but you didn't also copy the text method. Since MarkupBuilder accepts any name for the name of a tag and browsers ignore unknown markup, it just happened to work.\n"
] |
[
8,
2
] |
[] |
[] |
[
"groovy",
"html"
] |
stackoverflow_0000023169_groovy_html.txt
|
Q:
Generic Method Type Safety
I have the concept of NodeTypes and Nodes. A NodeType is a bunch of meta-data which you can create Node instances from (a lot like the whole Class / Object relationship).
I have various NodeType implementations and various Node implementations.
In my AbstractNodeType (top level for NodeTypes) I have an abstract createInstance() method that will, once implemented by the subclass, create the correct Node instance:
public abstract class AbstractNodeType {
// ..
public abstract <T extends AbstractNode> T createInstance();
}
In my NodeType implementations I implement the method like this:
public class ThingType {
// ..
public Thing createInstance() {
return new Thing(/* .. */);
}
}
// FYI
public class Thing extends AbstractNode { /* .. */ }
This is all well and good, but public Thing createInstance() creates a warning about type safety. Specifically:
Type safety: The return type Thing for
createInstance() from the type
ThingType needs unchecked conversion
to conform to T from the type
AbstractNodeType
What am I doing wrong to cause such a warning?
How can I re-factor my code to fix this?
@SuppressWarnings("unchecked") is not good, I wish to fix this by coding it correctly, not ignoring the problem!
A:
You can just replace <T extends AbstractNode> T with AbstractNode thanks to the magic of covariant returns. Java 5 added support, but it didn't receive the pub it deserved.
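In other words, something like this (a sketch reusing the class names from the question):
public abstract class AbstractNodeType {
    // No type parameter needed: subclasses may narrow the return type.
    public abstract AbstractNode createInstance();
}

public class ThingType extends AbstractNodeType {
    @Override
    public Thing createInstance() { // covariant return: Thing is-an AbstractNode
        return new Thing(/* .. */);
    }
}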
A:
Two ways:
(a) Don't use generics. It's probably not necessary in this case. (Although that depends on the code you haven't shown.)
(b) Generify AbstractNodeType as follows:
public abstract class AbstractNodeType<T extends AbstractNode> {
public abstract T createInstance();
}
public class ThingType extends AbstractNodeType<Thing> {
public Thing createInstance() {
return new Thing(...);
}
}
A:
Something like that should work:
interface Node{
}
interface NodeType<T extends Node>{
T createInstance();
}
class Thing implements Node{}
class ThingType implements NodeType<Thing>{
public Thing createInstance() {
return new Thing();
}
}
class UberThing extends Thing{}
class UberThingType extends ThingType{
@Override
public UberThing createInstance() {
return new UberThing();
}
}
|
Generic Method Type Safety
|
I have the concept of NodeTypes and Nodes. A NodeType is a bunch of meta-data which you can create Node instances from (a lot like the whole Class / Object relationship).
I have various NodeType implementations and various Node implementations.
In my AbstractNodeType (top level for NodeTypes) I have an abstract createInstance() method that will, once implemented by the subclass, create the correct Node instance:
public abstract class AbstractNodeType {
// ..
public abstract <T extends AbstractNode> T createInstance();
}
In my NodeType implementations I implement the method like this:
public class ThingType {
// ..
public Thing createInstance() {
return new Thing(/* .. */);
}
}
// FYI
public class Thing extends AbstractNode { /* .. */ }
This is all well and good, but public Thing createInstance() creates a warning about type safety. Specifically:
Type safety: The return type Thing for
createInstance() from the type
ThingType needs unchecked conversion
to conform to T from the type
AbstractNodeType
What am I doing wrong to cause such a warning?
How can I re-factor my code to fix this?
@SuppressWarnings("unchecked") is not good, I wish to fix this by coding it correctly, not ignoring the problem!
|
[
"You can just replace <T extends AbstractNode> T with AbstractNode thanks to the magic of covariant returns. Java 5 added support, but it didn't receive the pub it deserved.\n",
"Two ways:\n(a) Don't use generics. It's probably not necessary in this case. (Although that depends on the code you havn't shown.)\n(b) Generify AbstractNodeType as follows:\npublic abstract class AbstractNodeType<T extends AbstractNode> {\n public abstract T createInstance();\n}\npublic class ThingType<Thing> {\n public Thing createInstance() {\n return new Thing(...);\n }\n}\n\n",
"Something like that should work:\ninterface Node{\n}\ninterface NodeType<T extends Node>{\n T createInstance();\n}\nclass Thing implements Node{}\nclass ThingType implements NodeType<Thing>{\n public Thing createInstance() {\n return new Thing();\n }\n}\nclass UberThing extends Thing{}\nclass UberThingType extends ThingType{\n @Override\n public UberThing createInstance() {\n return new UberThing();\n }\n}\n\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"generics",
"java"
] |
stackoverflow_0000081723_generics_java.txt
|
Q:
Orthogonal variables code duplication problem
I've started refactoring some legacy code recently and came across two functions for drawing a coordinate grid; the problem is that these functions differ only in the orthogonal variables they treat, something like this:
void DrawScaleX(HDC dc, int step, int x0, int x1, int y0, int y1)
{
for(int x = x0; x < x1; x += step)
{
MoveToEx(dc, x, y0, NULL);
LineTo(dc, x, y1);
}
}
void DrawScaleY(HDC dc, int step, int x0, int x1, int y0, int y1)
{
for(int y = y0; y < y1; y += step)
{
MoveToEx(dc, x0, y, NULL);
LineTo(dc, x1, y);
}
}
So if I decide to add some fancy stuff, like antialiasing, or merely change the drawing pen or whatever, I'll have to put the same code in both of them, and that's code duplication, and we all know why that's bad.
My question is how would you rewrite these two functions into a single one to avoid this problem?
A:
Why not just extract the body of the for loop into a separate function? Then you can do the funny stuff in the extracted function.
void DrawScaleX(HDC dc, int step, int x0, int x1, int y0, int y1)
{
for(int x = x0; x < x1; x += step)
{
DrawScale(dc, x, y0, x, y1);
}
}
void DrawScaleY(HDC dc, int step, int x0, int x1, int y0, int y1)
{
for(int y = y0; y < y1; y += step)
{
DrawScale(dc, x0, y, x1, y);
}
}
private void DrawScale(HDC dc, int x0, int y0, int x1, int y1)
{
//Add funny stuff here
MoveToEx(dc, x0, y0, NULL);
LineTo(dc, x1, y1);
//Add funny stuff here
}
A:
Drawing a line is simply joining two points, and drawing a scale is incrementing (x0,y0) and (x1,y1) in a particular direction, through X and/or through Y.
In the scale case, this boils down to which direction(s) the stepping occurs in (maybe both directions for fun).
template< int XIncrement, int YIncrement >
struct DrawScale
{
void operator()(HDC dc, int step, int x0, int x1, int y0, int y1)
{
const int deltaX = XIncrement*step;
const int deltaY = YIncrement*step;
const int ymax = y1;
const int xmax = x1;
while( x0 < xmax && y0 < ymax )
{
MoveToEx(dc, x0, y0, NULL);
LineTo(dc, x1, y1);
x0 += deltaX;
x1 += deltaX;
y0 += deltaY;
y1 += deltaY;
}
}
};
typedef DrawScale< 1, 0 > DrawScaleX;
typedef DrawScale< 0, 1 > DrawScaleY;
The template will do its job: at compile time the compiler will remove all the null statements (i.e. those where deltaX or deltaY is 0, depending on which functor is instantiated), so half of the code goes away in each functor.
You can add your anti-aliasing and pencil stuff inside this single function and get the code properly generated by the compiler.
This is cut and paste on steroids ;-)
-- ppi
A:
Here is my own solution
class CoordGenerator
{
public:
CoordGenerator(int _from, int _to, int _step)
:from(_from), to(_to), step(_step), pos(_from){}
virtual POINT GetPoint00() const = 0;
virtual POINT GetPoint01() const = 0;
bool Next()
{
if(pos + step > to) return false;
pos += step;
return true;
}
protected:
int from;
int to;
int step;
int pos;
};
class GenX: public CoordGenerator
{
public:
GenX(int x0, int x1, int step, int _y0, int _y1)
:CoordGenerator(x0, x1, step),y0(_y0), y1(_y1){}
virtual POINT GetPoint00() const
{
const POINT p = {pos, y0};
return p;
}
virtual POINT GetPoint01() const
{
const POINT p = {pos, y1};
return p;
}
private:
int y0;
int y1;
};
class GenY: public CoordGenerator
{
public:
GenY(int y0, int y1, int step, int _x0, int _x1)
:CoordGenerator(y0, y1, step),x0(_x0), x1(_x1){}
virtual POINT GetPoint00() const
{
const POINT p = {x0, pos};
return p;
}
virtual POINT GetPoint01() const
{
const POINT p = {x1, pos};
return p;
}
private:
int x1;
int x0;
};
void DrawScale(HDC dc, CoordGenerator* g)
{
do
{
POINT p = g->GetPoint00();
MoveToEx(dc, p.x, p.y, 0);
p = g->GetPoint01();
LineTo(dc, p.x, p.y);
}while(g->Next());
}
But it seems to me too complicated for such a tiny problem, so I'm looking forward to seeing your solutions.
A:
Well, an obvious "solution" would be to make a single function and add one extra parameter (of enum-like type). And then do an if() or switch() inside, and perform the appropriate actions. Because hey, the functionality of the functions is different, so you have to do those different actions somewhere.
However, this adds runtime complexity (check things at runtime) in a place that could be just better checked at compile time.
I don't understand what's the problem in adding extra parameters in the future in both (or more functions). It goes like this:
add more parameters to all functions
compile your code, it won't compile in a bunch of places because it does not pass new parameters.
fix all places that call those functions by passing new parameters.
profit! :)
If it's C++, of course you could make the function be a template, and instead adding an extra parameter, you add a template parameter, and then specialize template implementations to do different things. But this is just obfuscating the point, in my opinion. Code becomes harder to understand, and the process of extending it with more parameters is still exactly the same:
add extra parameters
compile code, it won't compile in a bunch of places
fix all places that call that function
So you've won nothing, but made code harder to understand. Not a worthy goal, IMO.
A:
I think I'd move:
MoveToEx(dc, x0, y, NULL);
LineTo(dc, x1, y);
into their own function DrawLine(x0,y0,x1,y1), which you can call from each of the existing functions.
Then there's one place to add extra drawing effects?
A:
A little templates... :)
void DrawLine(HDC dc, int x0, int y0, int x1, int y1)
{
// anti-aliasing stuff
MoveToEx(dc, x0, y0, NULL);
LineTo(dc, x1, y1);
}
struct DrawBinderX
{
DrawBinderX(int y0, int y1) : y0_(y0), y1_(y1) {}
void operator()(HDC dc, int i)
{
DrawLine(dc, i, y0_, i, y1_);
}
private:
int y0_;
int y1_;
};
struct DrawBinderY
{
DrawBinderY(int x0, int x1) : x0_(x0), x1_(x1) {}
void operator()(HDC dc, int i)
{
DrawLine(dc, x0_, i, x1_, i);
}
private:
int x0_;
int x1_;
};
template< class Drawer >
void DrawScale(Drawer drawer, HDC dc, int from, int to, int step)
{
for (int i = from; i < to; i += step)
{
drawer(dc, i);
}
}
void DrawScaleX(HDC dc, int step, int x0, int x1, int y0, int y1)
{
DrawBinderX drawer(y0, y1);
DrawScale(drawer, dc, x0, x1, step);
}
void DrawScaleY(HDC dc, int step, int x0, int x1, int y0, int y1)
{
DrawBinderY drawer(x0, x1);
DrawScale(drawer, dc, y0, y1, step);
}
|
Orthogonal variables code duplication problem
|
I've started refactoring some legacy code recently and came across two functions for drawing a coordinate grid, the problem is that these functions differ only in orthogonal variables they treat, something like that
void DrawScaleX(HDC dc, int step, int x0, int x1, int y0, int y1)
{
for(int x = x0; x < x1; x += step)
{
MoveToEx(dc, x, y0, NULL);
LineTo(dc, x, y1);
}
}
void DrawScaleY(HDC dc, int step, int x0, int x1, int y0, int y1)
{
for(int y = y0; y < y1; y += step)
{
MoveToEx(dc, x0, y, NULL);
LineTo(dc, x1, y);
}
}
So if I decide to add some fancy stuff, like antialiasing or merely change drawing pencil or whatever I'll have to put the same code in both of them and it's code duplication and it's bad we all know why.
My question is how would you rewrite these two functions into a single one to avoid this problem?
|
[
"Why you just do not extract the body of the for cycle into a separate function? Then you can do the funny stuff in the extracted function. \nvoid DrawScaleX(HDC dc, int step, int x0, int x1, int y0, int y1)\n{\n for(int x = x0; x < x1; x += step)\n {\n DrawScale(dc, x, y0, x, y1);\n }\n}\n\nvoid DrawScaleY(HDC dc, int step, int x0, int x1, int y0, int y1)\n{\n for(int y = y0; y < y1; y += step)\n {\n DrawScale(dc, x0, y, x1, y);\n }\n}\n\nprivate void DrawScale(HDC dc, int x0, int y0, int x1, int y1)\n{\n //Add funny stuff here\n\n MoveToEx(dc, x0, y0, NULL);\n LineTo(dc, x1, y1);\n\n //Add funny stuff here\n}\n\n",
"Drawing a line is simply joining two points, and drawing a scaling incrementing (x0,y0) and(x1,y1) in a particular direction, through X, and/or through Y.\nThis boils down to, in the scale case, which direction(s) stepping occurs (maybe both directions for fun).\ntemplate< int XIncrement, YIncrement >\nstruct DrawScale\n{\n void operator()(HDC dc, int step, int x0, int x1, int y0, int y1)\n {\n const int deltaX = XIncrement*step;\n const int deltaY = YIncrement*step;\n const int ymax = y1;\n const int xmax = x1;\n while( x0 < xmax && y0 < ymax )\n {\n MoveToEx(dc, x0, y0, NULL);\n LineTo(dc, x1, y1);\n x0 += deltaX;\n x1 += deltaX;\n y0 += deltaY;\n y1 += deltaY;\n }\n }\n};\ntypedef DrawScale< 1, 0 > DrawScaleX;\ntypedef DrawScale< 0, 1 > DrawScaleY;\n\nThe template will do its job: at compile time the compiler will remove all the null statements i.e. deltaX or deltaY is 0 regarding which function is called and half of the code goes away in each functor.\nYou can add you anti-alias, pencil stuff inside this uniq function and get the code properly generated generated by the compiler.\nThis is cut and paste on steroids ;-)\n-- ppi\n",
"Here is my own solution\n\nclass CoordGenerator\n{\npublic:\n CoordGenerator(int _from, int _to, int _step)\n :from(_from), to(_to), step(_step), pos(_from){}\n virtual POINT GetPoint00() const = 0;\n virtual POINT GetPoint01() const = 0;\n bool Next()\n {\n if(pos > step) return false;\n pos += step;\n }\nprotected:\n int from;\n int to;\n int step;\n int pos;\n};\n\nclass GenX: public CoordGenerator\n{\npublic:\n GenX(int x0, int x1, int step, int _y0, int _y1)\n :CoordGenerator(x0, x1, step),y0(_y0), y1(_y1){}\n virtual POINT GetPoint00() const\n {\n const POINT p = {pos, y0};\n return p;\n }\n virtual POINT GetPoint01() const\n {\n const POINT p = {pos, y1};\n return p;\n }\nprivate:\n int y0;\n int y1;\n};\n\nclass GenY: public CoordGenerator\n{\npublic:\n GenY(int y0, int y1, int step, int _x0, int _x1)\n :CoordGenerator(y0, y1, step),x0(_x0), x1(_x1){}\n virtual POINT GetPoint00() const\n {\n const POINT p = {x0, pos};\n return p;\n }\n virtual POINT GetPoint01() const\n {\n const POINT p = {x1, pos};\n return p;\n }\nprivate:\n int x1;\n int x0;\n};\n\nvoid DrawScale(HDC dc, CoordGenerator* g)\n{\n do\n {\n POINT p = g->GetPoint00();\n MoveToEx(dc, p.x, p.y, 0);\n p = g->GetPoint01();\n LineTo(dc, p.x, p.y);\n }while(g->Next());\n}\n\nBut I it seems to me too complicated for such a tiny problem, so I'm looking forward to still see your solutions.\n",
"Well, an obvious \"solution\" would be to make a single function and add one extra parameter (of enum-like type). And then do an if() or switch() inside, and perform the appropriate actions. Because hey, the functionality of the functions is different, so you have to do those different actions somewhere.\nHowever, this adds runtime complexity (check things at runtime) in a place that could be just better checked at compile time.\nI don't understand what's the problem in adding extra parameters in the future in both (or more functions). It goes like this:\n\nadd more parameters to all functions\ncompile your code, it won't compile in a bunch of places because it does not pass new parameters.\nfix all places that call those functions by passing new parameters.\nprofit! :)\n\nIf it's C++, of course you could make the function be a template, and instead adding an extra parameter, you add a template parameter, and then specialize template implementations to do different things. But this is just obfuscating the point, in my opinion. Code becomes harder to understand, and the process of extending it with more parameters is still exactly the same:\n\nadd extra parameters\ncompile code, it won't compile in a bunch of places\nfix all places that call that function\n\nSo you've won nothing, but made code harder to understand. Not a worthy goal, IMO.\n",
"I think I'd move:\n MoveToEx(dc, x0, y, NULL);\n LineTo(dc, x1, y);\n\ninto their own function DrawLine(x0,y0,x0,y0), which you can call from each of the existing functions.\nThen there's one place to add extra drawing effects?\n",
"A little templates... :)\nvoid DrawLine(HDC dc, int x0, int y0, int x0, int x1)\n{\n // anti-aliasing stuff\n MoveToEx(dc, x0, y0, NULL);\n LineTo(dc, x1, y1);\n}\n\nstruct DrawBinderX\n{\n DrawBinderX(int y0, int y1) : y0_(y0), y1_(y1) {}\n\n void operator()(HDC dc, int i)\n {\n DrawLine(dc, i, y0_, i, y1_);\n }\n\nprivate:\n int y0_;\n int y1_;\n\n};\n\nstruct DrawBinderY\n{\n DrawBinderX(int x0, int x1) : x0_(x0), x1_(x1) {}\n\n void operator()(HDC dc, int i)\n {\n DrawLine(dc, x0_, i, x1_, i);\n }\n\nprivate:\n int x0_;\n int x1_;\n\n};\n\ntemplate< class Drawer >\nvoid DrawScale(Drawer drawer, HDC dc, int from, int to, int step)\n{\n for (int i = from; i < to; i += step)\n {\n drawer(dc, i);\n }\n}\n\nvoid DrawScaleX(HDC dc, int step, int x0, int x1, int y0, int y1)\n{\n DrawBindexX drawer(y0, y1);\n DrawScale(drawer, dc, x0, x1, step);\n}\n\nvoid DrawScaleY(HDC dc, int step, int x0, int x1, int y0, int y1)\n{\n DrawBindexY drawer( x0, x1 );\n DrawScale(drawer, dc, y0, y1, step);\n}\n\n"
] |
[
6,
2,
0,
0,
0,
0
] |
[] |
[] |
[
"c++",
"code_duplication"
] |
stackoverflow_0000080691_c++_code_duplication.txt
|
Q:
storing revision changes of a message
What algorithms and processes are involved in storing revision changes like stackoverflow and wikipedia do?
Is only one copy of the message kept? And if so, is it only the latest copy? Then only the changes needed to go back to the previous version(s) are stored from there? (This would make for a faster display of the main message).
Or are complete messages stored? And if so, is the comparison done between these on each display?
What algorithms are best used to determine the exact changes in the message? How is this data stored in a database?
If anyone knows exactly what wikipedia or stackoverflow does, I'd love to know.
A:
Mediawiki (the software behind wikipedia) stores full text for all revisions; see the database schema. Each entry in the text table in Mediawiki has flags that tell if the content has been e.g. gzipped; using a standard compression is often the sanest option.
I can't tell you how to do the diffs algorithmically, but whatever algorithm you use, you should do it from two full versions of the text. That is, fetch the complete old and new versions from the database, then do the diff. This makes it possible to easily change the diffing algorithm.
Git is a great example of a Unix application that can do very cheap (storage- and speed-wise) delta storage. There are wikis that can use git, e.g. ikiwiki, but I'm guessing you want to do it with a database.
A:
Usually messages are stored as complete snapshots. Previous versions are disabled, and the most recent is displayed. There may be optimizations used like caching which version is the most recent.
A:
The longest common substring algorithm can be used to detect differences between versions, but it is limited. For example, it does not detect the moving around of text as such, but it would see this as unrelated removals and insertions.
I suppose that websites normally store the latest copy in full, and apply reverse diffs from there. This is also the way CVS works, but Subversion uses forward diffs, which results in slower checkouts.
To store this in a database, one could maintain a main table with the latest versions, and have a separate table with the reverse differences. This table would have rows in the format (article_id, revision_id, differences).
A:
Typical revision changes are stored using a delta algorithm, so the only data stored are the changes in each revision in relation to the original. I am unsure how wikipedia or stackoverflow have implemented it.
A:
I would use the following technique:
Store the current message as complete text.
Store the history using the delta algorithm.
This will keep your performance good with regular display, while keeping the storage to a minimum for the history.
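As a rough illustration of that technique, here is a minimal Python sketch using only the standard library's difflib. It is purely illustrative (an ndiff-style delta is not space-efficient, and this is not necessarily what wikipedia or stackoverflow actually do), but it shows the "full current copy plus stored deltas for history" idea:
import difflib

def make_delta(new_text, old_text):
    # ndiff output lets difflib.restore() rebuild either input later
    return list(difflib.ndiff(new_text.splitlines(True),
                              old_text.splitlines(True)))

current = "line one\nline two\n"
previous = "line one\n"
delta = make_delta(current, previous)  # store e.g. as (article_id, revision_id, delta)

# Rebuild the older revision from the stored delta:
restored = "".join(difflib.restore(delta, 2))
assert restored == previous
To reach revisions further back, you would apply the stored deltas one after another, exactly as in the reverse-diff scheme described above.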
|
storing revision changes of a message
|
What algorithms and processes are involved in storing revision changes like stackoverflow and wikipedia do?
Is only one copy of the message kept? And if so, is it only the latest copy? Then only the changes needed to go back to the previous version(s) are stored from there? (This would make for a faster display of the main message).
Or are complete messages stored? And if so, is the comparison done between these on each display?
What algorithms are best used to determine the exact changes in the message? How is this data stored in a database?
If anyone knows exactly what wikipedia or stackoverflow does, I'd love to know.
|
[
"Mediawiki (the sotware for wikipedia) stores full text for all revision see the database schema. Each entry in the text table in Mediawiki has flags that tells if the content has been e.g. gziped, using a standard compression is often the sanest option.\nI can't tell you how to do the diffs algorithmically, but what ever algorithm you use you should do it from two full versions of the text. That is fetch the complete version of old and new object from database then do the diff. This makes it possible to easily change the diffing algorithm. \nGit is a great example of a Unix application that can do very cheap (storage and speedwise) delta storage. There are wikis that can use git e.g. ikiwiki, but I'm guessing you want to do it with a database.\n",
"Usually messages are stored as complete snapshots. Previous versions are disabled, and the most recent is displayed. There may be optimizations used like caching which version is the most recent.\n",
"The longest common substring algorithm can be used to detect differences between versions, but it is limited. For example, it does not detect the moving around of text as such, but it would see this as unrelated removals and insertions.\nI suppose that websites normally store the latest copy in full, and apply reverse diffs from there. This is also the way CVS works, but Subversion uses forward diffs, which results in slower checkouts.\nTo store this in a database, one could maintain a main table with the latest versions, and have a separate table with the reverse differences. This table would have rows in the format (article_id, revision_id, differences).\n",
"Typical revision changes are stored using a delta algorithm, so the only data stored are the changes in each revision in relation to the original. I am unsure of wikipedia or stackoverflow how they have it implemented.\n",
"I would use the following technique:\n\nStore the current message as complete text. \nStore the history using the delta algorithm.\n\nThis will keep your performance good with regular display, while keeping the storage to a minimum for the history.\n"
] |
[
4,
1,
1,
0,
0
] |
[] |
[] |
[
"algorithm",
"version_control"
] |
stackoverflow_0000080141_algorithm_version_control.txt
|
Q:
Is there a working on-the-fly compilation in NetBeans 6.5 and how well is it doing?
I learned today that NetBeans 6.5 should have on-the-fly compilation of (single) Java files. This feature is well known from Eclipse: simply save the file and the compiled class is stored, too. Is NetBeans working the same way? If not, how does it work?
A:
Yes, it's the same I believe...
Here's a video showing it in action
|
Is there a working on-the-fly compilation in NetBeans 6.5 and how well is it doing?
|
I learned today that NetBeans 6.5 should have on-the-fly compilation of (single) Java files. This feature is well known from Eclipse: simply save the file and the compiled class is stored, too. Is NetBeans working the same way? If not, how does it work?
|
[
"Yes, it's the same I believe...\nHere's a video showing it in action\n"
] |
[
1
] |
[] |
[] |
[
"compilation",
"java",
"netbeans"
] |
stackoverflow_0000075168_compilation_java_netbeans.txt
|
Q:
What is the best resource for learning about Safety Critical Systems Development (C/C++)
I'm looking to locate a good resource (book or otherwise) on safety critical systems development techniques/methodologies, especially something that will cover both hardware and software. I have a sound working knowledge of C/C++, so even if it is just code on SourceForge etc. I would still appreciate a link to it to have a browse.
Thanks.
A:
The podcast Software Engineering Radio has some episodes which talk about e.g. real-time and fault tolerant systems which I found very informative. Those episodes also had good references to books.
|
What is the best resource for learning about Safety Critical Systems Development (C/C++)
|
I'm looking to locate a good resource (book or otherwise) on safety critical systems development techniques/methodologies, especially something that will cover both hardware and software. I have a sound working knowledge of C/C++, so even if it is just code on SourceForge etc. I would still appreciate a link to it to have a browse.
Thanks.
|
[
"The podcast Software Engineering Radio has some episodes which talk about e.g. real-time and fault tolerant systems which I found very informative. Those episodes also had good references to books.\n"
] |
[
4
] |
[] |
[] |
[
"safety_critical",
"system"
] |
stackoverflow_0000081832_safety_critical_system.txt
|
Q:
Dealing with Date only dates across timezones in .Net
Ok - a bit of a mouthful. So the problem I have is this - I need to store a Date for expiry where only the date part is required and I don't want any timezone conversion. So for example if I have an expiry set to "08 March 2008" I want that value to be returned to any client - no matter what their timezone is.
The problem with remoting it as a DateTime is that it gets stored/sent as "08 March 2008 00:00", which means for clients connecting from any timezone West of me it gets converted and therefore flipped to "07 March 2008"
Any suggestions for cleanly handling this scenario? Obviously sending it as a string would work. Anything else?
thanks,
Ian
A:
I'm not sure what remoting technology you're referring to, but this is a real problem with WCF, which only currently supports serializing DateTime as xs:DateTime, inappropriate for a date-only value where you are not interested in timezones.
.NET 3.5 introduces the new DateTimeOffset type, which is good for transferring a DateTime between timezones, but doesn't help with the date-only scenario.
Ideally WCF needs to optionally support xs:Date for serializing dates as requested here:
http://connect.microsoft.com/wcf/feedback/ViewFeedback.aspx?FeedbackID=349215
A:
I do it like this: Whenever I have a date in memory or stored in a file it is always in a DateTime in UTC. When I show the date to the user it is always a string. When I convert between the string and the DateTime I also do the time zone conversion.
This way I never have to deal with time zones in my logic, only in the presentation.
A:
You can send it as UTC Time
dateTime1.ToUniversalTime()
A:
I think sending as a timestamp string would be the quickest / easiest way although you could look at forcing a locale to stop the time conversion from occurring.
A:
You could create a struct Date that provides access to the details you want/need, like:
public struct Date
{
public int Month; //or string instead of int
public int Day;
public int Year;
}
This is lightweight, flexible and gives you full control.
A:
Why don't you send it as a string then convert it back to a date type as needed? This way it will not be converted over different timezones. Keep it simple.
Edit: I like the Struct idea, allows for good functionality.
A:
The easiest way I've handled this on apps in the past is to just store the date as a string in yyyy-mm-dd format. It's unambiguous and doesn't get automatically translated by anything.
Yes, it's a pain...
|
Dealing with Date only dates across timezones in .Net
|
Ok - a bit of a mouthful. So the problem I have is this - I need to store a Date for expiry where only the date part is required and I don't want any timezone conversion. So for example if I have an expiry set to "08 March 2008" I want that value to be returned to any client - no matter what their timezone is.
The problem with remoting it as a DateTime is that it gets stored/sent as "08 March 2008 00:00", which means for clients connecting from any timezone West of me it gets converted and therefore flipped to "07 March 2008"
Any suggestions for cleanly handling this scenario? Obviously sending it as a string would work. Anything else?
thanks,
Ian
|
[
"I'm not sure what remoting technology you're referring to, but this is a real problem with WCF, which only currently supports serializing DateTime as xs:DateTime, inappropriate for a date-only value where you are not interested in timezones.\n.NET 3.5 introduces the new DateTimeOffset type, which is good for transferring a DateTime between timezones, but doesn't help with the date-only scenario.\nIdeally WCF needs to optionally support xs:Date for serializing dates as requested here:\nhttp://connect.microsoft.com/wcf/feedback/ViewFeedback.aspx?FeedbackID=349215\n",
"I do it like this: Whenever I have a date in memory or stored in a file it is always in a DateTime in UTC. When I show the date to the user it is always a string. When I convert between the string and the DateTime I also do the time zone conversion.\nThis way I never have to deal with time zones in my logic, only in the presentation.\n",
"You can send it as UTC Time\ndateTime1.ToUniversalTime()\n",
"I think sending as a timestamp string would be the quickest / easiest way although you could look at forcing a locale to stop the time conversion from occuring.\n",
"You could create a struct Date that provides access to the details you want/need, like:\npublic struct Date\n{\n public int Month; //or string instead of int\n public int Day;\n public int Year;\n}\n\nThis is lightweight, flexible and gives you full control.\n",
"Why don't you send it as a string then convert it back to a date type as needed? This way it will not be converted over different timezones. Keep it simple.\nEdit: I like the Struct idea, allows for good functionality. \n",
"The easiest way I've handled this on apps in the past is to just store the date as a string in yyyy-mm-dd format. It's unambigious and doesn't get automatically translated by anything.\nYes, it's a pain...\n"
] |
[
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"date"
] |
stackoverflow_0000079774_.net_date.txt
|
Q:
Choosing a desktop database
I'm looking for a desktop/embedded database. The two candidates I'm looking at are
Microsoft SQL Server CE and Oracle Lite. If anyone's used both of these products, it'd be great if you could compare them. I haven't been able to find any comparisons online.
The backend DB is Oracle10g.
Update: Clarification, the business need is a client-server app with offline functionality (hence the need for a local data store on the client)
A:
If the backend database is Oracle 10g it will probably be easier for you to use Oracle Lite - that way you don't have to use two completely different SQL dialects in the same project.
BTW, In my product I use SQLite as the desktop database
A:
I also used SQLite as a desktop database. It's lightning quick and doesn't need a separate process or any prior installation. All you need is a library to access the data as part of your code.
In light of your clarification I'd evaluate both OracleXE and Oracle 10g Lite before the others. Stick with the same tech, SQL/Oracle have some funny disagreements about SQL syntax and datatypes. I imagine you'd get the same issue with SQLite.
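To illustrate the "just a library" point: Python, for example, ships with a built-in SQLite binding, so a local store is nothing more than a file. A tiny sketch (the file name and table here are made up for the example):
import sqlite3

conn = sqlite3.connect('local_store.db')  # a plain file, no server process
conn.execute('CREATE TABLE IF NOT EXISTS items (name TEXT, synced INTEGER)')
conn.execute('INSERT INTO items VALUES (?, ?)', ('widget', 0))
conn.commit()
print conn.execute('SELECT * FROM items').fetchall()
conn.close()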
A:
I'll second the vote for SQLite. I'm not sure what you're trying to accomplish but if you're doing any sort of local storage with syncing SQLite is a good choice. It has very widespread adoption and a lot of community support.
A:
Perhaps I'm not fully understanding the need here. You are developing against 10g, but for your own test/dev environment you want a more lightweight database?
Or, are you developing an application that synchs with 10g database when online, but when offline uses a local store?
In both cases, I'd recommend staying with Oracle only because it will simplify your code.
In the first case, I'd wonder why you don't have a 10g QA machine somewhere that all the developers can connect to.
A:
One advantage you have with SQL Server CE is that it is free and you can use the Sync Framework to synchronize it with any ADO.NET accessible database.
Also, the same SQL CE file is usable from the PC and mobile devices, and if you develop your application using .NET, you can use the same code for the desktop and the mobile device without changes.
A:
You might want to look at Oracle XE. I cannot remember all of the differences, but O-Lite didn't fit my project needs. Oracle XE is a very good database for local development.
Brad
A:
As @Nir mentioned, it's better to have a homogeneous environment. However, if you decide not to use Oracle Light, I would highly recommend taking a look at Firebird. It's one of the best choices for desktop database scenarios.
|
Choosing a desktop database
|
I'm looking for a desktop/embedded database. The two candidates I'm looking at are
Microsoft SQL Server CE and Oracle Lite. If anyone's used both of these products, it'd be great if you could compare them. I haven't been able to find any comparisons online.
The backend DB is Oracle10g.
Update: Clarification, the business need is a client-server app with offline functionality (hence the need for a local data store on the client)
|
[
"If the backend database is Oracle 10g it will probably be easier for you to use Oracle Lite - that way you don't have to use two completely different SQL dialects in the same project.\nBTW, In my product I use SQLite as the desktop database \n",
"I also used SQLite as a desktop database. It's lightning quick and doesn't need a seperate process or any prior installation. All you need is a library to access the data as part of your code.\nIn light of your clarification I'd evaluate both OracleXE and Oracle 10g Lite before the others. Stick with the same tech, SQL/Oracle have some funny disagreements about SQL syntax and datatypes. I imagine you'd get the same issue with SQLite.\n",
"I'll second the vote for SQLite. I'm not sure what you're trying to accomplish but if you're doing any sort of local storage with syncing SQLite is a good choice. It has very widespread adoption and a lot of community support.\n",
"Perhaps I'm not fully understanding the need here. You are developing against 10g, but for your own test/dev environment you want a more lightweight database?\nOr, are you developing an application that synchs with 10g database when online, but when offline uses a local store?\nIn both cases, I'd recommend staying with Oracle only because it will simplify your code.\nIn the first case, I'd wonder why you don't have a 10g QA machine somewhere that all the developers can connect to.\n",
"One advantage you have with SQL Server CE is that it is free and you can use the Sync Framework to syncronize it with any ADO.NET accesible database.\nAlso, the same SQL CE file is usable from the PC and mobile devices, and if you develop your application using .NET, you can use the same code for the desktop and the mobile device without changes.\n",
"You might want to look at Oracle XE. I cannot remember all of the differences, but O-Lite didn't fit my project needs. Oracle XE is a very good database for local development.\nBrad\n",
"As @Nir mentioned, it's better to have homogeneous environment. However if you decide to not use Oracle Light, I would highly recommend you to take a look at Firebird. It's one of best choices for desktop database scenarios.\n"
] |
[
9,
4,
4,
3,
1,
0,
0
] |
[] |
[] |
[
"database",
"oracle",
"sql_server_ce"
] |
stackoverflow_0000048486_database_oracle_sql_server_ce.txt
|
Q:
Hide google Toolbar by javascript
Is there a way to hide the google toolbar in my browser programmatically?
A:
You haven't said which browser you are using so I'm going to assume Internet Explorer* and answer No.
If JavaScript on a web page could manipulate the browser, it would be a serious security hole and could create a lot of confusion for users.
So no... for a good reason: Security.
*. If you were using Firefox, and were talking about JavaScript within an extension to manipulate and theme the window chrome then this would be a different story.
A:
I really think that it is impossible to do that with javascript. This is because javascript is designed to control the behaviour of the site, and the browser is not part of the site.
Of course, maybe you are talking about some other Google toolbar than the plugin in the browser.
A:
As far as I know, you cannot access these parts of the browser due to security issues. But you can load new browser windows without toolbars as such. I don't know exactly how (hopefully other users will help you out), but maybe start here: http://www.experts-exchange.com/Web/Web_Languages/JavaScript/Q_20782379.html
(PS: I know, it's experts-exchange, but I'm not going to copy someone else's work, even if it's posted on EE).
|
Hide google Toolbar by javascript
|
Is there a way to hide the google toolbar in my browser programmatically?
|
[
"You haven't said which browser you are using so I'm going to assume Internet Explorer* and answer No.\nIf JavaScript on a web page could manipulate the browser, it would be a serious security hole and could create a lot of confusion for users.\nSo no... for a good reason: Security.\n*. If you were using Firefox, and were talking about JavaScript within an extension to manipulate and theme the window chrome then this would be a different story.\n",
"I really think that it is imposible to do that with javascript. This is because javascript is designed to control the behaviour of the site. And the browser is not part of the site.\n\nOf course maby you are talking about some other Google toolbar then the plugin in the browser.\n",
"As far as I know, you cannot access these parts of the browser due to security issues. But you can load new browser windows without toolbars as such. I don't know exactly how (hopefully other users will help yout out), but maybe start here: http://www.experts-exchange.com/Web/Web_Languages/JavaScript/Q_20782379.html\n(PS: I know, it's experts-exchange, but I'm not going to copy someone elses work, even if it's posted on EE).\n"
] |
[
4,
0,
0
] |
[] |
[] |
[
"browser",
"google_toolbar",
"javascript"
] |
stackoverflow_0000081945_browser_google_toolbar_javascript.txt
|
Q:
how to get login credentials by openId?
Is it possible to get login credentials such as name/id if the user logs in by OpenID?
A:
There are two accepted methods for retrieving these kind of things by OpenID: SReg and Attribute Exchange (AX). Both of these are extensions to the standard OpenID specification; SReg is the older of the two and specifies a set of fields that can be requested and sent with authentication, whereas AX allows requesting of any attribute.
Both of the specification documents are pretty concise on how they work, although it's difficult to gauge what the standard "names" are for attributes to be requested from AX. Usually, servers tend to implement the SReg names.
OpenID Simple Registration Extension Specification 1.0
OpenID Attribute Exchange Specification 1.0 Final
A:
You will not get their actual username (or password), but you will get their OpenID, which is unique.
|
how to get login credentials by openId?
|
Is it possible to get login credentials such as name/id if the user logs in by OpenID?
|
[
"There are two accepted methods for retrieving these kind of things by OpenID: SReg and Attribute Exchange (AX). Both of these are extensions to the standard OpenID specification; SReg is the older of the two and specifies a set of fields that can be requested and sent with authentication, whereas AX allows requesting of any attribute.\nBoth of the specification documents are pretty concise on how they work, although it's difficult to guage what the standard \"names\" are for attributes to be requested from AX. Usually, servers tend to implement the SReg names.\nOpenID Simple Registration Extension Specification 1.0\nOpenID Attribute Exchange Specification 1.0 Final\n",
"You will not get their actual username (or password), but you will get their OpenID wich is unique.\n"
] |
[
5,
1
] |
[] |
[] |
[
"openid",
"web_applications"
] |
stackoverflow_0000081994_openid_web_applications.txt
|
Q:
C++ libraries to manipulate images
Do you know any open source/free software C++ libraries to manipulate images in these formats:
.jpg .gif .png .bmp ? The more formats it supports, the better. I am implementing a free program in C++ which hides a text file into one or more images, using steganography.
I am working under Unix.
A:
ImageMagick can manipulate about anything and has interfaces for a dozen of languages, including the Magick++ API for C++.
A:
@lurks: I assume that you are looking for LSB shifting? I did some stego work a couple of years ago, and that's how it appeared most apps worked. It appears that ImageMagick (suggested by others) allows you to identify and manipulate the LSBs.
A:
It takes some setting up, but I'm a fan of Adobe's GIL (now part of Boost).
A:
Have you considered GDI?
-- Kevin Fairchild
A:
FreeImage is pretty solid. It has a C interface but is more C++-like in its implementation.
A:
I like vxl
VXL (the Vision-something-Libraries) is a collection of C++ libraries designed for computer vision research and implementation. It was created from TargetJr and the IUE with the aim of making a light, fast and consistent system. VXL is written in ANSI/ISO C++ and is designed to be portable over many platforms.
A:
For .png images you could look into Cairo (and CairoMM). There's also Anti-Grain which people consider very fast.
|
C++ libraries to manipulate images
|
Do you know any open source/free software C++ libraries to manipulate images in these formats:
.jpg .gif .png .bmp ? The more formats it supports, the better. I am implementing a free program in C++ which hides a text file into one or more images, using steganography.
I am working under Unix.
|
[
"ImageMagick can manipulate about anything and has interfaces for a dozen of languages, including the Magick++ API for C++.\n",
"@lurks: I assume that you are looking for LSB shifting? I did some stego work a couple of years ago, and that's how it appeared most apps worked. It appears that ImageMagick (suggested by others) allows you to identify and manipulate the LSBs.\n",
"It takes some setting up, but I'm a fan of Adobe's GIL (now part of Boost).\n",
"Have you considered GDI?\n-- Kevin Fairchild\n",
"FreeImage is pretty solid. It has a C interface but is more C++-like in its implementation.\n",
"I like vxl\n\nVXL (the Vision-something-Libraries) is a collection of C++ libraries designed for computer vision research and implementation. It was created from TargetJr and the IUE with the aim of making a light, fast and consistent system. VXL is written in ANSI/ISO C++ and is designed to be portable over many platforms.\n\n",
"For .png images you could look into Cairo (and CairoMM). There's also Anti-Grain which people consider very fast.\n"
] |
[
7,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"c++",
"image",
"steganography"
] |
stackoverflow_0000041654_c++_image_steganography.txt
|
Q:
Testing GUI code: should I use a mocking library?
Recently I've been experimenting with TDD while developing a GUI application in Python. I find it very reassuring to have tests that verify the functionality of my code, but it's been tricky to follow some of the recommended practices of TDD. Namely, writing tests first has been hard. And I'm finding it difficult to make my tests readable (due to extensive use of a mocking library).
I chose a mocking library called mocker. I use it a lot since much of the code I'm testing makes calls to (a) other methods in my application that depend on system state or (b) ObjC/Cocoa objects that cannot exist without an event loop, etc.
Anyway, I've got a lot of tests that look like this:
def test_current_window_controller():
def test(config):
ac = AppController()
m = Mocker()
ac.iter_window_controllers = iwc = m.replace(ac.iter_window_controllers)
expect(iwc()).result(iter(config))
with m:
result = ac.current_window_controller()
assert result == (config[0] if config else None)
yield test, []
yield test, [0]
yield test, [1, 0]
Notice that this is actually three tests; all use the same parameterized test function. Here's the code that is being tested:
def current_window_controller(self):
try:
# iter_window_controllers() iterates in z-order starting
# with the controller of the top-most window
# assumption: the top-most window is the "current" one
wc = self.iter_window_controllers().next()
except StopIteration:
return None
return wc
One of the things I've noticed with using mocker is that it's easier to write the application code first and then go back and write the tests second, since most of the time I'm mocking many method calls and the syntax to write the mocked calls is much more verbose (thus harder to write) than the application code. It's easier to write the app code and then model the test code off of that.
I find that with this testing method (and a bit of discipline) I can easily write code with 100% test coverage.
I'm wondering if these tests are good tests? Will I regret doing it this way down the road when I finally discover the secret to writing good tests?
Am I violating the core principles of TDD so much that my testing is in vain?
A:
If you are writing your tests after you've written your code and making them pass, you are not doing TDD (nor are you getting any benefits of Test-First or Test-Driven development.. check out SO questions for definitive books on TDD)
One of the things I've noticed with
using mocker is that it's easier to
write the application code first and
then go back and write the tests
second, since most of the time I'm
mocking many method calls and the
syntax to write the mocked calls is
much more verbose (thus harder to
write) than the application code. It's
easier to write the app code and then
model the test code off of that.
Of course, it's easier because you are just testing that the sky is orange after you made it orange by painting it with a specific kind of brush.
This is retrofitting tests (for self-assurance). Mocks are good, but you should know how and when to use them - as the saying goes, 'when you have a hammer, everything looks like a nail'. It's also easy to write a whole load of unreadable and not-as-helpful-as-can-be tests. The time spent understanding what a test is about is time lost that could be used to fix broken ones.
And the point is:
Read Mocks aren't stubs - Martin Fowler if you haven't already. Google out some documented instances of good ModelViewPresenter patterned GUIs (Fake/Mock out the UIs if necessary).
Study your options and choose wisely. I'll play the guy with the halo on your left shoulder in white saying 'Don't do it.' Read this question as to my reasons - St. Justin is on your right shoulder. I believe he also has something to say :)
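To make the ModelViewPresenter suggestion concrete, here is a hedged little Python sketch (all names are invented): the presenter owns the logic, so the test uses a trivial fake view rather than a mocking library.
class FakeView(object):
    def __init__(self):
        self.title = None
    def show_title(self, text):
        self.title = text

class WindowPresenter(object):
    def __init__(self, view, controllers):
        self.view = view
        self.controllers = controllers  # z-ordered, top-most first
    def refresh(self):
        # same "top-most window is current" rule as the code in the question
        current = self.controllers[0] if self.controllers else None
        self.view.show_title('none' if current is None else current)

view = FakeView()
WindowPresenter(view, ['main window']).refresh()
assert view.title == 'main window'
No mock setup is needed, which tends to keep such tests readable.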
|
Testing GUI code: should I use a mocking library?
|
Recently I've been experimenting with TDD while developing a GUI application in Python. I find it very reassuring to have tests that verify the functionality of my code, but it's been tricky to follow some of the recommended practices of TDD. Namely, writing tests first has been hard. And I'm finding it difficult to make my tests readable (due to extensive use of a mocking library).
I chose a mocking library called mocker. I use it a lot since much of the code I'm testing makes calls to (a) other methods in my application that depend on system state or (b) ObjC/Cocoa objects that cannot exist without an event loop, etc.
Anyway, I've got a lot of tests that look like this:
def test_current_window_controller():
def test(config):
ac = AppController()
m = Mocker()
ac.iter_window_controllers = iwc = m.replace(ac.iter_window_controllers)
expect(iwc()).result(iter(config))
with m:
result = ac.current_window_controller()
assert result == (config[0] if config else None)
yield test, []
yield test, [0]
yield test, [1, 0]
Notice that this is actually three tests; all use the same parameterized test function. Here's the code that is being tested:
def current_window_controller(self):
try:
# iter_window_controllers() iterates in z-order starting
# with the controller of the top-most window
# assumption: the top-most window is the "current" one
wc = self.iter_window_controllers().next()
except StopIteration:
return None
return wc
One of the things I've noticed with using mocker is that it's easier to write the application code first and then go back and write the tests second, since most of the time I'm mocking many method calls and the syntax to write the mocked calls is much more verbose (thus harder to write) than the application code. It's easier to write the app code and then model the test code off of that.
I find that with this testing method (and a bit of discipline) I can easily write code with 100% test coverage.
I'm wondering if these tests are good tests? Will I regret doing it this way down the road when I finally discover the secret to writing good tests?
Am I violating the core principles of TDD so much that my testing is in vain?
|
[
"If you are writing your tests after you've written your code and making them pass, you are not doing TDD (nor are you getting any benefits of Test-First or Test-Driven development.. check out SO questions for definitive books on TDD)\n\nOne of the things I've noticed with\n using mocker is that it's easier to\n write the application code first and\n then go back and write the tests\n second, since most of the time I'm\n mocking many method calls and the\n syntax to write the mocked calls is\n much more verbose (thus harder to\n write) than the application code. It's\n easier to write the app code and then\n model the test code off of that.\n\nOf course, its easier because you are just testing that the sky is orange after you made it orange by painting it with a specific kind of brush. \nThis is retrofitting tests (for self-assurance). Mocks are good but you should know how and when to use them - Like the saying goes 'When you have a hammer everything looks like a nail' It's also easy to write a whole load of unreadable and not-as-helpful-as-can-be tests. The time spent understanding what the test is about is time lost that can be used to fix broken ones. \nAnd the point is: \n\nRead Mocks aren't stubs - Martin Fowler if you haven't already. Google out some documented instances of good ModelViewPresenter patterned GUIs (Fake/Mock out the UIs if necessary). \nStudy your options and choose wisely. I'll play the guy with the halo on your left shoulder in white saying 'Don't do it.' Read this question as to my reasons - St. Justin is on your right shoulder. I believe he has also something to say:) \n\n"
] |
[
8
] |
[
"Unit tests are really useful when you refactor your code (ie. completely rewrite or move a module). As long as you have unit tests before you do the big changes, you'll have confidence that you havent forgotten to move or include something when you finish.\n",
"Please remember that TDD is not a panaceum. It's hard, it's supposed to be hard, and it's especially hard to write mocking tests \"in advance\".\nSo I would say - do what works for you. Even it's not \"certified TDD\". I do basically the same thing.\nYou may want to provide your own API for GUI that would sit between controller code and GUI library code. That could be easier to mock, or you can even add some testing hooks to it.\nLast but not least, your code doesn't look too unreadable to me. Code using mocks is generally harder to understand. Fortunately in Python mocking is much easier and cleaner than i n other languages.\n"
] |
[
-2,
-3
] |
[
"python",
"tdd",
"unit_testing",
"user_interface"
] |
stackoverflow_0000079454_python_tdd_unit_testing_user_interface.txt
|
Q:
Getting QMake to generate a proper .app
I have a large existing C++ project involving:
4 applications
50+ libraries
20+ third party libraries
The project uses QMake (part of Trolltech's Qt) to build the production version on Linux, but I've been playing around at building it on MacOS.
I can build it on MacOS using QMake just fine, but I'm having trouble producing the final .app. That involves collecting all the third party frameworks and dynamic libraries, all the project's dynamic libraries, and making sure the application finds them.
I've read online about using install_name_tool but was wondering if there's a process to automate it.
(Maybe the answer is to use XCode, see related question, but it would have issues with building uic and moc)
Thanks
A:
I'm sure this could be of some great help to you:
deployqt
Hope this helps!
A:
We have the same problem at Last.fm, I looked at DeployQt and it's not much use if you have third party libraries. In the end I wrote a perl script that generates a Makefile, which you can use to generate a .app and/or .dmg.
I uploaded it here: http://www.methylblue.com/detritus/QMake.dmg/
To use it add this to your application's pro file:
macx*:!macx-xcode:release {
system( QT=\'$$QT\' QMAKE_LIBDIR_QT=\'$$QMAKE_LIBDIR_QT\' $$ROOT_DIR/common/dist/mac/Makefile.dmg.pl $$DESTDIR $$VERSION $$LIBS > Makefile.dmg )
QMAKE_EXTRA_INCLUDES += Makefile.dmg
}
I'm sure it's not all yet portable, but it would be good for someone else to use and see if that is so.
This is basically the first official release of this code, so please send me bug reports, and also, improvements. Thanks.
A:
I side-stepped this problem completely by building my Qt app statically on OS X. That might not be practical for you though.
|
Getting QMake to generate a proper .app
|
I have a large existing C++ project involving:
4 applications
50+ libraries
20+ third party libraries
The project uses QMake (part of Trolltech's Qt) to build the production version on Linux, but I've been playing around at building it on MacOS.
I can build it on MacOS using QMake just fine, but I'm having trouble producing the final .app. That involves collecting all the third party frameworks and dynamic libraries, all the project's dynamic libraries, and making sure the application finds them.
I've read online about using install_name_tool but was wondering if there's a process to automate it.
(Maybe the answer is to use XCode, see related question, but it would have issues with building uic and moc)
Thanks
|
[
"I'm sure this could be of some great help for you :\ndeployqt\nHope this helps !\n",
"We have the same problem at Last.fm, I looked at DeployQt and it's not much use if you have third party libraries. In the end I wrote a perl script that generates a Makefile, which you can use to generate a .app and/or .dmg.\nI uploaded it here: http://www.methylblue.com/detritus/QMake.dmg/\nTo use it add this to your application's pro file: \n macx*:!macx-xcode:release {\n system( QT=\\'$$QT\\' QMAKE_LIBDIR_QT=\\'$$QMAKE_LIBDIR_QT\\' $$ROOT_DIR/common/dist/mac/Makefile.dmg.pl $$DESTDIR $$VERSION $$LIBS > Makefile.dmg )\n QMAKE_EXTRA_INCLUDES += Makefile.dmg \n}\n\nI'm sure it's not all yet portable, but it would be good for someone else to use and see if that is so.\nThis is basically the first official release of this code, so please send me bug reports, and also, improvements. Thanks.\n",
"I side-stepped this problem completely by building my Qt app statically on OS X. That might not be practical for you though.\n"
] |
[
3,
2,
0
] |
[] |
[] |
[
"qmake",
"qt",
"xcode"
] |
stackoverflow_0000026904_qmake_qt_xcode.txt
|
Q:
How to add a constant column when replicating a database?
I am using SQL Server 2000 and I have two databases that both replicate (transactional push subscription) to a single database. I need to know which database the records came from.
So I want to add a fixed column specified in the publication to my table so I can tell which database the row originated from.
How do I go about doing this?
I would like to avoid altering the main databases mostly due to the fact there are many tables I would need to do this to. I was hoping for some built in feature of replication that would do this for me some where. Other than that I would go with the view idea.
A:
You could use a calculated column. Use the following on the two databases:
ALTER TABLE TableName ADD
MyColumn AS 'Server1'
Then just define the single "master" database to use a VARCHAR column (or whatever you want) that you fill using the calculated column's value.
A:
You can create a view, which adds the "constant" column, and use it as a replication source.
A:
So the solution for me was to set up the replication publications to allow transformations and create a DTS package for each site that appends the siteid into the tables to keep the ids unique as I can't use guids.
|
How to add a constant column when replicating a database?
|
I am using SQL Server 2000 and I have two databases that both replicate (transactional push subscription) to a single database. I need to know which database the records came from.
So I want to add a fixed column specified in the publication to my table so I can tell which database the row originated from.
How do I go about doing this?
I would like to avoid altering the main databases mostly due to the fact there are many tables I would need to do this to. I was hoping for some built in feature of replication that would do this for me some where. Other than that I would go with the view idea.
|
[
"You could use a calculated column Use the following on the two databases:\n\nALTER TABLE TableName ADD\n MyColumn AS 'Server1'\n\nThen just define the single \"master\" database to use a VARCHAR column (or whatever you want) that you fill using the calculated columns value.\n",
"You can create a view, which adds the \"constant\" column, and use it as a replication source.\n",
"So the solution for me was to set up the replication publications to allow transformations and create a DTS package for each site that appends the siteid into the tables to keep the ids unique as I can't use guids.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"replication",
"sql_server"
] |
stackoverflow_0000064202_replication_sql_server.txt
|
Q:
How can I set triggers for sendmail?
If my email id receives an email from a particular sender, can I ask sendmail to trigger a different program and pass on the newly arrived email to it for further processing? This is similar to filters in gmail. Wait for some email to arrive, see if it matches the criteria and take some action if it does.
A:
This is what Procmail is for.
Set Sendmail up to use procmail as the mail delivery agent (MDA), or set up your .forward to pipe stuff through procmail. (See the man page.)
Then you can write your .procmailrc to do all sorts of things along these lines.
This filter predates gmail. Still useful if you're running a mail server.
A:
We handle this by having a cron process running on the mail server which watches the inbox directory and scans any new messages (files) every 10 minutes or so.
When the process finds an email of interest, it fires the information off to another process which then reacts to the new message (and, in our case, removes the message from the inbox).
--edit--
Finding the email inbox depends on your implementation - check the 'manual' your version of sendmail for details - we direct incoming email to a special directory or have parameters to work out the inbox details. I don't feel it would be useful to be more specific as the answer to 'where is the inbox' is 'it depends'.
As for the pattern to search for - we decode the email message (a text file) into a DOM that we can manipulate. For example, we can then look for specific words in property 'subject'.
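For illustration, here is a rough Python version of such a scanner, assuming Maildir-style delivery; the path, sender and trigger command below are all made up:
import mailbox
import subprocess

WATCHED_SENDER = 'alerts@example.com'  # hypothetical sender of interest

def scan_inbox(path='/home/me/Maildir'):
    box = mailbox.Maildir(path, factory=None)
    for key, msg in box.iteritems():
        if WATCHED_SENDER in (msg.get('From') or ''):
            # hand the raw message to some external processor
            p = subprocess.Popen(['/usr/local/bin/process-mail'],
                                 stdin=subprocess.PIPE)
            p.communicate(msg.as_string())
            box.remove(key)  # drop the handled message from the inbox

scan_inbox()  # run this from cron every few minutes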
A:
Are you talking about email clients? If so, then you can set rules in Outlook, and I am sure there must be ways in other email clients too! If you are asking something else, sorry.
A:
OK, then I suggest Colin's method. I use cron to monitor emails (for a particular domain) and send text messages as alerts - similar to what you are asking!
|
How can I set triggers for sendmail?
|
If my email id receives an email from a particular sender, can I ask sendmail to trigger a different program and pass on the newly arrived email to it for further processing? This is similar to filters in gmail. Wait for some email to arrive, see if it matches the criteria and take some action if it does.
|
[
"This is what Procmail is for.\nSet Sendmail up to use procmail as the mail delivery agent (MDA), or set up your .forward to pipe stuff through procmail. (See the man page.)\nThen you can write your .procmailrc to do all sorts of things along these lines.\nThis filter predates gmail. Still useful if you're running a mail server.\n",
"We handle this by having a cron process running on the mail server which watches the inbox directory and scans any new messages (files) every 10 minutes or so.\nWhen the process finds an email of interest, it fires the information off to another process which then reacts to the new message (and, in our case, removes the message from the inbox).\n--edit--\nFinding the email inbox depends on your implementation - check the 'manual' your version of sendmail for details - we direct incoming email to a special directory or have parameters to work out the inbox details. I don't feel it would be useful to be more specific as the answer to 'where is the inbox' is 'it depends'.\nAs for the pattern to search for - we decode the email message (a text file) into a DOM that we can manipulate. For example, we can then look for specific words in property 'subject'.\n",
"are you talking about email clients? If so then you can set rules in outlook and I am sure there mustbe ways in other email cleints too!! If u are asking something else. sorry\n",
"ok. then I suggest Colins method.. I use cron to monitor emails (for a particluar domain) and send text messages as alerts!. Similar to what you are asking!\n"
] |
[
3,
0,
0,
0
] |
[] |
[] |
[
"sendmail",
"unix"
] |
stackoverflow_0000081591_sendmail_unix.txt
|
Q:
Best technology for adding plugin support to a J2SE application?
I'm writing a J2SE desktop application that requires one of its components to be pluggable. I've already defined the Java interface for this plugin. The user should be able to select at runtime (via the GUI) which implementation of this interface they want to use (e.g. in an initialisation dialog). I envisage each plugin being packaged as a JAR file containing the implementing class plus any helper classes it may require.
What's the best technology for doing this type of thing in a desktop Java app?
A:
After many tries at plugin-based Java architectures (which is precisely what you seem to be looking for), I finally found JSPF to be the best solution for Java5 code. It does not have the heavy requirements of OSGi-like solutions, but is instead rather easy to use.
A:
OSGI is certainly a valid way to go. But, assuming you don't need to unload and reload plugins, it might be using a hammer to crack a nut.
You could use the classes in 'java.util.jar' to scan each JAR file in your plugins folder and then use a 'java.net.URLClassLoader' to load in the correct one.
A:
If you are "just" needing one component to be pluggable, it's enough to simply instantiate the classes based on meta information, e.g. read via a classloaders META-INF/ information from the various jars that are on your classpath or in a certain plugin directory.
OSGi on the other hand provides means to structure your whole application. If you already have a large Desktop application that needs one part pluggable, this would be a steep learning curve. If you start blank with what will be a Desktop app, OSGi provides means to modularizing the whole application. It's about "isolation of components" and independence of modules.
Apache Felix provides a nice start if you want to go down OSGi lane. It might look complicated and heavyweight, but that's only because one is not used to that level of isolation between modules. It used to be so easy to just call any public method...
A:
One approach I'm considering is having my application start up a lightweight OSGi container, which if I understand correctly would be able to discover what plugin JAR files exist in a designated folder, which in turn would let me list them for the user to choose from. Is this feasible?
I also found this article by Richard Deadman, but it looks a little dated (2006?) and mentions neither OSGi (at least not by name) nor the java.util.jar package
A:
Did you think of using OSGi as a plugin framework? With OSGi you are able to update/replace, load or unload your modules on demand.
|
Best technology for adding plugin support to a J2SE application?
|
I'm writing a J2SE desktop application that requires one of its components to be pluggable. I've already defined the Java interface for this plugin. The user should be able to select at runtime (via the GUI) which implementation of this interface they want to use (e.g. in an initialisation dialog). I envisage each plugin being packaged as a JAR file containing the implementing class plus any helper classes it may require.
What's the best technology for doing this type of thing in a desktop Java app?
|
[
"After many tries for plugin-based Java architectures (what is precisely what you seem to look for), I finally found JSPF to be the best solution for Java5 code. it do not have the huge needs of OSGI like solutions, but is instead rather easy to use.\n",
"OSGI is certainly a valid way to go. But, assuming you dont need to unload to reload the plugin, it might be using a hammer to crack a nut. \nYou could use the classes in 'java.util.jar' to scan each JAR file in your plugins folder and then use a 'java.net.URLClassLoader' to load in the correct one.\n",
"If you are \"just\" needing one component to be pluggable, it's enough to simply instantiate the classes based on meta information, e.g. read via a classloaders META-INF/ information from the various jars that are on your classpath or in a certain plugin directory.\nOSGi on the other hand provides means to structure your whole application. If you already have a large Desktop application that needs one part pluggable, this would be a steep learning curve. If you start blank with what will be a Desktop app, OSGi provides means to modularizing the whole application. It's about \"isolation of components\" and independence of modules. \nApache Felix provides a nice start if you want to go down OSGi lane. It might look complicated and heavyweight, but that's only because one is not used to that level of isolation between modules. It used to be so easy to just call any public method...\n",
"One approach I'm considering is having my application start up a lightweight OSGi container, which if I understand correctly would be able to discover what plugin JAR files exist in a designated folder, which in turn would let me list them for the user to choose from. Is this feasible?\nI also found this article by Richard Deadman, but it looks a little dated (2006?) and mentions neither OSGi (at least not by name) nor the java.util.jar package\n",
"Did you think of using OSGi as a plugin framework? With OSGi you are able to update/replace, load or unload your modules on demand.\n"
] |
[
5,
2,
1,
0,
0
] |
[] |
[] |
[
"java",
"plugin_architecture",
"plugins"
] |
stackoverflow_0000081495_java_plugin_architecture_plugins.txt
|
Q:
HTML parser in Python
Using the Python documentation I found the HTML parser, but I have no idea which library to import in order to use it. How do I find this out (bearing in mind it doesn't say on the page)?
A:
You probably really want BeautifulSoup, check the link for an example.
But in any case
>>> import HTMLParser
>>> h = HTMLParser.HTMLParser()
>>> h.feed('<html></html>')
>>> h.get_starttag_text()
'<html>'
>>> h.close()
A:
Try:
import HTMLParser
In Python 3.0, the HTMLParser module has been renamed to html.parser
you can check about this here
Python 3.0
import html.parser
Python 2.2 and above
import HTMLParser
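As a small sketch of actually using it (Python 2 names, matching the imports above; the HTML snippet is made up):
from HTMLParser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == 'a':
            self.links.extend(value for name, value in attrs if name == 'href')

p = LinkExtractor()
p.feed('<p><a href="http://example.com">a link</a></p>')
print p.links  # ['http://example.com']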
A:
I would recommend using the Beautiful Soup module instead; it has good documentation.
A:
You should also look at html5lib for Python as it tries to parse HTML in a way that very much resembles what web browsers do, especially when dealing with invalid HTML (which is more than 90% of today's web).
A:
You may be interested in lxml. It is a separate package and has C components, but is the fastest. It has also very nice API, allowing you to easily list links in HTML documents, or list forms, sanitize HTML, and more. It also has capabilities to parse not well-formed HTML (it's configurable).
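For example, assuming the third-party lxml package is installed, listing the links in a document can be as short as:
from lxml import html

doc = html.fromstring('<p><a href="http://example.com">x</a></p>')
print doc.xpath('//a/@href')  # ['http://example.com']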
A:
I don't recommend BeautifulSoup if you want speed. lxml is much, much faster, and you can fall back in lxml's BS soupparser if the default parser doesn't work.
A:
There's a link to an example at the bottom of http://docs.python.org/2/library/htmlparser.html, but it just doesn't work with the original python or python3. It has to be python2, as it says at the top.
A:
For real world HTML processing I'd recommend BeautifulSoup. It is great and takes away much of the pain. Installation is easy.
|
HTML parser in Python
|
Using the Python documentation I found the HTML parser, but I have no idea which library to import in order to use it. How do I find this out (bearing in mind it doesn't say on the page)?
|
[
"You probably really want BeautifulSoup, check the link for an example. \nBut in any case\n>>> import HTMLParser\n>>> h = HTMLParser.HTMLParser()\n>>> h.feed('<html></html>')\n>>> h.get_starttag_text()\n'<html>'\n>>> h.close()\n\n",
"Try:\nimport HTMLParser\n\nIn Python 3.0, the HTMLParser module has been renamed to html.parser\nyou can check about this here\nPython 3.0\nimport html.parser\n\nPython 2.2 and above\nimport HTMLParser\n\n",
"I would recommend using Beautiful Soup module instead and it has good documentation.\n",
"You should also look at html5lib for Python as it tries to parse HTML in a way that very much resembles what web browsers do, especially when dealing with invalid HTML (which is more than 90% of today's web).\n",
"You may be interested in lxml. It is a separate package and has C components, but is the fastest. It has also very nice API, allowing you to easily list links in HTML documents, or list forms, sanitize HTML, and more. It also has capabilities to parse not well-formed HTML (it's configurable).\n",
"I don't recommend BeautifulSoup if you want speed. lxml is much, much faster, and you can fall back in lxml's BS soupparser if the default parser doesn't work.\n",
"There's a link to an example on the bottom of (http://docs.python.org/2/library/htmlparser.html) , it just doesn't work with the original python or python3. It has to be python2 as it says on the top.\n",
"For real world HTML processing I'd recommend BeautifulSoup. It is great and takes away much of the pain. Installation is easy.\n"
] |
[
24,
20,
4,
4,
4,
3,
1,
1
] |
[] |
[] |
[
"import",
"python"
] |
stackoverflow_0000071151_import_python.txt
|
Q:
How to enforce all children to override the parent's Clone() method?
How to make sure that all derived C++/CLI classes will override the ICloneable::Clone() method of the base class?
Do you think I should worry about this? Or is this not the responsibility of the base class's writer?
Amendment: Sorry, I forgot to mention that the base class is a non-abstract class.
A:
Declare it pure virtual in the base class.
class Base
{
...
virtual void Clone() = 0;
};
A:
Well, I can't say if this is the responsibility of the base class or not, and won't get into the perils of inheritance based contracts here.
In any case, you can force some class to override a method - "Clone()" for example, by making it a pure virtual member of an abstract class
public ref class ClonableBase abstract
{
public:
virtual void Clone() = 0;
};
note the "abstract" and the "=0;". The abstract allows the class to contain pure virtual members without warning, and the =0; means that this method is pure virtual - that is, it doesn't contain a body. Note that you can not instantiate an abstract class.
Now you can
public ref class ClonableChild : public ClonableBase
{
public:
virtual void Clone();
};
void ClonableChild::Clone()
{
//some stuff here
}
If you do NOT have the Clone override in ClonableChild, you get a compiler error.
A:
Declare the Clone() method as abstract. This should work even when the parent class does have a concrete implementation.
Of course, the risk when enforcing such things is that the writer of the derived class will become annoyed, say "I'm not going to use Clone anyway" and does something like a bytewise copy, or even a "return this", to get rid of the errors.
A:
class Base
{
...
virtual void Clone() = 0;
};
is correct.
If you want some default behaviour for Clone, try:
class Base
{
...
virtual void Clone()
{
    ...
    doClone();
    ...
}
...
private:
virtual void doClone() = 0;
};
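With that arrangement, a derived class only supplies the private piece, while the public Clone() wrapper in Base keeps doing its pre/post work. A minimal sketch:
class Derived : public Base
{
private:
    // Called by Base::Clone() between its "..." steps.
    virtual void doClone()
    {
        // copy Derived's own fields here
    }
};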
A:
If the base class is non-abstract, then there is no way to force it to be overridden at compile time. The best you can probably do is something like:
virtual void Clone()
{
throw gcnew NotSupportedException();
}
With this, derived classes would have to override the method or your application will encounter a NotSupportedException. This at least would make it immediately obvious during testing that something was incorrect. It would give you something to look for so that you know when you encounter a class that did not correctly override Clone. Depending on how much control you have over derived classes, this could be important for robustness.
A:
Amendment: Sorry, I forgot to mention that the base class is a non-abstract class.
In this new light, I'm pretty sure that you do not want to force anyone to override Clone() at all. For example, if my derived class does not add any fields it probably does not need its own specialized Clone() method.
A:
After some pondering I found this solution:
Object^ BaseClass::Clone()
{
if(this->GetType() != BaseClass::typeid)
{
throw gcnew System::NotImplementedException("The Clone() method is not implemented for " + this->GetType()->ToString() + "!");
}
BaseClass^ base = gcnew BaseClass();
... // Copy the fields here
return base;
}
It throws NotImplementedException if you attempt to Clone an instance of a derived class that hasn't overridden the Clone() method of the base class.
A:
read this by herb sutter. It's exactly what you are asking
|
How to enforce all children to override the parent's Clone() method?
|
How to make sure that all derived C++/CLI classes will override the ICloneable::Clone() method of the base class?
Do you think I should worry about this? Or this is not a responsibility of the base class' writer?
Amendment: Sorry, I forgot to mention that the base class is a non-abstract class.
|
[
"Declare it pure virtual in the base class.\nclass Base \n{\n...\nvitual void Clone() = 0;\n};\n",
"Well, I can't say if this is the responsibility of the base class or not, and won't get into the perils of inheritance based contracts here.\nIn any case, you can force some class to override a method - \"Clone()\" for example, by making it a pure virtual member of an abstract class\npublic ref class ClonableBase abstract\n{\n public:\n virtual void Clone() = 0;\n}\n\nnote the \"abstract\" and the \"=0;\". The abstract allows the class to contain pure virtual members without warning, and the =0; means that this method is pure virtual - that is, it doesn't contain a body. Note that you can not instantiate an abstract class.\nNow you can\npublic ref class ClonableChild : public ClonableBase\n{\n public:\n virtual void Clone();\n}\n\nvoid ConableChild::Clone()\n{\n //some stuff here\n}\n\nIf you do NOT have the Clone override in ClonableChild, you get a compiler error.\n",
"Declare the Clone() method as abstract. This should work even when the parent class does have a concrete implementation.\nOf course, the risk when enforcing such things is that the writer of the derived class will become annoyed, say \"I'm not going to use Clone anyway\" and does something like a bytewise copy, or even a \"return this\", to get rid of the errors.\n",
"class Base\n{\n...\nvirtual void Clone() = 0;\n};\n\nis correct.\nIf you want some default behaviour for Clone, try:\nclass Base\n{\n...\nvirtual void Clone()\n{ \n ...\n doClone();\n ...\n};\n\n...\n\nprivate:\nvirtual void doClone() = 0;\n};\n\n",
"If the base class is non-abstract, then there is no way to force it to be overridden at compile time. The best you can probably do is something like:\nvirtual void Clone()\n{\n throw gcnew NotSupportedException();\n}\n\nWith this, derived classes would have to override the method or your application will encounter a NotSupportedException. This at least would make it immediately obvious during testing that something was incorrect. It would give you something to look for so that you know when you encounter a class that did not correctly override Clone. Depending on how much control you have over derived classes, this could be important for robustness.\n",
"\nAmendment: Sorry, I forgot to mention that the base class is a non-abstract class.\n\nIn this new light, I'm pretty sure that you do not want to force anyone to override Clone() at all. For example, if my derived class does not add any fields it probably does not need its own specialized Clone() method.\n",
"After some pondering I found this solution:\nObject^ BaseClass::Clone()\n{\n if(this->GetType() != BaseClass::typeid)\n {\n throw gcnew System::NotImplementedException(\"The Clone() method is not implemented for \" + this->GetType()->ToString() + \"!\");\n }\n\n BaseClass^ base = gcnew BaseClass();\n ... // Copy the fields here\n return base;\n}\n\nIt throws NotImplementedException if you attempt to Clone an instance of a derived class that hasn't overridden the Clone() method of the base class.\n",
"read this by herb sutter. It's exactly what you are asking\n"
] |
[
2,
2,
1,
1,
1,
0,
0,
0
] |
[
"Thomas is correct but one way you would make that class abstract is to define a pure virtual method.\nThis is done by saying:\nvirtual void Clone() = 0;\nUnless the derived class implements Clone they won't be able to instantiate it so they'll have little choice if they want their class to be useful.\n"
] |
[
-1
] |
[
".net",
"c++_cli"
] |
stackoverflow_0000079244_.net_c++_cli.txt
|
Q:
Including files case-sensitively on Windows from PHP
We have an issue using the PEAR libraries on Windows from PHP.
Pear contains many classes, we are making use of a fair few, one of which is the Mail class found in Mail.php. We use PEAR on the path, rather than providing the full explicit path to individual PEAR files:
require_once('Mail.php');
Rather than:
require_once('/path/to/pear/Mail.php');
This causes issues in the administration module of the site, where there is a mail.php file (used to send mails to users). If we are in an administrative screen that sends an email (such as the user administration screen that can generate and email new random passwords to users when they are approved from the moderation queue) and we attempt to include Mail.php we "accidentally" include mail.php.
Without changing to prepend the full path to the PEAR install explicitly requiring the PEAR modules (non-standard, typically you install PEAR to your path...) is there a way to enforce PHP on Windows to require files case-sensitively?
We are adding the PEAR path to the include path ourselves, so have control over the path order. We also recognize that we should avoid using filenames that clash with PEAR names regardless of case, and in the future will do so. This page however (which is not an include file, but a controller), has been in the repository for some years, and plugins specifically generate URLS to provide links/redirects to this page in their processing.
(We support Apache, Microsoft IIS, LightHTTPD and Zeus, using PHP 4.3 or later (including PHP5))
A:
As it's an OS level thing, I don't believe there's an easy way of doing this.
You could try changing your include from include('Mail.php'); to include('./Mail.php');, but I'm not certain if that'll work on a Windows box (not having one with PHP to test on).
A:
Having two files with the same name in the include path is not a good idea; rename your files so the ones you wrote don't clash with third-party libraries. Anyway, for your current situation, I think you can fix this by changing the order of the paths in your include path.
PHP searches the include paths one by one and stops as soon as it finds the required file. So in the administration section of your application, if you want to include the PEAR Mail file instead of the mail.php that you wrote, change your include path so the PEAR path comes before the current directory.
do something like this:
<?php
$path_to_pear = '/usr/share/php/pear';
set_include_path( $path_to_pear . PATH_SEPARATOR . get_include_path() );
?>
A:
If you are using PHP 4, you can take advantage of this bug. Of course, that is a messy solution...
Or you could just rename your mail.php file to something else...
A:
I'm fairly certain this problem is caused by the NTFS code in the Win32 subsystem. If you use an Ext2 Installable File System (IFS), you should get case sensitivity on that drive.
|
Including files case-sensitively on Windows from PHP
|
We have an issue using the PEAR libraries on Windows from PHP.
Pear contains many classes, we are making use of a fair few, one of which is the Mail class found in Mail.php. We use PEAR on the path, rather than providing the full explicit path to individual PEAR files:
require_once('Mail.php');
Rather than:
require_once('/path/to/pear/Mail.php');
This causes issues in the administration module of the site, where there is a mail.php file (used to send mails to users). If we are in an administrative screen that sends an email (such as the user administration screen that can generate and email new random passwords to users when they are approved from the moderation queue) and we attempt to include Mail.php we "accidentally" include mail.php.
Without changing to prepend the full path to the PEAR install explicitly requiring the PEAR modules (non-standard, typically you install PEAR to your path...) is there a way to enforce PHP on Windows to require files case-sensitively?
We are adding the PEAR path to the include path ourselves, so have control over the path order. We also recognize that we should avoid using filenames that clash with PEAR names regardless of case, and in the future will do so. This page however (which is not an include file, but a controller), has been in the repository for some years, and plugins specifically generate URLS to provide links/redirects to this page in their processing.
(We support Apache, Microsoft IIS, LightHTTPD and Zeus, using PHP 4.3 or later (including PHP5))
|
[
"As it's an OS level thing, I don't believe there's an easy way of doing this.\nYou could try changing your include from include('Mail.php'); to include('./Mail.php');, but I'm not certain if that'll work on a Windows box (not having one with PHP to test on).\n",
"having 2 files with the same name in the include path is not a good idea, rename your files so the files that you wrote have different names from third party libraries. anyway for your current situation I think by changing the order of paths in your include path, you can fix this.\nPHP searches for the files in the include paths, one by one. when the required file is found in the include path, PHP will stop searching for the file. so in the administration section of your application, if you want to include the PEAR Mail file, instead of the mail.php that you wrote, change your include path so the PEAR path is before the current directory.\ndo something like this:\n<?php\n $path_to_pear = '/usr/share/php/pear';\n set_include_path( $path_to_pear . PATH_SEPARATOR . get_include_path() );\n?>\n\n",
"If you are using PHP 4, you can take advantage of this bug. Off course that is a messy solution...\nOr you could just rename your mail.php file to something else...\n",
"I'm fairly certain this problem is caused by the NTFS code in the Win32 subsystem. If you use an Ext2 Installable File System (IFS), you should get case sensitivity on that drive.\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"apache",
"pear",
"php",
"windows"
] |
stackoverflow_0000063599_apache_pear_php_windows.txt
|
Q:
is Microsoft sort.exe able to sort unicode UTF-16 (LE) files?
Is Microsoft sort.exe 5.1.2600.0 (xpclient.010817-1148) able to sort UTF-16 (LE) files?
A:
sort.exe has a number of limitations that can make it somewhat difficult to use. For example, although sort.exe appears to read UTF-16 (LE) files okay, it appears to output files using the current locale settings.
|
is Microsoft sort.exe able to sort unicode UTF-16 (LE) files?
|
Is Microsoft sort.exe 5.1.2600.0 (xpclient.010817-1148) able to sort UTF-16 (LE) files?
|
[
"sort.exe has a number of limitations that can make it somewhat difficult to use. For example, although sort.exe appears to read UTF-16 (LE) files okay, it appears to output files using the current locale settings.\n"
] |
[
1
] |
[] |
[] |
[
"command_line",
"shell",
"sorting"
] |
stackoverflow_0000081617_command_line_shell_sorting.txt
|
Q:
The most efficient way to move psql databases
What is the most efficient, secure way to pipe the contents of a postgresSQL database into a compressed tarfile, then copy to another machine?
This would be used for localhosting development, or backing up to a remote server, using *nix based machines at both ends.
A:
This page has a complete backup script for a webserver, including the pg_dump output.
Here is the syntax it uses:
NOW=$(date +"%d-%m-%Y")     # assumed; the original excerpt uses $NOW without defining it
GZIP="/bin/gzip"            # likewise assumed for the $GZIP in the pipeline below
BACKUP="/backup/$NOW"
PFILE="$(hostname).$(date +'%T').pg.sql.gz"
PGSQLUSER="vivek"
PGDUMP="/usr/bin/pg_dump"
$PGDUMP -x -D -U${PGSQLUSER} | $GZIP -c > ${BACKUP}/${PFILE}
After you have gzipped it, you can transfer it to the other server with scp, rsync or nfs depending on your network and services.
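The transfer-and-restore leg might look like this (the hostname, user, and database name below are placeholders):
# Copy the compressed dump to the other machine
scp ${BACKUP}/${PFILE} deploy@otherhost:/tmp/

# On the other machine, recreate the database from the dump
gunzip -c /tmp/${PFILE} | psql -U vivek mydb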
A:
pg_dump is indeed the proper solution. Be sure to read the man page. In Espo's example, some options are questionable (-x and -D) and may not suit you.
As with every other database manipulation, test a lot!
|
The most efficient way to move psql databases
|
What is the most efficient, secure way to pipe the contents of a postgresSQL database into a compressed tarfile, then copy to another machine?
This would be used for localhosting development, or backing up to a remote server, using *nix based machines at both ends.
|
[
"This page has a complete backup script for a webserver, including the pg_dump output.\nHere is the syntax it uses:\nBACKUP=\"/backup/$NOW\"\nPFILE=\"$(hostname).$(date +'%T').pg.sql.gz\"\nPGSQLUSER=\"vivek\"\nPGDUMP=\"/usr/bin/pg_dump\"\n\n$PGDUMP -x -D -U${PGSQLUSER} | $GZIP -c > ${BACKUP}/${PFILE}\n\nAfter you have gzipped it, you can transfer it to the other server with scp, rsync or nfs depending on your network and services.\n",
"pg_dump is indeed the proper solution. Be sure to read the man page. In Espo's example, some options are questionable (-x and -D) and may not suit you.\nAs with every other database manipulation, test a lot!\n"
] |
[
1,
0
] |
[] |
[] |
[
"pg_dump",
"sql"
] |
stackoverflow_0000081657_pg_dump_sql.txt
|
Q:
How can I capture the stdin and stdout of system command from a Perl script?
In the middle of a Perl script, there is a system command I want to execute. I have a string that contains the data that needs to be fed into stdin (the command only accepts input from stdin), and I need to capture the output written to stdout. I've looked at the various methods of executing system commands in Perl, and the open function seems to be what I need, except that it looks like I can only capture stdin or stdout, not both.
At the moment, it seems like my best solution is to use open, redirect stdout into a temporary file, and read from the file after the command finishes. Is there a better solution?
A:
IPC::Open2/3 are fine, but I've found that usually all I really need is IPC::Run3, which handles the simple cases really well with minimal complexity:
use IPC::Run3; # Exports run3() by default
run3( \@cmd, \$in, \$out, \$err );
The documentation compares IPC::Run3 to other alternatives. It's worth a read even if you don't decide to use it.
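A concrete invocation, with sort standing in for your command:
use IPC::Run3;

my @cmd = ('sort');      # any filter that reads stdin and writes stdout
my $in  = "b\na\nc\n";   # the string to feed to the command's stdin
my ($out, $err);

run3(\@cmd, \$in, \$out, \$err);
print $out;              # prints "a\nb\nc\n"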
A:
IPC::Open3 would probably do what you want. It can capture STDERR and STDOUT.
http://metacpan.org/pod/IPC::Open3
A:
Somewhere at the top of your script, include the line
use IPC::Open2;
That will include the necessary module, usually installed with most Perl distributions by default. (If you don't have it, you could install it using CPAN.) Then, instead of open, call:
$pid = open2($cmd_out, $cmd_in, 'some cmd and args');
You can send data to your command by sending it to $cmd_in and then read your command's output by reading from $cmd_out.
If you also want to be able to read the command's stderr stream, you can use the IPC::Open3 module instead.
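Putting it together (again with sort standing in for the real command):
use IPC::Open2;

my $pid = open2(my $cmd_out, my $cmd_in, 'sort');
print $cmd_in "pear\napple\n";
close $cmd_in;                 # send EOF so the command can finish
my @sorted = <$cmd_out>;
waitpid($pid, 0);              # reap the child to avoid zombies

Be aware that pushing large amounts of data through both pipes can deadlock if the command fills its output pipe before you start reading; for big payloads a temp file or IPC::Run3 is safer.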
A:
The perlipc documentation covers many ways that you can do this, including IPC::Open2 and IPC::Open3.
A:
A very easy way to do this that I recently found is the IPC::Filter module. It lets you do the job extremely intuitively:
$output = filter $input, 'somecmd', '--with', 'various=args', '--etc';
Note how it invokes your command without going through the shell if you pass it a list. It also does a reasonable job of handling errors for common utilities. (On failure, it dies, using the text from STDERR as its error message; on success, STDERR is just discarded.)
Of course, it’s not suitable for huge amounts of data since it provides no way of doing any streaming processing; also, the error handling might not be granular enough for your needs. But it makes the many simple cases really really simple.
A:
I think you want to take a look at IPC::Open2
A:
There is a special Perl function for this:
open2()
More info can be found on: http://sunsite.ualberta.ca/Documentation/Misc/perl-5.6.1/lib/IPC/Open2.html
A:
I always do it this way if I'm only expecting a single line of output or want to split the result on something other than a newline:
my $result = qx( command args 2>&1 );
my $rc = $?;
# $rc >> 8 is the exit code of the called program.
if ($rc != 0) {
    error();
}
If you want to deal with a multi-line response, get the result as an array:
my @lines = qx( command args 2>&1 );
foreach my $line (@lines) {
if ( $line =~ /some pattern/ ) {
do_something();
}
}
A:
If you do not want to include extra packages, you can just do
tmpfile");">
open(TMP, ">tmpfile");
print TMP $tmpdata;
close(TMP);                        # flush the data before the command reads it
open(RES, "$yourcommand < tmpfile |");   # feed the temp file to the command's stdin
$res = "";
while (<RES>) {
    $res .= $_;
}
close(RES);
which is the reverse of what you suggested (a temporary file for stdin rather than stdout), but it should work as well.
|
How can I capture the stdin and stdout of system command from a Perl script?
|
In the middle of a Perl script, there is a system command I want to execute. I have a string that contains the data that needs to be fed into stdin (the command only accepts input from stdin), and I need to capture the output written to stdout. I've looked at the various methods of executing system commands in Perl, and the open function seems to be what I need, except that it looks like I can only capture stdin or stdout, not both.
At the moment, it seems like my best solution is to use open, redirect stdout into a temporary file, and read from the file after the command finishes. Is there a better solution?
|
[
"IPC::Open2/3 are fine, but I've found that usually all I really need is IPC::Run3, which handles the simple cases really well with minimal complexity:\nuse IPC::Run3; # Exports run3() by default\n\nrun3( \\@cmd, \\$in, \\$out, \\$err );\n\nThe documentation compares IPC::Run3 to other alternatives. It's worth a read even if you don't decide to use it.\n",
"IPC::Open3 would probably do what you want. It can capture STDERR and STDOUT.\nhttp://metacpan.org/pod/IPC::Open3\n",
"Somewhere at the top of your script, include the line\nuse IPC::Open2;\n\nThat will include the necessary module, usually installed with most Perl distributions by default. (If you don't have it, you could install it using CPAN.) Then, instead of open, call:\n$pid = open2($cmd_out, $cmd_in, 'some cmd and args');\n\nYou can send data to your command by sending it to $cmd_in and then read your command's output by reading from $cmd_out.\nIf you also want to be able to read the command's stderr stream, you can use the IPC::Open3 module instead.\n",
"The perlipc documentation covers many ways that you can do this, including IPC::Open2 and IPC::Open3.\n",
"A very easy way to do this that I recently found is the IPC::Filter module. It lets you do the job extremely intuitively:\n$output = filter $input, 'somecmd', '--with', 'various=args', '--etc';\n\nNote how it invokes your command without going through the shell if you pass it a list. It also does a reasonable job of handling errors for common utilities. (On failure, it dies, using the text from STDERR as its error message; on success, STDERR is just discarded.)\nOf course, it’s not suitable for huge amounts of data since it provides no way of doing any streaming processing; also, the error handling might not be granular enough for your needs. But it makes the many simple cases really really simple.\n",
"I think you want to take a look at IPC::Open2\n",
"There is a special perl command for it\nopen2()\n\nMore info can be found on: http://sunsite.ualberta.ca/Documentation/Misc/perl-5.6.1/lib/IPC/Open2.html\n",
"I always do it this way if I'm only expecting a single line of output or want to split the result on something other than a newline:\nmy $result = qx( command args 2>&1 ); \nmy $rc=$?; \n# $rc >> 8 is the exit code of the called program.\n\nif ($rc != 0 ) { \n error(); \n} \n\nIf you want to deal with a multi-line response, get the result as an array:\nmy @lines = qx( command args 2>&1 ); \n\nforeach ( my $line ) (@lines) { \n if ( $line =~ /some pattern/ ) { \n do_something(); \n } \n} \n\n",
"If you do not want to include extra packages, you can just do\nopen(TMP,\">tmpfile\");\nprint TMP $tmpdata ;\nopen(RES,\"$yourcommand|\");\n$res = \"\" ;\nwhile(<RES>){\n$res .= $_ ;\n}\n\nwhich is the contrary of what you suggested, but should work also.\n"
] |
[
6,
3,
3,
3,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"ipc",
"perl"
] |
stackoverflow_0000078091_ipc_perl.txt
|
Q:
Visual Basic 6.0 to VB.NET declaration
How do I declare "as any" in VB.NET, or what is the equivalent?
A:
The closest you can get is:
Dim var as Object
It's not exactly the same as VB6's as Any (which stores values in a Variant) but you can store variables of any type as Object, albeit boxed.
A:
VB.NET does not support the As Any keyword; VB.NET is a strongly typed language. You can, however (with .NET 3.5), use implicit typing in VB.
Dim fred = "Hello World" will implicitly type fred as a String variable. If you simply want to hold a value whose type you do not know at design time, you can declare your variable As Object (the mother of all objects). NOTE: this is usually a red flag for code reviewers, so make sure you have a good reason ready :-)
A:
As Any must be referring to Windows API declarations, as it can't be used in variable declarations. You can use overloading: just repeat the declarations for each different data type you wish to pass. VB.NET picks out the one that matches the argument you pass in your call.
This is better than As Any was in VB6 because the compiler can still do type-checking.
A:
I suppose you have problems with converting WinAPI declarations. Sometimes you can get away with just declaring your variable as String or Integer, because that is the real type of the value returned.
You can also try marshaling:
<MarshalAsAttribute(UnmanagedType.AsAny)> ByRef buff As Object
A:
VB.NET doesn't support the "As Any" keyword. You'll need to explicitly specify the type.
|
Visual Basic 6.0 to VB.NET declaration
|
How do I declare "as any" in VB.NET, or what is the equivalent?
|
[
"The closest you can get is:\nDim var as Object\nIt's not exactly the same as VB6's as Any (which stores values in a Variant) but you can store variables of any type as Object, albeit boxed.\n",
"VB.NET does not support the as any keyword, VB.NET is a strongly typed language, you can however (with .NET 3.5) use implicit typing in VB\nDim fred = \"Hello World\" will implicitly type fred as a string variable. If you want to simply hold a value that you do not know the type of at design time then you can simply declare your variable as object (the mother of all objects) NOTE, this usually is a red flag for code reviewers, so make sure you have a good reason ready :-)\n",
"As Any must be referring to Windows API declarations, as it can't be used in variable declarations. You can use overloading: just repeat the declarations for each different data type you wish to pass. VB.NET picks out the one that matches the argument you pass in your call. \nThis is better than As Any was in VB6 because the compiler can still do type-checking.\n",
"I suppose you have problems with converting WinAPI declarations. Sometimes you can get away if you just declare your variable as string or integer because that is the real type of value returned.\nYou can also try marshaling:\n\n<MarshalAsAttribute(UnmanagedType.AsAny)> ByRef buff As Object\n\n",
"VB.NET doesn't support the \"As Any\" keyword. You'll need to explicitly specify the type.\n"
] |
[
4,
3,
3,
1,
0
] |
[] |
[] |
[
"declaration",
"vb.net",
"vb6",
"vb6_migration"
] |
stackoverflow_0000070197_declaration_vb.net_vb6_vb6_migration.txt
|
Q:
Build setup project with NAnt
I've already got a NAnt build script that builds, runs tests, zips the web project together, etc., but I'm now working on a basic desktop application. How would I go about building the setup project using NAnt so I can include it with the build report on TeamCity?
Edit: The setup is the basic Setup Project supplied with Visual Studio. It's for internal to a company so it doesn't do anything fancy.
A:
The only way to build a Visual Studio setup project is through Visual Studio. You will need to have a copy of VS installed on the build machine and run it as a command line tool (exec devenv.exe) with the appropriate parameters (which should be the build mode (release or debug) and the project name to build, there might be a few others but you can run devenv /? to get a list of the different command line options).
A:
It's been a few years, but the last time I had to do this, I used a tool called Wix, which had utilities named Candle and Light. I used these tools in my NAnt script to create an MSI Installer.
A:
Instead of trying to build using MSBUILD (assumption), build the solution or project using DEVENV.EXE. The command line is something along the lines of:
DEVENV MySolutionFile.sln /build DEBUG /project SetupProject.vdproj
You can change the DEBUG to RELEASE or any other build configuration you've set up. You can also leave out the /project... part to build the whole solution.
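From NAnt this is just an <exec> task; the devenv path below is an assumption, so adjust it to your Visual Studio install:
<exec program="C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\devenv.exe">
    <arg value="MySolutionFile.sln" />
    <arg value="/build" />
    <arg value="RELEASE" />
    <arg value="/project" />
    <arg value="SetupProject.vdproj" />
</exec>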
|
Build setup project with NAnt
|
I've already got a NAnt build script that builds/runs tests/zips web project together, etc. but I'm working on a basic desktop application. How would I go about building the setup project using NAnt so I can include it with the build report on TeamCity.
Edit: The setup is the basic Setup Project supplied with Visual Studio. It's for internal to a company so it doesn't do anything fancy.
|
[
"The only way to build a Visual Studio setup project is through Visual Studio. You will need to have a copy of VS installed on the build machine and run it as a command line tool (exec devenv.exe) with the appropriate parameters (which should be the build mode (release or debug) and the project name to build, there might be a few others but you can run devenv /? to get a list of the different command line options).\n",
"It's been a few years, but the last time I had to do this, I used a tool called Wix, which had utilities named Candle and Light. I used these tools in my NAnt script to create an MSI Installer.\n",
"Instead of trying to build using MSBUILD (assumption), build the solution or project using DEVENV.EXE. The command line is something along the lines of:\nDEVENV MySolutionFile.sln /build DEBUG /project SetupProject.vdproj\nYou can change the DEBUG to RELEASE or any other build configuration you've set up. You can also leave out the /project... part to build the whole solution.\n"
] |
[
4,
2,
0
] |
[] |
[] |
[
".net",
"build",
"continuous_integration",
"deployment",
"nant"
] |
stackoverflow_0000082169_.net_build_continuous_integration_deployment_nant.txt
|
Q:
What tools can be used to find which DLLs are referenced?
This is an antique problem with VB6 DLL and COM objects but I still face it day to day. What tools or procedures can be used to see which DLL file or version another DLL is referencing?
I am referring to compiled DLLs at runtime, not from within VB6 IDE.
It's DLL hell.
A:
Dependency Walker shows you all the files that a DLL links to (or is trying to link to) and it's free.
A:
ProcessExplorer shows you all the DLLs that are currently loaded in a process at a particular moment. This gives you another angle on Dependency Walker which I believe does a static scan and can miss some DLLs that are dynamically loaded on demand. Raymond says that's unavoidable.
|
What tools can be used to find which DLLs are referenced?
|
This is an antique problem with VB6 DLL and COM objects but I still face it day to day. What tools or procedures can be used to see which DLL file or version another DLL is referencing?
I am referring to compiled DLLs at runtime, not from within VB6 IDE.
It's DLL hell.
|
[
"Dependency Walker shows you all the files that a DLL links to (or is trying to link to) and it's free.\n",
"ProcessExplorer shows you all the DLLs that are currently loaded in a process at a particular moment. This gives you another angle on Dependency Walker which I believe does a static scan and can miss some DLLs that are dynamically loaded on demand. Raymond says that's unavoidable.\n"
] |
[
7,
6
] |
[] |
[] |
[
"dll",
"reference",
"vb6"
] |
stackoverflow_0000069538_dll_reference_vb6.txt
|
Q:
Force IIS7 to suggest downloading *.exe files in "Local intranet" zone
Problem:
html file on local server (inside our organization) with link to an exe on the same server.
Clicking the link runs the exe on the client; instead, I want it to offer to download it.
Tried so far:
Changed permissions on the exe's virtual directory to be read and script.
Added Content-disposition header on the exe's directory.
I can't change settings in the browser. It's intended for a lot of people to consume.
A:
You need to set content-disposition in the HTTP header.
This Microsoft Knowledge Base entry has more detail on how to do this.
A:
Runs them where: on the server, or on the client? If on the server: set the handler mappings of the file so that CGI-exe is disabled. If on the client: then this is a web browser issue - it shouldn't be running EXEs directly! What browser is it?
As Dave Webb mentions, you could use the Content-Disposition HTTP header: these can be added using HTTP Response Headers in IIS7 for that directory/file.
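One declarative way to do that is a web.config dropped into the directory that serves the EXEs; a minimal sketch:
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Content-Disposition" value="attachment" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>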
|
Force IIS7 to suggest downloading *.exe files in "Local intranet" zone
|
Problem:
html file on local server (inside our organization) with link to an exe on the same server.
clicking the link runs the exe on the client. Instead I want it to offer downloading it.
Tried so far:
Changed permissions on the exe's virtual directory to be read and script.
Added Content-disposition header on the exe's directory.
I can't change settings in the browser. It's intended for a lot of people to consume.
|
[
"You need to set content-disposition in the HTTP header.\nThis Microsoft Knowledge Base entry has more detail on how to do this.\n",
"Runs them where: on the server, or on the client? If on the server: set the handler mappings of the file so that CGI-exe is disabled. If on the client: then this is a web browser issue - it shouldn't be running EXEs directly! What browser is it? \nAs Dave Webb mentions, you could use the Content-Disposition HTTP header: these can be added using HTTP Response Headers in IIS7 for that directory/file.\n"
] |
[
2,
0
] |
[
"Whether a file is downloaded or opened automatically is a browser, not a server, side setting.\nThe other way of doing it would be to change the MIME type for the file to something like application/octet-stream or similar to try and force your browser to download it.\n"
] |
[
-2
] |
[
"download",
"executable",
"iis_7"
] |
stackoverflow_0000082232_download_executable_iis_7.txt
|
Q:
How do I use calculated value date in Sharepoint lists field to find a date+30 days?
I have a list I've built in Sharepoint, where one of the fields is a date that the user enters.
I want to add another field, which is a calculated value field that needs to be the date provided by the user + 30 days.
What formula do I need to pass to the calculated value field to achieve that?
A:
Try this:
Create a new Calculated column
In the Formula box, enter something like this:
=TEXT([existing date column]+30,"yyyy-mm-dd")
You can use any date format string you like instead of "yyyy-mm-dd"
Make the data type "Date and Time"
Make the date and time format "Date Only"
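Alternatively, since the column's return type is already "Date and Time", plain date arithmetic should work too and avoids the text round-trip:
=[existing date column]+30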
|
How do I use calculated value date in Sharepoint lists field to find a date+30 days?
|
I have a list I've built in Sharepoint, where one of the fields is a date that the user enters.
I want to add another field, which is a calculated value field that needs to be the date provided by the user + 30 days.
What formula do I need to pass to the calculated value field to achieve that?
|
[
"Try this:\n\nCreate a new Calculated column\nIn the Forumla box, enter something like this:\n =TEXT([existing date column]+30,\"yyyy-mm-dd\")\nYou can use any date format string you like instead of \"yyyy-mm-dd\"\nMake the data type \"Date and Time\"\nMake the date and time format \"Date Only\"\n\n"
] |
[
10
] |
[] |
[] |
[
"sharepoint"
] |
stackoverflow_0000082220_sharepoint.txt
|
Q:
What is a good markup language to use for tests?
I'm writing a tool to run a series of integration tests on my product. It will install the product and then run a bunch of commands against it to make sure it's doing what it is supposed to. I'm exploring different options for how to mark up the commands for each test case and wondering if folks have insight to share on this. I'm thinking of using YAML and doing something like this (kinda adapted from Rails fixtures):
case:
name: caseN
description: this tests foo to make sure bar happens
expected_results: bar should happen
commands: |
command to run
next command to run
verification: command to see if it worked
Does anyone have another, or better idea? Or is there a domain specific language I'm unaware of?
A:
Go and have a look at the XUnit suite of test tools. This framework was originally designed for Smalltalk by Kent Beck and, I think, Erich Gamma, and it has now been ported to a whole stack of other languages, e.g. CUnit
A:
You might want to check out CPAN. It does for Perl scripts exactly what it sounds like your utility will do for your app.
A:
Did you take a look at RSpec?
|
What is a good markup language to use for tests?
|
I'm writing a tool to run a series of integration tests on my product. It will install it and then run a bunch of commands against it to make sure its doing what it is supposed to. I'm exploring different options for how to markup the commands for each test case and wondering if folks had insight to share on this. I'm thinking of using YAML and doing something like this (kinda adapted from rails fixtures):
case:
name: caseN
description: this tests foo to make sure bar happens
expected_results: bar should happen
commands: |
command to run
next command to run
verification: command to see if it worked
Does anyone have another, or better idea? Or is there a domain specific language I'm unaware of?
|
[
"Go and have a look at the XUnit suite of test tools. This framework was originally designed for Smalltalk by Kent Beck and, I think, Erich Gamma, and it has now been ported to a whole stack of other languages, e.g. CUnit\n",
"You might want to check out CPAN. It does for Perl scripts exactly what it sounds like your utility will do for your app.\n",
"Did you take a look at RSpec?\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"integration_testing",
"testing"
] |
stackoverflow_0000041839_integration_testing_testing.txt
|
Q:
Creating a DNN module that uses an end-user modifiable template
I'd like to create a module in DNN that, similar to the Announcements control, offers a template that the portal admin can modify for formatting. I have a control that currently uses a Repeater control with templates. Is there a way to override the contents of the repeater ItemTemplate, HeaderTemplate, and FooterTemplate properties?
A:
There are many different ways that you can accomplish this; typically the best/easiest manner is to simply put a literal control in for the Header, Footer, and Item templates. Then handle the ItemDataBound event: you can look at the item type there and take a specific action on it to load the needed data.
If you want to see some implementations of this model, you can download the code for my Expandable Text/HTML module, as well as my Guestbook module, both available for free, without login, at http://www.iowacomputergurus.com
A:
You can see examples of templating in the default Starter Kit module, the FAQ module, the Repository module, and UDT. All of these have varying levels of control for templating.
|
Creating a DNN Module that uses a end-user modifyable template
|
I'd like to create a module in DNN that, similar to the Announcements control, offers a template that the portal admin can modify for formatting. I have a control that currently uses a Repeater control with templates. Is there a way to override the contents of the repeater ItemTemplate, HeaderTemplate, and FooterTemplate properties?
|
[
"That are many different ways that you can accomplish this, typically the best/easiest manner is to simply put a literal control in for Header, Footer, and Item templates. Then handle the ItemDataBound event, you can look at the item type and take a specific action on it there to load the needed data.\nIf you want to see some implementations of this model, you can download the code for my Expandable Text/HTML module, as well as my Guesbook Module both available for free, without login at http://www.iowacomputergurus.com\n",
"You can see examples of templating in the default Starertkit module, the FAQ module, repository module and UDT. All of these have varying levels of control for templating.\n"
] |
[
1,
1
] |
[] |
[] |
[
"asp.net",
"dotnetnuke",
"module"
] |
stackoverflow_0000076464_asp.net_dotnetnuke_module.txt
|
Q:
PHP Deployment to windows/unix servers
We have various php projects developed on windows (xampp) that need to be deployed to a mix of linux/windows servers.
We've used Capistrano in the past to deploy from Windows to the Linux servers, but recent changes in architecture and Windows servers left the old config not working. The recipe works fine for the Linux deployment, but setting up the Windows servers has required more time than we have right now. Ideas for the Capistrano recipe are valid answers. Obviously the Windows/Linux servers don't share users, so this complicates things a tad (given the Capistrano assumption of the same username/password everywhere).
Currently we're using svn update for the Windows servers, which I dislike, since it leaves all the .svn files lying around on the production servers (and we still have to run it manually on Windows), plus manual updating of files using WinSCP and syncing the directories with their Linux counterparts.
My question is, what tools/setup do you suggest to automate this deployment scenario:
"Various php windows/linux developers deploying to 2+ mixed windows/linux machines"
(ps: we have no problems using linux tools or anything working through cygwin, we simply need to make deployment a simple one-step operation)
Edit: Currently we can't work in an all-Linux environment; we have to deploy to both Linux and Windows servers. We can start the deploy from anywhere, but we'd prefer to be able to do it from either environment.
A:
I use 4 different approaches depending on the client environment:
Capistrano and similar tools (effective, but complex)
rsync from + to Windows, Linux, Mac (simple, doesn't enforce discipline)
svn from + to Windows, Linux, Mac (simple, doesn't enforce discipline)
On-server scripts (run through the browser, complex)
There are some requirements that drive what you need:
How much discipline you want to enforce
If you need database (or configuration) migrations (up and/or down)
If you want a static "we're down" page
Who can do the update
Configuration differences between servers
I strongly suggest enforcing enough discipline to save you from yourself: deploy to a development server, allow for upward migrations and simple database restore, and limit who can update the live server to a small number of responsible admins (where the dev server is open to more developers). Also consider pushing via a cron job (to the development server), so there's a daily snapshot of your incremental changes.
Most of the time, I find that either svn or rsync setups are enough, with a few server-side scripts, especially when the admin set is limited to a few developers.
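For the rsync flavour, the whole deployment can be a one-step push (the flags and paths here are just typical choices):
rsync -avz --delete --exclude='.svn' ./ deploy@prodhost:/var/www/app/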
A:
This will probably sound silly but... I used to have this kind of problem all the time until I decided in the end that if I'm always deploying on Linux, I ought really to at least try developing on Linux also. I did. It was pain free. I never went back.
Now, I am not suggesting this is for everyone. But if you install VirtualBox you can run a Linux install as a local server on your Windows box. Share a folder in the virtual machine and you can use all your known and trusted Windows software and techniques, with the peace of mind of knowing that everything is working well on its target platform.
Plus you'll be able to go back to Capistrano (a fine choice) for deployment.
Best of all, if you thought you knew Linux / Unix wait until you use it everyday on your desktop! Who knows you may even like it :)
A:
Capistrano is the nicest deployment tool I've seen. Do the architecture changes make it impossible to fix the configs so it works again?
A:
Why can't you use Capistrano anymore?
Why do you dislike svn update?
What things in your app require a special deployment?
A:
You can set the svn:ignore property on configuration files so that svn update doesn't erase them, and then use svn export /target/path/ to get a copy of your Subversion repository without the .svn files.
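For instance (the config filename and repository URL are placeholders):
# Tell svn update to leave the unversioned local config alone
svn propset svn:ignore "config.php" .

# Deploy a clean tree with no .svn directories
svn export http://svn.example.com/repo/trunk /var/www/site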
|
PHP Deployment to windows/unix servers
|
We have various php projects developed on windows (xampp) that need to be deployed to a mix of linux/windows servers.
We've used capistrano in the past to deploy from windows to the linux servers, but recent changes in architecture and windows servers left the old config not working. The recipe works fine for the linux deployment, but setting up the windows servers has required more time than we have right now. Ideas for the Capistrano recipe are valid answers. obviously the windows/linux servers don't share users, so this complicates it a tad (for the capistrano assumption of same username/password everywhere).
Currently we're using svn-update for the windows servers, which i dislike, since it leaves all the svn files hanging on the production servers. (and we still have to manually svn-update them on windows) And manual updating of files using winscp and syncing the directories with their linux counterparts.
My question is, what tools/setup do you suggest to automatize this deployment scenario:
"Various php windows/linux developers deploying to 2+ mixed windows/linux machines"
(ps: we have no problems using linux tools or anything working through cygwin, we simply need to make deployment a simple one-step operation)
edit: Currently we can't work on a all-linux enviroment, we have to deploy to both linux and windows server. We can start the deploy from anywhere, but we'd prefer to be able to do it from either enviroment.
|
[
"I use 4 different approaches depending on the client environment:\n\nCapistrano and similar tools (effective, but complex)\nrsync from + to Windows, Linux, Mac (simple, doesn't enforce discipline)\nsvn from + to Windows, Linux, Mac (simple, doesn't enforce discipline)\nOn-server scripts (run through the browser, complex)\n\nThere are some requirements that drive what you need:\n\nHow much discipline you want to enforce\nIf you need database (or configuration) migrations (up and/or down)\nIf you want a static \"we're down\" page\nWho can do the update\nConfiguration differences between servers\n\nI strongly suggest enforcing enough discipline to save you from yourself: deploy to a development server, allow for upward migrations and simple database restore, and limit who can update the live server to a small number of responsible admins (where the dev server is open to more developers). Also consider pushing via a cron job (to the development server), so there's a daily snapshot of your incremental changes.\nMost of the time, I find that either svn or rsync setups are enough, with a few server-side scripts, especially when the admin set is limited to a few developers.\n",
"This will probably sound silly but... I used to have this kind of problem all the time until I decided in the end that if I'm always deploying on Linux, I ought really to at least try developing on Linux also. I did. It was pain free. I never went back. \nNow. I am not suggesting this is for everyone. But, if you install VirtualBox you could run a Linux install as a local server on your windows box. Share a folder in the virtual machine and you can use all your known and trusted Windows software and techniques and have the piece of mind of knowing that everything is working well on its target platform. \nPlus you'll be able to go back to Capistrano (a fine choice) for deployment. \nBest of all, if you thought you knew Linux / Unix wait until you use it everyday on your desktop! Who knows you may even like it :)\n",
"Capistrano is the nicest deployment tool I've seen. Do the architecture changes make it impossible to fix the configs so it works again? \n",
"Why you can't use capistrano anymore?\nWhy you dislike svn-update?\nWhat things in your app requires an special deployment ?\n",
"You can setup svn:ignore property on configuration files, so that svn update doesn't erase them, and then use svn export /target/path/ to get rid of .svn files in your Subversion repository.\n"
] |
[
4,
1,
0,
0,
0
] |
[] |
[] |
[
"automation",
"deployment",
"linux",
"php",
"windows"
] |
stackoverflow_0000077128_automation_deployment_linux_php_windows.txt
|
Q:
How to use a mutex in Visual Basic
I have imported the kernel32 library, so I have the CreateMutex function available, but I am not quite sure of the various parameters and return values.
This is classic Visual Basic, not Visual Basic.NET but I can probably work with either language in the form of an answer.
A:
Here's the VB6 declarations for CreateMutex - I just copied them from the API viewer, which you should have as part of your VB6 installation. VB6 marshalls strings to null-terminated ANSI using the current code page.
Public Type SECURITY_ATTRIBUTES
nLength As Long
lpSecurityDescriptor As Long
bInheritHandle As Long
End Type
Public Declare Function CreateMutex Lib "kernel32" Alias "CreateMutexA" _
(lpMutexAttributes As SECURITY_ATTRIBUTES, ByVal bInitialOwner As Long, _
ByVal lpName As String) As Long
Bear in mind that if you create a mutex from the VB6 IDE, the mutex belongs to the IDE and won't be destroyed when you stop running your program - only when you close the IDE.
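As a usage sketch, here is a common single-instance check built on those declarations (the mutex name is arbitrary, and 183 is ERROR_ALREADY_EXISTS):
Private Declare Function CloseHandle Lib "kernel32" (ByVal hObject As Long) As Long

Private Sub Form_Load()
    Dim sa As SECURITY_ATTRIBUTES
    Dim hMutex As Long
    sa.nLength = Len(sa)            ' default security, not inheritable
    hMutex = CreateMutex(sa, 1, "MyApp.SingleInstance")
    If Err.LastDllError = 183 Then  ' ERROR_ALREADY_EXISTS
        MsgBox "Another instance is already running."
        CloseHandle hMutex
        Unload Me
    End If
End Sub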
A:
The VB code looks something like this:
hMutex = CreateMutex(ByVal 0&, 1, ByVal 0&)
The first parameter is a pointer to an SECURITY_ATTRIBUTES structure. If you don't know what it is, you don't need it. Pass NULL (0).
The second parameter is TRUE (non-zero, or 1) if the calling thread should take ownership of the mutex. FALSE otherwise.
The third parameter is the mutex name and may be NULL (0), as shown. If you need a named mutex, pass the name (anything unique) in. I'm not sure whether the VB wrapper marshals the length-prefixed VB string type (BSTR) over to a null-terminated ANSI/Unicode string; if not, you'll need to do that yourself - numerous examples are out there.
Good luck!
A:
Well, based on the documentation it looks like:
Security attributes (can pass null)
Whether it's initially owned (can pass false)
The name of it
HTH
|
How to use a mutex in Visual Basic
|
I have imported the kernel32 library. So, I have the createMutex function available but I am not quite sure of the various parameters and return values.
This is classic Visual Basic, not Visual Basic.NET but I can probably work with either language in the form of an answer.
|
[
"Here's the VB6 declarations for CreateMutex - I just copied them from the API viewer, which you should have as part of your VB6 installation. VB6 marshalls strings to null-terminated ANSI using the current code page.\nPublic Type SECURITY_ATTRIBUTES\n nLength As Long\n lpSecurityDescriptor As Long\n bInheritHandle As Long \nEnd Type\n\nPublic Declare Function CreateMutex Lib \"kernel32\" Alias \"CreateMutexA\" _\n (lpMutexAttributes As SECURITY_ATTRIBUTES, ByVal bInitialOwner As Long, _\n ByVal lpName As String) As Long\n\nBear in mind that if you create a mutex from the VB6 IDE, the mutex belongs to the IDE and won't be destroyed when you stop running your program - only when you close the IDE.\n",
"The VB code looks something like this:\nhMutex = CreateMutex(ByVal 0&, 1, ByVal 0&)\n\nThe first parameter is a pointer to an SECURITY_ATTRIBUTES structure. If you don't know what it is, you don't need it. Pass NULL (0).\nThe second parameter is TRUE (non-zero, or 1) if the calling thread should take ownership of the mutex. FALSE otherwise.\nThe third parameter is the mutex name and may be NULL (0), as shown. If you need a named mutex, pass the name (anything unique) in. Not sure whether the VB wrapper marshals the length-prefixed VB string type (BSTR) over to a null-terminated Ascii/Unicode string if not, you'll need to do that and numerous examples are out there.\nGood luck!\n",
"Well, based on the documentation it looks like:\n\nSecurity attributes (can pass null)\nWhether it's initially owned (can pass false)\nThe name of it\n\nHTH\n"
] |
[
10,
8,
1
] |
[] |
[] |
[
"mutex",
"vb6"
] |
stackoverflow_0000000947_mutex_vb6.txt
|
Q:
Is it possible to build a Linux/Motif Eclipse RCP application?
I am trying to build an Eclipse application that would work with a linux/motif installation target. However, this seems not to be possible even though the export option is available in the product export wizard.
I've checked the content of the delta pack and indeed, the packages for linux/motif are missing. After checking the downloads page for eclipse 3.4 at:
http://download.eclipse.org/eclipse/downloads/drops/R-3.4-200806172000/index.php
I see that even though there is an Eclipse version marked for Linux/motif, it is marked as Testing only. Additionally, there is no delta pack for this target.
Has anyone been successful building an RCP application targeting linux/motif? Would it work if I download this testing only version of eclipse and copy the missing plugins?
A:
We have a similar issue. We are building Eclipse applications, and one of our platforms is Solaris 10 x86, which was supported for a short time as an early-access build in 3.2 and then dropped. I believe 3.2 and 3.3 supported Motif, so your best bet may be to revert to an older version of Eclipse. I develop in 3.4, and when we do the Solaris-specific release we switch back to 3.2; it is usually about 10 minutes of changes to fix everything for the prior version. Usually it is removing @Override annotations in a few locations and changing a function or two that Eclipse no longer uses.
The other thing you can do is get the Linux/Motif package for Eclipse, and install it on a Linux box running Motif. Check out your project on that Eclipse machine and export it there. I tried out VirtualBox (a free Virtual Machine from Sun Microsystems) it should make this easy for you.
|
Is it possible to build a Linux/Motif Eclipse RCP application?
|
I am trying to build an Eclipse application that would work with a linux/motif installation target. However, this seems not to be possible even though the export option is available in the product export wizard.
I've checked the content of the delta pack and indeed, the packages for linux/motif are missing. After checking the downloads page for eclipse 3.4 at:
http://download.eclipse.org/eclipse/downloads/drops/R-3.4-200806172000/index.php
I see that even though there is an Eclipse version marked for Linux/motif, it is marked as Testing only. Additionally, there is no delta pack for this target.
Has anyone been successful building an RCP application targeting linux/motif? Would it work if I download this testing only version of eclipse and copy the missing plugins?
|
[
"We have a similar issue. We are building Eclipse applications and one of our platforms is Solaris 10 x86 which was supported for a short time as an early access build in 3.2 and dropped. I believe 3.2 and 3.3 supported motif so your best bet may be to revert to an older version of Eclipse. I develop in 3.4 and when we do the Solaris specific release we switch back to 3.2, it is usually about 10 minutes of changes to fix everything for the prior version. Usually it is removing @overides in a few locations and changing a function or two that Eclipse no longer uses.\nThe other thing you can do is get the Linux/Motif package for Eclipse, and install it on a Linux box running Motif. Check out your project on that Eclipse machine and export it there. I tried out VirtualBox (a free Virtual Machine from Sun Microsystems) it should make this easy for you.\n"
] |
[
1
] |
[] |
[] |
[
"eclipse",
"java",
"linux",
"motif",
"rcp"
] |
stackoverflow_0000082305_eclipse_java_linux_motif_rcp.txt
|
Q:
How do I get an element out of an XML file
I get an XML file from a web service. Now I want to get one of its elements out of the file.
I think I should go use XPath - any good starter reference?
A:
I've just been recovering my XPath skills- this Xslt and XPath Quick Reference sheet is quite a useful reference - it doesn't go into depth but it does list what is available and what you might want to search for more information on.
The w3schools tutorial linked previously isn't that great - it takes a long time to not cover a lot of ground - but it is still worth reading.
A:
Not VB specific, but try this: http://www.w3schools.com/xsl/xpath_intro.asp
A:
One way would be to extract only the needed information into a new XML document with an XSLT file, and use that new document as the data basis for further processing.
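As a sketch, an XSLT that pulls out just one element might look like this (the element names and the predicate are made up):
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <result>
      <!-- copy only the element(s) matched by the XPath expression -->
      <xsl:copy-of select="/catalog/item[@id='42']"/>
    </result>
  </xsl:template>
</xsl:stylesheet>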
A:
If I need to do some XPath, I just tweak one of these examples.
child::node() selects all the children of the context node, whatever their node type
attribute::name selects the name attribute of the context node
attribute::* selects all the attributes of the context node
descendant::para selects the para element descendants of the context node
ancestor::div selects all div ancestors of the context node
ancestor-or-self::div selects the div ancestors of the context node and, if the context node is a div element, the context node as well
descendant-or-self::para selects the para element descendants of the context node and, if the context node is a para element, the context node as well
self::para selects the context node if it is a para element, and otherwise selects nothing
child::chapter/descendant::para selects the para element descendants of the chapter element children of the context node
child::*/child::para selects all para grandchildren of the context node
/ selects the document root (which is always the parent of the document element)
/descendant::para selects all the para elements in the same document as the context node
/descendant::olist/child::item selects all the item elements that have an olist parent and that are in the same document as the context node
child::para[position()=1] selects the first para child of the context node
child::para[position()=last()] selects the last para child of the context node
child::para[position()=last()-1] selects the last but one para child of the context node
child::para[position()>1] selects all the para children of the context node other than the first para child of the context node
following-sibling::chapter[position()=1] selects the next chapter sibling of the context node
preceding-sibling::chapter[position()=1] selects the previous chapter sibling of the context node
/descendant::figure[position()=42] selects the forty-second figure element in the document
/child::doc/child::chapter[position()=5]/child::section[position()=2] selects the second section of the fifth chapter of the doc document element
child::para[attribute::type="warning"] selects all para children of the context node that have a type attribute with value warning
child::para[attribute::type='warning'][position()=5] selects the fifth para child of the context node that has a type attribute with value warning
child::para[position()=5][attribute::type="warning"] selects the fifth para child of the context node if that child has a type attribute with value warning
child::chapter[child::title='Introduction'] selects the chapter children of the context node that have one or more title children with string-value equal to Introduction
child::chapter[child::title] selects the chapter children of the context node that have one or more title children
child::*[self::chapter or self::appendix] selects the chapter and appendix children of the context node
child::*[self::chapter or self::appendix][position()=last()] selects the last chapter or appendix child of the context node
In-depth documentation can be found here; these examples are taken from there. A short sketch of running one of them from code follows.
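If you want to evaluate one of these expressions from .NET, a minimal C# sketch (book.xml and the chosen expression are just placeholder assumptions) could look like this:
using System;
using System.Xml;

class XPathExample
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.Load("book.xml"); // hypothetical input file

        // All para children of the document element with type="warning".
        XmlNodeList warnings = doc.SelectNodes("/*/child::para[attribute::type='warning']");
        foreach (XmlNode node in warnings)
            Console.WriteLine(node.InnerText);
    }
}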
|
How do I get an element out of an XML file
|
I get an XML file from a web service. Now I want to get one of the elements out of the file.
I think I should use XPath - any good starter reference?
|
[
"I've just been recovering my XPath skills- this Xslt and XPath Quick Reference sheet is quite a useful reference - it doesn't go into depth but it does list what is available and what you might want to search for more information on.\nThe w3schools tutorial linked previously isn't that great - it takes a long time to not cover a lot of ground - but it is still worth reading.\n",
"Not VB specific, but try this: http://www.w3schools.com/xsl/xpath_intro.asp\n",
"One way would be to only extract the needed informations with an xslt file into a new xml and use this new xml as data basis for further processing\n",
"If I need to do some XPath, I just tweak one of these examples.\n\nchild::node() selects all the children of the context node, whatever their node type\nattribute::name selects the name attribute of the context node\nattribute::* selects all the attributes of the context node\ndescendant::para selects the para element descendants of the context node\nancestor::div selects all div ancestors of the context node\nancestor-or-self::div selects the div ancestors of the context node and, if the context node is a div element, the context node as well\ndescendant-or-self::para selects the para element descendants of the context node and, if the context node is a para element, the context node as well\nself::para selects the context node if it is a para element, and otherwise selects nothing\nchild::chapter/descendant::para selects the para element descendants of the chapter element children of the context node\nchild::*/child::para selects all para grandchildren of the context node\n/ selects the document root (which is always the parent of the document element)\n/descendant::para selects all the para elements in the same document as the context node\n/descendant::olist/child::item selects all the item elements that have an olist parent and that are in the same document as the context node\nchild::para[position()=1] selects the first para child of the context node\nchild::para[position()=last()] selects the last para child of the context node\nchild::para[position()=last()-1] selects the last but one para child of the context node\nchild::para[position()>1] selects all the para children of the context node other than the first para child of the context node\nfollowing-sibling::chapter[position()=1] selects the next chapter sibling of the context node\npreceding-sibling::chapter[position()=1] selects the previous chapter sibling of the context node\n/descendant::figure[position()=42] selects the forty-second figure element in the document\n/child::doc/child::chapter[position()=5]/child::section[position()=2] selects the second section of the fifth chapter of the doc document element\nchild::para[attribute::type=\"warning\"] selects all para children of the context node that have a type attribute with value warning\nchild::para[attribute::type='warning'][position()=5] selects the fifth para child of the context node that has a type attribute with value warning\nchild::para[position()=5][attribute::type=\"warning\"] selects the fifth para child of the context node if that child has a type attribute with value warning\nchild::chapter[child::title='Introduction'] selects the chapter children of the context node that have one or more title children with string-value equal to Introduction\nchild::chapter[child::title] selects the chapter children of the context node that have one or more title children\nchild::*[self::chapter or self::appendix] selects the chapter and appendix children of the context node\nchild::*[self::chapter or self::appendix][position()=last()] selects the last chapter or appendix child of the context node\n\nAn in depth documentation can be found here. Also these example are taken from there.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"reference",
"vb.net",
"web_services",
"xpath"
] |
stackoverflow_0000081635_reference_vb.net_web_services_xpath.txt
|
Q:
What open source virtual private server program do you recommend with Windows as host
I am looking to have 4 virtual servers (various Linux flavors) running on a Windows Server 2003 R2 64-bit edition server located at a datacenter. I can also purchase a 2008 server or 32-bit 2k3 if needed. They would each have their own IP address so that they could be publicly accessed. I do not know much about VPS software but have worked with it before.
A:
Virtual Server 2005 R2 SP1 is free (registration required) and supports x64 hosts. It does not support x64 guests.
Windows Server 2008 includes Hyper-V, Microsoft's new virtualization technology, which supports x64 guests and multiple virtual processors. There are editions without Hyper-V as well, for marginally less money, to satisfy the anti-trust authorities. The Hyper-V update has to be downloaded as it was completed after the rest of Windows Server 2008 was released.
VMware Server is also free. It supports (experimentally) up to 2 virtual CPUs.
To get best performance you need drivers and patches in the virtual machine which work well with the virtualization environment. In Virtual Server these are called Additions, in Hyper-V they are Integration Components, and for VMware, VMware Tools. Because of the nature of kernel binary compatibility (there are no guarantees), only specific distributions are generally supported.
Download Virtual Server Additions for Linux
Download Hyper-V Linux Integration Components
A:
Unfortunately, the only way you are going to get decent performance is by using Linux as the host and Windows as the guest. The signed driver requirement on x64 essentially prevents any open source implementation from having reasonable performance.
A:
If you're going to run 4 virtual servers, all of which are going to be Linux flavours, why wouldn't you run the host on Linux as well?
If for whatever reason you have to use a Windows box, I would say grab 2003 32-bit; the signed drivers are really only a problem on 2008. But even in 2003 I can't really recommend 64-bit unless there is a pressing requirement (like Exchange 2007).
|
What open source virtual private server program do you recommend with Windows as host
|
I am looking to have 4 virtual servers (various Linux flavors) running on a Windows Server 2003 R2 64-bit edition server located at a datacenter. I can also purchase a 2008 server or 32-bit 2k3 if needed. They would each have their own IP address so that they could be publicly accessed. I do not know much about VPS software but have worked with it before.
|
[
"Virtual Server 2005 R2 SP1 is free (registration required) and supports x64 hosts. It does not support x64 guests.\nWindows Server 2008 includes Hyper-V, Microsoft's new virtualization technology, which supports x64 guests and multiple virtual processors. There are editions without Hyper-V as well, for marginally less money, to satisfy the anti-trust authorities. The Hyper-V update has to be downloaded as it was completed after the rest of Windows Server 2008 was released.\nVMware Server is also free. It supports (experimentally) up to 2 virtual CPUs.\nTo get best performance you need drivers and patches in the virtual machine which work well with the virtualization environment. In Virtual Server these are called Additions, in Hyper-V they are Integration Components, and for VMware, VMware Tools. Because of the nature of kernel binary compatibility (there are no guarantees), only specific distributions are generally supported.\n\nDownload Virtual Server Additions for Linux\nDownload Hyper-V Linux Integration Components\n\n",
"Unfortunately, the only way you are going to get decent performance is by using Linux as the host and Windows as the guest. The signed driver requirement on x64 essentially prevents any open source implementation from having reasonable performance.\n",
"If you're going to run 4 virtual servers all of which are going to be linux flavours why wouldn't you run the host in a linux as well?\nIf for what ever reason you have to use a Windows box, I would say grab 2003 32bit the signed drivers are really only a problem on 2008, but even in 2003 I can't really recommend 64bit unless there is a pressing requirement (like Exchange 2007)\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"open_source",
"vps",
"windows"
] |
stackoverflow_0000079350_open_source_vps_windows.txt
|
Q:
Non-unicode XML representation
I have XML where some of the element values are Unicode characters. Is it possible to represent this in an ANSI encoding?
E.g.
<?xml version="1.0" encoding="utf-8"?>
<xml>
<value>受</value>
</xml>
to
<?xml version="1.0" encoding="Windows-1252"?>
<xml>
<value>&#27544</value>
</xml>
I deserialize the XML and then attempt to serialize it using XmlTextWriter, specifying the Default encoding (Default is Windows-1252). All the Unicode characters end up as question marks. I'm using VS 2008, C# 3.5.
A:
Okay, I tested it with the following code:
// Namespaces needed: System.IO, System.Text, System.Xml, System.Xml.Linq
string xml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><xml><value>受</value></xml>";

// Give the writer the target encoding so it can escape unmappable characters
// as numeric character references instead of letting the encoder drop them.
XmlWriterSettings settings = new XmlWriterSettings { Encoding = Encoding.Default };
MemoryStream ms = new MemoryStream();
using (XmlWriter writer = XmlWriter.Create(ms, settings))
    XElement.Parse(xml).WriteTo(writer);

string value = Encoding.Default.GetString(ms.ToArray());
And it correctly escaped the Unicode character thus:
<?xml version="1.0" encoding="Windows-1252"?><xml><value>&#21463;</value></xml>
I must be doing something wrong somewhere else. Thanks for the help.
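For what it's worth, question marks like those in the original attempt are typically produced by the Windows-1252 encoder itself: when a character has no mapping and the XML writer has not escaped it first, the default replacement fallback substitutes '?'. A small C# sketch illustrating just that fallback:
using System;
using System.Text;

class FallbackDemo
{
    static void Main()
    {
        // Windows-1252 has no mapping for U+53D7 (受); the default
        // EncoderReplacementFallback silently substitutes '?' (0x3F).
        Encoding ansi = Encoding.GetEncoding(1252);
        byte[] bytes = ansi.GetBytes("受");
        Console.WriteLine(bytes[0]); // 63, i.e. '?'
    }
}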
A:
If I understand the question, then yes. You just need a ; after the 27544:
<?xml version="1.0" encoding="Windows-1252"?>
<xml>
<value>&#27544;</value>
</xml>
Or are you wondering how to generate this XML programmatically? If so, what language/environment are you working in?
|
Non-unicode XML representation
|
I have XML where some of the element values are Unicode characters. Is it possible to represent this in an ANSI encoding?
E.g.
<?xml version="1.0" encoding="utf-8"?>
<xml>
<value>受</value>
</xml>
to
<?xml version="1.0" encoding="Windows-1252"?>
<xml>
<value>&#27544</value>
</xml>
I deserialize the XML and then attempt to serialize it using XmlTextWriter, specifying the Default encoding (Default is Windows-1252). All the Unicode characters end up as question marks. I'm using VS 2008, C# 3.5.
|
[
"Okay I tested it with the following code:\n string xml = \"<?xml version=\\\"1.0\\\" encoding=\\\"utf-8\\\"?><xml><value>受</value></xml>\";\n\n XmlWriterSettings settings = new XmlWriterSettings { Encoding = Encoding.Default };\n MemoryStream ms = new MemoryStream();\n using (XmlWriter writer = XmlTextWriter.Create(ms, settings))\n XElement.Parse(xml).WriteTo(writer);\n\n string value = Encoding.Default.GetString(ms.ToArray());\n\nAnd it correctly escaped the unicode character thus:\n<?xml version=\"1.0\" encoding=\"Windows-1252\"?><xml><value>受</value></xml>\n\nI must be doing something wrong somewhere else. Thanks for the help.\n",
"If I understand the question, then yes. You just need a ; after the 27544:\n<?xml version=\"1.0\" encoding=\"Windows-1252\"?>\n<xml>\n<value>殘</value>\n</xml>\n\nOr are you wondering how to generate this XML programmatically? If so, what language/environment are you working in?\n"
] |
[
5,
4
] |
[] |
[] |
[
"character",
"string",
"unicode",
"xml"
] |
stackoverflow_0000082008_character_string_unicode_xml.txt
|
Q:
Y-Modem Implementation for .Net
Is there a ready and free Y-Modem implementation for .NET, preferably in C#?
I found only C/C++ solutions.
A:
There is a library for XModem that you could adapt to use Y-Modem without much effort.
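For a rough idea of what the adaptation involves: YMODEM's main addition over XMODEM-CRC is a batch header (block 0) carrying the file name and size. A C# sketch of building that block, assuming the standard framing (SOH, block number, its complement, a 128-byte payload, big-endian CRC-16/XMODEM); the class and method names are illustrative only:
using System.Text;

static class YModemHeader
{
    const byte SOH = 0x01; // start of a 128-byte block

    // CRC-16/XMODEM: polynomial 0x1021, initial value 0, no reflection.
    static ushort Crc16(byte[] data)
    {
        ushort crc = 0;
        foreach (byte b in data)
        {
            crc ^= (ushort)(b << 8);
            for (int i = 0; i < 8; i++)
                crc = (crc & 0x8000) != 0 ? (ushort)((crc << 1) ^ 0x1021) : (ushort)(crc << 1);
        }
        return crc;
    }

    // Block 0 payload is "filename NUL filesize", NUL-padded to 128 bytes.
    // e.g. byte[] header = YModemHeader.Build("file.bin", 1024);
    public static byte[] Build(string fileName, long fileSize)
    {
        byte[] payload = new byte[128];
        Encoding.ASCII.GetBytes(fileName + "\0" + fileSize).CopyTo(payload, 0);

        byte[] block = new byte[133];
        block[0] = SOH;
        block[1] = 0x00; // block number 0
        block[2] = 0xFF; // one's complement of the block number
        payload.CopyTo(block, 3);
        ushort crc = Crc16(payload);
        block[131] = (byte)(crc >> 8); // CRC high byte first
        block[132] = (byte)crc;
        return block;
    }
}
The data blocks that follow are plain XMODEM-CRC/XMODEM-1K blocks, and a zero-filled block 0 ends the batch.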
|
Y-Modem Implementation for .Net
|
Is there a ready and free Y-Modem implementation for .NET, preferably in C#?
I found only C/C++ solutions.
|
[
"There is a library for XModem that you could adapt to use Y-Modem without much effort.\n"
] |
[
2
] |
[] |
[] |
[
".net",
"c#"
] |
stackoverflow_0000082093_.net_c#.txt
|