Q:
InfoPath 2007 - How do I perform data validation on the current view ONLY?
I am developing an InfoPath 2007 form that uses 3 different views.
The 3 different views are basically the same form, but have different text boxes shown, depending upon what button the user selects.
I run into a problem where 'view 1' has some form validation, but the user has selected 'view 2' and submits it. The form validation on 'view 1' is triggered, and the user cannot submit the form.
How can I ignore the form validation on 'view 1' if the user is currently submitting 'view 2'?
A:
Rather than tick the standard "this field cannot be blank" checkbox (for example), you need to use Data Validation rules instead. Let's say you have two views, each with a textbox that cannot be blank, but you only want to enforce the rule on the current view. Here's the structure of the form:
fields:
currentView (number) (default = 1)
text1 (text)
text2 (text)
button1
button2
view 1 (default)
text1 - rule: if (currentView = 1 AND text1 is blank) show "cannot be blank"
button1 - action: set a field's value (currentView = 2); switch views (to 2)
view 2:
text2 - rule: if (currentView = 2 AND text2 is blank) show "cannot be blank"
button2 - action: set a field's value (currentView = 1); switch views (to 1)
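If you are using managed form code instead of declarative rules, the same view-aware check can be expressed in C#. The following is an untested sketch against the InfoPath 2007 object model; the field names (currentView, text1) and the my: namespace prefix are assumptions taken from the structure above.
using Microsoft.Office.InfoPath;
using System.Xml.XPath;

public partial class FormCode
{
    public void InternalStartup()
    {
        // Hypothetical wiring; InfoPath generates a line like this when
        // you attach a Validating handler to the text1 field.
        EventManager.XmlEvents["/my:myFields/my:text1"].Validating +=
            new XmlValidatingEventHandler(text1_Validating);
    }

    public void text1_Validating(object sender, XmlValidatingEventArgs e)
    {
        // Only flag the error while view 1 is active (currentView = 1).
        XPathNavigator view = MainDataSource.CreateNavigator()
            .SelectSingleNode("/my:myFields/my:currentView", NamespaceManager);
        if (view != null && view.ValueAsInt == 1 && string.IsNullOrEmpty(e.Site.Value))
        {
            e.ReportError(e.Site, false, "cannot be blank");
        }
    }
}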
Make sense?
Oisin
Q:
Do you have "Slack" time?
The CodePlex team has a Slack time policy, and it's worked out very well for them.
Jim Newkirk and I used it to work on the xUnit.net project.
Jonathan Wanagel used it to work on SvnBridge.
Scott Densmore and I used it to work on an ObjectBuilder 2.0 prototype.
For others, it was a great time to explore things that were technically not on the schedule, but could eventually end up being of great use to the rest of the team. I'm so convinced of the value of this that if I'm ever running a team again, I'm going to make it part of the team culture.
Have you had a formalized Slack policy on your team? How did it work out?
Edited: I just realized I didn't define Slack. For those who haven't read the book, Slack is what Google's "20% time" is: you're given some slice of your day/week/month/year on which to work on things that are not necessarily directly related to your day-to-day job, but might have an indirect benefit (obviously if you work on stuff that's totally not useful for your job or your company, your manager probably won't think very well of the way you spent the time :-p).
A:
I just want to mention Google's policy on the subject.
20% of the day should be used for private projects and research.
I think it is time for managers to face the fact that most good developers are a bit lazy. If they weren't, we wouldn't have concepts like code reuse.
If this laziness can be focused into a creative force, and the developers can read up on technical issues and experiment with architecture and language features, I am certain that the end result will be better code and a more satisfied developer.
So, if you are a manager: let your developers slack off now and then. Encourage them to hold small seminars with the team to discuss new ways of doing stuff.
If you are a developer: Read, learn and love your craft. You have one of the best jobs in the world, as long as you are willing to put some time into learning the best ways to do your job.
A:
I am currently a full time freelancer working for a single client. If I want to get a full 40 hours of pay, then every minute I spend coding needs to be accounted for on the approved project plan. Or at least it has to go towards some sort of realistic maintenance task. I guess you could say this is one of the disadvantages of contracting... there's really no room for slack or being idle. You just have to keep going and going on the task at hand. It can be quite draining, but then again I kinda like how it keeps me accountable. And of course the pay is a bit better than usual.
That said, I would love to have slack time available for working on pet projects, but no client would ever agree to pay for that.
Anyway, I just thought I'd point out how this exemplifies some of the big differences between freelancing and full time employment.
A:
I've also never worked anywhere where there was a formal policy, but I have always found ways to squeeze in a little R&D/tool-building time on the side. Often I will get productivity gains out of that which allow me even more 'slack' time.
A:
I've never worked anywhere that had a formalized policy, but practically every manager I've ever had has allowed me to spend some time on things that weren't directly related to the current project or fighting a fire.
I think the key is to talk about the things you'd like to try. Most managers want their teams to do something cool, something extraordinary, so if you can convince them that you might deliver something, you might get the chance. Or they might let you do it just to keep you happy.
Now that I'm a contractor rather than an employee, I don't get paid to do fun stuff, but I generally only work 30-35 hours per week, so I still have time to learn and to play.
A:
We have slack time, and we try to schedule it between releases. Once a release is out, we ask our developers to spend 60% of the day fixing bugs and the other 40% on slack time. We have policies on what you can use the slack time for, though. Then when a release creeps up again, we ask all the developers to spend all day implementing features or fixing bugs for that release.
The policy lets the developer use the slack time for training, creating something new that the company could use, or just creating tools within the company to make things easier for ourselves. It has worked well for us. We think it is an awesome benefit.
A:
We don't have a formal policy in my team - mostly because there is just so much work to do that justifying it would be hard. Which is pretty ironic.
I've started doing some formal things in the guise of "Development Meetings" in order to at least inject the essence of this into the team. An example of this is a development project that is intended to both teach new technologies and produce a cool app at the end of it.
It's early days, we'll see how it goes.
Q:
Any recommendation on tools for doing translations / localization in .NET?
We have made use of Passolo for a number of years, but it's kind of clunky and overpriced.
It's got to be able to handle WinForms and WPF...
Are there any open source alternatives?
A:
Your question could use some clarification as to exactly what aspects of translation/localization you need help with. Do you need help extracting strings from code? Tracking down improper use of non-localizable String.Formats in code (e.g. mm/dd/yyyy vs. dd/mm/yyyy)? Help managing all the resources once you've extracted them? Help managing the actual translation process while working with translators?
There are many aspects to consider. That having been said, some tools I am currently evaluating are:
Multi-Language Add-In for Visual Studio
http://www.jollans.com/tiki/tiki-index.php?page=multilangvsnet
Sisulizer
http://www.sisulizer.com/
RGreatEx (requires ReSharper, which we use)
http://www.safedevelop.com
I also got a lot out of reading ".NET Internationalization" by Guy Smith-Ferrier, ISBN 0-321-34138-4. He provides some downloadable tools of his own design.
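On the non-localizable format strings mentioned above: the usual fix is to let the culture drive date and number formatting instead of hard-coding a pattern. A minimal C# illustration (the two cultures are arbitrary examples):
using System;
using System.Globalization;

class DateFormatDemo
{
    static void Main()
    {
        DateTime date = new DateTime(2008, 9, 17);

        // Hard-coded pattern: always renders US-style, wrong for most locales.
        Console.WriteLine(date.ToString("MM/dd/yyyy"));                  // 09/17/2008

        // Culture-aware: the short-date pattern comes from the culture.
        Console.WriteLine(date.ToString("d", new CultureInfo("en-US"))); // 9/17/2008
        Console.WriteLine(date.ToString("d", new CultureInfo("en-GB"))); // 17/09/2008
    }
}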
A:
Coincidentally I saw this on MS Channel 9 this morning - Babylon.NET http://www.redpin.eu/
Sadly I can't vouch for it as I haven't used it, but looks like a reasonable alternative to Passolo (well, at least it's cheaper).
Q:
Animation Extender Problems
I have just started working with the AnimationExtender. I am using it to show a new div with a list gathered from a database when a button is pressed. The problem is that the button needs to do a postback to get this list, as I don't want to make the call to the database unless it's needed. The postback, however, stops the animation mid-flow and resets it. The button is within an update panel.
Ideally I would want the animation to start once the postback is complete and the list has been gathered. I have looked into using the ScriptManager to detect when the postback is complete and have made some progress. I have added two javascript methods to the page.
function linkPostback() {
    var prm = Sys.WebForms.PageRequestManager.getInstance();
    prm.add_endRequest(playAnimation);
}

function playAnimation() {
    var onclkBehavior = $find("ctl00_btnOpenList").get_OnClickBehavior().get_animation();
    onclkBehavior.play();
}
And I've changed btnOpenList.OnClientClick="linkPostback();"
This almost solves the problem, but I still get some animation stutter: the animation starts to play before the postback and then plays properly after the postback. Using onclkBehavior.pause() has no effect. I can get around this by setting AnimationExtender.Enabled = false and setting it back to true in the button's postback event. This, however, works only once, as the AnimationExtender is now enabled again. I have also tried disabling the AnimationExtender via javascript, but this has no effect.
Is there a way of playing the animations only via javascript calls? I need to decouple the automatic link to the
buttons click event so I can control when the animation is fired.
Hope that makes sense.
Thanks
DG
A:
The flow you are seeing is something like this:
Click on button
AnimationExtender catches the action and calls its click callback
linkPostback starts asynchronous request for page and then returns flow to AnimationExtender
Animation begins
pageRequest returns and calls playAnimation, which starts the animation again
I think there are at least two ways around this issue. It seems you have almost all the javascript you need, you just need to work around AnimationExtender starting the animation on a click.
Option 1: Hide the AnimationExtender button and add a new button of your own that plays the animation. This should be as simple as setting the AE button's style to "display: none;" and having your own button call linkPostback().
Option 2: Re-disable the AnimationExtender once the animation has finished. This should work, as long as the playAnimation call is blocking, which it probably is:
function linkPostback() {
    var prm = Sys.WebForms.PageRequestManager.getInstance();
    prm.add_endRequest(playAnimation);
}

function playAnimation() {
    // Conceptual only: Enabled is the extender's server-side property,
    // so treat these two lines as pseudocode for toggling the extender.
    AnimationExtender.Enabled = true;
    var onclkBehavior = $find("ctl00_btnOpenList").get_OnClickBehavior().get_animation();
    onclkBehavior.play();
    AnimationExtender.Enabled = false;
}
As an aside, it seems your general approach may face issues if there is a delay in receiving the pageRequest. It may be a bit weird to click a button and several seconds later have the animation happen. It may be better to either pre-load the data, or to pre-fill the div with some "Loading..." thing, make it about the right size, and then populate the actual contents when it arrives.
A:
With help from the answer given the final solution was as follows:
Add another button and hide it.
<input id="btnHdn" runat="server" type="button" value="button" style="display:none;" />
Point the AnimationExtender to the hidden button so the firing of the unwanted click event never happens.
<cc1:AnimationExtender ID="aniExt" runat="server" TargetControlID="btnHdn">
Wire the javascript to the button you want to trigger the animation after the postback is complete.
<asp:ImageButton ID="btnShowList" runat="server" OnClick="btnShowList_Click" OnClientClick="linkPostback();" />
Add the required Javascript to the page.
function linkPostback() {
    var prm = Sys.WebForms.PageRequestManager.getInstance();
    prm.add_endRequest(playOpenAnimation);
}

function playOpenAnimation() {
    var onclkBehavior = $find("ctl00_aniExt").get_OnClickBehavior().get_animation();
    onclkBehavior.play();

    // Detach the handler so the animation is not replayed on later postbacks.
    var prm = Sys.WebForms.PageRequestManager.getInstance();
    prm.remove_endRequest(playOpenAnimation);
}
Q:
Save us from VSS
I'm a 1-2 man band at work, and so far I've been using VSS for two reasons 1) the company was using that when I started a few months ago, and 2) it is friendly with Visual Studio.
Needless to say, I would very much like to upgrade to a not-so-archaic source control system. However, I don't want to give up the friendliness with Visual Studio, and I'd like to be able to migrate the existing codebase over to a better source control system.
I can't imagine I'm the only person in this situation. Does anyone have a success story they wouldn't mind sharing?
A:
If you can pay for it, SourceGear Vault is designed to be a drop-in replacement.
If you can't pay, Subversion with AnkhSVN works well but is a bit different.
A:
Consider Subversion (http://subversion.tigris.org/) and the Tortoise shell extension (http://tortoisesvn.tigris.org/).
A:
I'd recommend looking into Subversion or Git if you need a cheap or free solution. There are third-party plugins like VisualSVN for Subversion to keep you in Visual Studio. If you want something that's close to home (VSS), then try Microsoft's Team System or Vault from SourceGear.
A:
You can't beat the easy install of the free VisualSVN Server, and the VisualSVN plug-in is well worth the money. I paid for that part out of my own pocket.
A:
Another vote for Vault from SourceGear we moved from VSS to Vault about 7 months ago.
It was a very easy move and we have had a very good experience with Vault.
The little support we have needed was prompt and helpful.
A:
We're using Subversion 1.5, TortoiseSVN, and for Visual Studio integration, PushOk's SVN plugin. The plugin isn't free, but it's affordable and reliable.
A:
For one or two users, Perforce is free as well. Once you need more than two users, though, you have to start paying for it. They have an SCC plugin as well to allow integration into Visual Studio (and any other program that supports that interface).
A:
We migrated from VSS to SVN very easily. TortoiseSVN, in the Win32 environment, integrates well with Explorer.
To set up your server, I would recommend a mirrored RAID setup with Ubuntu Server installed. Once you have this running, set up Apache and svn to host the repository from the RAID. For a small team like yours, you can just throw together an old PC with a few spare IDE ports for the RAID drives. High-capacity IDE drives are fairly affordable these days.
Raid Howto: https://wiki.ubuntu.com/Raid
Svn Howto: https://help.ubuntu.com/8.04/serverguide/C/subversion.html
I would estimate a day of effort to set it up.
A:
The hardest part is going to be keeping your change history intact. I had to do this a couple of years ago. There was a lot of trial and error involved in the process.
I don't know if migration tools have gotten any better. Google for "sourcesafe svn migration". Once you're over that part, the rest is easy.
A:
If you are currently familiar with VSS but want something more featureful, you should probably have a look at Visual Studio Team System. It does require a server, but you can get an "Action Pack" from MS that includes all the licenses you need for "Team Foundation Server Workgroup Edition" from the Partner centre.
With this you will get Bug, Risk and Issue tracking as well as many other features :)
A:
We are using Subversion with TortoiseSVN and VisualSVN. Works very well. If you only want to work on an internal network you don't need VisualSVN. Just install the Subversion server as a Windows Service.
Regarding the problem of keeping old revision history.
It may make sense to keep the VSS database. Just because you don't want to continue using VSS doesn't mean you have to get rid of it altogether.
So if it is hard to find an easy migration path, why not keep the VSS database as a historical reference and then move all new development to Subversion.
Q:
Adding item to the Desktop context menu in Windows
I want to add an item into the Desktop context menu (the menu you see when you right-click on an empty space on the Windows Desktop).
Something like Catalyst Control Center in this screenshot: [screenshot omitted]
I know how to add items to files' and folders' context menus through the registry, but the Desktop seems to work differently: I didn't even find the text in the registry.
So, how can I add a new item into the Desktop menu and how can I associate some code with it?
I think the solution is language independent, if it's not, I'd appreciate any code that helps.
A:
Such a handler must be registered in HKCR\Directory\Background, instead of usual locations like HKCR\Directory, HKCR\Folder, etc.
Check out Creating Shell Extension Handlers in MSDN.
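If all you need is a plain menu entry that launches a program (rather than a full COM shell extension), a static verb under that key is enough. Here is a hedged C# sketch; the verb name, caption and executable path are placeholders, and writing under HKCR requires administrative rights:
using Microsoft.Win32;

class RegisterDesktopMenuItem
{
    static void Main()
    {
        // HKCR\Directory\Background\shell\<verb>          -> menu caption
        // HKCR\Directory\Background\shell\<verb>\command  -> command line to run
        using (RegistryKey verb = Registry.ClassesRoot.CreateSubKey(
            @"Directory\Background\shell\MyTool"))
        using (RegistryKey command = verb.CreateSubKey("command"))
        {
            verb.SetValue(null, "Open My Tool");            // text shown in the menu
            command.SetValue(null, @"C:\Tools\MyTool.exe"); // program to run on click
        }
    }
}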
A:
There's a series of articles on CodeProject that details writing Shell Extensions and is very good:
http://www.codeproject.com/KB/shell/shellextguide1.aspx
Q:
XML node name clean up code
I am trying to create an XML file based on data fields from a table, and I want to have the nodes named based on the value in a field from the table. The problem is that sometimes values entered in that column contain spaces and other characters not allowed in Node names.
Does anyone have any code that will clean up a passed-in string and replace invalid characters with replacement text, in such a way that it can be reversed on the other end to get the original value back?
I am using .net (vb.net but I can read/convert c#)
A:
It might be easier to store the original value in an attribute when it is illegal as a node name. Then you wouldn't have to worry about some sort of complex to/from translation.
A:
As a matter of fact I would go so far as to say that unless you have complete control over the data, then no translation process would ever work. So I second storing the original data either as an attribute or a child node.
A:
You didn't say in your original post what language, so here's a regex pattern that should get you started. This is QUICK, so you'll need to test it:
([^A-Za-z0-9])|(\s+)
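For what it's worth, the .NET Framework already ships a reversible encoding for exactly this situation: System.Xml.XmlConvert escapes characters that are invalid in XML names as _xHHHH_ sequences and can decode them back on the other end. A short C# example:
using System;
using System.Xml;

class NodeNameDemo
{
    static void Main()
    {
        string raw = "Order Total (USD)";

        // Escapes every character that is invalid in an XML local name.
        string encoded = XmlConvert.EncodeLocalName(raw);
        Console.WriteLine(encoded);        // Order_x0020_Total_x0020__x0028_USD_x0029_

        // Fully reversible, so the original value comes back intact.
        string decoded = XmlConvert.DecodeName(encoded);
        Console.WriteLine(decoded == raw); // True
    }
}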
Q:
Test framework for black box regression testing
I am looking for a tool for regression testing a suite of equipment we are building.
The current concept is that you create an input file (text/csv) to the tool specifying inputs to the system under test. The tool then captures the outputs from the system and records the inputs and outputs to an output file.
The output is in the same format as the original input file and can be used as an input for following runs of the tool, with the measured outputs matched with the values from the previous run.
The results of two runs will not be exact matches, there are some timing differences that depend on the state of the battery, or which depend on other internal state of the equipment.
We would have to write our own interfaces to pass the commands from the tool to the equipment and to capture the output of the equipment.
This is a relatively simple task, but I am looking for an existing tool / package / library to avoid reinventing the wheel, or at least to steal lessons from.
A:
I recently built a system like this on top of git (http://git.or.cz/). Basically, write a program that takes all your input files, sends them to the server, reads the output back, and writes it to a set of output files. After the first run, commit the output files to git.
For future runs, your success is determined by whether the git repository is clean after the run finishes:
test 0 == $(git diff data/output/ | wc -l)
As a bonus, you can use all the git tools to compare differences, and commit them if it turns out the differences were an improvement, so that future runs will pass. It also works great when merging between branches.
A:
I'm not sure there will be a single package that exactly suits your needs. You have a few considerations to make:
How to pass data to the equipment and how to collect it back. This is very application specific, but a good option is usually the old'n'good serial port (RS232), for which an easy interface exists in any programming language.
How to run the tests. A unit-testing framework can definitely help you here. The existing frameworks have a lot of the basic features implemented - selecting tests to run, selecting the detail-level of the report (very important for detailed debugging at first and production-stage PASS/FAIL analysis later on). I've had good experience using the test frameworks of both Perl and Python from testing embedded devices.
You also have to decide how to make the comparisons. As you correctly noted, the results won't be equal. This is where your domain knowledge comes in. Usually, it is simply implemented using error margins that are applicable in your domain. Of course, you won't be able to use a basic diff tool and will have to write an intelligent script.
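To make the error-margin idea concrete, the comparison script usually reduces to a per-value tolerance check rather than exact equality. A small C# sketch; the baseline value and the 50 ms margin are invented for illustration:
using System;

class ToleranceCompare
{
    // True when a measured value is within an absolute margin
    // of the value recorded on the previous run.
    static bool WithinMargin(double previous, double measured, double margin)
    {
        return Math.Abs(measured - previous) <= margin;
    }

    static void Main()
    {
        double previousRunMs = 1032.0; // baseline timing from the last run
        double thisRunMs = 1047.5;     // timing captured on this run

        // PASS if the new timing drifted less than 50 ms from the baseline.
        Console.WriteLine(WithinMargin(previousRunMs, thisRunMs, 50.0) ? "PASS" : "FAIL");
    }
}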
A:
You can just use any test framework. The hard part is writing the tools to send/retrieve the data from your test system, not the actual string comparisons.
Your tests would just all look like this:
x = read_input_file(ifilename);
y1 = read_expected_data(ofilename);
send_input_file_to_server();
y2 = read_output_from_server();
checkequal(y1, y2)
Q:
What tool works for real-time tweaking of CSS in IE6, similar to what Firebug does for Firefox?
All front-end developers know the pain of coding for Firefox, then viewing our then mangled pages in IE6.
IE6 is still widely used (it is, however, disappearing slowly but surely; in a year and a half from the writing of this, it will be irrelevant, as usage will be less than 1%).
We usually used IE conditional comments to create separate CSS files for IE to render correctly. It would be great if there were a tool like Firebug for IE6 (Firebug "Lite" exists, but it seems to be very beta, and the new IE8 has an excellent built-in developer tool that in many ways is better than Firebug). What's the best tool for IE6?
A:
As far as I know, DebugBar is currently the most sophisticated debugging tool for IE. It's definitely better than the IE Developer Toolbar, but it's still not quite as slick as Firebug.
A:
I used to use IE Developer Toolbar as a sort of Firebug alternative. While I don't think it allows you to enter normal CSS, you can adjust the CSS as a list of name/value pairs (adding, editing, removing, etc).
A:
Internet Explorer Developer Toolbar seems to be the best option.
Q:
Reuse of SQL stored procedures across applications
I'm curious about people's approaches to using stored procedures in a database that is accessed by many applications. Specifically, do you tend to keep different sets of stored procedures for each application, do you try to use a shared set, or do you do a mix?
On the one hand, reuse of SPs allows for fewer changes when there is a model change or something similar and ideally less maintenance. On the other hand, if the needs of the applications diverge, changes to a stored procedure for one application can break other applications. I should note that in our environment, each application has its own development team, with poor communication between them. The data team has better communication though, and is mostly tasked with the stored procedure writing.
Thanks!
A:
Stored procedures should be created based on the data you intend to return, not the application making the request. If you have a stored procedure that is GetAllItems, it should return all of the items in the database. If one of the applications would like to get all of the items by category, create GetAllItemsByCategory. There is no reason for the business rules of a stored procedure to change based on the application requesting the data.
A:
My experience has been that having SPs shared by multiple applications is a cause of pain. In fact, I would argue that having a database that is accessed directly by more than one application is not the best long term architecture.
The pattern I recommend and have implemented is that only one application should "own" each database, and provide APIs (services, etc.) for other applications to access and modify data.
This has several advantages:
The owning application can apply any business logic, logging, etc. to make sure it remains stable
If the schema is changed, all interfaces are known and can be tested to make sure external applications will still work
A:
Stored procedures should expose business rules which don't change depending on the application using them. This lets the rules be stored and updated once instead of every place they are used, which is a nightmare.
A:
Think of it this way: your stored procedures are about the data that's under them, and shouldn't really know about the applications above them. It's possible that one application will need to read or update data in a way that another doesn't, and so one would use SPs that the other wouldn't.
If it were my application / database / etc, and changes to an SP to improve one application broke another, I would consider that evidence of a deeper design issue.
A:
I believe the last portion of your question answered itself.
With already poor communication, sharing procedures between development teams would just add to the potential points of failure and could cause either team hardship.
If I'm on the same team working on multiple projects, we will save some time and share procedures, but typically I have found that a little duplication (a few procedures here and there) helps avoid the catastrophic changes/duplication needed later when the applications start to diverge.
LordScarlet also points out a key element: if a procedure is generic, with no business logic, sharing it shouldn't be an issue.
A:
Whenever we had stored procedures that were common to multiple applications, we would create a database just for those procedures (and views and tables, etc). That database (we named "base") would then have a developer (or team) responsible for it (maintenance and testing).
If a different team needed new functionality, they could write it and the base developer would either implement it in the base DB or suggest a simpler way.
A:
It all depends on your abstraction strategy. Are the stored procedures treated as a discrete point of abstraction, or are they treated as just another part of the application that calls them.
The answer to that will tell you how to manage them. If they are a discrete abstraction, they can be shared, as if you need new functionality, you'll add new procedures. If they are part of the app that calls them, they shouldn't be shared.
A:
We try to use a single, shared stored proc wherever possible, but we've run into the situation you describe as well. We handled it by adding an application prefix to the stored procs (ApplicationName_StoredProcName).
Often these stored procs call the centralized or "master" stored proc, but this method leaves room for app specific changes down the road.
A:
I don't think sharing Sprocs among multiple applications makes sense.
I can see the case for sharing a database in related applications, but presumably those applications are separate in large part because they treat the data very differently from one another.
Using the same architecture could work across applications, but imagine trying to use the same business logic layer in multiple applications. "But wait!" you say, "That's silly... if I'm using the same BLL, why would I have a separate app? They do the same thing!"
QED.
A:
Ideally, use one proc, not multiple versions. If you need versions per customer, investigate the idea of one DB per customer as opposed to one DB for all customers. This also allows for some interesting staging of DBs on different servers (allocate the larger/heavier-usage ones to bigger servers, while smaller ones can share hardware).
A:
If you are looking for a way to share SQL code, try building a library of abstract functions. This way you can reuse code that does generic things and keep business logic separate for each application. The same can be done with views - they can be kept quite generic and useful for many applications.
You will probably find, as you go along, that there are not that many uses for common stored procedures.
That said we once implemented a project which was working with a very badly designed legacy database. We've implemented a set of stored procedures which made information retrieval easy. When other people from other teams wanted to use the same information we refactored our stored procedures to make them more generic, added an extra layer of comments and documentation and allowed other people to use our procedures. That solution worked rather well.
A:
Many stored procedures are application independent, but there may be a few that are application dependent. For example, the CRUD (Create, Read, Update, Delete) stored procedures can go across applications. In particular, you can throw in auditing logic (sometimes done in triggers, but there is a limit to how complicated you can get in triggers). If you have some type of standard architecture in your software shop, the middle tier may require a stored procedure to create/select/update/delete from the database regardless of the application, in which case the procedure is shared.
At the same time, there may be some useful ways of viewing the data, e.g. GetProductsSoldBySalesPerson, which will also be application independent. You may have a bunch of lookup tables for some fields like department, address, etc., so there may be a procedure to return a view of the table with all the text fields. For example, instead of SalesPersonID, SaleDate, CustomerID, DepartmentID, CustomerAddressID, the procedure returns a view with SalesPersonName, SaleDate, CustomerName, DepartmentName, CustomerAddress. This could also be used across applications: a customer relationship system would want Customer Name/Address/Other Attributes, as would a billing system. So something that did all the joins and got all the customer information in one query would probably be used across applications. Admittedly, creating ways to view the data is the domain of a view, but often people use stored procedures to do this.
So basically: when deleting from your table, do you need to delete from 3 or 4 other tables to ensure data integrity? Is the logic too complicated for a trigger? Then a stored procedure that all applications use to do deletions may be important. The same goes for things that need to be done on creation. If there are common joins that are always done, it may make sense to have one stored procedure to do all the joins. Then if you later change the tables around, you can keep the procedure the same and just change the logic there.
A:
The concept of sharing a data schema across multiple applications is a tough one. Invariably, your schema gets compromised for performance reasons: denormalization, which indexes to create. If you can cut the size of a row in half, you can double the number of rows per page and, likely, halve the time it takes to scan the table. However, if you only include 'common' features on the main table and keep data only of interest to specific applications on different (but related) tables, you have to join everywhere to get back to the 'single table' idea.
More indexes to support different applications will cause ever-increasing time to insert, update and delete data from each table.
The database server will often become a bottleneck as well, because databases cannot be load-balanced. You can partition your data across multiple servers, but that gets very complicated too.
Finally, the degree of co-ordination required is typically huge, no doubt with fights between different departments over whose requirements get priority, and new developments will get bogged down.
In general, the 'isolated data silo per application' model works better. Almost everything we do - I work for a contract software house - is based on importing data from and exporting data to other systems, with our applications' own databases.
It may well be easier in data warehouse/decision support systems; I generally work on OLTP systems where transaction performance is paramount.
"Ideally use one proc not multiple versions. If you need versions per customer, investigate the idea of 1 db per customer as opposed to 1 db for all customers. This also allows for some interesting staging of db's on different servers ( allocate the larger/heavier usage ones to bigger servers while smaller ones can share hardware)\n",
"If you look for ability to share the SQL code try building a library of abstract functions. This way you could reuse some code which does generic things and keep business logic separate for each application. The same could be done with the views - they could be kept quite generic and useful for many applications.\nYou will probably find out that there is not that many uses for common stored procedures as you go along.\nThat said we once implemented a project which was working with a very badly designed legacy database. We've implemented a set of stored procedures which made information retrieval easy. When other people from other teams wanted to use the same information we refactored our stored procedures to make them more generic, added an extra layer of comments and documentation and allowed other people to use our procedures. That solution worked rather well.\n",
"Many stored procedures are application independent but there may be a few that are application dependent. For example, the CRUD (Create, Select, Update, Delete) stored procedures can go across applications. In particular you can throw in auditing logic (sometimes done in triggers but there is a limit to how complicated you can get in triggers). If you have some type of standard architecture in your software shop the middle tier may require a stored procedure to create/select/update/delete from the database regardless of the application in which case the procedure is shared.\nAt the same time there may be some useful ways of viewing the data, ie GetProductsSoldBySalesPerson, etc.. which will also be application independent. You may have a bunch of lookup tables for some fields like department, address, etc. so there may be a procedure to return a view of the table with all the text fields. Ie instead of SalesPersonID, SaleDate, CustomerID, DepartmentID, CustomerAddressID the procedure returns a view SalesPersonName, SaleDate, CustomerName, DepartmentName, CustomerAddress. This could also be used across applications. A customer relationship system would want Customer Name/Address/Other Attributes as would a billing system. So something that did all the joins and got all the customer information in one query would probably be used across applications. Admittedly creating ways to view the data is the domain of a view, but often people used stored procedures to do this.\nSo basically, when deleting from your table do you need to delete from 3 or 4 other tables to ensure data integrity. is the logic too complicated for a trigger? Then a stored procedure that all applications use to do deletions may be important. The same goes for things that need to be done on creation. If there are common joins that are always done, it may make sense to have one stored procedure to do all the joins. Then if later you change the tables around you could keep the procedure the same and just change the logic there.\n",
"The concept of sharing a data schema across multiple applications is a tough one. Invariably, your schema gets compromised for performance reasons: denormalization, which indexes to create. If you can cut the size of a row in half, you can double the number of rows per page and, likely, halve the time it takes to scan the table. However, if you only include 'common' features on the main table and keep data only of interest to specific applications on different (but related) tables, you have to join everywhere to get back to the 'single table' idea.\nMore indexes to support different applications will cause ever-increasing time to insert, update and delete data from each table.\nThe database server will often become a bottleneck as well, because databases cannot be load-balanced. You can partition your data across multiple servers, but that gets very complicated too.\nFinally, the degree of co-ordination required is typically huge, no doubt with fights between different departments over whose requirements get priority, and new developments will get bogged down.\nIn general, the 'isolated data silo per application' model works better. Almost everything we do - I work for a contract software house - is based on importing data from and exporting data to other systems, with our applications' own databases.\nIt may well be easier in data warehouse/decision support systems; I generally work on OLTP systems where transaction performance is paramount.\n"
] |
[
5,
4,
3,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"architecture",
"code_reuse",
"sql",
"stored_procedures"
] |
stackoverflow_0000084453_architecture_code_reuse_sql_stored_procedures.txt
|
Q:
In what situations would you get different users seeing different rows in a table on SQL Server?
SQL Server Version 2000.
We've a bunch of desktops talking to MSSQL Server. When looking for a specific record, some desktops return the correct data, but some do not.
The SQL Command is "SELECT * FROM PODORDH WHERE ([NO]=6141)"
On one or two desktops, this returns a record. On the server and on all other desktops, no record is returned.
What areas do I need to look at? What would cause this to happen?
A:
This error probably comes from a user who deleted/inserted that record within a transaction but did not yet commit said transaction.
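If that is the suspicion, SQL Server 2000 can report the oldest active transaction in a database; a quick check from Query Analyzer might look like this (the database name is a placeholder):
-- Shows the oldest open transaction, including its SPID and start time.
DBCC OPENTRAN ('YourDatabase')
If a transaction shows up, sp_who can then tell you which desktop owns that SPID.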
A:
Check which database and server you are connecting to on each machine - the query is simple enough that you must get the same answer everywhere UNLESS you are connecting to different databases or servers.
A:
If it is just ONE workstation returning the row then it sounds like that workstation has an open transaction that has not been committed.
Otherwise, is it possible that the isolation levels are different for different workstations, i.e. some will see uncommitted data and others won't?
A:
You may want to look at the permissions for the table you are selecting from, if you are connecting to the server as a different user from each machine.
If some users but not others have access to read that table, you may get the result you describe.
A:
After you exhaust all the options mentioned above, I would look into row and table locks. If this is the case it should return an error saying it encountered a lock. Are you running an application that could be swallowing errors?
A:
Perhaps one or two users who find records are using a different schema name and thus a different table. I.e. most users are using dbo.PODORDH, but one or two users are using otheruser.PODORDH.
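On SQL Server 2000, one way to test this theory is to list every owner that has an object with that name, and then owner-qualify the query (a sketch using the 2000-era system table):
-- List every owner with an object named PODORDH.
SELECT USER_NAME(uid) AS owner, name
FROM sysobjects
WHERE name = 'PODORDH'

-- Owner-qualify the original query so every desktop resolves the same table.
SELECT * FROM dbo.PODORDH WHERE ([NO]=6141)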
|
In what situations would you get different users seeing different rows in a table on SQL Server?
|
SQL Server Version 2000.
We've a bunch of desktops talking to MSSQL Server. When looking for a specific record, some desktops return the correct data, but some do not.
The SQL Command is "SELECT * FROM PODORDH WHERE ([NO]=6141)"
On one or two desktops, this returns a record. On the server and on all other desktops, no record is returned.
What areas do I need to look at? What would cause this to happen?
|
[
"This error probably comes from an user who deleted/inserted that record within a transaction but did not yet commit said transaction.\n",
"Check which database and server you are connecting to on each machine - the query is simple enough that you must get the same answer everywhere UNLESS you are connecting to different databases or servers.\n",
"If it is just ONE workstation returning the row then it sounds like that workstation has an open transaction that has not been committed.\nOtherwise is it possible that the isolation levels are different for different workstations, ie. some will see uncommitted data and other wont?\n",
"You may want to look at the permissions for the table you are selecting from, if you are connecting to the server as a different user from each machine.\nIf some users but not others have access to read that table, you may get the result you describe.\n",
"After you exhaust all the options mentioned above, I would look into row and table locks. If this is the case it should return an error saying it encountered a lock. Are you running an application that could be swallowing errors?\n",
"Perhaps one or two users who find records are using a different schema-name and thus a different tables. IE most users are using dbo.PODORDH, but one or two users are using otheruser.PODORDH.\n"
] |
[
4,
3,
2,
2,
1,
0
] |
[] |
[] |
[
"sql_server",
"sql_server_2000"
] |
stackoverflow_0000083295_sql_server_sql_server_2000.txt
|
Q:
Resending invitation/action emails
I've got a web app that sends out emails in response to a user-initiated action. These emails prompt the recipient for a response (a URL related to the specific action is included.)
I've got some users asking for a "resend" feature to push that email again.
My objection is that if the original email ended up in a spam folder (or didn't make it all the first time), the same thing is likely to happen the second time. (I've confirmed that the emails haven't bounced; they were accepted by the recipient's mail server.)
So what does the community think: is the ability to resend an email invitation/notification useful or pointless?
A:
Definitely useful, at least from the user's point of view. By manually resending the email, they know that it has been sent and can check their spam folder immediately to catch the mail. Otherwise, they might not know about the mail and it will disappear from their spam before they can catch it.
A:
It can be useful. The users may have deleted it by accident. It may have been a transient error in the recipient's mail server. Spam filters aren't the only cause of lost mail.
A:
Absolutely pointless. But, if the users want it, and it doesn't take too long, it may be worthwhile. Users are silly sometimes, and if it makes them happy...
A:
Useful - any number of factors can change between the first and the second sending.
A:
It is definitely useful. There could be a number of cases. For example, user deleted the original email accidentally.
A:
Your objection is assuming that the issue was the invitation was going to the spam folder. You don't know that for sure (or, at least, you hint at such). They could want a Resend button because they want to remind the customer for payment or notify them of something again or whatever. It doesn't matter the reason because the effect should be fairly easy to accomplish and allows them to send as many messages as they like.
One of those 'the customer wants it, it's not entirely unreasonable, maybe you should just implement it instead of questioning them or coming up with a reason to veto it' dealies :)
A:
This is absolutely required. Just because your application didn't get a bounce doesn't mean that the mail actually went through. Many sites drop e-mails that trigger a spam filter rather than deliver them to a spam folder. In such a circumstance, it's conceivable that a user might in the meantime opt out of his site's spam filtering and then want to retry.
A:
If you implement it I would get the user to re-enter and re-confirm the email address they entered, and I would not allow it to be used more than a few times; otherwise it would be very easy to write an abuse script to bomb someone's mailbox.
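For what it's worth, those safeguards are only a few lines in most stacks; a hypothetical sketch (every name here is invented for illustration):
MAX_RESENDS = 3

def resend_invitation(invitation, reentered_address):
    # Require the re-entered, re-confirmed address to match the original.
    if reentered_address != invitation.recipient_address:
        raise ValueError("re-entered address does not match")
    # Cap resends so the feature cannot be scripted into a mail bomb.
    if invitation.resend_count >= MAX_RESENDS:
        raise RuntimeError("resend limit reached")
    invitation.resend_count += 1
    send_email(invitation)  # assumed existing mail-sending helper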
A:
There's no argument against the ability to re-send it, is there? Assuming that re-sending it will end up with the same action doesn't count - there's no harm to re-sending it.
If there's an argument for it, and none against it, that should be an easy decision.
|
Resending invitation/action emails
|
I've got a web app that sends out emails in response to a user-initiated action. These emails prompt the recipient for a response (a URL related to the specific action is included.)
I've got some users asking for a "resend" feature to push that email again.
My objection is that if the original email ended up in a spam folder (or didn't make it all the first time), the same thing is likely to happen the second time. (I've confirmed that the emails haven't bounced; they were accepted by the recipient's mail server.)
So what does the community think: is the ability to resend an email invitation/notification useful or pointless?
|
[
"Definitely useful, at least from the user's point of view. By manually resending the email, they know that it has been sent and can check their spam folder immediately to catch the mail. Otherwise, they might not know about the mail and it will dissapear from their spam before they can catch it.\n",
"It can be useful. The users may have deleted it by accident. It may have been a transient error in the recipient's mail server. Spam filters aren't the only cause of lost mail.\n",
"Absolutely pointless. But, if the user's want it, and it doesn't take too long, it may be worthwhile. Users are silly sometimes, and if it makes them happy...\n",
"Useful - any number of factors can change between the first and the second sending.\n",
"It is definitely useful. There could be a number of cases. For example, user deleted the original email accidentally.\n",
"Your objection is assuming that the issue was the invitation was going to the spam folder. You don't know that for sure (or, at least, you hint at such). They could want a Resend button because they want to remind the customer for payment or notify them of something again or whatever. It doesn't matter the reason because the effect should be fairly easy to accomplish and allows them to send as many messages as they like.\nOne of those 'the customer wants it, it's not entirely unreasonable, maybe you should just implement it instead of questioning them or coming up with a reason to veto it' dealies :)\n",
"This is absolutely required. Just because your application didn't get a bounce doesn't mean that the mail actually went through. Many sites drop e-mails that trigger a spam filter rather than deliver them to a spam folder. In such a circumstance, it's conceivable that a user might in the meantime opt-out of his sites spam filtering and then want to retry.\n",
"If you implement it I would get the user to re-enter and re-confirm the email address they entered and I would not allow it to be used more than a few times, otherwise it would be very easy to script an abuse script to bomb someones mailbox.\n",
"There's no argument against the ability to re-send it, is there? Assuming that re-sending it will end up with the same action doesn't count - there's no harm to re-sending it.\nIf there's an argument for it, and none against it, that should be an easy decision.\n"
] |
[
2,
1,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"email",
"email_spam",
"web_applications"
] |
stackoverflow_0000084980_email_email_spam_web_applications.txt
|
Q:
How to stream a PDF file as binary to the browser using .NET 2.0
I'm looking for a way to stream a PDF file from my server to the browser using .NET 2.0 (in binary).
I'm trying to grab an existing PDF file from a server path and push that up as binary to the browser.
A:
Here you go: How To Write Binary Files to the Browser Using ASP.NET and Visual C# .NET
A:
Set Content-Type: Response.ContentType = "application/pdf"
Set Content-Disposition, if you want to give a new name for the file: Response.AddHeader("Content-Disposition", "attachment; filename=file.pdf");
Write the content, using Response.OutputStream as Mr. Kopp said.
Step 2 is not strictly necessary, but it's probably a good idea if you don't want the browser to try to save the PDF with the same name as your ASPX file.
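Putting those steps together, a minimal ASP.NET 2.0 sketch might look like this (the file path is an assumption):
string path = Server.MapPath("~/docs/file.pdf");

Response.Clear();
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition", "attachment; filename=file.pdf");
// TransmitFile streams the file to the client without buffering it in memory.
Response.TransmitFile(path);
Response.End();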
A:
Write the binary to the output stream, Response.OutputStream. Then just add the Content-Disposition header.
A:
You can just set up a handler or a page that sets the correct response type and outputs the pdf to the response output buffer.
|
How to stream a PDF file as binary to the browser using .NET 2.0
|
I'm looking for a way to stream a PDF file from my server to the browser using .NET 2.0 (in binary).
I'm trying to grab an existing PDF file from a server path and push that up as binary to the browser.
|
[
"Here you go: How To Write Binary Files to the Browser Using ASP.NET and Visual C# .NET\n",
"\nSet Content-Type: Response.ContentType = \"application/pdf\"\nSet ContentDisposition, if you want to give a new name for the file: Response.Headers.Add(\"Content-Disposition\", \"attachment: filename=file.pdf\");\nWrite the content, using Response.OutputStream as Mr. Kopp said.\n\nStep 2 is not strictly necessary, but it's probably a good idea if you don't want the browser to try to save the PDF with the same name as your ASPX file.\n",
"Write the binary to the output stream, Response.OutputStream. Then just add the header Content-Disposition header.\n",
"You can just setup a handler or a page that set's the correct response type and output the pdf to the response output buffer.\n"
] |
[
5,
3,
1,
0
] |
[] |
[] |
[
".net"
] |
stackoverflow_0000084995_.net.txt
|
Q:
How do I programmatically wire up ToolStripButton events in C#?
I'm programmatically adding ToolStripButton items to a context menu.
That part is easy.
this.tsmiDelete.DropDownItems.Add("The text on the item.");
However, I also need to wire up the events so that when the user clicks the item something actually happens!
How do I do this? The method that handles the click also needs to receive some sort of id or object that relates to the particular ToolStripButton that the user clicked.
A:
Couldn't you just subscribe to the Click event? Something like this:
ToolStripButton btn = new ToolStripButton("The text on the item.");
this.tsmiDelete.DropDownItems.Add(btn);
btn.Click += new EventHandler(OnBtnClicked);
And OnBtnClicked would be declared like this:
private void OnBtnClicked(object sender, EventArgs e)
{
ToolStripButton btn = sender as ToolStripButton;
// handle the button click
}
The sender should be the ToolStripButton, so you can cast it and do whatever you need to do with it.
A:
Thanks for your help with that Andy. My only problem now is that the AutoSize is not working on the ToolStripButtons that I'm adding! They're all too narrow.
It's rather odd because it was working earlier.
Update: There's definitely something wrong with AutoSize for programmatically created ToolStripButtons. However, I found a solution:
Create the ToolStripButton.
Create a label control and set the font properties to match your button.
Set the text of the label to match your button.
Set the label to AutoSize.
Read the width of the label and use that to set the width of the ToolStripButton.
It's hacky, but it works.
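For reference, that workaround can be expressed roughly like this (a sketch; a few pixels of padding may still be needed):
ToolStripButton btn = new ToolStripButton("The text on the item.");

// Measure the text with a throwaway label that matches the button's font.
using (Label measure = new Label())
{
    measure.AutoSize = true;
    measure.Font = btn.Font;
    measure.Text = btn.Text;

    btn.AutoSize = false;
    btn.Width = measure.GetPreferredSize(Size.Empty).Width;
}

TextRenderer.MeasureText(btn.Text, btn.Font).Width would give the same measurement without the throwaway label.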
|
How do I programmatically wire up ToolStripButton events in C#?
|
I'm programmatically adding ToolStripButton items to a context menu.
That part is easy.
this.tsmiDelete.DropDownItems.Add("The text on the item.");
However, I also need to wire up the events so that when the user clicks the item something actually happens!
How do I do this? The method that handles the click also needs to receive some sort of id or object that relates to the particular ToolStripButton that the user clicked.
|
[
"Couldn't you just subscribe to the Click event? Something like this:\nToolStripButton btn = new ToolStripButton(\"The text on the item.\");\nthis.tsmiDelete.DropDownItems.Add(btn);\nbtn.Click += new EventHandler(OnBtnClicked);\n\nAnd OnBtnClicked would be declared like this:\nprivate void OnBtnClicked(object sender, EventArgs e)\n{\n ToolStripButton btn = sender as ToolStripButton;\n\n // handle the button click\n}\n\nThe sender should be the ToolStripButton, so you can cast it and do whatever you need to do with it.\n",
"Thanks for your help with that Andy. My only problem now is that the AutoSize is not working on the ToolStripButtons that I'm adding! They're all too narrow. \nIt's rather odd because it was working earlier.\n\nUpdate: There's definitely something wrong with AutoSize for programmatically created ToolStripButtons. However, I found a solution:\n\nCreate the ToolStripButton.\nCreate a label control and set the font properties to match your button.\nSet the text of the label to match your button.\nSet the label to AutoSize.\nRead the width of the label and use that to set the width of the ToolStripButton.\n\nIt's hacky, but it works.\n"
] |
[
3,
0
] |
[] |
[] |
[
"c#",
"events",
"toolstripbutton",
"toolstripitem"
] |
stackoverflow_0000084842_c#_events_toolstripbutton_toolstripitem.txt
|
Q:
Performance of an large directory structure, networked application
I'm trying to find out what the performance of a large directory structure would be if deep directories were to be accessed on a shared, nfs filesystem. The structure would be excessively large, with 4 levels of nested directories, each level containing 1024 directories. (1024 at root, 1024 in a given subdirectory, and so on).
This filesystem would be on a network repository that users would be accessing for their personal information. The data would be replicated on multiple servers and load-balanced, but still, each machine would have a decent load at all times.
If the 4th level contained the information that the users were looking for, how bad would the performance be? If all were accessing different subdirectories? Could this be resolved by caching inode information, or no?
I've been searching on this for a while, but I'm primarily finding information on large files rather than large directory structures.
A:
I did that at my work once. Don't remember the exact numbers offhand, but I think it was 8 levels deep, 10 subdirectories in each level (user id 87654321 maps to directory 8/7/6/5/4/3/2/1/). Turned out that was not such a great idea, started running into problems with filesystem inode number limits, iirc (10^10 = 10000000000 directories, not good). Switched to more subdirectories per level and many fewer levels; problems went away. Your situation sounds more manageable, but still, check that your filesystem would support the kinds of file and directory counts that you're anticipating.
A:
The answer here is going to be highly dependent on your operating system, can you provide more information? I have found that file open times under Linux have been reasonable up to directory sizes in the small tens of thousands, but I have not tried any tests with directory structures as large as yours (you do know that 1024 to the fourth power is 1,099,511,627,776 right? And that that's something like 180 times the population of the earth, right?)
A:
Seems like you'd just want to write a test app to generate 1024 folders, iterated 8 levels down, with each folder containing some number (100 - 1000?) of files 1KB in size and then randomly find and access the files.
Track the access times over multiple passes and see if it's acceptable to your requirements.
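A rough sketch of such a test (Python, with the fan-out scaled down for a first run; ROOT is an assumed NFS mount point):
import os, random, time

ROOT = "/mnt/nfs/bench"   # assumed NFS mount point
FANOUT = 8                # scaled down from 1024 for a quick first run
DEPTH = 4
N = 1000                  # number of random accesses to time

def make_tree(path, depth):
    if depth == 0:
        with open(os.path.join(path, "data.bin"), "wb") as f:
            f.write(b"\0" * 1024)   # one 1KB file per leaf directory
        return
    for i in range(FANOUT):
        sub = os.path.join(path, str(i))
        os.makedirs(sub, exist_ok=True)
        make_tree(sub, depth - 1)

os.makedirs(ROOT, exist_ok=True)
make_tree(ROOT, DEPTH)

start = time.time()
for _ in range(N):
    parts = [str(random.randrange(FANOUT)) for _ in range(DEPTH)]
    with open(os.path.join(ROOT, *parts, "data.bin"), "rb") as f:
        f.read()
elapsed = time.time() - start
print("average open+read: %.3f ms" % (elapsed * 1000.0 / N))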
|
Performance of an large directory structure, networked application
|
I'm trying to find out what the performance of a large directory structure would be if deep directories were to be accessed on a shared, nfs filesystem. The structure would be excessively large, with 4 levels of nested directories, each level containing 1024 directories. (1024 at root, 1024 in a given subdirectory, and so on).
This filesystem would be on a network repository that users would be accessing for their personal information. The data would be replicated on multiple servers and load-balanced, but still, each machine would have a decent load at all times.
If the 4th level contained the information that the users were looking for, how bad would the performance be? If all were accessing different subdirectories? Could this be resolved by caching inode information, or no?
I've been searching on this for a while, but I'm primarily finding information on large files rather than large directory structures.
|
[
"I did that at my work once. Don't remember the exact numbers offhand, but I think it was 8 levels deep, 10 subdirectories in each level (user id 87654321 maps to directory 8/7/6/5/4/3/2/1/. Turned out that was not such a great idea, started running into problems with filesystem inode number limits, iirc (10^10 = 10000000000 directories, not good). Switched to more subdirectories per level and many less levels; problems went away. Your situation sounds more manageable, but still, check that your filesystem would support the kinds of file and directory counts that you're anticipating.\n",
"The answer here is going to be highly dependent on your operating system, can you provide more information? I have found that file open times under Linux have been reasonable up to directory sizes in the small tens of thousands, but I have not tried any tests with directory structures as large as yours (you do know that 1024 to the fourth power is 1,099,511,627,776 right? And that that's something like 180 times the population of the earth, right?)\n",
"Seems like you'd just want to write an test app to generate 1024 folders, iterated 8 levels down, with each folder containing some number (100 - 1000?) of files 1KB in size and then randomly find and access the files. \nTrack the access times over multiple passes and see if it's acceptable to your requirements.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"nfs",
"performance"
] |
stackoverflow_0000080470_nfs_performance.txt
|
Q:
Test an object for NOT being a type
I know how to test an object to see if it is of a type, using the IS keyword e.g.
if (foo is bar)
{
//do something here
}
but how do you test for it not being "bar"? I can't seem to find a keyword that works with IS to test for a negative result.
BTW - I have a horrible feeling this is soooo obvious, so apologies in advance...
A:
if (!(foo is bar)) {
}
A:
You can also use the as operator.
The as operator is used to perform
conversions between compatible types.
bar aBar = foo as bar; // aBar is null if foo is not bar
A:
There is no specific keyword
if (!(foo is bar)) ...
if (foo.GetType() != bar.GetType()) .. // foo & bar should be on the same level of type hierarchy
A:
You should clarify whether you want to test that an object is exactly a certain type or assignable from a certain type. For example:
public class Foo : Bar {}
And suppose you have:
Foo foo = new Foo();
If you want to know whether foo is not exactly a Bar, then you would do this:
if(!(foo.GetType() == typeof(Bar))) {...}
But if you want to make sure that foo is not a Bar (nor anything derived from Bar), then an easy check is to use the as keyword.
Bar bar = foo as Bar;
if(bar == null) {/* foo is not a bar */}
|
Test an object for NOT being a type
|
I know how to test an object to see if it is of a type, using the IS keyword e.g.
if (foo is bar)
{
//do something here
}
but how do you test for it not being "bar"? I can't seem to find a keyword that works with IS to test for a negative result.
BTW - I have a horrible feeling this is soooo obvious, so apologies in advance...
|
[
"if (!(foo is bar)) {\n}\n\n",
"You can also use the as operator.\n\nThe as operator is used to perform\n conversions between compatible types.\n\nbar aBar = foo as bar; // aBar is null if foo is not bar\n\n",
"There is no specific keyword\nif (!(foo is bar)) ...\nif (foo.GetType() != bar.GetType()) .. // foo & bar should be on the same level of type hierarchy\n\n",
"You should clarify whether you want to test that an object is exactly a certain type or assignable from a certain type. For example:\npublic class Foo : Bar {}\n\nAnd suppose you have: \nFoo foo = new Foo();\n\nIf you want to know whether foo is not Bar(), then you would do this:\nif(!(foo.GetType() == tyepof(Bar))) {...}\n\nBut if you want to make sure that foo does not derive from Bar, then an easy check is to use the as keyword.\nBar bar = foo as Bar;\nif(bar == null) {/* foo is not a bar */}\n\n"
] |
[
13,
4,
1,
1
] |
[] |
[] |
[
"c#",
"object",
"syntax",
"testing"
] |
stackoverflow_0000055978_c#_object_syntax_testing.txt
|
Q:
How can I get Emacs' key bindings in Python's IDLE?
I use Emacs primarily for coding Python but sometimes I use IDLE. Is there a way to change the key bindings easily in IDLE to match Emacs?
A:
IDLE provides Emacs keybindings without having to install other software.
Open up the menu item Options -> Configure IDLE...
Go to Keys tab
In the drop down menu on the right
side of the dialog change the select
to "IDLE Classic Unix"
It's not the true emacs key bindings but you get the basics like movement, saving/opening, ...
A:
There's a program for Windows called XKeymacs that allows you to specify emacs keybindings for different programs. It should work with IDLE.
http://www.cam.hi-ho.ne.jp/oishi/indexen.html
-Mark
A:
The 'readline' module supposedly provides Emacs-like key bindings and even functionality. However, it is not available on Windows but on Unix. Therefore, this might be a viable solution if you are not using Windows.
import readline
Since I am running IDLE on Windows it is unfortunately not an option for me.
|
How can I get Emacs' key bindings in Python's IDLE?
|
I use Emacs primarily for coding Python but sometimes I use IDLE. Is there a way to change the key bindings easily in IDLE to match Emacs?
|
[
"IDLE provides Emacs keybindings without having to install other software. \n\nOpen up the menu item Options -> Configure IDLE...\nGo to Keys tab\nIn the drop down menu on the right\nside of the dialog change the select\nto \"IDLE Classic Unix\"\n\nIt's not the true emacs key bindings but you get the basics like movement, saving/opening, ...\n",
"There's a program for Windows called XKeymacs that allows you to specify emacs keybindings for different programs. It should work with IDLE.\nhttp://www.cam.hi-ho.ne.jp/oishi/indexen.html\n-Mark\n",
"'readline' module supposedly provides Emacs like key bindings and even functionality. However, it is not available on Windows but on Unix. Therefore, this might be a viable solution if you are not using Windows.\nimport readline\n\nSince I am running IDLE on Windows it is unfortunately not an option for me.\n"
] |
[
6,
2,
0
] |
[] |
[] |
[
"emacs",
"ide",
"keyboard",
"python"
] |
stackoverflow_0000055365_emacs_ide_keyboard_python.txt
|
Q:
.NET Remoting Server Only processes One request
I am using .NET Remoting. My server/hoster is a Windows Service. It will sometimes work just fine and other times it will process one request and then it does not process any more (until I restart it). It is running as a Windows service. Here is the code from the Windows Service:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Linq;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
using System.ServiceProcess;
using System.Text;
using Remoting;
namespace CreateReview
{
public partial class Service1 : ServiceBase
{
public Service1()
{
InitializeComponent();
}
readonly TcpChannel channel = new TcpChannel(8180);
protected override void OnStart(string[] args)
{
// Create an instance of a channel
ChannelServices.RegisterChannel(channel, false);
// Register as an available service with the name HelloWorld
RemotingConfiguration.RegisterWellKnownServiceType(
typeof(SampleObject),
"SetupReview",
WellKnownObjectMode.SingleCall);
}
protected override void OnStop()
{
}
}
}
Thanks for any help offered.
Vaccano
A:
As a SingleCall type, your SampleObject will be created for every call the client makes. This suggests to me that your object is at fault, and you don't show what it does. You need to look at any dependencies it has on shared resources or locks. Try writing some debug output in the SampleObject's constructor to see how far the remoting call gets.
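A minimal sketch of that diagnostic (SampleObject's real body isn't shown in the question, so this is illustrative only):
using System;
using System.Diagnostics;

public class SampleObject : MarshalByRefObject
{
    public SampleObject()
    {
        // If this stops appearing after the first request, the channel is no
        // longer dispatching calls; if it appears but the call still hangs,
        // look for a shared resource or lock inside the method being invoked.
        Trace.WriteLine("SampleObject created: " + DateTime.Now.ToString("o"));
    }
}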
|
.NET Remoting Server Only processes One request
|
I am using .NET Remoting. My server/hoster is a Windows Service. It will sometimes work just fine and other times it will process one request and then it does not process any more (until I restart it). It is running as a Windows service. Here is the code from the Windows Service:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Linq;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
using System.ServiceProcess;
using System.Text;
using Remoting;
namespace CreateReview
{
public partial class Service1 : ServiceBase
{
public Service1()
{
InitializeComponent();
}
readonly TcpChannel channel = new TcpChannel(8180);
protected override void OnStart(string[] args)
{
// Create an instance of a channel
ChannelServices.RegisterChannel(channel, false);
// Register as an available service with the name HelloWorld
RemotingConfiguration.RegisterWellKnownServiceType(
typeof(SampleObject),
"SetupReview",
WellKnownObjectMode.SingleCall);
}
protected override void OnStop()
{
}
}
}
Thanks for any help offered.
Vaccano
|
[
"as a SingleCall type, your SampleObject will be created for every call the client makes. This suggests to me that your object is at fault, and you don't show what it does. You need to look at any dependancies it has on shared resources orlocks. Try writing some debug out in the SampleObject's constructor to see how far the remoting call gets.\n"
] |
[
1
] |
[] |
[] |
[
".net",
"c#",
"remoting"
] |
stackoverflow_0000085083_.net_c#_remoting.txt
|
Q:
How do I efficiently search an array to fill in form fields?
I am looking for an efficient way to pull the data I want out of an array called $submission_info so I can easily auto-fill my form fields. The array size is about 120.
I want to find the field name and extract the content. In this case, the field name is loanOfficer and the content is John Doe.
Output of Print_r($submission_info[1]):
Array (
[field_id] => 2399
[form_id] => 4
[field_name] => loanOfficer
[field_test_value] => ABCDEFGHIJKLMNOPQRSTUVWXYZ
[field_size] => medium
[field_type] => other
[data_type] => string
[field_title] => LoanOfficer
[col_name] => loanOfficer
[list_order] => 2
[admin_display] => yes
[is_sortable] => yes
[include_on_redirect] => yes
[option_orientation] => vertical
[file_upload_dir] =>
[file_upload_url] =>
[file_upload_max_size] => 1000000
[file_upload_types] =>
[content] => John Doe
)
I want to find the field name and extract the content. In this case, the field name is loanOfficer and the content is John Doe.
A:
You're probably best off going through each entry and creating a new associative array out of it.
foreach($submission_info as $elem) {
$newarray[$elem["field_name"]] = $elem["content"];
}
Then you can just find the form fields by getting the value from $newarray[<field you're filling in>]. Otherwise, you're going to have to search $submission_info each time for the correct field.
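For example, once $newarray has been built, filling the field from the question is a single lookup:
$loanOfficer = $newarray['loanOfficer']; // 'John Doe'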
A:
Not sure if this is the optimal solution:
foreach($submission_info as $info){
if($info['field_name'] == 'loanOfficer'){ //check the field name
$content = $info['content']; //store the desired value
        break; //this will stop the loop after the desired item is found
}
}
Next time:
Questions are more helpful to you and others if you generalize them such that they cover some overarching topic that you and maybe others don't understand. Seems like you could use an array refresher course...
A:
I'm assuming that php has an associative array (commonly called dictionary or hashtable). The most efficient routine would be to run over the array once and put the fields into a dictionary keyed on the field name.
Then instead of having to search through the original array when you want to find a specific field (an O(n) operation), you just use the dictionary to retrieve it by the name of the field in an O(1) (or constant) operation. Of course the first pass over the array to populate the dictionary would be O(n) but that's a one time cost rather than paying that same penalty for every lookup.
|
How do I efficiently search an array to fill in form fields?
|
I am looking for an efficient way to pull the data I want out of an array called $submission_info so I can easily auto-fill my form fields. The array size is about 120.
I want to find the field name and extract the content. In this case, the field name is loanOfficer and the content is John Doe.
Output of Print_r($submission_info[1]):
Array (
[field_id] => 2399
[form_id] => 4
[field_name] => loanOfficer
[field_test_value] => ABCDEFGHIJKLMNOPQRSTUVWXYZ
[field_size] => medium
[field_type] => other
[data_type] => string
[field_title] => LoanOfficer
[col_name] => loanOfficer
[list_order] => 2
[admin_display] => yes
[is_sortable] => yes
[include_on_redirect] => yes
[option_orientation] => vertical
[file_upload_dir] =>
[file_upload_url] =>
[file_upload_max_size] => 1000000
[file_upload_types] =>
[content] => John Doe
)
I want to find the field name and extract the content. In this case, the field name is loanOfficer and the content is John Doe.
|
[
"You're probably best off going through each entry and creating a new associative array out of it.\nforeach($submission_info as $elem) {\n $newarray[$elem[\"field_name\"]] = $elem[\"content\"];\n}\n\nThen you can just find the form fields by getting the value from $newarray[<field you're filling in>]. Otherwise, you're going to have to search $submission_info each time for the correct field.\n",
"Not sure if this is the optimal solution:\nforeach($submission_info as $info){\n if($info['field_name'] == 'loanOfficer'){ //check the field name\n $content = $info['content']; //store the desired value\n continue; //this will stop the loop after the desired item is found\n }\n}\n\nNext time:\nQuestions are more helpful to you and others if you generalize them such that they cover some overarching topic that you and maybe others don't understand. Seems like you could use an array refresher course...\n",
"I'm assuming that php has an associative array (commonly called dictionary or hashtable). The most efficient routine would be to run over the array once and put the fields into a dictionary keyed on the field name.\nThen instead of having to search through the original array when you want to find a specific field (an O(n)) operation. You just used the dictionary to retrieve it by the name of the field in an O(1) (or constant) operation. Of course the first pass over the array to populate the dictionary would be O(n) but that's a one time cost rather than paying that same penalty for every lookup. \n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"arrays",
"php"
] |
stackoverflow_0000084800_arrays_php.txt
|
Q:
Uninstall Sharepoint Infrastructure Update
I installed WSS Infrastructure Update and MOSS Infrastructure Update (http://technet.microsoft.com/en-us/office/sharepointserver/bb735839.aspx) and now I can't restore the content database on an older version. Do you know if there is a way to uninstall it ?
A:
The only option that I can think of is to restore the backup on another machine that has the same level of updates as when the backup was done, upgrade the whole box to Infrastructure Update, backup this environment and restore it in your already-upgraded-environment.
A:
There are no supported methods to uninstall updates in MOSS or WSS. Your only option is to restore a backup, which is why you should always back up everything and test the integrity of the backup before installing updates.
A:
There is no such an option, as others pointed out, the only option is to restore from a backup.
When you are trying to restore a content database to a different box both should have the same set of updates installed, otherwise you might experience all kind of problems.
|
Uninstall Sharepoint Infrastructure Update
|
I installed WSS Infrastructure Update and MOSS Infrastructure Update (http://technet.microsoft.com/en-us/office/sharepointserver/bb735839.aspx) and now I can't restore the content database on an older version. Do you know if there is a way to uninstall it ?
|
[
"The only option that I can think of is to restore the backup on another machine that has the same level of updates as when the backup was done, upgrade the whole box to Infrastructure Update, backup this environment and restore it in your already-upgraded-environment.\n",
"There are no supported methods to uninstall updates in MOSS or WSS. Your only option is to restore a backup, which is why you should always back up everything and test the integrity of the backup before installing updates.\n",
"There is no such an option, as others pointed out, the only option is to restore from a backup.\nWhen you are trying to restore a content database to a different box both should have the same set of updates installed, otherwise you might experience all kind of problems.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"moss",
"sharepoint",
"wss"
] |
stackoverflow_0000084132_moss_sharepoint_wss.txt
|
Q:
Object Oriented Update Approach
I've been tasked with maintaining an application originally written in VB6. It has since been imported into VB .Net and to say the least the code is anything but Object Oriented. The code is riddled with classes which contain nothing more than Public Shared attributes(variables) and methods(functions), the result of which restricts the application from opening more than one project at a time.
A project consists of a XML file which contains general project settings, as well as the location to an Access database which contains other project related data. Over the years the format of the XML file has been modified, and an update and versioning strategy has been adopted. The chosen strategy performs an update upon open whenever an old version is encountered. Thus far, updates have only consisted of rearranging data within the XML file or making database schema changes and moving data from the XML file to the database.
Having quite a bit of background in OOP it's easy for me to see that a project should be a self contained object which other objects interact with. However, I fail to see how to apply the chosen update strategy in OOP.
The problem of implementing the chosen update strategy in OOP has kept me from using OOP as of yet. If anyone has experience with such a task, or recommendations on how to proceeded I'd appreciate any assistance you can provide.
A:
Build a class which reads the XML file in, and provides properties/methods/etc based upon the data in that file. When the class writes the XML file back out, have it format in the manner needed for the new version.
So, basically, the class will be able to read in the current version, plus all the older versions, but it will always write out the new version.
Data would be held in internal variables of the class, rather than having to scan the XML file every time you need something.
Adding a VERSION node to your XML file will also help in this case.
A:
You might have answered your own question when you used the word strategy (i.e. the Strategy Design Pattern).
Possibly you could:
Create a project class that knows nothing about conversions but accepts a strategy object.
Create a hierarchy of classes to model each possible conversion strategy.
Use a factory method to build your project object with the right strategy
A:
I don't understand why this is a troubling problem. It could be solved in any number of ways.
If you want to do a full object oriented enterprisey type thing, you could take any subset of the following solution:
Create an interface IProject which
describes how other objects interact
with a project.
Create the current implementation of
Project which implements IProject
and can read and write to the
current version.
Extend Project for each past
version, overriding the xml and
database read methods and having the
constructor call write when these
classes are instantiated
For extra enterpriseyness, create a
ProjectFactory, which detects the
version of the file and instanciates
the correct version.
If further versions are needed,
rewrite the current Project to do
the same thing as past projects,
accessing the new version of Project
with all the reads and then calling
write.
The advantage of this solution is that you can continue meandering about with different versions and each new version only requires the ability to be updated to from the previous version, with all previous versions cascading up to the second to last version.
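To make the shape of that concrete, a rough C# sketch (type and member names are invented for illustration, not taken from the application):
public interface IProject
{
    void Load(string path);
    void Save(string path); // always writes the current format
}

public class Project : IProject // current version
{
    public virtual void Load(string path) { /* parse current XML + database */ }
    public void Save(string path) { /* emit current XML + database */ }
}

public class ProjectV1 : Project // one past version
{
    public override void Load(string path)
    {
        /* parse the old layout into the same in-memory model */
        Save(path); // immediately persist in the new format
    }
}

public static class ProjectFactory
{
    public static IProject Open(string path)
    {
        // ReadVersion is an assumed helper that inspects the file's VERSION node.
        IProject p = ReadVersion(path) == 1 ? new ProjectV1() : new Project();
        p.Load(path);
        return p;
    }
    private static int ReadVersion(string path) { return 2; /* placeholder */ }
}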
|
Object Oriented Update Approach
|
I've been tasked with maintaining an application originally written in VB6. It has since been imported into VB .Net and to say the least the code is anything but Object Oriented. The code is riddled with classes which contain nothing more than Public Shared attributes(variables) and methods(functions), the result of which restricts the application from opening more than one project at a time.
A project consists of a XML file which contains general project settings, as well as the location to an Access database which contains other project related data. Over the years the format of the XML file has been modified, and an update and versioning strategy has been adopted. The chosen strategy performs an update upon open whenever an old version is encountered. Thus far, updates have only consisted of rearranging data within the XML file or making database schema changes and moving data from the XML file to the database.
Having quite a bit of background in OOP it's easy for me to see that a project should be a self contained object which other objects interact with. However, I fail to see how to apply the chosen update strategy in OOP.
The problem of implementing the chosen update strategy in OOP has kept me from using OOP as of yet. If anyone has experience with such a task, or recommendations on how to proceeded I'd appreciate any assistance you can provide.
|
[
"Build a class which reads the XML file in, and provides properties/methods/etc based upon the data in that file. When the class writes the XML file back out, have it format in the manner needed for the new version.\nSo, basically, the class will be able to read in the current version, plus all the older versions, but it will always write out the new version.\nData would be held in internal variables of the class, rather than having to scan the XML file every time you need something.\nAdding a VERSION node to your XML file will also help in this case.\n",
"You might have answered your own question when you used the word strategy (i.e. the Strategy Design Pattern). \nPossibly you could:\n\nCreate a project class that knows nothing about conversions but accepts a strategy object.\nCreate a hierarchy of classes to model each possible conversion strategy.\nUse a factory method to build your project object with the right strategy\n\n",
"I don't understand why this is a troubling problem. It could be solved in any number of ways.\nIf you want to do a full object oriented enterprisey type thing, you could take any subset of the following solution:\n\nCreate an interface IProject which\ndescribes how other objects interact\nwith a project.\nCreate the current implementation of\nProject which implements IProject\nand can read and write to the\ncurrent version.\nExtend Project for each past\nversion, overriding the xml and\ndatabase read methods and having the\nconstructor call write when these\nclasses are instanced\nFor extra enterpriseyness, create a\nProjectFactory, which detects the\nversion of the file and instanciates\nthe correct version.\nIf further versions are needed,\nrewrite the current Project to do\nthe same thing as past projects,\naccessing the new version of Project\nwith all the reads and then calling\nwrite.\n\nThe advantage of this solution is that you can continue meandering about with different versions and each new version only requires the ability to be updated to from the previous version, with all previous versions cascading up to the second to last version.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"oop",
"updates",
"version"
] |
stackoverflow_0000085010_oop_updates_version.txt
|
Q:
What's the easiest way to use C source code in a Java application?
I found this open-source library that I want to use in my Java application. The library is written in C and was developed under Unix/Linux, and my application will run on Windows. It's a library of mostly mathematical functions, so as far as I can tell it doesn't use anything that's platform-dependent, it's just very basic C code. Also, it's not that big, less than 5,000 lines.
What's the easiest way to use the library in my application? I know there's JNI, but that involves finding a compiler to compile the library under Windows, getting up-to-date with the JNI framework, writing the code, etc. Doable, but not that easy. Is there an easier way? Considering the small size of the library, I'm tempted to just translate it to Java. Are there any tools that can help with that?
EDIT
I ended up translating the part of the library that I needed to Java. It's about 10% of the library so far, though it'll probably increase with time. C and Java are pretty similar, so it only took a few hours. The main difficulty is fixing the bugs that get introduced by mistakes in the translation.
Thank you everyone for your help. The proposed solutions all seemed interesting and I'll look into them when I need to link to larger libraries. For a small piece of C code, manual translation was the simplest solution.
A:
On the Java GNU Scientific Library project I used Swig to generate the JNI wrapper classes around the C libraries. Great tool, and can also generate wrapper code in several languages including Python. Highly recommended.
A:
Your best bet is probably to grab a good C book (K&R: The C Programming Language), a cup of tea, and start translating! I would be skeptical about trusting a translation program; more often than not the best translator is yourself! If you do this once, then it's done and you don't need to keep re-doing it. There might be some complications if the library is open source; you'll need to check the licence carefully about this. Another point to consider is that there is always going to be some element of risk and potential error in the translation, therefore it might be necessary to consider writing some tests to ensure that the translation is correct.
Are there no equivalent Java Math functions?
As you yourself comment, the JNI way is possible; as for a C compiler, 'Bloodshed Dev-C++' might work, but it is a lot of effort for ~5,000 lines.
A:
I'd compile it and use JNA.
JNA (Java Native Access) basically does at runtime what JNI does at compile time, and doesn't need any non-Java code (not much Java either).
I don't know about its performance or usability in your case but I'd give it a try.
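For illustration, a JNA binding for a hypothetical exported C function double f(double x) in mathlib.dll might look like this (library and function names are assumptions):
import com.sun.jna.Library;
import com.sun.jna.Native;

public class MathLibDemo {
    // JNA resolves "mathlib" to mathlib.dll on Windows (libmathlib.so on Linux).
    public interface MathLib extends Library {
        MathLib INSTANCE = (MathLib) Native.loadLibrary("mathlib", MathLib.class);

        double f(double x); // hypothetical exported function
    }

    public static void main(String[] args) {
        System.out.println(MathLib.INSTANCE.f(2.0));
    }
}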
A:
Are you sure you want to use the C library, even if it is that small?
Once 64 bit gets a little more common, you'll need to start building/deploying both 32 bit and 64 bit versions of the library as well. And depending on what the C code is like, you may or may not need to update the code to make it build as 64 bit.
If the C library is simple, it may be easier to just port the C library to pure java and not have to deal with building/deploying a JNI library, the C library and the java code.
A:
Indeed, JNA looks impressive; it requires less effort than directly using JNI. But in any case you'd lose the platform independence, and since you're probably only using a small part of it, you might consider translating what you actually need.
A:
Well, there is AMPC. It is a C compiler for Windows, MacOS X and Linux that can compile C code into Java byte code (the kind of code that runs on a Java virtual machine).
AMPC
However, it is commercial and costs $199 per license. I doubt that pays off for you ;) I don't know of any free compiler like that.
OTOH, Java and C are pretty similar. You could probably refactor the C code to Java (structs can be replaced with objects with public instance variables) and pointer operations can usually be translated to something else (array operations for example). Though I guess you don't want to go through 5,000 lines of code, do you?
Using JNI makes the code platform dependent; however, if, as you say, it is platform-independent C, there is no reason why your Java code should be platform dependent. OTOH, depending on how costly these calculations are, using JNI might actually buy you a performance gain, as when it comes to raw number crunching throughput, C can still beat Java in speed. However, JNI calls are very costly, so if the calculation is just a very simple, quick calculation, the JNI call itself might take equally long (or even longer) than the calculation performed, in which case using JNI will buy you nothing but slow down your app and cause memory overhead.
|
What's the easiest way to use C source code in a Java application?
|
I found this open-source library that I want to use in my Java application. The library is written in C and was developed under Unix/Linux, and my application will run on Windows. It's a library of mostly mathematical functions, so as far as I can tell it doesn't use anything that's platform-dependent, it's just very basic C code. Also, it's not that big, less than 5,000 lines.
What's the easiest way to use the library in my application? I know there's JNI, but that involves finding a compiler to compile the library under Windows, getting up-to-date with the JNI framework, writing the code, etc. Doable, but not that easy. Is there an easier way? Considering the small size of the library, I'm tempted to just translate it to Java. Are there any tools that can help with that?
EDIT
I ended up translating the part of the library that I needed to Java. It's about 10% of the library so far, though it'll probably increase with time. C and Java are pretty similar, so it only took a few hours. The main difficulty is fixing the bugs that get introduced by mistakes in the translation.
Thank you everyone for your help. The proposed solutions all seemed interesting and I'll look into them when I need to link to larger libraries. For a small piece of C code, manual translation was the simplest solution.
|
[
"On the Java GNU Scientific Library project I used Swig to generate the JNI wrapper classes around the C libraries. Great tool, and can also generate wrapper code in several languages including Python. Highly recommended.\n",
"Your best bet is probably to grab a good c book (K&R: The C Progranmming language) a cup of tea and start translating! I would be skeptical about trusting a translation program, more often then not the best translator is yourself! If you do this one, then its done and you don't need to keep re-doing it. There might be some complications if the library is open source, you'll need to check the licence carefully about this. Another point to consider is that there is always going to be some element of risk and potential error in the translation, therefore it might be necessary to consider writing some tests to ensure that the translation is correct.\nAre there no JAVA equivelent Math functions?\nAs you yourself comment the JNI way is possible, as for a c compiler you could probably use 'Bloodshead Dev-c++' might work, but it is a lot of effort for ~5000 lines.\n",
"I'd compile it and use JNA. \nJNA (Java Native Access) is basically does in runtime what JNI at compile time and doesnt need any non-java code (not much java either).\nI don't know about its performance or usability in your case but I'd give it a try.\n",
"Are you sure you want to use the C library, even if it is that small?\nOnce 64 bit gets a little more common, you'll need to start building/deploying both 32 bit and 64 bit versions of the library as well. And depending on what the C code is like, you may or may not need to update the code to make it build as 64 bit.\nIf the C library is simple, it may be easier to just port the C library to pure java and not have to deal with building/deploying a JNI library, the C library and the java code.\n",
"Indeed, JNA looks impressive, it requires less effort than directly using JNI. But in any case you'd lose the platform independence, and since you're probably only using a small part of it, you might consider translating what you actually need. \n",
"Well, there is AMPC. It is a C compiler for Windows, MacOS X and Linux, that can compile C code into Java Byte Code (the kind of code, that runs on a Java virtual machine).\nAMPC\nHowever, it is commercial and costs $199 per license. I doubt that pays off for you ;) I don't know of any free compiler like that.\nOTOH, Java and C are pretty similar. You could probably refactor the C Code to Java (structs can be replaced with objects with public instance variables) and pointer operations can usually be translated to something else (array operations for example). Though I guess you don't want to go through 5,000 lines of code, do you?\nUsing JNI makes the code platform dependent, however if you say it is platform independent C, there is no reason why your Java code should be platform dependent. OTOH, depending on how costly these calculations are, using JNI might actually buy you a performance gain, as when it comes to raw number crunching throughput, C can still beat Java in speed. However JNI calls are very costly, so if the calculation is just a very simple, quick calculation, the JNI call itself might take equally long (or even longer) than the calculation performed, in which case using JNI will buy you nothing, but slowing down your app and causing memory overhead.\n"
] |
[
11,
5,
4,
1,
0,
0
] |
[
"Have you tried using:\nSystem.loadLibrary(\"mylibrary.dll\");\n\nNot sure if this will work with a pure C library but it's probably worth a shot. :)\n"
] |
[
-2
] |
[
"c",
"java",
"translation"
] |
stackoverflow_0000083299_c_java_translation.txt
|
Q:
JConsole Config
In JBoss' run.bat, add:
set JAVA_OPTS=%JAVA_OPTS% -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9987 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
To start jconsole:
JDK/bin>jconsole localhost:9987
A:
Yes, that should work. If it doesn't, then use 'ps' (or your platform's equivalent) to check whether those arguments are making it on the JVM's command line.
Was that the question?
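If you'd rather verify the endpoint programmatically than through JConsole, a minimal sketch that connects the same way JConsole does (assuming the port 9987 configured above):
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxPing {
    public static void main(String[] args) throws Exception {
        // Same address JConsole resolves "localhost:9987" to.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9987/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbeans = connector.getMBeanServerConnection();
            System.out.println("MBean count: " + mbeans.getMBeanCount());
        } finally {
            connector.close();
        }
    }
}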
|
JConsole Config
|
In JBoss' run.bat, add:
set JAVA_OPTS=%JAVA_OPTS% -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9987 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
To start jconsole:
JDK/bin>jconsole localhost:9987
|
[
"Yes, that should work. If it doesn't, then use 'ps' (or your platform's equivalent) to check whether those arguments are making it on the JVM's command line.\nWas that the question?\n"
] |
[
1
] |
[] |
[] |
[
"jconsole"
] |
stackoverflow_0000085254_jconsole.txt
|
Q:
How to get VMWARE ESX 3i Image from infrastructure client using script
I've downloaded the SDK to try to copy the image from the server to my PC, but there is no cmdlet for copying, just ones for getting info, moving, etc.
Any help?
A:
You may have problems running the image locally; you'll want to use VMware Converter to get the image into a format you can use (in VMware Server, etc.).
Otherwise, use vifs in the Remote CLI: http://www.vmware.com/pdf/vi3_35/esx_3/r35u2/vi3_35_25_u2_rcli.pdf
|
How to get VMWARE ESX 3i Image from infrastructure client using script
|
I've downloaded the SDK to try to copy the image from the server to my PC, but there is no cmdlet for copying, just ones for getting info, moving, etc.
Any help?
|
[
"You may have problems running the image locally, you'll want to use VMWare converter to transfer the image in a format you can use (in VMWare server etc).\nOtherwise, use vifs in the remote cli: http://www.vmware.com/pdf/vi3_35/esx_3/r35u2/vi3_35_25_u2_rcli.pdf\n"
] |
[
1
] |
[] |
[] |
[
"esx",
"vmware"
] |
stackoverflow_0000080969_esx_vmware.txt
|
Q:
Best way to set up CruiseControl for IIS 5.1 dev box and IIS6 server
Can anyone point me in the right direction on this? From reading the FAQs at CruiseControl, it appears that you should develop in the same environment as your production server.
But I have Windows XP (which only runs IIS 5.1) on my dev machine, and the server is Windows 2003.
A:
We have a similar setup that we have been using successfully for over a year now. Our CC.NET server is on a Windows 2003 server and all development happens on Windows XP/Vista machines. Code checked into SVN is pulled down onto the Windows 2003 server, built, and pushed onto our hosting boxes.
|
Best way to set up CruiseControl for IIS 5.1 dev box and IIS6 server
|
Can anyone point me in the right direction on this? From reading the FAQs at CruiseControl, it appears that you should develop in the same environment as your production server.
But I have Windows XP (which only runs IIS 5.1) on my dev machine, and the server is Windows 2003.
|
[
"We have a similar setup that we have been using successfully over a year now. Our CC.Net server is on a Windows 2003 server and all development happens on Windows XP/Vista machines. Code checked into SVN is pulled down onto the Windows 2003 server, built and pushed onto our hosting boxes.\n"
] |
[
1
] |
[] |
[] |
[
".net",
"cruisecontrol.net",
"iis",
"svn"
] |
stackoverflow_0000085129_.net_cruisecontrol.net_iis_svn.txt
|
Q:
Is there a Problem with JPA Entities, Oracle 10g and Calendar Type properties?
I'm experiencing the following very annoying behaviour when using JPA entities in conjunction with Oracle 10g.
Suppose you have the following entity.
@Entity
@Table(name = "T_Order")
public class TOrder implements Serializable {
private static final long serialVersionUID = 2235742302377173533L;
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Integer id;
@Column(name = "activationDate")
private Calendar activationDate;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
public Calendar getActivationDate() {
return activationDate;
}
public void setActivationDate(Calendar activationDate) {
this.activationDate = activationDate;
}
}
This entity is mapped to Oracle 10g, so in the DB there will be a table T_ORDER with a primary key NUMBER column ID and a TIMESTAMP column activationDate.
Let's suppose I create an instance of this class with the activation date 15. Sep 2008 00:00AM. My local timezone is CEST, which is GMT+02:00. When I persist this object and select the data from the table T_ORDER using sqlplus, I find out that the table actually stores 14. Sep 2008 22:00, which is OK so far, because the Oracle DB timezone is GMT.
But now the annoying part. When I read this entity back into my Java program, I find out that the Oracle timezone is ignored and I get 14. Sep 2008 22:00 CEST, which is definitely wrong.
So basically, when writing to the DB the timezone information is used; when reading, it is ignored.
Is there any solution for this out there? The simplest solution, I guess, would be to set the Oracle DB's timezone to GMT+02, but unfortunately I can't do this because there are other applications using the same server.
We use the following technology
MyEclipse 6.5
JPA with Hibernate 3.2
Oracle 10g thin JDBC Driver
A:
You should not use a Calendar for accessing dates from the database, for this exact reason. You should use java.util.Date as so:
@Temporal(TemporalType.TIMESTAMP)
@Column(name="activationDate")
public Date getActivationDate() {
return this.activationDate;
}
java.util.Date points to a moment in time, irrespective of any timezones. Calendar can be used to format a date for a particular timezone or locale.
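To illustrate that last point, a small sketch that formats the loaded instant for a particular zone (the zone id is just an example):
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ZoneDemo {
    public static void main(String[] args) {
        Date activation = new Date(); // e.g. the value loaded from the entity
        SimpleDateFormat fmt = new SimpleDateFormat("dd. MMM yyyy HH:mm");
        fmt.setTimeZone(TimeZone.getTimeZone("Europe/Berlin")); // CEST in summer
        System.out.println(fmt.format(activation));
    }
}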
A:
I already had my share of problems with JPA and timestamps. I've been reading in the oracle forums and please check the following:
The field in the database should be TIMESTAMP_TZ and not just TIMESTAMP
Try adding the annotation @Temporal(value = TemporalType.TIMESTAMP)
If you don't really need the timezone, put in a date or timestamp field.
|
Is there a Problem with JPA Entities, Oracle 10g and Calendar Type properties?
|
I'm experiencing the following very annoying behaviour when using JPA entities in conjunction with Oracle 10g.
Suppose you have the following entity.
@Entity
@Table(name = "T_Order")
public class TOrder implements Serializable {
private static final long serialVersionUID = 2235742302377173533L;
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Integer id;
@Column(name = "activationDate")
private Calendar activationDate;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
public Calendar getActivationDate() {
return activationDate;
}
public void setActivationDate(Calendar activationDate) {
this.activationDate = activationDate;
}
}
This entity is mapped to Oracle 10g, so in the DB there will be a table T_ORDER with a primary key NUMBER column ID and a TIMESTAMP column activationDate.
Let's suppose I create an instance of this class with the activation date 15. Sep 2008 00:00AM. My local timezone is CEST, which is GMT+02:00. When I persist this object and select the data from the table T_ORDER using sqlplus, I find out that the table actually stores 14. Sep 2008 22:00, which is OK so far, because the Oracle DB timezone is GMT.
But now the annoying part. When I read this entity back into my Java program, I find out that the Oracle timezone is ignored and I get 14. Sep 2008 22:00 CEST, which is definitely wrong.
So basically, when writing to the DB the timezone information is used; when reading, it is ignored.
Is there any solution for this out there? The simplest solution, I guess, would be to set the Oracle DB's timezone to GMT+02, but unfortunately I can't do this because there are other applications using the same server.
We use the following technology
MyEclipse 6.5
JPA with Hibernate 3.2
Oracle 10g thin JDBC Driver
|
[
"You should not use a Calendar for accessing dates from the database, for this exact reason. You should use java.util.Date as so:\n@Temporal(TemporalType.TIMESTAMP)\n@Column(name=\"activationDate\")\npublic Date getActivationDate() {\n return this.activationDate;\n}\n\njava.util.Date points to a moment in time, irrespective of any timezones. Calendar can be used to format a date for a particular timezone or locale.\n",
"I already had my share of problems with JPA and timestamps. I've been reading in the oracle forums and please check the following:\n\nThe field in the database should be TIMESTAMP_TZ and not just TIMESTAMP\nTry adding the annotation @Temporal(value = TemporalType.TIMESTAMP)\nIf you don't really need the timezone, put in a date or timestamp field.\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"ejb_3.0",
"java",
"jpa",
"oracle"
] |
stackoverflow_0000082235_ejb_3.0_java_jpa_oracle.txt
|
Q:
Customizing Search Results Display in Sharepoint Services 3.0 Wiki
I'm looking at using a Windows SharePoint Services 3.0 wiki as a metadata repository. We basically want a community-driven dictionary, and for various reasons we're using SharePoint instead of, say, MediaWiki.
What can I do to customize or completely replace searchresults.aspx?
Features I'd add if I knew how:
Automatically load the #1 hit if it is a 100% match to the search term
Show the first few lines of each result as a preview so users don't have to click through to bad results
Add a "Page doesn't exist, click here to create it" link in cases where there's not a 100% match
I've got Sharepoint Designer installed and it looks like I'll be able to use it to upload any custom .aspx files I create but I don't see that it will give me access to searchresults.aspx.
Note: Since I plan to access this search tool from an external site via URL parameters it should be fine to leave the existing searchresults.aspx unchanged and just load this solution as a complementary search option.
A:
Yes, everything is possible but you will need to customize it a little bit.
I would recommend you to build a custom web part to display your results. Here is a nice article to start with: http://msdn.microsoft.com/en-us/library/ms584220.aspx
|
Customizing Search Results Display in Sharepoint Services 3.0 Wiki
|
I'm looking at using a Windows SharePoint Services 3.0 wiki as a metadata repository. We basically want a community-driven dictionary, and for various reasons we're using SharePoint instead of, say, MediaWiki.
What can I do to customize or completely replace searchresults.aspx?
Features I'd add if I knew how:
Automatically load the #1 hit if it is a 100% match to the search term
Show the first few lines of each result as a preview so users don't have to click through to bad results
Add a "Page doesn't exist, click here to create it" link in cases where there's not a 100% match
I've got Sharepoint Designer installed and it looks like I'll be able to use it to upload any custom .aspx files I create but I don't see that it will give me access to searchresults.aspx.
Note: Since I plan to access this search tool from an external site via URL parameters it should be fine to leave the existing searchresults.aspx unchanged and just load this solution as a complementary search option.
|
[
"Yes, everything is possible but you will need to customize it a little bit.\nI would recommend you to build a custom web part to display your results. Here is a nice article to start with: http://msdn.microsoft.com/en-us/library/ms584220.aspx\n"
] |
[
2
] |
[] |
[] |
[
"metadata",
"search",
"sharepoint",
"wiki"
] |
stackoverflow_0000085336_metadata_search_sharepoint_wiki.txt
|
Q:
Handling and storing elapsed time
I'm having problems deciding on the best way to handle and store time measurements.
I have an app that has a textbox that allows the users to input time in either hh:mm:ss or mm:ss format.
So I was planning on parsing this string, tokenizing it on the colons and creating TimeSpan (or using TimeSpan.Parse() and just adding a "00:" to the mm:ss case) for my business logic. Ok?
How do I store this in a database, though? What would the field type be? DateTime seems wrong; I don't want a time of 00:54:12 to be stored as 1901-01-01 00:54:12, which seems a bit poor.
A:
TimeSpan has an Int64 Ticks property that you can store instead, and a constructor that takes a Ticks value.
A:
I think the simplest is to just convert user input into an integer number of seconds. So 54:12 == 3252 seconds, so store the 3252 in your database or wherever. Then when you need to display it to the user, you can convert it back again.
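A minimal C# sketch of that round trip, assuming the hh:mm:ss / mm:ss input formats from the question:
using System;

static class ElapsedTime
{
    // Parse "hh:mm:ss" or "mm:ss" into total seconds for storage.
    public static int ParseToSeconds(string input)
    {
        if (input.Split(':').Length == 2)
            input = "00:" + input; // promote mm:ss to hh:mm:ss
        return (int)TimeSpan.Parse(input).TotalSeconds;
    }

    // Format stored seconds back for display.
    public static string Format(int seconds)
    {
        return TimeSpan.FromSeconds(seconds).ToString();
    }
}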
A:
For periods less than a day, just use seconds as others have said.
For longer periods, it depends on your DB engine. If SQL Server prior to version 2008, you want a datetime. It's okay; you can just ignore the default 1/1/1900 date they'll all have. If you are fortunate enough to have SQL Server 2008, then there are separate Date and Time datatypes you can use. The advantage of using a real datetime/time type is the use of the DateDiff function for comparing durations.
A:
Most databases have some sort of time interval type. The answer depends on which database you're talking about. For Oracle, it's just a floating point NUMBER that represents the number of days (including fractional days). You can add/subtract that to/from any DATE type and you get the right answer.
A:
As an integer count of seconds (or Milliseconds as appropriate)
A:
Are you collecting both the start time and stop time? If so, you could use the "timestamp" data type, if your DBMS supports that. If not, just use a date/time type. Now, you've said you don't want the date part to be stored, but consider the case where the time period spans midnight (you start at 23:55:01 and end at 00:05:14, for example) unless you also have the date in there. There are standard built-in functions to return the elapsed time (in seconds) between two date-time values.
A:
Go with integers for seconds or minutes. Seconds is probably better; you'll never kick yourself for choosing something with too much precision. Also, for your UI, consider using multiple text inputs so you don't have to worry about the user actually typing in the ":" properly. It's also much easier to add other constraints, such as the minute and second values containing 0-59.
A:
An int type should do it, storing it as seconds and parsing it back and forth.
http://msdn.microsoft.com/en-us/library/ms187745.aspx
|
Handling and storing elapsed time
|
I'm having problems deciding on the best way to handle and store time measurements.
I have an app that has a textbox that allows the users to input time in either hh:mm:ss or mm:ss format.
So I was planning on parsing this string, tokenizing it on the colons and creating TimeSpan (or using TimeSpan.Parse() and just adding a "00:" to the mm:ss case) for my business logic. Ok?
How do I store this in a database, though? What would the field type be? DateTime seems wrong; I don't want a time of 00:54:12 to be stored as 1901-01-01 00:54:12, which seems a bit poor.
|
[
"TimeSpan has an Int64 Ticks property that you can store instead, and a constructor that takes a Ticks value.\n",
"I think the simplest is to just convert user input into a integer number of seconds. So 54:12 == 3252 seconds, so store the 3252 in your database or wherever. Then when you need to display it to the user, you can convert it back again. \n",
"For periods less than a day, just use seconds as other have said. \nFor longer periods, it depends on your db engine. If SQL Server, prior to version 2008 you want a datetime. It's okay- you can just ignore the default 1/1/1900 date they'll all have. If you are fortunate enough to have sql server 2008, then there are separate Date and Time datatypes you can use. The advantage with using a real datetime/time type is the use of the DateDiff function for comparing durations.\n",
"Most databases have some sort of time interval type. The answer depends on which database you're talking about. For Oracle, it's just a floating point NUMBER that represents the number of days (including fractional days). You can add/subtract that to/from any DATE type and you get the right answer.\n",
"As an integer count of seconds (or Milliseconds as appropriate)\n",
"Are you collecting both the start time and stop time? If so, you could use the \"timestamp\" data type, if your DBMS supports that. If not, just as a date/time type. Now, you've said you don't want the date part to be stored - but consider the case where the time period spans midnight - you start at 23:55:01 and end at 00:05:14, for example - unless you also have the date in there. There are standard build in functions to return the elapsed time (in seconds) between two date-time values.\n",
"Go with integers for seconds or minutes. Seconds is probably better. you'll never kick yourself for choosing something with too much precision. Also, for your UI, consider using multiple text inputs you don't have to worry about the user actually typing in the \":\" properly. It's also much easier to add other constraints such as the minute and second values conting containing 0-59.\n",
"and int type should do it, storing it as seconds and parsing it back and forth\nhttp://msdn.microsoft.com/en-us/library/ms187745.aspx\n"
] |
[
9,
3,
3,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c#",
"database",
"datetime",
"timespan"
] |
stackoverflow_0000085307_c#_database_datetime_timespan.txt
|
Q:
Most common cause of "java.lang.NullPointerException" when dealing with XMLs?
My strongest lead is that the code that deals with the incoming XMLs is actually receiving an invalid/incomplete file, hence failing the DOM parsing. Any suggestions?
A:
Incomplete file is definitely the place to start looking. I'd print out the file right before the point you parse it to see what's getting sent to the parser. If it's incomplete it will be obvious. If it's invalid, you'll have a little searching to do.
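A quick sketch of that dump-before-parse idea in Java; the file name is just a placeholder:
import java.io.StringReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class DumpBeforeParse {
    public static void main(String[] args) throws Exception {
        // Print exactly what the parser will see.
        String xml = new String(Files.readAllBytes(Paths.get("incoming.xml")),
                StandardCharsets.UTF_8);
        System.out.println(xml);

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
    }
}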
A:
My first guess would be that the DOM-using code is treating elements that are marked as optional in the DTD as compulsory.
Edited to add:
What I mean is that unless you validate against a DTD, you cannot expect something like the following (example using dom4j) to return anything but null.
doc.selectSingleNode("//some/element/in/a/structure");
The same is of course true if you're stringing element navigation calls together, or generally don't check return values before using them.
A:
You should have a stack trace pointing to where your NPE is thrown. That should narrow down the number of variables that can be null. Rather than reaching for the debugger or printf output, I suggest adding appropriate checks and throwing an exception as soon as the error can be detected. It's a good habit to get into to avoid mysterious problems later.
A:
Ideally you should be running your java application inside a debugger, thus when an uncaught exception is thrown you can examine the callstack, variables, etc and see exactly what line caused the crash, and perhaps which data is null that got used.
If you can't use a debugger for whatever reason, then compile your application with debugging support, and add an exception handler for this particular error, and print out the stack trace. Again, this will show exactly what line in what file caused the crash.
|
Most common cause of "java.lang.NullPointerException" when dealing with XMLs?
|
My strongest lead is that the code that deals with the incoming XMLs is actually receiving an invalid/incomplete file, hence failing the DOM parsing. Any suggestions?
|
[
"Incomplete file is definitely the place to start looking. I'd print out the file right before the point you parse it to see what's getting sent to the parser. If it's incomplete it will be obvious. If it's invalid, you'll have a little searching to do.\n",
"My first guess would be that the DOM-using code is treating elements that are marked as optional in the DTD as compulsory.\nEdited to add:\nWhat I mean is that unless you validate against a DTD, you cannot expect something like the following (example using dom4j) to return anything but null.\ndoc.selectSingleNode(\"//some/element/in/a/structure\");\n\nThe same is of course true if you're stringing element navigation calls together, or generally don't check return values before using them.\n",
"You should have a stack trace pointing to where you NPE is thrown. That should narrow down the number of variables that can be null. Rather than getting the debugger or printf out, I suggest adding appropriate checks and throwing an exception where as soon as the error can be detected. It's a good habit to get into to avoid mysterious problems later.\n",
"Ideally you should be running your java application inside a debugger, thus when an uncaught exception is thrown you can examine the callstack, variables, etc and see exactly what line caused the crash, and perhaps which data is null that got used.\nIf you can't use a debugger for whatever reason, then compile your application with debugging support, and add an exception handler for this particular error, and print out the stack trace. Again, this will show exactly what line in what file caused the crash.\n"
] |
[
3,
2,
1,
1
] |
[] |
[] |
[
"dom",
"dtd",
"java",
"parsing",
"xml"
] |
stackoverflow_0000085370_dom_dtd_java_parsing_xml.txt
|
Q:
XML Serialize boolean as 0 and 1
The XML Schema Part 2 specifies that an instance of a datatype that is defined as boolean can have the following legal literals {true, false, 1, 0}.
The following XML, for example, when deserialized, sets the boolean property "Emulate" to true.
<root>
<emulate>1</emulate>
</root>
However, when I serialize the object back to the XML, I get true instead of the numerical value. My question is, is there a way that I can control the boolean representation in the XML?
A:
You can also do this by using some XmlSerializer attribute black magic:
[XmlIgnore]
public bool MyValue { get; set; }
/// <summary>Get a value purely for serialization purposes</summary>
[XmlElement("MyValue")]
public string MyValueSerialize
{
get { return this.MyValue ? "1" : "0"; }
set { this.MyValue = XmlConvert.ToBoolean(value); }
}
You can also use other attributes to hide this member from intellisense if you're offended by it! It's not a perfect solution, but it can be quicker than implementing IXmlSerializable.
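One way to do that hiding, assuming the standard System.ComponentModel attributes (note that EditorBrowsable only affects callers in other assemblies):
using System.ComponentModel;
using System.Xml;
using System.Xml.Serialization;

public class Settings
{
    [XmlIgnore]
    public bool MyValue { get; set; }

    // Hidden from IntelliSense in referencing assemblies, but still serialized.
    [EditorBrowsable(EditorBrowsableState.Never)]
    [XmlElement("MyValue")]
    public string MyValueSerialize
    {
        get { return MyValue ? "1" : "0"; }
        set { MyValue = XmlConvert.ToBoolean(value); }
    }
}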
A:
You can implement IXmlSerializable which will allow you to alter the serialized output of your class however you want. This will entail creating the 3 methods GetSchema(), ReadXml(XmlReader r) and WriteXml(XmlWriter r). When you implement the interface, these methods are called instead of .NET trying to serialize the object itself.
Examples can be found at:
http://www.developerfusion.co.uk/show/4639/ and
http://msdn.microsoft.com/en-us/library/system.xml.serialization.ixmlserializable.aspx
A:
No, not using the default System.Xml.XmlSerializer: you'd need to change the data type to an int to achieve that, or muck around with providing your own serialization code (possible, but not much fun).
However, you can simply post-process the generated XML instead, of course, either using XSLT, or simply using string substitution. A bit of a hack, but pretty quick, both in development time and run time...
|
XML Serialize boolean as 0 and 1
|
The XML Schema Part 2 specifies that an instance of a datatype that is defined as boolean can have the following legal literals {true, false, 1, 0}.
The following XML, for example, when deserialized, sets the boolean property "Emulate" to true.
<root>
<emulate>1</emulate>
</root>
However, when I serialize the object back to the XML, I get true instead of the numerical value. My question is, is there a way that I can control the boolean representation in the XML?
|
[
"You can also do this by using some XmlSerializer attribute black magic:\n[XmlIgnore]\npublic bool MyValue { get; set; }\n\n/// <summary>Get a value purely for serialization purposes</summary>\n[XmlElement(\"MyValue\")]\npublic string MyValueSerialize\n{\n get { return this.MyValue ? \"1\" : \"0\"; }\n set { this.MyValue = XmlConvert.ToBoolean(value); }\n}\n\nYou can also use other attributes to hide this member from intellisense if you're offended by it! It's not a perfect solution, but it can be quicker than implementing IXmlSerializable.\n",
"You can implement IXmlSerializable which will allow you to alter the serialized output of your class however you want. This will entail creating the 3 methods GetSchema(), ReadXml(XmlReader r) and WriteXml(XmlWriter r). When you implement the interface, these methods are called instead of .NET trying to serialize the object itself. \nExamples can be found at:\nhttp://www.developerfusion.co.uk/show/4639/ and\nhttp://msdn.microsoft.com/en-us/library/system.xml.serialization.ixmlserializable.aspx\n",
"No, not using the default System.Xml.XmlSerializer: you'd need to change the data type to an int to achieve that, or muck around with providing your own serialization code (possible, but not much fun). \nHowever, you can simply post-process the generated XML instead, of course, either using XSLT, or simply using string substitution. A bit of a hack, but pretty quick, both in development time and run time...\n"
] |
[
55,
3,
1
] |
[] |
[] |
[
"c#",
"constraints",
"schema",
"serialization",
"xml"
] |
stackoverflow_0000084449_c#_constraints_schema_serialization_xml.txt
|
Q:
Code Outlining, Classic ASP and Visual Studio 2005
Is there a way to enable code outlining for Classic ASP in Visual Studio 2005? It outlines the HTML code pretty well and I get a big outline between <% and %>, but nothing for the code itself.
A:
Visual Studio 2008 SP1 has re-added support for classic ASP, but I don't believe either version supports the type of outlining you're looking for.
|
Code Outlining, Classic ASP and Visual Studio 2005
|
Is there a way to enable code outlining for Classic ASP in Visual Studio 2005? It outlines the HTML code pretty well and I get a big outline between <% and %>, but nothing for the code itself.
|
[
"Visual Studio 2008 SP1 has re-added support for classic ASP, but I don't believe either version supports the type of outlining you're looking for.\n"
] |
[
0
] |
[] |
[] |
[
"asp_classic",
"visual_studio"
] |
stackoverflow_0000085434_asp_classic_visual_studio.txt
|
Q:
Get IFile from IWorkspaceRoot and location String
This is an Eclipse question, and you can assume the Java package for all these Eclipse classes is org.eclipse.core.resources.
I want to get an IFile corresponding to a location String I have:
"platform:/resource/Tracbility_All_Supported_lib/processes/gastuff/globalht/GlobalHTInterface.wsdl"
I have the enclosing IWorkspace and IWorkspaceRoot. If I had the IPath corresponding to the location above, I could simply call IWorkspaceRoot.getFileForLocation(IPath).
How do I get the corresponding IPath from the location String? Or is there some other way to get the corresponding IFile?
A:
String platformLocationString = portTypeContainer
.getLocation();
String locationString = platformLocationString
.substring("platform:/resource/".length());
IWorkspace workspace = ResourcesPlugin.getWorkspace();
IWorkspaceRoot workspaceRoot = workspace.getRoot();
IFile wSDLFile = (IFile) workspaceRoot
.findMember(locationString);
A:
Since IWorkspaceRoot is an IContainer, can't you just use workspaceRoot.findMember(String name) and cast the resulting IResource to IFile?
A:
org.eclipse.core.runtime.Path implements IPath.
IPath p = new Path(locationString);
IWorkspaceRoot.getFileForLocation(p);
This would have worked had the location string not been a URL of type "platform:"
For this particular case, notes in org.eclipse.core.runtime.Platform javadoc indicate that the "correct" solution is something like
URL fileUrl = FileLocator.toFileURL(new URL(locationString)); 
IFile file = workspaceRoot.getFileForLocation(new Path(fileUrl.getPath()));
@[Paul Reiners] your solution apparently assumes that the workspace root is going to be in the "resources" folder
|
Get IFile from IWorkspaceRoot and location String
|
This is an Eclipse question, and you can assume the Java package for all these Eclipse classes is org.eclipse.core.resources.
I want to get an IFile corresponding to a location String I have:
"platform:/resource/Tracbility_All_Supported_lib/processes/gastuff/globalht/GlobalHTInterface.wsdl"
I have the enclosing IWorkspace and IWorkspaceRoot. If I had the IPath corresponding to the location above, I could simply call IWorkspaceRoot.getFileForLocation(IPath).
How do I get the corresponding IPath from the location String? Or is there some other way to get the corresponding IFile?
|
[
"String platformLocationString = portTypeContainer\n .getLocation();\nString locationString = platformLocationString\n .substring(\"platform:/resource/\".length());\nIWorkspace workspace = ResourcesPlugin.getWorkspace();\nIWorkspaceRoot workspaceRoot = workspace.getRoot();\nIFile wSDLFile = (IFile) workspaceRoot\n .findMember(locationString);\n\n",
"Since IWorkspaceRoot is an IContainer, can't you just use workspaceRoot.findMember(String name) and cast the resulting IResource to IFile?\n",
"org.eclipse.core.runtime.Path implements IPath.\nIPath p = new Path(locationString);\nIWorkspaceRoot.getFileForLocation(p);\n\nThis would have worked had the location string not been a URL of type \"platform:\"\nFor this particular case, notes in org.eclipse.core.runtime.Platform javadoc indicate that the \"correct\" solution is something like\nfileUrl = FileLocator.toFileURL(new URL(locationString)); \nIWorkspaceRoot.getFileForLocation(fileUrl.getPath());\n\n@[Paul Reiners] your solution apparently assumes that the workspace root is going to be in the \"resources\" folder\n"
] |
[
3,
3,
2
] |
[] |
[] |
[
"eclipse",
"java"
] |
stackoverflow_0000084759_eclipse_java.txt
|
Q:
Extension Methods not working for an interface
Inspired by the MVC storefront, the latest project I'm working on is using extension methods on IQueryable to filter results.
I have this interface:
public interface IPrimaryKey
{
int ID { get; }
}
and I have this extension method
public static IPrimaryKey GetByID(this IQueryable<IPrimaryKey> source, int id)
{
return source.FirstOrDefault(obj => obj.ID == id);
}
Let's say I have a class, SimpleObj, which implements IPrimaryKey. When I have an IQueryable of SimpleObj, the GetByID method doesn't exist unless I explicitly cast it to an IQueryable of IPrimaryKey, which is less than ideal.
Am I missing something here?
A:
It works, when done right. cfeduke's solution works. However, you don't have to make the IPrimaryKey interface generic; in fact, you don't have to change your original definition at all:
public static IPrimaryKey GetByID<T>(this IQueryable<T> source, int id) where T : IPrimaryKey
{
return source.FirstOrDefault(obj => obj.ID == id);
}
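A quick usage sketch of the constrained version; SimpleObj is the class from the question, the values are invented:
using System.Linq;

public class SimpleObj : IPrimaryKey
{
    public int ID { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var items = new[] { new SimpleObj { ID = 1 }, new SimpleObj { ID = 42 } }.AsQueryable();
        IPrimaryKey match = items.GetByID(42); // no cast to IQueryable<IPrimaryKey> needed
        System.Console.WriteLine(match.ID);
    }
}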
A:
Edit: Konrad's solution is better because it's far simpler. The solution below works but is only required in situations similar to ObjectDataSource, where a method of a class is retrieved through reflection without walking up the inheritance hierarchy. Obviously that's not happening here.
This is possible; I've had to implement a similar pattern when I designed a custom entity framework solution for working with ObjectDataSource:
public interface IPrimaryKey<T> where T : IPrimaryKey<T>
{
int Id { get; }
}
public static class IPrimaryKeyTExtension
{
public static IPrimaryKey<T> GetById<T>(this IQueryable<T> source, int id) where T : IPrimaryKey<T>
{
return source.Where(pk => pk.Id == id).SingleOrDefault();
}
}
public class Person : IPrimaryKey<Person>
{
public int Id { get; set; }
}
Snippet of use:
var people = new List<Person>
{
new Person { Id = 1 },
new Person { Id = 2 },
new Person { Id = 3 }
};
var personOne = people.AsQueryable().GetById(1);
A:
This cannot work due to the fact that generics don't have the ability to follow inheritance patterns, i.e. IQueryable<SimpleObj> is not in the inheritance tree of IQueryable<IPrimaryKey>.
|
Extension Methods not working for an interface
|
Inspired by the MVC storefront, the latest project I'm working on is using extension methods on IQueryable to filter results.
I have this interface:
public interface IPrimaryKey
{
int ID { get; }
}
and I have this extension method
public static IPrimaryKey GetByID(this IQueryable<IPrimaryKey> source, int id)
{
return source.FirstOrDefault(obj => obj.ID == id);
}
Let's say I have a class, SimpleObj, which implements IPrimaryKey. When I have an IQueryable of SimpleObj, the GetByID method doesn't exist unless I explicitly cast it to an IQueryable of IPrimaryKey, which is less than ideal.
Am I missing something here?
|
[
"It works, when done right. cfeduke's solution works. However, you don't have to make the IPrimaryKey interface generic, in fact, you don't have to change your original definition at all:\npublic static IPrimaryKey GetByID<T>(this IQueryable<T> source, int id) where T : IPrimaryKey\n{\n return source(obj => obj.ID == id);\n}\n\n",
"Edit: Konrad's solution is better because its far simpler. The below solution works but is only required in situations similar to ObjectDataSource where a method of a class is retrieved through reflection without walking up the inheritance hierarchy. Obviously that's not happening here.\nThis is possible, I've had to implement a similar pattern when I designed a custom entity framework solution for working with ObjectDataSource:\npublic interface IPrimaryKey<T> where T : IPrimaryKey<T>\n{\n int Id { get; }\n}\n\npublic static class IPrimaryKeyTExtension\n{\n public static IPrimaryKey<T> GetById<T>(this IQueryable<T> source, int id) where T : IPrimaryKey<T>\n {\n return source.Where(pk => pk.Id == id).SingleOrDefault();\n }\n}\n\npublic class Person : IPrimaryKey<Person>\n{\n public int Id { get; set; }\n}\n\nSnippet of use:\nvar people = new List<Person>\n{\n new Person { Id = 1 },\n new Person { Id = 2 },\n new Person { Id = 3 }\n};\n\nvar personOne = people.AsQueryable().GetById(1);\n\n",
"This cannot work due to the fact that generics don't have the ability to follow inheritance patterns. ie. IQueryable<SimpleObj> is not in the inheritance tree of IQueryable<IPrimaryKey>\n"
] |
[
13,
4,
2
] |
[] |
[] |
[
".net",
"c#",
"extension_methods"
] |
stackoverflow_0000082442_.net_c#_extension_methods.txt
|
Q:
Performance: call-template vs apply-template
In XSLT processing, is there a performance difference between apply-templates and call-template? In my stylesheets there are many instances where I can use either; which is the best choice?
A:
As with all performance questions, the answer will depend on your particular configuration (in particular the XSLT processor you're using) and the kind of processing that you're doing.
<xsl:apply-templates> takes a sequence of nodes and goes through them one by one. For each, it locates the template with the highest priority that matches the node, and invokes it. So <xsl:apply-templates> is like a <xsl:for-each> with an <xsl:choose> inside, but more modular.
In contrast, <xsl:call-template> invokes a template by name. There's no change to the context node (no <xsl:for-each>) and no choice about which template to use.
So with exactly the same circumstances, you might imagine that <xsl:call-template> will be faster because it's doing less work. But if you're in a situation where either <xsl:apply-templates> or <xsl:call-template> could be used, you're probably going to be doing the <xsl:for-each> and <xsl:choose> yourself, in XSLT, rather than the processor doing it for you, behind the scenes. So in the end my guess is that it will probably balance out. But as I say, it depends a lot on the kind of optimisation your processor has put into place and exactly what processing you're doing. Measure it and see.
My rules of thumb about when to use matching templates and when to use named templates are:
use <xsl:apply-templates> and matching templates if you're processing individual nodes to create a result; use modes if a particular node needs to be processed in several different ways (such as in the table of contents vs the body of a document)
use <xsl:call-template> and a named template if you're processing something other than an individual node, such as strings or numbers or sets of nodes
(in XSLT 2.0) use <xsl:function> if you're returning an atomic value or an existing node
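To make the contrast concrete, a minimal stylesheet sketch (the element names are invented) that uses a matching template per node and a named template for a formatting job:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Matching template: the processor selects it for each node. -->
  <xsl:template match="/items">
    <ul><xsl:apply-templates select="item"/></ul>
  </xsl:template>

  <xsl:template match="item">
    <li>
      <!-- Named template: invoked explicitly; the context node does not change. -->
      <xsl:call-template name="format-price">
        <xsl:with-param name="value" select="@price"/>
      </xsl:call-template>
    </li>
  </xsl:template>

  <xsl:template name="format-price">
    <xsl:param name="value"/>
    <xsl:value-of select="format-number($value, '0.00')"/>
  </xsl:template>
</xsl:stylesheet>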
A:
apply-templates and call-template do not perform the same task, so a performance comparison is not really relevant here.
call-template takes a template name as a parameter, whereas apply-templates takes an XPath expression. apply-templates is therefore much more powerful, since you do not really know in advance which template will be executed.
You will get performance issues if you use complex XPath expressions. Avoid "//" in your XPath expressions, since every node of your input document will be evaluated.
A:
It may depend on the XML parser you are using. I can't speak for anything but the .NET 2003 parser, where I did some informal performance tests on push vs pull XSLT code. This is similar to what you are asking: apply-templates = push and call-template = pull. I was convinced push would be faster, but that was not the case. It was about even.
Sorry I don't have the exact tests now. I recommend trying it out with your parser of choice and see if there is any major difference. My bet is there won't be.
|
Performance: call-template vs apply-template
|
In XSLT processing, is there a performance difference between apply-templates and call-template? In my stylesheets there are many instances where I can use either; which is the best choice?
|
[
"As with all performance questions, the answer will depend on your particular configuration (in particular the XSLT processor you're using) and the kind of processing that you're doing.\n<xsl:apply-templates> takes a sequence of nodes and goes through them one by one. For each, it locates the template with the highest priority that matches the node, and invokes it. So <xsl:apply-templates> is like a <xsl:for-each> with an <xsl:choose> inside, but more modular.\nIn contrast, <xsl:call-template> invokes a template by name. There's no change to the context node (no <xsl:for-each>) and no choice about which template to use.\nSo with exactly the same circumstances, you might imagine that <xsl:call-template> will be faster because it's doing less work. But if you're in a situation where either <xsl:apply-templates> or <xsl:call-template> could be used, you're probably going to be doing the <xsl:for-each> and <xsl:choose> yourself, in XSLT, rather than the processor doing it for you, behind the scenes. So in the end my guess it that it will probably balance out. But as I say it depends a lot on the kind of optimisation your processor has put into place and exactly what processing you're doing. Measure it and see.\nMy rules of thumb about when to use matching templates and when to use named templates are:\n\nuse <xsl:apply-templates> and matching templates if you're processing individual nodes to create a result; use modes if a particular node needs to be processed in several different ways (such as in the table of contents vs the body of a document)\nuse <xsl:call-template> and a named template if you're processing something other than an individual node, such as strings or numbers or sets of nodes\n(in XSLT 2.0) use <xsl:function> if you're returning an atomic value or an existing node\n\n",
"apply-template and call-template do not perform the same task, performance comparison is not really relevant here.\ncall-template takes a template name as a parameter whereas apply-template takes an xpath expression. Apply-template is therefore much more powerful since you do not really know which template will be executed.\nYou will get performance issues if you use complex xpath expressions. Avoid \"//\" in your xpath expressions since every node of your input document will be evaluated.\n",
"It may depend on the xml parser you are using. I can't speak for anything but .NET 2003 parser where I did some informal performance tests on push vs pull XSLT code. This is similar to what you are asking: apply-template = push and call-template = pull. I was convinced push would be faster, but that was not the case. It was about even. \nSorry I don't have the exact tests now. I recommend trying it out with your parser of choice and see if there is any major difference. My bet is there won't be. \n"
] |
[
58,
10,
4
] |
[] |
[] |
[
"xslt"
] |
stackoverflow_0000084422_xslt.txt
|
Q:
.NET abstract classes
I'm designing a web site navigation hierarchy. It's a tree of nodes.
Most nodes are pages. Some nodes are links (think shortcuts in Windows).
Most pages hold HTML content. Some execute code.
I'd like to represent these as this collection of classes and abstract (MustInherit) classes…
This is the database table where I'm going to store all this…
database table http://img178.imageshack.us/img178/8573/nodetablefm8.gif
Here's where I'm stumped. PageNodes may or may not be roots.
How should I handle the root class?
I don't want to have to have all four of…
HtmlPageNode
CodePageNode
HtmlRootPageNode
CodeRootPageNode
I want the HtmlPageNode and CodePageNode classes to inherit either from PageNode or else from RootPageNode. Is that possible?
Clarification: There are multiple root nodes and roots may have parent nodes. Each is the root of only a sub-tree that has distinct styling. Think of different, color-coded departments. (Perhaps root is a poor name choice. Suggestions?)
Update: Regarding the "Root" name...
I've asked: Is there a specific name for the node that corresponds to a subtree?
A:
Use the Composite Pattern.
With regard to your root nodes, are there differences in functionality, or is the difference entirely appearance? If the difference is appearance only, I suggest you have an association from your PageNode to a separate Style class.
If there are differences in functionality AND you have lots of types of page then think about using the Decorator Pattern.
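A bare-bones sketch of the composite idea for this node tree; the class names are illustrative, not prescriptive:
using System.Collections.Generic;

public abstract class Node
{
    private readonly List<Node> children = new List<Node>();

    public Node Parent { get; private set; }
    public IEnumerable<Node> Children { get { return children; } }

    // Composite: pages, links, etc. all share the same tree operations.
    public void Add(Node child)
    {
        child.Parent = this;
        children.Add(child);
    }
}

public class HtmlPageNode : Node { }
public class LinkNode : Node { }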
A:
As noted, the Composite Pattern may be a good solution.
If that doesn't work for you, it may be simpler - if appropriate - to define Root as an Interface, and apply that as needed.
Of course, that doesn't let you provide any implementation for Root...
If Root must have implementation, you can use the Decorator Pattern.
A:
Actually, as the "root" node is a special case of node, maybe you need RootHtmlPageNode : HtmlPageNode.
Another idea: as you do not specify what the difference is between a "root" and a normal node, maybe just a flag in the node specifying whether it is a root would also be a good design.
EDIT: Per your clarification, there is no functional difference between a normal and a root node, so a simple flag should be enough (or a property IsRootNode). If the "root" node only supplies styling data (or any other data for itself and its children), then you can place this styling data in a separate structure/class and fetch it recursively (based on IsRootNode):
class Node
{
    private Node parent;

    public bool IsRootNode { get; set; }

    private StylingData stylingData;
    public StylingData StylingData
    {
        set
        {
            if (this.IsRootNode)
                this.stylingData = value;
            else
                throw new ApplicationException("The node is not root.");
        }
        get
        {
            if (this.IsRootNode)
                return this.stylingData;
            else
                return this.parent.StylingData;
        }
    }
}
This assumes that each node has a reference to its parent.
This may go way beyond the question, as I do not know the exact design.
A:
I want the HtmlPageNode and CodePageNode classes to inherit either from PageNode or else from RootPageNode. Is that possible?
Yeah, it's possible. Have HtmlPageNode and CodePageNode hold an object typed as a new abstract class that both PageNode and RootPageNode inherit from. In the constructor of HtmlPageNode and CodePageNode you accept this abstract class, which in your case will be either a PageNode or a RootPageNode. This way you have two different classes with the same methods but two different objects. Hope that helps!
A:
Clarification: There are multiple root nodes and roots may have parent nodes. Each is the root of only a sub-tree that has distinct styling. Think of different, color-coded departments. (Perhaps root is a poor name choice. Suggestions?)
Root is a poor name choice because it's (somewhat ironically) accepted as explicitly the top level of a tree structure, because the tree starts where root comes out of the ground. Any node beyond that is a branch or leaf and not directly attached to the root.
A better name would be something like IsAuthoritativeStyleNode, IsCascadingStyleNode, IsStyleParentNode, or instead qualify it: e.g. IsDepartmentRootNode. Giving things clear, unambiguous names is one of the things that drastically improves readability and ease of understanding.
You can't really achieve what you want just via abstract base classes/inheritance. As per other suggestion(s), consider interfaces instead.
I'd also consider thinking about whether you're letting the database schema drive your client side class design too much. Not saying it needs changing in this case, but it should at least be thought about. Think about how you could factor out properties into separate tables referencing the common 'Node' table, and normalize them to minimize nulls and/or duplicated identical data.
A:
Should the PageNode class simply have a property of type Root?
Is that counter to the idea that a PageNode is-a Root? Or are they not "is-a Root" because only some of them are roots?
And does that imply that the property might traverse the tree looking for the root ancestor? Or is that just me?
|
.NET abstract classes
|
I'm designing a web site navigation hierarchy. It's a tree of nodes.
Most nodes are pages. Some nodes are links (think shortcuts in Windows).
Most pages hold HTML content. Some execute code.
I'd like to represent these as this collection of classes and abstract (MustInherit) classes…
This is the database table where I'm going to store all this…
database table http://img178.imageshack.us/img178/8573/nodetablefm8.gif
Here's where I'm stumped. PageNodes may or may not be roots.
How should I handle the root class?
I don't want to have to have all four of…
HtmlPageNode
CodePageNode
HtmlRootPageNode
CodeRootPageNode
I want the HtmlPageNode and CodePageNode classes to inherit either from PageNode or else from RootPageNode. Is that possible?
Clarification: There are multiple root nodes and roots may have parent nodes. Each is the root of only a sub-tree that has distinct styling. Think of different, color-coded departments. (Perhaps root is a poor name choice. Suggestions?)
Update: Regarding the "Root" name...
I've asked: Is there a specific name for the node that corresponds to a subtree?
|
[
"Use the the Composite Pattern.\n\nWith regard to your root nodes, are there differences in functionality or is the difference it entirely appearance? If the difference is appearance only I suggest you have an association with a separate Style class from your PageNode.\nIf there are differences in functionality AND you have lots of types of page then think about using the Decorator Pattern.\n",
"As noted, the Composite Pattern may be a good solution. \nIf that doesn't work for you, it may be simpler - if appropriate - to define Root as an Interface, and apply that as needed. \nOf course, that doesnt let you provide any implementation for Root...\nIf Root must have implementation, you can use the Decorator Pattern.\n",
"Actually, as the \"root\" node is a special case of node, maybe you need RootHtmlPageNode : HtmlPageNode.\nAnother idea: as you do not specify what is the difference between a \"root\" and normal node, maybe just a flag in node specifying if it is root or not also will be a good design.\nEDIT: Per your clarification, there is no functional difference between normal and root node, so a simple flag should be enough (or property IsRootNode). If \"root\" node only supplies a styling data (or any other data for itself and it's children), then you can place this styling data in a separate structure/class and fetch it recursively (based on IsRootNode):\nclass Node\n{\n private bool isRootNode;\n public bool IsRootNode;\n\n private StylingData stylingData;\n public StylingData StylingData\n {\n set\n {\n if (this.IsRootNode)\n this.stylingData = value;\n else\n throw new ApplicationException(\"The node is not root.\");\n }\n get\n {\n if (this.IsRootNode)\n return this.stylingData;\n else\n return this.parent.StylingData;\n }\n }\n}\n\nThis assumes, that each node has a reference to it's parent.\nIt become way beyond the question, as I do not know the exact design.\n",
"\nI want the HtmlPageNode and CodePageNode classes to inherit either from PageNode or else from RootPageNode. Is that possible?\n\nYeah it's possible. You need to have HtmlPageNode and codePageNode have an object that will be an Abstract class that PageNode will inherit and RootPageNode too. In the constructor of HtmlPageNode and codePageNode you accept your new Abstract class that will be in your case PageNode OR RootPageNode. This way you have 2 differents classes with same methods but two differents object. Hope that help you!\n",
"\nClarification: There are multiple root nodes and roots may have parent nodes. Each is the root of only a sub-tree that has distinct styling. Think of different, color-coded departments. (Perhaps root is a poor name choice. Suggestions?)\n\nRoot is a poor name choice because it's (somewhat ironically) accepted as explicitly the top level of a tree structure, because the tree starts where root comes out of the ground. Any node beyond that is a branch or leaf and not directly attached to the root.\nA better name would be something like IsAuthoritativeStyleNode, IsCascadingStyleNode, IsStyleParentNode or instead qualify it: e.g. IsDepartmentRootNode. Giving things clear unambiguous names is one of things that drastically improves readability / easy understanding.\nYou can't really achieve what you want just via abstract base classes/inheritence. As per other suggestion(s), consider interfaces instead. \nI'd also consider thinking about whether you're letting the database schema drive your client side class design too much. Not saying it needs changing in this case, but it should at least be thought about. Think about how you could factor out properties into separate tables referencing the common 'Node' table, and normalize them to minimize nulls and/or duplicated identical data. \n",
"Should the PageNode class simply have a property of type Root?\n\n\nIs that counter to the idea that a PageNode is-a Root. Or, are they not \"is-a Root\" because only some of them are roots?\nAnd does that imply that the property might traverse the tree looking for the root ancestor? Or is that just me?\n"
] |
[
2,
2,
1,
1,
1,
0
] |
[] |
[] |
[
".net",
"abstract_class",
"inheritance"
] |
stackoverflow_0000084263_.net_abstract_class_inheritance.txt
|
Q:
Is it possible to drag and drop from/to outside a Flash applet with JavaScript?
Let's say I want a web page that contains a Flash applet, and I'd like to drag and drop some objects from or to the rest of the web page. Is this at all possible?
Bonus if you know a website somewhere that does that!
A:
This one intrigued me. I know jessegavin posted some code while I went to figure this out, but this one is tested. I have a super-simple working example that lets you drag to and from flash. It's pretty messy as I threw it together during my lunch break.
Here's the demo
And the source
The base class is taken directly from the External Interface LiveDocs. I added MyButton so the button could have some text. The majority of the javascript comes from the same LiveDocs example.
I compiled this using mxmlc.
A:
DISCLAIMER I haven't tested this code at all, but the idea should work. Also, this only handles the dragging to a flash movie.
Here's some Actionscript 3.0 code which makes use of the ExternalInterface class.
import flash.display.Sprite;
import flash.external.ExternalInterface;
import flash.net.URLLoader;
import flash.net.URLRequest;
if (ExternalInterface.available) {
ExternalInterface.addCallback("handleDroppedImage", myDroppedImageHandler);
}
private function myDroppedImageHandler(url:String, x:Number, y:Number):void {
var container:Sprite = new Sprite();
container.x = x;
container.y = y;
addChild(container);
var loader:Loader = new Loader();
var request:URLRequest = new URLRequest(url);
loader.load(request);
container.addChild(loader);
}
Here's the HTML/jQuery code
<html>
<head>
<title>XHTML 1.0 Transitional Template</title>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.5.2/jquery-ui.min.js"></script>
<script type="text/javascript">
$(function() {
$("#dragIcon").draggable();
$("#flash").droppable({
tolerance : "intersect",
drop: function(e,ui) {
// Get the X,Y coords relative to the flash movie
var x = $(this).offset().left - ui.draggable.offset().left;
var y = $(this).offset().top - ui.draggable.offset().top;
// Get the url of the dragged image
var url = ui.draggable.attr("src");
// Get access to the swf
var swf = ($.browser.msie) ? document["MyFlashMovie"] : window["MyFlashMovie"];
// Call the ExternalInterface function
swf.handleDroppedImage(url, x, y);
// remove the dragged image from the DOM
ui.draggable.remove();
}
});
});
</script>
</head>
<body>
<img id="dragIcon" width="16" height="16" alt="drag me" />
<div id="flash">
<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"
id="MyFlashMovie" width="500" height="375"
codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab">
<param name="movie" value="MyFlashMovie.swf" />
<param name="quality" value="high" />
<param name="bgcolor" value="#869ca7" />
<param name="allowScriptAccess" value="sameDomain" />
<embed src="MyFlashMovie.swf" quality="high" bgcolor="#869ca7"
width="500" height="375" name="MyFlashMovie" align="middle"
play="true" loop="false" quality="high" allowScriptAccess="sameDomain"
type="application/x-shockwave-flash"
pluginspage="http://www.macromedia.com/go/getflashplayer">
</embed>
</object>
</div>
</body>
</html>
A:
I would say it is possible to drop to Flash if you detect that the item is dragged onto the element that contains the flash stuff, and you set your dragged objects to have a z-index higher than the flash. Then when it is dropped you can talk to Flash using javascript to tell it where and what was dropped.
However the other way around is probably much harder, because you'd have to detect when the object hits the border of the flash movie and "pass" it to the javascript handler (create it in the html, hide it in flash).
The question is probably to know whether it's worth the trouble, or if you can maybe achieve everything in JS or in Flash ?
A:
Hang on, the encapsulation point is a valid one but flash can execute JS functions, and Seldaek is right that an HTML element with a higher z-index should float on the flash movie. So if you did all the drag handling in JS and had the flash read its own dimensions and the position of the pointer in the app it could signal JS methods that slave element(s) to the pointer even (especially) when the pointer leaves the boundaries of the flash app. It would be pretty hairy though.
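For what it's worth, here is a minimal sketch of that flash-to-JS signalling (untested; the onFlashDrag/onFlashDrop JavaScript functions are hypothetical, and you would still have to write the JS side that slaves an HTML element to the reported coordinates):
import flash.external.ExternalInterface;
import flash.events.MouseEvent;

// Report the pointer position (stage coordinates) to the page while dragging;
// the JS side adds the swf's page offset and positions the HTML element.
stage.addEventListener(MouseEvent.MOUSE_MOVE, function(e:MouseEvent):void {
    ExternalInterface.call("onFlashDrag", e.stageX, e.stageY);
});

stage.addEventListener(MouseEvent.MOUSE_UP, function(e:MouseEvent):void {
    ExternalInterface.call("onFlashDrop", e.stageX, e.stageY);
});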
A:
If the whole site is one big embedded flash file then yes it's possible.
I don't think that you can achieve it any other way
A:
Not possible in flash - unless you want to drag to a target inside the same flash application.
Could probably be done with a signed Java applet (but who wants to go down that road?)
|
Is it possible to drag and drop from/to outside a Flash applet with JavaScript?
|
Let's say I want a web page that contains a Flash applet and I'd like to drag and drop some objects from or to the rest of the web page. Is this at all possible?
Bonus if you know a website somewhere that does that!
|
[
"This one intrigued me. I know jessegavin posted some code while I went to figure this out, but this one is tested. I have a super-simple working example that lets you drag to and from flash. It's pretty messy as I threw it together during my lunch break.\nHere's the demo\nAnd the source\nThe base class is taken directly from the External Interface LiveDocs. I added MyButton so the button could have some text. The majority of the javascript comes from the same LiveDocs example.\nI compiled this using mxmlc.\n",
"DISCLAIMER I haven't tested this code at all, but the idea should work. Also, this only handles the dragging to a flash movie.\nHere's some Actionscript 3.0 code which makes use of the ExternalInterface class.\nimport flash.display.Sprite;\nimport flash.external.ExternalInterface;\nimport flash.net.URLLoader;\nimport flash.net.URLRequest;\n\nif (ExternalInterface.available) {\n ExternalInterface.addCallback(\"handleDroppedImage\", myDroppedImageHandler);\n}\n\nprivate function myDroppedImageHandler(url:String, x:Number, y:Number):void {\n\n var container:Sprite = new Sprite();\n container.x = x;\n container.y = y;\n addChild(container);\n\n var loader:Loader = new Loader();\n var request:URLRequest = new URLRequest(url);\n loader.load(request);\n\n container.addChild(loader);\n}\n\nHere's the HTML/jQuery code\n<html>\n<head>\n <title>XHTML 1.0 Transitional Template</title>\n <script type=\"text/javascript\" src=\"http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js\"></script>\n <script type=\"text/javascript\" src=\"http://ajax.googleapis.com/ajax/libs/jqueryui/1.5.2/jquery-ui.min.js\"></script>\n <script type=\"text/javascript\">\n $(function() {\n $(\"#dragIcon\").draggable();\n\n $(\"#flash\").droppable({ \n tolerance : \"intersect\",\n drop: function(e,ui) {\n\n // Get the X,Y coords relative to to the flash movie\n var x = $(this).offset().left - ui.draggable.offset().left;\n var y = $(this).offset().top - ui.draggable.offset().top;\n\n // Get the url of the dragged image\n var url = ui.draggable.attr(\"src\");\n\n // Get access to the swf\n var swf = ($.browser.msie) ? document[\"MyFlashMovie\"] : window[\"MyFlashMovie\"];\n\n // Call the ExternalInterface function\n swf.handleDroppedImage(url, x, y);\n\n // remove the swf from the javascript DOM\n ui.draggable.remove();\n }\n });\n });\n </script>\n</head>\n<body>\n\n <img id=\"dragIcon\" width=\"16\" height=\"16\" alt=\"drag me\" />\n\n <div id=\"flash\">\n <object classid=\"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000\"\n id=\"MyFlashMovie\" width=\"500\" height=\"375\"\n codebase=\"http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab\">\n <param name=\"movie\" value=\"MyFlashMovie.swf\" />\n <param name=\"quality\" value=\"high\" />\n <param name=\"bgcolor\" value=\"#869ca7\" />\n <param name=\"allowScriptAccess\" value=\"sameDomain\" />\n <embed src=\"MyFlashMovie.swf\" quality=\"high\" bgcolor=\"#869ca7\"\n width=\"500\" height=\"375\" name=\"MyFlashMovie\" align=\"middle\"\n play=\"true\" loop=\"false\" quality=\"high\" allowScriptAccess=\"sameDomain\"\n type=\"application/x-shockwave-flash\"\n pluginspage=\"http://www.macromedia.com/go/getflashplayer\">\n </embed>\n </object>\n </div>\n\n</body>\n</html>\n\n",
"I would say it is possible to drop to Flash if you detect that the item is dragged on to the that contains the flash stuff, and you set your dragged objects to have a z-index higher than the flash. Then when it is dropped you can talk to Flash using javascript to tell it where and what was dropped. \nHowever the other way around is probably much harder, because you'd have to detect when the object hits the border of the flash movie and \"pass\" it to the javascript handler (create it in the html, hide it in flash).\nThe question is probably to know whether it's worth the trouble, or if you can maybe achieve everything in JS or in Flash ?\n",
"Hang on, the encapsulation point is a valid one but flash can execute JS functions, and Seldaek is right that an HTML element with a higher z-index should float on the flash movie. So if you did all the drag handling in JS and had the flash read its own dimensions and the position of the pointer in the app it could signal JS methods that slave element(s) to the pointer even (especially) when the pointer leaves the boundaries of the flash app. It would be pretty hairy though.\n",
"If the whole site is one big embedded flash file then yes it's possible.\nI don't think that you can acheive it any other way\n",
"Not possible in flash - unless you want to drag to a target inside the same flash application.\nCould probably be done with a signed Java applet (but who wants to go down that road?)\n"
] |
[
12,
3,
1,
1,
0,
0
] |
[] |
[] |
[
"drag_and_drop",
"flash",
"javascript"
] |
stackoverflow_0000082509_drag_and_drop_flash_javascript.txt
|
Q:
PHP DOMDocument stripping HTML tags
I'm working on a small templating engine, and I'm using DOMDocument to parse the pages. My test page so far looks like this:
<block name="content">
<?php echo 'this is some rendered PHP! <br />' ?>
<p>Main column of <span>content</span></p>
</block>
And part of my class looks like this:
private function parse($tag, $attr = 'name')
{
$strict = 0;
/*** the array to return ***/
$out = array();
if($this->totalBlocks() > 0)
{
/*** a new dom object ***/
$dom = new domDocument;
/*** discard white space ***/
$dom->preserveWhiteSpace = false;
/*** load the html into the object ***/
if($strict==1)
{
$dom->loadXML($this->file_contents);
}
else
{
$dom->loadHTML($this->file_contents);
}
/*** the tag by its tag name ***/
$content = $dom->getElementsByTagname($tag);
$i = 0;
foreach ($content as $item)
{
/*** add node value to the out array ***/
$out[$i]['name'] = $item->getAttribute($attr);
$out[$i]['value'] = $item->nodeValue;
$i++;
}
}
return $out;
}
I have it working the way I want in that it grabs each <block> on the page and injects its contents into my template; however, it is stripping the HTML tags within the <block>, thus returning the following without the <p> or <span> tags:
this is some rendered PHP! Main column of content
What am I doing wrong here? :) Thanks
A:
Nothing: nodeValue is the concatenation of the value portion of the tree, and will never have tags.
What I would do to make an HTML fragment of the tree under $node is this:
$doc = new DOMDocument();
foreach($node->childNodes as $child) {
$doc->appendChild($doc->importNode($child, true));
}
return $doc->saveHTML();
HTML "fragments" are actually more problematic than you'd think at first, because they tend to lack things like doctypes and character sets, which makes it hard to deterministically go back and forth between portions of a DOM tree and HTML fragments.
|
PHP DOMDocument stripping HTML tags
|
I'm working on a small templating engine, and I'm using DOMDocument to parse the pages. My test page so far looks like this:
<block name="content">
<?php echo 'this is some rendered PHP! <br />' ?>
<p>Main column of <span>content</span></p>
</block>
And part of my class looks like this:
private function parse($tag, $attr = 'name')
{
$strict = 0;
/*** the array to return ***/
$out = array();
if($this->totalBlocks() > 0)
{
/*** a new dom object ***/
$dom = new domDocument;
/*** discard white space ***/
$dom->preserveWhiteSpace = false;
/*** load the html into the object ***/
if($strict==1)
{
$dom->loadXML($this->file_contents);
}
else
{
$dom->loadHTML($this->file_contents);
}
/*** the tag by its tag name ***/
$content = $dom->getElementsByTagname($tag);
$i = 0;
foreach ($content as $item)
{
/*** add node value to the out array ***/
$out[$i]['name'] = $item->getAttribute($attr);
$out[$i]['value'] = $item->nodeValue;
$i++;
}
}
return $out;
}
I have it working the way I want in that it grabs each <block> on the page and injects its contents into my template; however, it is stripping the HTML tags within the <block>, thus returning the following without the <p> or <span> tags:
this is some rendered PHP! Main column of content
What am I doing wrong here? :) Thanks
|
[
"Nothing: nodeValue is the concatenation of the value portion of the tree, and will never have tags.\nWhat I would do to make an HTML fragment of the tree under $node is this:\n\n$doc = new DOMDocument();\nforeach($node->childNodes as $child) {\n $doc->appendChild($doc->importNode($child, true));\n}\nreturn $doc->saveHTML();\n\nHTML \"fragments\" are actually more problematic than you'd think at first, because they tend to lack things like doctypes and character sets, which makes it hard to deterministically go back and forth between portions of a DOM tree and HTML fragments.\n"
] |
[
9
] |
[] |
[] |
[
"php"
] |
stackoverflow_0000085520_php.txt
|
Q:
What platforms JavaFX is/will be supported on?
I have read about JavaFX, and like all new technologies I wanted to get my hands "dirty" with it. However, although it talks of multiplatform support, I can't find specifics on this.
What platforms support a JavaFX application? All those with Java SE? ME? Does it depend upon the APIs in JavaFX that I use?
A:
JavaFX has three planned distributions.
JavaFX Desktop will run on Windows, Mac, Linux, and Solaris at FCS and will require Java SE. Support for Linux and Solaris will be forthcoming.
JavaFX TV and JavaFX Mobile have no announced target platforms. Also unannounced is whether they will run on ME or SE, and if ME, which profiles.
One important platform distinction is that JavaFX Desktop will support Swing components while JavaFX Mobile will not (only the scene graph for graphics). JavaFX TV is the least publicly concrete of the three at this time.
A:
From what I can see, JavaFX is a whole new runtime and compiler, so it is not a subset of Java. Sun will support it on mobile phones and on the desktop.
OS-wise it is currently released for Windows/Mac but Solaris/Linux are in the works.
A:
JavaFX is not a new runtime. It is the same JRE but a new language/compiler with a few new APIs to make it all work.
Using NetBeans, you can build applications on any platform. As of today, the APIs are beta. Class files produced by the compiler are JRE 6 compatible.
|
What platforms JavaFX is/will be supported on?
|
I have read about JavaFX, and like all new technologies I wanted to get my hands "dirty" with it. However, although it talks of multiplatform support, I can't find specifics on this.
What platforms support a JavaFX application? All those with Java SE? ME? Does it depend upon the APIs in JavaFX that I use?
|
[
"JavaFX has three planned distributions.\n\nJavaFX Desktop will run on Windows, Mac, Linux, and Solaris at FCS and will require Java SE. Support for Linux and Solaris will be forthcoming.\nJavaFX TV and JavaFX Mobile have no announce target platforms. Also unannounced is whether they will run on ME or SE, and if ME which profiles.\n\nOne important platform distinction is that JavaFX Desktop will support Swing components while JavaFX Mobile will not (only scene graph for graphics). JavaFX TV the least publicly concrete of the three at this time.\n",
"From what I can see JavaFX is a whole new runtime and compiler so is not a subset of Java. Sun will support it on mobile phones and on the desktop.\nOS-wise it is currently released for Windows/Mac but Solaris/Linux are in the works.\n",
"JavaFx is not a new runtime. It is the same JRE but a new language/compiler with some a few new APIs to make it all works....\nUsing Netbeans, you can build applications on any platform. As of today, the APIs are beta. Classfiles produced by the compiler are JRE 6 compatible.\n"
] |
[
3,
2,
0
] |
[] |
[] |
[
"java",
"javafx"
] |
stackoverflow_0000059129_java_javafx.txt
|
Q:
How to make Swing scroll with "ensureIndexIsVisible"?
When I run this code the selected item is not visible. I've already tried to run it in a separate thread with no luck.
import javax.swing.JFrame;
import java.awt.Container;
import javax.swing.JList;
import javax.swing.ListSelectionModel;
import javax.swing.JScrollPane;
import java.awt.Dimension;
public class ScrollList extends JFrame {
int defaultValue;
ScrollList() {
Container cp = getContentPane();
JList list = createList();
defaultValue = 20;
cp.add(createScrollPane(list));
pack();
setVisible(true);
list.ensureIndexIsVisible(defaultValue);
}
JList createList() {
Integer[] model = new Integer[73];
JList list = new JList(model);
for (int i = 1; i < model.length; i++)
model[i] = i;
list.setSelectedIndex(defaultValue);
return list;
}
JScrollPane createScrollPane(JList list) {
JScrollPane s = new JScrollPane(createList());
s.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS);
s.setPreferredSize(new Dimension(100, 200));
return s;
}
public static void main(String[] args) {
new ScrollList();
}
}
A:
I figured it out!!
This is the code:
import javax.swing.JFrame;
import java.awt.Container;
import javax.swing.JList;
import javax.swing.ListSelectionModel;
import javax.swing.JScrollPane;
import java.awt.Dimension;
public class ScrollList extends JFrame {
final int defaultValue = 20;
ScrollList() {
setDefaultCloseOperation(DISPOSE_ON_CLOSE);
Container cp = getContentPane();
JList list = createList();
cp.add(createScrollPane(list));
pack();
list.ensureIndexIsVisible(list.getSelectedIndex());
setVisible(true);
}
JList createList() {
Integer[] model = new Integer[73];
JList list = new JList(model);
for (int i = 1; i < model.length; i++)
model[i] = i;
list.setSelectedIndex(defaultValue);
return list;
}
JScrollPane createScrollPane(JList list) {
JScrollPane s = new JScrollPane(list); // MAJOR FIX HERE!
s.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS);
s.setPreferredSize(new Dimension(100, 200));
return s;
}
public static void main(String[] args) {
new ScrollList();
}
}
The bug: instead of wrapping the list that was passed into the createScrollPane() method, the original code called createList() and wrapped a brand-new list, so ensureIndexIsVisible() was being called on a list that wasn't actually inside the scroll pane.
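As an aside (not the cause of the bug, but standard Swing practice): the UI should be built on the event dispatch thread, e.g.
public static void main(String[] args) {
    javax.swing.SwingUtilities.invokeLater(new Runnable() {
        public void run() {
            new ScrollList(); // construct the frame on the EDT
        }
    });
}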
|
How to make Swing scroll with "ensureIndexIsVisible"?
|
When I run this code the selected item is not visible. I've already tried to run it in a separate thread with no luck.
import javax.swing.JFrame;
import java.awt.Container;
import javax.swing.JList;
import javax.swing.ListSelectionModel;
import javax.swing.JScrollPane;
import java.awt.Dimension;
public class ScrollList extends JFrame {
int defaultValue;
ScrollList() {
Container cp = getContentPane();
JList list = createList();
defaultValue = 20;
cp.add(createScrollPane(list));
pack();
setVisible(true);
list.ensureIndexIsVisible(defaultValue);
}
JList createList() {
Integer[] model = new Integer[73];
JList list = new JList(model);
for (int i = 1; i < model.length; i++)
model[i] = i;
list.setSelectedIndex(defaultValue);
return list;
}
JScrollPane createScrollPane(JList list) {
JScrollPane s = new JScrollPane(createList());
s.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS);
s.setPreferredSize(new Dimension(100, 200));
return s;
}
public static void main(String[] args) {
new ScrollList();
}
}
|
[
"I figured it out!!\nThis is the code:\nimport javax.swing.JFrame;\nimport java.awt.Container;\nimport javax.swing.JList;\nimport javax.swing.ListSelectionModel;\nimport javax.swing.JScrollPane;\nimport java.awt.Dimension;\n\npublic class ScrollList extends JFrame {\n final int defaultValue = 20;\n\n ScrollList() {\n setDefaultCloseOperation(DISPOSE_ON_CLOSE);\n Container cp = getContentPane();\n JList list = createList();\n\n cp.add(createScrollPane(list));\n pack();\n list.ensureIndexIsVisible(list.getSelectedIndex());\n\n setVisible(true);\n }\n\n JList createList() {\n Integer[] model = new Integer[73];\n JList list = new JList(model);\n\n for (int i = 1; i < model.length; i++)\n model[i] = i;\n list.setSelectedIndex(defaultValue);\n return list;\n }\n\n JScrollPane createScrollPane(JList list) {\n JScrollPane s = new JScrollPane(list); // MAJOR FIX HERE!\n s.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS);\n s.setPreferredSize(new Dimension(100, 200));\n return s;\n }\n\n public static void main(String[] args) {\n new ScrollList();\n }\n}\n\nInstead of using the list that you passed into the createScrollPane() method, you create a new one.\n"
] |
[
1
] |
[] |
[] |
[
"java",
"swing"
] |
stackoverflow_0000085548_java_swing.txt
|
Q:
ORA-00161: transaction branch length 103 is illegal (maximum allowed 64
Error:
ORA-00161: transaction branch length 103 is illegal (maximum allowed 64…
I'm using the DAC from Oracle, any idea if there is a patch for this?
A:
This looks to be a similar issue for .net 2.0, vista and oracle http://forums.oracle.com/forums/thread.jspa?threadID=516250
|
ORA-00161: transaction branch length 103 is illegal (maximum allowed 64
|
Error:
ORA-00161: transaction branch length 103 is illegal (maximum allowed 64…
I'm using the DAC from Oracle, any idea if there is a patch for this?
|
[
"This looks to be a similar issue for .net 2.0, vista and oracle http://forums.oracle.com/forums/thread.jspa?threadID=516250\n"
] |
[
1
] |
[] |
[] |
[
"c#",
"oracle"
] |
stackoverflow_0000083130_c#_oracle.txt
|
Q:
How To Read Active Directory Group Membership From PHP/IIS using COM?
I have the following code:
$bind = new COM("LDAP://CN=GroupName,OU=Groups,OU=Division,DC=company,DC=local");
When I execute it from a command-prompt, it runs fine. When it runs under IIS/PHP/ISAPI, it barfs.
Fatal error: Uncaught exception 'com_exception' with message 'Failed to create COM object `LDAP://CN=...[cut]...,DC=local':
An operations error occurred. ' in index.php
Stack trace:
#0 index.php: com->com('LDAP://CN=...')
#1 {main} thrown
IIS is configured for Windows Authentication (no anonymous, no basic, no digest) and I am connecting as the same user as the command prompt. I cannot find any specific errors in the IIS logfiles or the eventlog.
The main purpose of this exercise is to refrain from keeping user credentials in my script and relying on IIS authentication to pass them through to the active directory. I understand that you can use LDAP to accomplish the same thing, but as far as I know credentials cannot be passed through.
Perhaps it is in some way related to the error I get when I try to port it to ASP. I get error 80072020 (which I'm currently looking up).
The event logs show nothing out of the ordinary. No warnings, no errors. Full security auditing is enabled (success and failure on every item in the security policy), and it shows successful Windows logons for every user I authenticate against the web page (which is expected.)
A:
Since you're using Windows Authentication in IIS, you may have some security events in the Windows Event log. I would check the Event log for Security Events as well as Application Events and see if you're hitting any sort of permissions issues.
Also, since you're basically just communicating to AD via LDAP...you might look into using a native LDAP library for PHP rather than COM.
You'll probably have to enable the extension in your php.ini. Worth looking at.
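For example, a minimal sketch with the php_ldap extension (untested; the server name, DNs and credentials are placeholders — and note that a hard-coded bind account works against the goal of relying purely on IIS pass-through authentication):
$conn = ldap_connect("dc01.company.local");
ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3); // AD needs LDAPv3
ldap_set_option($conn, LDAP_OPT_REFERRALS, 0);

// Bind as a service account (or anonymously, if your AD allows it)
ldap_bind($conn, "svc_web@company.local", "password");

// Read the group's member attribute
$result  = ldap_search($conn, "OU=Groups,OU=Division,DC=company,DC=local",
                       "(cn=GroupName)", array("member"));
$entries = ldap_get_entries($conn, $result);
print_r($entries[0]["member"]);
ldap_unbind($conn);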
A:
It seems to be working now.
I enabled "Trust this computer for delegation" for the computer object in Active Directory. Normally IIS cannot both authenticate you and then subsequently impersonate you across the network (in my case to a domain controller to query Active Directory) without the delegation trust enabled.
You just have to be sure that it's authenticating using Kerberos and not NTLM or some other digest authentication because the digest is not trusted to use as an impersonation token.
It fixed both my PHP and ASP scripts.
A:
Well, if you want to use LDAP, let me point you to the LDAP authentication code we use for Maia Mailguard: look for the function named lauth_ldap
I think it requires ldap version 3, so you have to set that parameter for ldap. To verify the password, we use the ldap bind function to let the ldap server authenticate.
A:
I'm no AD/COM/IIS expert, but it could be a permissions problem, e.g. the IUSR_computername user does not have applicable access within the directory, or you're not binding as a specific user?
The alarm bell for me is the fact it runs ok from command line (e.g. running with your permissions) but fails on IIS (e.g. not your permissions).
|
How To Read Active Directory Group Membership From PHP/IIS using COM?
|
I have the following code:
$bind = new COM("LDAP://CN=GroupName,OU=Groups,OU=Division,DC=company,DC=local");
When I execute it from a command-prompt, it runs fine. When it runs under IIS/PHP/ISAPI, it barfs.
Fatal error: Uncaught exception 'com_exception' with message 'Failed to create COM object `LDAP://CN=...[cut]...,DC=local':
An operations error occurred. ' in index.php
Stack trace:
#0 index.php: com->com('LDAP://CN=...')
#1 {main} thrown
IIS is configured for Windows Authentication (no anonymous, no basic, no digest) and I am connecting as the same user as the command prompt. I cannot find any specific errors in the IIS logfiles or the eventlog.
The main purpose of this exercise is to refrain from keeping user credentials in my script and relying on IIS authentication to pass them through to the active directory. I understand that you can use LDAP to accomplish the same thing, but as far as I know credentials cannot be passed through.
Perhaps it is in some way related to the error I get when I try to port it to ASP. I get error 80072020 (which I'm currently looking up).
The event logs show nothing out of the ordinary. No warnings, no errors. Full security auditing is enabled (success and failure on every item in the security policy), and it shows successful Windows logons for every user I authenticate against the web page (which is expected.)
|
[
"Since you're using Windows Authentication in IIS, you may have some security events in the Windows Event log. I would check the Event log for Security Events as well as Application Events and see if you're hitting any sort of permissions issues. \nAlso, since you're basically just communicating to AD via LDAP...you might look into using the a native LDAP library for PHP rather than a COM. \nYou'll have to enable the extension probably in your php.ini. Worth looking at probably.\n",
"It seems to be working now.\nI enabled \"Trust this computer for delegation\" for the computer object in Active Directory. Normally IIS cannot both authenticate you and then subsequently impersonate you across the network (in my case to a domain controller to query Active Directory) without the delegation trust enabled.\nYou just have to be sure that it's authenticating using Kerberos and not NTLM or some other digest authentication because the digest is not trusted to use as an impersonation token.\nIt fixed both my PHP and ASP scripts.\n",
"Well, if you want to use LDAP, let me point you to the LDAP authentication code we use for Maia Mailguard: look for the function named lauth_ldap\nI think it requires ldap version 3, so you have to set that parameter for ldap. To verify the password, we use the ldap bind function to let the ldap server authenticate.\n",
"I'm no AD/COM/IIS expert, but it could be a permissions problem. e.g the IUSR_computername user does not have applicable access within the directory, or you're not binding as a specific user?\nThe alarm bell for me is the fact it runs ok from command line (e.g. running with your permissions) but fails on IIS (e.g. not your permissions).\n"
] |
[
3,
2,
0,
0
] |
[] |
[] |
[
"adsi",
"com",
"iis",
"php"
] |
stackoverflow_0000084641_adsi_com_iis_php.txt
|
Q:
Rich Edit Control in raw Win32
Is the documentation for Rich Edit Controls really as bad (wrong?) as it seems to be? Right now I'm manually calling LoadLibrary("riched20.dll") in order to get a Rich Edit Control to show up. The documentation for Rich Edit poorly demonstrates this in the first code sample for using Rich Edit controls.
It talks about calling InitCommonControlsEx() to add visual styles, but makes no mention of which flags to pass in.
Is there a better way to load a Rich Edit control?
http://msdn.microsoft.com/en-us/library/bb787877(VS.85).aspx
Here's the only code I could write to make it work:
#include "Richedit.h"
#include "commctrl.h"
INITCOMMONCONTROLSEX icex;
icex.dwSize = sizeof(INITCOMMONCONTROLSEX);
icex.dwICC = ICC_USEREX_CLASSES; //Could be 0xFFFFFFFF and it still wouldn't work
InitCommonControlsEx(&icex); //Does nothing for Rich Edit controls
LoadLibrary("riched20.dll"); //Manually? For real?
hWndRichEdit = CreateWindowEx(
ES_SUNKEN,
RICHEDIT_CLASS,
"",
WS_BORDER | WS_VISIBLE | WS_CHILD,
2, 2, 100, 24,
hWnd, (HMENU) ID_RICH_EDIT, hInst, NULL);
A:
Using MFC, RichEdit controls just work.
Loading with InitCommonControlsEx() - ICC_USEREX_CLASSES doesn't load RichEdit AFAIK; you don't need it, as it only does the 'standard' common controls, which don't include richedit. Apparently you only need to call this to enable 'visual styles' in Windows, not to get RichEdits working.
If you're using 2008, you want to include Msftedit.dll and use the MSFTEDIT_CLASS instead (MS are rubbish for backward compatibility sometimes).
The docs do suggest you're doing it right for Win32 programming.
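If you do go the Msftedit.dll route, the pattern is the same — a sketch (MSFTEDIT_CLASS is the wide-character "RICHEDIT50W" class, so this assumes a Unicode build):
#include <richedit.h>

LoadLibrary(TEXT("Msftedit.dll")); // registers the MSFTEDIT_CLASS window class

hWndRichEdit = CreateWindowEx(
    0,
    MSFTEDIT_CLASS,
    TEXT(""),
    WS_BORDER | WS_VISIBLE | WS_CHILD,
    2, 2, 100, 24,
    hWnd, (HMENU) ID_RICH_EDIT, hInst, NULL);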
A:
Many years ago, I ran into this same issue, and yes, the answer was to load the .dll manually. The reason, as far as I can remember, is that the RichEdit window class is registered in DllMain of riched20.dll.
A:
Isn't there an import library (maybe riched20.lib) that you can link to? Then you won't have to load it "manually" at run time. That's how all the standard controls work. VS automatically adds a reference to user32.lib when you create a project.
A:
I think you have to call CoInitializeEx before you create any of the common controls.
The LoadLibrary is not needed. If you link with the correct .lib file the exe-loader will take care of such details for you.
|
Rich Edit Control in raw Win32
|
Is the documentation for Rich Edit Controls really as bad (wrong?) as it seems to be? Right now I'm manually calling LoadLibrary("riched20.dll") in order to get a Rich Edit Control to show up. The documentation for Rich Edit poorly demonstrates this in the first code sample for using Rich Edit controls.
It talks about calling InitCommonControlsEx() to add visual styles, but makes no mention of which flags to pass in.
Is there a better way to load a Rich Edit control?
http://msdn.microsoft.com/en-us/library/bb787877(VS.85).aspx
Here's the only code I could write to make it work:
#include "Richedit.h"
#include "commctrl.h"
INITCOMMONCONTROLSEX icex;
icex.dwSize = sizeof(INITCOMMONCONTROLSEX);
icex.dwICC = ICC_USEREX_CLASSES; //Could be 0xFFFFFFFF and it still wouldn't work
InitCommonControlsEx(&icex); //Does nothing for Rich Edit controls
LoadLibrary("riched20.dll"); //Manually? For real?
hWndRichEdit = CreateWindowEx(
ES_SUNKEN,
RICHEDIT_CLASS,
"",
WS_BORDER | WS_VISIBLE | WS_CHILD,
2, 2, 100, 24,
hWnd, (HMENU) ID_RICH_EDIT, hInst, NULL);
|
[
"Using MFC, RichEdit controls just work.\nLoading with InitCommonControlsEx() - ICC_USEREX_CLASSES doesn't load RichEdit AFAIK, you don't need it as it only does the 'standard' common controls, which don't include richedit. Apparently you only need to call this to enable 'visual styles' in Windows, not to get RichEdits working.\nIf you're using 2008, you want to include Msftedit.dll and use the MSFTEDIT_CLASS instead (MS are rubbish for backward compatibilty sometimes).\nThe docs do suggest you're doing it right for Win32 programming.\n",
"Many years ago, I ran into this same issue, and yes, the answer was to load the .dll manually. The reason, as far as I can remember, is that the RichEdit window class is registered in DllMain of riched20.dll.\n",
"Isn't there an import library (maybe riched20.lib) that you can link to. Then you won't have to load it \"manually\" at run time. That's how all the standard controls work. VS automatically adds a reference to user32.lib when you create a project.\n",
"I think you have to call CoInitializeEx before you create any of the common controls.\nThe LoadLibrary is not needed. If you link with the correct .lib file the exe-loader will take care of such details for you. \n"
] |
[
2,
2,
1,
0
] |
[] |
[] |
[
"richedit",
"winapi",
"windows"
] |
stackoverflow_0000085427_richedit_winapi_windows.txt
|
Q:
How do I handle message failure in MSMQ bindings for WCF
I have create a WCF service and am utilising netMsmqBinding binding.
This is a simple service that passes a Dto to my service method and does not expect a response. The message is placed in an MSMQ, and once picked up inserted into a database.
What is the best method to make sure no data is being lost?
I have tried the 2 following methods:
Throw an exception
This places the message in a dead letter queue for manual perusal. I can process this when my service starts
set the receiveRetryCount="3" on the binding
After 3 tries - which happen instantaneously - this seems to leave the message in the queue, but faults my service. Restarting my service repeats this process.
Ideally I would like to do the following:
Try process the message
If this fails, wait 5 minutes for that message and try again.
If that process fails 3 times, move the message to a dead letter queue.
Restarting the service will push all messages from the dead letter queue back into the queue so that it can be processed.
Can I achieve this? If so how?
Can you point me to any good articles on how best to utilize WCF and MSMQ for my given scenario?
Any help would be much appreciated. Thanks!
Some additional information
I am using MSMQ 3.0 on Windows XP and Windows Server 2003.
Unfortunately I can't use the built in poison message support targeted at MSMQ 4.0 and Vista/2008.
A:
I think with MSMQ 4.0 (available only on Vista) you might be able to do it like this:
<bindings>
<netMsmqBinding>
<binding name="PosionMessageHandling"
receiveRetryCount="3"
retryCycleDelay="00:05:00"
maxRetryCycles="3"
receiveErrorHandling="Move" />
</netMsmqBinding>
</bindings>
WCF will immediately retry receiveRetryCount times after the first call failure. After the batch has failed, the message is moved
to the retry queue. After a delay of retryCycleDelay, the message is moved from the retry queue back to the endpoint queue and the batch is retried. This is repeated
maxRetryCycles times. If all that fails, the message is handled according to receiveErrorHandling, which can be Move
(to the poison queue), Reject, Drop or Fault.
By the way, a good text about WCF and MSMQ is chapter 9 of the Programming WCF Services book by Juval Lowy
A:
There's a sample in the SDK that might be useful in your case. Basically, what it does is attach an IErrorHandler implementation to your service that will catch the error when WCF declares the message to be "poison" (i.e. when all configured retries have been exhausted). What the sample does is move the message to another queue and then restart the ServiceHost associated with the message (since it will have faulted when the poison message was found).
It's not a very pretty sample, but it can be useful. There are a couple of limitations, though:
1- If you have multiple endpoints associated with your service (i.e. exposed through several queues), there's no way to know which queue the poison message arrived in. If you only have a single queue, this won't be a problem. I haven't seen any official workaround for this, but I've experimented with one possible alternative which I've documented here: http://winterdom.com/weblog/2008/05/27/NetMSMQAndPoisonMessages.aspx
2- Once the problem message is moved to another queue, it becomes your responsibility, so it's up to you to move it back to the processing queue once the timeout is done (or attach a new service to that queue to handle it).
To be honest, in either case, you're looking at some "manual" work here that WCF just doesn't cover on its own.
I've been recently working on a different project where I have a requirement to explicitly control how often retries happen, and my current solution was to create a set of retry queues and manually move messages between the retry queues and the main processing queue based on a set of timers and some heuristics, just using the raw System.Messaging stuff to handle the MSMQ queues. It seems to work pretty nicely, though there are a couple of gotchas if you go this way.
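For reference, the heart of that approach is an IErrorHandler that catches the poison-message exception and moves the offending message by its lookup id — a bare-bones sketch (the queue paths are placeholders, the behavior/host-restart plumbing is omitted, and ReceiveByLookupId needs MSMQ 3.0 or later):
using System;
using System.Messaging;
using System.ServiceModel;
using System.ServiceModel.Dispatcher;

class PoisonErrorHandler : IErrorHandler
{
    public bool HandleError(Exception error)
    {
        MsmqPoisonMessageException poison = error as MsmqPoisonMessageException;
        if (poison == null)
            return false;

        long lookupId = poison.MessageLookupId;
        using (MessageQueue main = new MessageQueue(@".\private$\myservice"))
        using (MessageQueue retryQueue = new MessageQueue(@".\private$\myservice_retry"))
        {
            // Pull the offending message out of the service queue by its
            // lookup id and park it in a (transactional) retry queue.
            System.Messaging.Message msg = main.ReceiveByLookupId(lookupId);
            retryQueue.Send(msg, MessageQueueTransactionType.Single);
        }
        // The ServiceHost has faulted by this point; the SDK sample
        // recreates and reopens it from here.
        return true;
    }

    public void ProvideFault(Exception error,
        System.ServiceModel.Channels.MessageVersion version,
        ref System.ServiceModel.Channels.Message fault)
    {
    }
}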
A:
If you're using SQL-Server then you should use a distributed transaction, since both MSMQ and SQL-Server support it. What happens is you wrap your database write in a TransactionScope block and call scope.Complete() only if it succeeds. If it fails, then when your WCF method returns the message will be placed back into the queue to be tried again. Here's a trimmed version of code I use:
[OperationBehavior(TransactionScopeRequired=true, TransactionAutoComplete=true)]
public void InsertRecord(RecordType record)
{
try
{
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
{
SqlConnection InsertConnection = new SqlConnection(ConnectionString);
InsertConnection.Open();
// Insert statements go here
InsertConnection.Close();
// Vote to commit the transaction if there were no failures
scope.Complete();
}
}
catch (Exception ex)
{
logger.WarnException(string.Format("Distributed transaction failure for {0}",
Transaction.Current.TransactionInformation.DistributedIdentifier.ToString()),
ex);
}
}
I test this by queueing up a large but known number of records, let WCF start lots of threads to handle many of them simultaneously (reaches 16 threads--16 messages off the queue at once), then kill the process in the middle of operations. When the program is restarted the messages are read back from the queue and processed again as if nothing happened, and at the conclusion of the test the database is consistent and has no missing records.
The Distributed Transaction Manager has an ambient presence, and when you create a new instance of TransactionScope it automatically searches for the current transaction within the scope of the method invocation--which should have been created already by WCF when it popped the message off the queue and invoked your method.
A:
Unfortunately I'm stuck on Windows XP and Windows Server 2003, so that isn't an option for me. (I will re-clarify that in my question, as I found this solution after posting and realised I couldn't use it.)
I found that one solution was to setup a custom handler which would move my message onto another queue or poison queue and restart my service.
This seemed crazy to me. Imagine how often the service would be restarted if my SQL Server was down.
So what I've ended up doing is allowing the service to fault and leave the messages on the queue.
I also log a fatal message to my system logging service that this has happened.
Once our issue is resolved, I restart the service and all the messages start getting processed again.
I realised that re-processing this message, or any other, would also fail, so why move this message and the others to another queue? I may as well stop my service, and start it again when all is operating as expected.
aogan, you had the perfect answer for MSMQ 4.0, but unfortunately not for me
|
How do I handle message failure in MSMQ bindings for WCF
|
I have create a WCF service and am utilising netMsmqBinding binding.
This is a simple service that passes a Dto to my service method and does not expect a response. The message is placed in an MSMQ, and once picked up inserted into a database.
What is the best method to make sure no data is being lost?
I have tried the 2 following methods:
Throw an exception
This places the message in a dead letter queue for manual perusal. I can process this when my service starts
set the receiveRetryCount="3" on the binding
After 3 tries - which happen instantaneously - this seems to leave the message in the queue, but faults my service. Restarting my service repeats this process.
Ideally I would like to do the following:
Try process the message
If this fails, wait 5 minutes for that message and try again.
If that process fails 3 times, move the message to a dead letter queue.
Restarting the service will push all messages from the dead letter queue back into the queue so that it can be processed.
Can I achieve this? If so how?
Can you point me to any good articles on how best to utilize WCF and MSMQ for my given scenario?
Any help would be much appreciated. Thanks!
Some additional information
I am using MSMQ 3.0 on Windows XP and Windows Server 2003.
Unfortunately I can't use the built in poison message support targeted at MSMQ 4.0 and Vista/2008.
|
[
"I think with MSMQ (avaiable only on Vista) you might be able to to do like this:\n<bindings>\n <netMsmqBinding>\n <binding name=\"PosionMessageHandling\"\n receiveRetryCount=\"3\"\n retryCycleDelay=\"00:05:00\"\n maxRetryCycles=\"3\"\n receiveErrorHandling=\"Move\" />\n </netMsmqBinding>\n</bindings>\n\nWCF will immediately retry for ReceiveRetryCount times after the first call failure. After the batch has failed the message is moved\nto the retry queue. After a delay of RetryCycleDelay minute, the message moved from the retry queue to the endpoint queue and the batch is retried. This will be repeated \nMaxRetryCycle time. If all that fails the message is handled according to receiveErrorHandling which can be move \n(to poison queue), reject, drop or fault\nBy the way a good text about WCF and MSMQ is the chapther 9 of Progammig WCF book from Juval Lowy\n",
"There's a sample in the SDK that might be useful in your case. Basically, what it does is attach an IErrorHandler implementation to your service that will catch the error when WCF declares the message to be \"poison\" (i.e. when all configured retries have been exhausted). What the sample does is move the message to another queue and then restart the ServiceHost associated with the message (since it will have faulted when the poison message was found).\nIt's not a very pretty sample, but it can be useful. There are a couple of limitations, though:\n1- If you have multiple endpoints associated with your service (i.e. exposed through several queues), there's no way to know which queue the poison message arrived in. If you only have a single queue, this won't be a problem. I haven't seen any official workaround for this, but I've experimented with one possible alternative which I've documented here: http://winterdom.com/weblog/2008/05/27/NetMSMQAndPoisonMessages.aspx\n2- Once the problem message is moved to another queue, it becomes your responsibility, so it's up to you to move it back to the processing queue once the timeout is done (or attach a new service to that queue to handle it).\nTo be honest, in either case, you're looking at some \"manual\" work here that WCF just doesn't cover on it's own. \nI've been recently working on a different project where I have a requirement to explicitly control how often retries happen, and my current solution was to create a set of retry queues and manually move messages between the retry queues and the main processing queue based on a set of timers and some heuristics, just using the raw System.Messaging stuff to handle the MSMQ queues. It seems to work pretty nicely, though there are a couple of gotchas if you go this way.\n",
"If you're using SQL-Server then you should use a distributed transaction, since both MSMQ and SQL-Server support it. What happens is you wrap your database write in a TransactionScope block and call scope.Complete() only if it succeeds. If it fails, then when your WCF method returns the message will be placed back into the queue to be tried again. Here's a trimmed version of code I use:\n [OperationBehavior(TransactionScopeRequired=true, TransactionAutoComplete=true)]\n public void InsertRecord(RecordType record)\n {\n try\n {\n using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))\n {\n SqlConnection InsertConnection = new SqlConnection(ConnectionString);\n InsertConnection.Open();\n\n // Insert statements go here\n\n InsertConnection.Close();\n\n // Vote to commit the transaction if there were no failures\n scope.Complete();\n }\n }\n catch (Exception ex)\n {\n logger.WarnException(string.Format(\"Distributed transaction failure for {0}\", \n Transaction.Current.TransactionInformation.DistributedIdentifier.ToString()),\n ex);\n }\n }\n\nI test this by queueing up a large but known number of records, let WCF start lots of threads to handle many of them simultaneously (reaches 16 threads--16 messages off the queue at once), then kill the process in the middle of operations. When the program is restarted the messages are read back from the queue and processed again as if nothing happened, and at the conclusion of the test the database is consistent and has no missing records.\nThe Distributed Transaction Manager has an ambient presence, and when you create a new instance of TransactionScope it automatically searches for the current transaction within the scope of the method invokation--which should have been created already by WCF when it popped the message off the queue and invoked your method.\n",
"Unfortunately I'm stuck on Windows XP and Windows Server 2003 so that isn't an option for me. - (I will re-clarify that in my question as I found this solution after posting and realised i couldn't use it)\nI found that one solution was to setup a custom handler which would move my message onto another queue or poison queue and restart my service.\nThis seemed crazy to me. Imagine my Sql Server was down how often the service would be restarted.\nSO what I've ended up doing is allowing the Line to fault and leave messages on the queue.\nI also log a fatal message to my system logging service that this has happened.\nOnce our issue is resolved, I restart the service and all the messages start getting processed again.\nI realised re-processing this message or any other will all fail, so why the need to move this message and the others to another queue. I may as well stop my service, and start it again when all is operating as expected.\naogan, you had the perfect answer for MSMQ 4.0, but unfortunately not for me\n"
] |
[
14,
9,
4,
1
] |
[] |
[] |
[
".net",
"msmq",
"wcf"
] |
stackoverflow_0000082099_.net_msmq_wcf.txt
|
Q:
Move Active Directory Group to Another OU using Powershell
How do I move an active directory group to another organizational unit using Powershell?
ie.
I would like to move the group "IT Department" from:
(CN=IT Department, OU=Technology Department, OU=Departments,DC=Company,DC=ca)
to:
(CN=IT Department, OU=Temporarily Moved Groups, DC=Company,DC=ca)
A:
Your script was really close to correct (and I really appreciate your response).
The following script is what I used to solve my problem.:
$from = [ADSI]"LDAP://CN=IT Department, OU=Technology Department, OU=Departments,DC=Company,DC=ca"
$to = [ADSI]"LDAP://OU=Temporarily Moved Groups, DC=Company,DC=ca"
$from.PSBase.MoveTo($to,"cn="+$from.name)
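For reference, on newer systems with the Microsoft ActiveDirectory module (Server 2008 R2 and later) the same move is a one-liner — a sketch, assuming the module is available:
Import-Module ActiveDirectory
Move-ADObject -Identity "CN=IT Department,OU=Technology Department,OU=Departments,DC=Company,DC=ca" `
              -TargetPath "OU=Temporarily Moved Groups,DC=Company,DC=ca"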
A:
I haven't tried this yet, but this should do it..
$objectlocation= 'CN=IT Department, OU=Technology Department, OU=Departments,DC=Company,DC=ca'
$newlocation = 'OU=Temporarily Moved Groups, DC=Company,DC=ca'
$from = new-object System.DirectoryServices.DirectoryEntry("LDAP://$objectLocation")
$to = new-object System.DirectoryServices.DirectoryEntry("LDAP://$newlocation")
$from.MoveTo($newlocation,$from.name)
|
Move Active Directory Group to Another OU using Powershell
|
How do I move an active directory group to another organizational unit using Powershell?
ie.
I would like to move the group "IT Department" from:
(CN=IT Department, OU=Technology Department, OU=Departments,DC=Company,DC=ca)
to:
(CN=IT Department, OU=Temporarily Moved Groups, DC=Company,DC=ca)
|
[
"Your script was really close to correct (and I really appreciate your response).\nThe following script is what I used to solve my problem.:\n$from = [ADSI]\"LDAP://CN=IT Department, OU=Technology Department, OU=Departments,DC=Company,DC=ca\"\n$to = [ADSI]\"LDAP://OU=Temporarily Moved Groups, DC=Company,DC=ca\"\n$from.PSBase.MoveTo($to,\"cn=\"+$from.name)\n\n",
"I haven't tried this yet, but this should do it..\n$objectlocation= 'CN=IT Department, OU=Technology Department, OU=Departments,DC=Company,DC=ca'\n$newlocation = 'OU=Temporarily Moved Groups, DC=Company,DC=ca'\n\n$from = new-object System.DirectoryServices.DirectoryEntry(\"LDAP://$objectLocation\")\n$to = new-object System.DirectoryServices.DirectoryEntry(\"LDAP://$newlocation\")\n$from.MoveTo($newlocation,$from.name)\n\n"
] |
[
6,
3
] |
[] |
[] |
[
"active_directory",
"powershell"
] |
stackoverflow_0000076325_active_directory_powershell.txt
|
Q:
Rendering suggested values from an ext Combobox to an element in the DOM
I have an ext combobox which uses a store to suggest values to a user as they type.
An example of which can be found here: combobox example
Is there a way of making it so the suggested text list is rendered to an element in the DOM? Please note I do not mean the "applyTo" config option, as this would render the whole control, including the textbox, to the DOM element.
A:
You can use plugin for this, since you can call or even override private methods from within the plugin:
var suggested_text_plugin = {
init: function(o) {
o.onTypeAhead = function() {
// Original code from the sources goes here:
if(this.store.getCount() > 0){
var r = this.store.getAt(0);
var newValue = r.data[this.displayField];
var len = newValue.length;
var selStart = this.getRawValue().length;
if(selStart != len){
this.setRawValue(newValue);
this.selectText(selStart, newValue.length);
}
}
// Your code to display newValue in DOM
......myDom.getEl().update(newValue);
};
}
};
// in combobox code:
var cb = new Ext.form.ComboBox({
....
plugins: suggested_text_plugin,
....
});
I think it's even possible to create a whole chain of methods, calling original method before or after yours, but I haven't tried this yet.
Also, please don't push me hard for using non-standard plugin definition and invocation methods (undocumented). It's just my way of seeing things.
EDIT:
I think the method chain could be implemented something like that (untested):
....
o.origTypeAhead = new Function(this.onTypeAhead.toSource());
// or just
o.origTypeAhead = this.onTypeAhead;
....
o.onTypeAhead = function() {
// Call original
this.origTypeAhead();
// Display value into your DOM element
...myDom....
};
A:
@qui
Another thing to consider is that initList is not part of the API. That method could disappear or the behavior could change significantly in future releases of Ext. If you never plan on upgrading, then you don't need to worry.
A:
So to clarify, you want the selected text to render somewhere besides directly below the text input. Correct?
ComboBox is just a composite of Ext.DataView, a text input, and an optional trigger button. There isn't an official option for what you want and hacking it to make it do what you want would be really painful. So, the easiest course of action (other than finding and using some other library with a component that does exactly what you want) is to build your own with the components above:
Create a text box. You can use an Ext.form.TextField if you want, and observe the keyup event.
Create a DataView bound to your store, rendering to whatever DOM element you want. Depending on what you want, listen to the 'selectionchange' event and take whatever action you need to in response to the selection. e.g., setValue on an Ext.form.Hidden (or plain HTML input type="hidden" element).
In your keyup event listener, call the store's filter method (see doc), passing the field name and the value from the text field. e.g., store.filter('name',new RegEx(value+'.*'))
It's a little more work, but it's a lot shorter than writing your own component from scratch or hacking the ComboBox to behave like you want.
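A rough sketch of those three pieces wired together (untested; the element ids, the 'name' field and the sample data are made up, and enableKeyEvents needs a reasonably recent Ext 2.x):
var store = new Ext.data.SimpleStore({
    fields: ['name'],
    data: [['Alabama'], ['Alaska'], ['Arizona']]
});

var view = new Ext.DataView({
    store: store,
    tpl: new Ext.XTemplate('<tpl for="."><div class="suggestion">{name}</div></tpl>'),
    itemSelector: 'div.suggestion',
    singleSelect: true,
    renderTo: 'contentbox', // whatever DOM element you like
    listeners: {
        selectionchange: function(dv) {
            var rec = dv.getSelectedRecords()[0];
            if (rec) {
                Ext.get('hiddenField').dom.value = rec.get('name');
            }
        }
    }
});

var text = new Ext.form.TextField({
    renderTo: 'inputbox',
    enableKeyEvents: true,
    listeners: {
        keyup: function(field) {
            store.filter('name', new RegExp('^' + Ext.escapeRe(field.getValue())));
        }
    }
});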
A:
@Thevs
I think you were on the right track.
What I did was override the initList method of Combobox.
Ext.override(Ext.form.ComboBox, {
initList : function(){
If you look at the code you can see the bit where it renders the list of suggestions to a dataview. So just set applyTo to the DOM element you want:
this.view = new Ext.DataView({
//applyTo: this.innerList,
applyTo: "contentbox",
A:
@qui
Ok. I thought you wanted an extra DOM field (in addition to the existing combo field).
But your solution would override a method in the ComboBox class, wouldn't it? That would lead to all your combo boxes rendering to the same DOM element. Using a plugin would override only one particular instance.
|
Rendering suggested values from an ext Combobox to an element in the DOM
|
I have an ext combobox which uses a store to suggest values to a user as they type.
An example of which can be found here: combobox example
Is there a way of making it so the suggested text list is rendered to an element in the DOM? Please note I do not mean the "applyTo" config option, as this would render the whole control, including the textbox, to the DOM element.
|
[
"You can use plugin for this, since you can call or even override private methods from within the plugin:\nvar suggested_text_plugin = {\n\n init: function(o) {\n\n o.onTypeAhead = function() {\n // Original code from the sources goes here:\n\n if(this.store.getCount() > 0){\n var r = this.store.getAt(0);\n var newValue = r.data[this.displayField];\n var len = newValue.length;\n var selStart = this.getRawValue().length;\n if(selStart != len){\n this.setRawValue(newValue);\n this.selectText(selStart, newValue.length);\n }\n }\n\n // Your code to display newValue in DOM\n ......myDom.getEl().update(newValue);\n };\n }\n};\n\n\n// in combobox code:\n\nvar cb = new Ext.form.ComboBox({\n ....\n plugins: suggested_text_plugin,\n ....\n});\n\nI think it's even possible to create a whole chain of methods, calling original method before or after yours, but I haven't tried this yet.\nAlso, please don't push me hard for using non-standard plugin definition and invocation methodics (undocumented). It's just my way of seeing things.\nEDIT:\nI think the method chain could be implemented something like that (untested):\n....\no.origTypeAhead = new Function(this.onTypeAhead.toSource());\n// or just\no.origTypeAhead = this.onTypeAhead;\n....\n\no.onTypeAhead = function() {\n // Call original\n this.origTypeAhead();\n // Display value into your DOM element\n ...myDom....\n};\n\n",
"@qui\nAnother thing to consider is that initList is not part of the API. That method could disappear or the behavior could change significantly in future releases of Ext. If you never plan on upgrading, then you don't need to worry.\n",
"So clarify, you want the selected text to render somewhere besides directly below the text input. Correct?\nComboBox is just a composite of Ext.DataView, a text input, and an optional trigger button. There isn't an official option for what you want and hacking it to make it do what you want would be really painful. So, the easiest course of action (other than finding and using some other library with a component that does exactly what you want) is to build your own with the components above:\n\nCreate a text box. You can use an Ext.form.TextField if you want, and observe the keyup event.\nCreate a DataView bound to your store, rendering to whatever DOM element you want. Depending on what you want, listen to the 'selectionchange' event and take whatever action you need to in response to the selection. e.g., setValue on an Ext.form.Hidden (or plain HTML input type=\"hidden\" element).\nIn your keyup event listener, call the store's filter method (see doc), passing the field name and the value from the text field. e.g., store.filter('name',new RegEx(value+'.*'))\n\nIt's a little more work, but it's a lot shorter than writing your own component from scratch or hacking the ComboBox to behave like you want.\n",
"@Thevs\nI think you were on the right track. \nWhat I did was override the initList method of Combobox.\n Ext.override(Ext.form.ComboBox, {\n initList : function(){\n\nIf you look at the code you can see the bit where it renders the list of suggestions to a dataview. So just set the apply to the dom element you want:\n this.view = new Ext.DataView({\n //applyTo: this.innerList,\n applyTo: \"contentbox\",\n\n",
"@qui\nOk. I thought you want an extra DOM field (in addition to existing combo field).\nBut your solution would override a method in the ComboBox class, isn't it? That would lead to all your combo-boxes would render to the same DOM. Using a plugin would override only one particular instance.\n"
] |
[
1,
1,
0,
0,
0
] |
[] |
[] |
[
"extjs",
"javascript"
] |
stackoverflow_0000074266_extjs_javascript.txt
|
Q:
AxAcroPDF - Vista64 Class Not Registered Error
We have a WinForms application written in C# that uses the AxAcroPDFLib.AxAcroPDF component to load and print a PDF file. Has been working without any problems in Windows XP. I have moved my development environment to Vista 64 bit and now the application will not run (on Vista 64) unless I remove the AxAcroPDF component. I get the following error when the application runs:
"System.Runtime.InteropServices.COMException:
Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))."
I have been advised on the Adobe Forums that the reason for the error is that they do not have a 64 bit version of the AxAcroPDF ActiveX control.
Is there some way around this problem? For example can I convert the 32bit ActiveX control to a 64bit control myself?
A:
You can't convert Adobe's ActiveX control to 64bit yourself, but you can force your application to run in 32bit mode by setting the platform target to x86.
For instructions for your version of Visual Studio, see section 1.44 of Issues When Using Microsoft Visual Studio 2005
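In project-file terms that's a single MSBuild property; a minimal sketch of the relevant fragment of a VS2005-era .csproj (placement inside a PropertyGroup is the only assumption here):
<PropertyGroup>
  <!-- force the process to load as 32-bit so the 32-bit ActiveX control can be created -->
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>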
A:
The .NET Framework 1.1 always targets 32-bit CPUs, while .NET Framework 2.0 and above can target 32-bit or 64-bit according to the processorArchitecture property of the program manifest, which is changed by the 'Platform Target' option in the Visual Studio IDE.
With the default option 'Any CPU', the IL code is compiled according to the platform, but of course the COM call to the 32-bit AxAcroPDF component fails if the platform is 64-bit.
Just rebuild the EXE to target 32-bit platforms only. This works fine with the WOW64 emulator in Vista 64-bit.
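If rebuilding a particular binary isn't convenient, the Windows SDK's CorFlags tool can also force the 32-bit flag onto an existing AnyCPU assembly; a sketch (MyApp.exe is a placeholder name):
corflags MyApp.exe /32BIT+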
A:
Use DLL isolation; it works with every 32-bit COM+ application. See more at:
http://support.microsoft.com/kb/281335
With this solution you can isolate your 32-bit COM+ application into a separate 32-bit process.
64-bit applications look up installed COM+ objects at HKLM\Software\Classes, but 32-bit applications use HKLM\Software\Wow6432Node\Classes
|
AxAcroPDF - Vista64 Class Not Registered Error
|
We have a WinForms application written in C# that uses the AxAcroPDFLib.AxAcroPDF component to load and print a PDF file. Has been working without any problems in Windows XP. I have moved my development environment to Vista 64 bit and now the application will not run (on Vista 64) unless I remove the AxAcroPDF component. I get the following error when the application runs:
"System.Runtime.InteropServices.COMException:
Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))."
I have been advised on the Adobe Forums that the reason for the error is that they do not have a 64 bit version of the AxAcroPDF ActiveX control.
Is there some way around this problem? For example can I convert the 32bit ActiveX control to a 64bit control myself?
|
[
"You can't convert Adobe's ActiveX control to 64bit yourself, but you can force your application to run in 32bit mode by setting the platform target to x86.\nFor instructions for your version of Visual Studio, see section 1.44 of Issues When Using Microsoft Visual Studio 2005\n",
"The .Net framework 1.1 is always targeting 32 bits CPUs while .Net framework 2.0 and above can target 32 bits or 64 bits according to the processorArchitecture property of the program manifest changed by the 'Platform Target' option of the Visual Studio IDE.\nWith the default option 'Any CPU', the IL code is compiled according to the platform but of course the COM call to the AxAcroPDF 32 bits component fails if the platform is 64 bits.\nJust rebuild the EXE to target 32 bits platform only. This works fine with the WOW64 emulator in Vista 64 bits.\n",
"Use DLL isolation, works with every 32bit COM+ application. See more at:\nhttp://support.microsoft.com/kb/281335\nWith this solution you can isolate your 32 bit COM+ application into a separate 32bit process.\n64bit applications search installed COM+ objects at: HKLM\\Software\\Classes, but 32bit applications use HKLM\\Software\\WOW6432\\Classes\n"
] |
[
14,
6,
0
] |
[] |
[] |
[
"64_bit",
"activex",
"adobe",
"axacropdf"
] |
stackoverflow_0000067167_64_bit_activex_adobe_axacropdf.txt
|
Q:
How do I compare two CLOB values in Oracle
I have two tables I would like to compare. One of the columns is type CLOB. I would like to do something like this:
select key, clob_value from source_table
minus
select key, clob_value from target_table
Unfortunately, Oracle can't perform minus operations on clobs. How can I do this?
A:
The format is this (DBMS_LOB.COMPARE also takes CLOBs via an overload):
dbms_lob.compare(
lob_1 IN BLOB,
lob_2 IN BLOB,
amount IN INTEGER := 18446744073709551615,
offset_1 IN INTEGER := 1,
offset_2 IN INTEGER := 1)
RETURN INTEGER;
If dbms_lob.compare(lob1, lob2) = 0, they are identical.
Here's an example query based on your example:
Select source_table.key, source_table.clob_value
From source_table Left Join target_table
    On source_table.key = target_table.key
Where target_table.clob_value is Null
    Or dbms_lob.compare(source_table.clob_value, target_table.clob_value) <> 0
A:
Can you access the data via a built-in package? If so, perhaps you could write a function that returns a string representation of the data (e.g. some sort of hash), then you could do
select key, to_hash_str_val(clob_value) from source_table
minus
select key, to_hash_str_val(clob_value) from target_table
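A minimal sketch of such a function, assuming you have EXECUTE on DBMS_CRYPTO (the function name and the MD5 choice are illustrative only):
create or replace function to_hash_str_val(p_value in clob) return varchar2 is
begin
  -- hash the CLOB so MINUS compares short hex strings instead of LOBs
  return rawtohex(dbms_crypto.hash(p_value, dbms_crypto.hash_md5));
end;
/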
|
How do I compare two CLOB values in Oracle
|
I have two tables I would like to compare. One of the columns is type CLOB. I would like to do something like this:
select key, clob_value from source_table
minus
select key, clob_value from target_table
Unfortunately, Oracle can't perform minus operations on clobs. How can I do this?
|
[
"The format is this: \ndbms_lob.compare( \nlob_1 IN BLOB, \nlob_2 IN BLOB, \namount IN INTEGER := 18446744073709551615, \noffset_1 IN INTEGER := 1, \noffset_2 IN INTEGER := 1) \nRETURN INTEGER; \n\n\nIf dbms_lob.compare(lob1, lob2) = 0, they are identical.\nHere's an example query based on your example: \nSelect key, glob_value \nFrom source_table Left Join target_table \n On source_table.key = target_table.key \nWhere target_table.glob_value is Null \n Or dbms_lob.compare(source_table.glob_value, target_table.glob_value) <> 0\n\n",
"Can you access the data via a built in package? If so then perhaps you could write a function that returned a string representation of the data (eg some sort of hash on the data), then you could do\nselect key, to_hash_str_val(glob_value) from source_table\nminus\nselect key, to_hash_str_val(glob_value) from target_table\n\n"
] |
[
14,
2
] |
[] |
[] |
[
"oracle",
"sql"
] |
stackoverflow_0000085675_oracle_sql.txt
|
Q:
How can I test that my Linq IQueryable has executed
I am currently using Linq to NHibernate (although that is not an issue with regards to this question) to execute queries against my database and I want to be able to test whether the current IQueryable result instance has been executed or not.
The debugger knows that my IQueryable has not been 'invoked' because it tells me that expanding the Results property will 'enumerate' it. Is there a way for me to programmatically identify that as well?
I hope that makes sense :)
A:
How about writing an IQueryable wrapper like this:
class QueryableWrapper<T> : IQueryable<T>
{
private IQueryable<T> _InnerQueryable;
private bool _HasExecuted;
public QueryableWrapper(IQueryable<T> innerQueryable)
{
_InnerQueryable = innerQueryable;
}
public bool HasExecuted
{
get
{
return _HasExecuted;
}
}
public IEnumerator<T> GetEnumerator()
{
_HasExecuted = true;
return _InnerQueryable.GetEnumerator();
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
public Type ElementType
{
get { return _InnerQueryable.ElementType; }
}
public System.Linq.Expressions.Expression Expression
{
get { return _InnerQueryable.Expression; }
}
public IQueryProvider Provider
{
get { return _InnerQueryable.Provider; }
}
}
Then you can use it like this:
var query = new QueryableWrapper<string>(
from str in myDataSource
select str);
Debug.WriteLine("HasExecuted: " + query.HasExecuted.ToString());
foreach (string str in query)
{
Debug.WriteLine(str);
}
Debug.WriteLine("HasExecuted: " + query.HasExecuted.ToString());
Output is:
False
String0
String1
...
True
A:
I believe you can use DataContext.Log to log everything that is executed.
A:
Assuming you're using Visual Studio, you can insert DataContext.Log = Console.Out into your code. You can then watch the SQL as it's executed, in the output window.
I'm not sure whether it's possible to programmatically test whether the query has been executed. You can force it to execute, for example by calling .ToList on the query.
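Putting those two ideas together, a minimal sketch (LINQ to SQL shown, since DataContext.Log belongs to it; the context and table names are made up, and LINQ to NHibernate would need its own logging hook):
var ctx = new MyDataContext();   // hypothetical LINQ to SQL context
ctx.Log = Console.Out;           // SQL is written here only when the query runs
var query = from c in ctx.Customers select c.Name;
var results = query.ToList();    // forces execution; the log output appears now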
|
How can I test that my Linq IQueryable has executed
|
I am currently using Linq to NHibernate (although that is not an issue with regards to this question) to execute queries against my database and I want to be able to test whether the current IQueryable result instance has been executed or not.
The debugger knows that my IQueryable has not been 'invoked' because it tells me that expanding the Results property will 'enumerate' it. Is there a way for me to programmatically identify that as well?
I hope that makes sense :)
|
[
"How about writing an IQueryable wrapper like this:\nclass QueryableWrapper<T> : IQueryable<T>\n{\n private IQueryable<T> _InnerQueryable;\n private bool _HasExecuted;\n\n public QueryableWrapper(IQueryable<T> innerQueryable)\n {\n _InnerQueryable = innerQueryable;\n }\n\n public bool HasExecuted\n {\n get\n {\n return _HasExecuted;\n }\n }\n\n public IEnumerator<T> GetEnumerator()\n {\n _HasExecuted = true;\n\n return _InnerQueryable.GetEnumerator();\n }\n\n System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()\n {\n return GetEnumerator();\n }\n\n public Type ElementType\n {\n get { return _InnerQueryable.ElementType; }\n }\n\n public System.Linq.Expressions.Expression Expression\n {\n get { return _InnerQueryable.Expression; }\n }\n\n public IQueryProvider Provider\n {\n get { return _InnerQueryable.Provider; }\n }\n}\n\nThen you can use it like this:\nvar query = new QueryableWrapper<string>(\n from str in myDataSource\n select str);\n\nDebug.WriteLine(\"HasExecuted: \" + query.HasExecuted.ToString());\n\nforeach (string str in query)\n{\n Debug.WriteLine(str);\n}\n\nDebug.WriteLine(\"HasExecuted: \" + query.HasExecuted.ToString());\n\nOutput is:\nFalse\nString0\nString1\n...\nTrue\n",
"I believe you can use DataContext.Log to log everything that is executed.\n",
"Assuming you're using Visual Studio, you can insert DataContext.Log = Console.Out into your code. You can then watch the SQL as it's executed, in the output window.\nI'm not sure whether it's possible to programatically test whether the query has been executed. You can force it to execute, for example by calling .ToList on the query.\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"c#",
"linq"
] |
stackoverflow_0000055843_c#_linq.txt
|
Q:
How do you port a virtual machine from VMWare to VirtualBox?
I've been using VMWare for a while and am very happy with it, but I would like to compare it with VirtualBox. Apparently the disk images are compatible, and I have successfully booted my Fedora based VM created by VMWare in VirtualBox... but the network is completely unavailable. How do you port a virtual machine from VMWare to VirtualBox and keep all the capabilities intact?
A:
Have you tried going into the options in VirtualBox and changing the network adapter to the VB one? VB is a bit different in its virtual adapters; you might have to create a new one attached to the NIC and then specify that one as the primary NIC.
A:
If the network is unavailable, you may want to check your VirtualBox configuration and make sure you have a network card configured. If you do, then the next stop would be the OS running in the virtual machine. An unfortunate fact of some operating systems is that they don't always appreciate hardware changes. If the OS is not auto-detecting the change to the network card, you may need to reconfigure it to support the new card.
Another possibility is that you were using a fixed IP address. VirtualBox uses a couple of schemes for networking that are a bit different than VMWare. You may need to change the IP inside the VM to match the expected subnet.
Outside the VM, you need to use either a bridged networking device or configure virtual ports through the NAT system if you want to gain access to your Virtual Machine.
A:
Are you sure that the network is completely unavailable? VirtualBox is known to have a problem with ICMP support, so you won't be able to ping any host from the guest OS. I ran into the same problem yesterday and the network was actually working.
|
How do you port a virtual machine from VMWare to VirtualBox?
|
I've been using VMWare for a while and am very happy with it, but I would like to compare it with VirtualBox. Apparently the disk images are compatible, and I have successfully booted my Fedora based VM created by VMWare in VirtualBox... but the network is completely unavailable. How do you port a virtual machine from VMWare to VirtualBox and keep all the capabilities intact?
|
[
"have you tried going into the options in virtual box and changing the network adapter to the VB one? VB is a bit different in it's virtual adapters, you might have to create a new one attached to the nic and then specify that one as the primary nic.\n",
"If the network is unavailable, you may want to check your VirtualBox configuration and make sure you have a network card configured. If you do, then the next stop would be the OS running in the virtual machine. An unfortunate fact of some operating systems is that they don't always appreciate hardware changes. If the OS is not auto-detecting the change to the network card, you may need to reconfigure it to support the new card. \nAnother possibility is that you were using a fixed IP address. VirtualBox uses a couple of schemes for networking that are a bit different than VMWare. You may need to change the IP inside the VM to match the expected subnet. \nOutside the VM, you need to use either a bridged networking device or configure ports virtual ports through the NAT system if you want to gain access to your Virtual Machine.\n",
"Are you sure that network is completly unavailable? VirtualBox is known to have a problem with ICMP support so you won't be able to ping any host from the guest OS. I ran into the same problem yesterday and the network was actually working.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"virtualbox",
"vmware"
] |
stackoverflow_0000085414_virtualbox_vmware.txt
|
Q:
Migrating from MySQL to PostgreSQL
We are currently using MySQL for a product we are building, and are keen to move to PostgreSQL as soon as possible, primarily for licensing reasons.
Has anyone else done such a move? Our database is the lifeblood of the application and will eventually be storing TBs of data, so I'm keen to hear about experiences of performance improvements/losses, major hurdles in converting SQL and stored procedures, etc.
Edit: Just to clarify to those who have asked why we don't like MySQL's licensing. We are developing a commercial product which (currently) depends on MySQL as a database back-end. Their license states we need to pay them a percentage of our list price per installation, and not a flat fee. As a startup, this is less than appealing.
A:
Steve, I had to migrate my old application the way around, that is PgSQL->MySQL. I must say, you should consider yourself lucky ;-)
Common gotchas are:
PostgreSQL's SQL is actually pretty close to the language standard, so you may suffer from the MySQL dialect you already know
MySQL quietly truncates varchars that exceed max length, whereas Pg complains - a quick workaround is to have these columns as 'text' instead of 'varchar' and use triggers to truncate long lines (sketched below)
double quotes are used instead of reverse apostrophes
boolean fields are compared using IS and IS NOT operators, however MySQL-compatible INT(1) with = and <> is still possible
there is no REPLACE, use DELETE/INSERT combo
Pg is pretty strict on enforcing foreign keys integrity, so don't forget to use ON DELETE CASCADE on references
if you use PHP with PDO, remember to pass a parameter to lastInsertId() method - it should be sequence name, which is created usually this way: [tablename]_[primarykeyname]_seq
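For the varchar-truncation gotcha above, a minimal sketch of such a trigger (the customers table, name column and 100-character limit are made up):
create or replace function truncate_name() returns trigger as $$
begin
  -- silently trim, mimicking MySQL's behaviour
  new.name := substring(new.name from 1 for 100);
  return new;
end;
$$ language plpgsql;

create trigger trim_name before insert or update on customers
  for each row execute procedure truncate_name();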
I hope that helps at least a bit. Have lots of fun playing with Postgres!
A:
I have done a similar conversion, but for different reasons. It was because we needed better ACID support, and the ability to have web users see the same data they could via other DB tools (one ID for both).
Here are the things that bit us:
MySQL does not enforce constraints as strictly as PostgreSQL.
There are different date handling routines. These will need to be manually converted.
Any code that does not expect ACID compliance may be an issue.
That said, once it was in place and tested, it was much nicer. With correct locking for safety reasons and heavy concurrent use, PostgreSQL performed better than MySQL. On the things where locking was not needed (read only) the performance was not quite as good, but it was still faster than the network card, so it was not an issue.
Tips:
The automated scripts in the contrib directory are a good starting point for your conversion, but will need to be touched a little usually.
I would highly recommend that you use the serializable isolation level as a default.
The pg_autodoc tool is good to really see your data structures and help find any relationships you forgot to define and enforce.
A:
We did a move from MySQL 3 to PostgreSQL 8.2, then 8.3. PostgreSQL has the basics of SQL and a lot more, so if your MySQL code does not use fancy MySQL-specific features you will be OK.
From my experience, our MySQL database (version 3) didn't have foreign keys... PostgreSQL lets you have them, so we had to change that... and it was a good thing, and we found some mistakes.
The other thing that we had to change was the (C#) connector, which wasn't the same as MySQL's. The MySQL one was more stable than the PostgreSQL one. We still have a few problems with the PostgreSQL one.
|
Migrating from MySQL to PostgreSQL
|
We are currently using MySQL for a product we are building, and are keen to move to PostgreSQL as soon as possible, primarily for licensing reasons.
Has anyone else done such a move? Our database is the lifeblood of the application and will eventually be storing TBs of data, so I'm keen to hear about experiences of performance improvements/losses, major hurdles in converting SQL and stored procedures, etc.
Edit: Just to clarify to those who have asked why we don't like MySQL's licensing. We are developing a commercial product which (currently) depends on MySQL as a database back-end. Their license states we need to pay them a percentage of our list price per installation, and not a flat fee. As a startup, this is less than appealing.
|
[
"Steve, I had to migrate my old application the way around, that is PgSQL->MySQL. I must say, you should consider yourself lucky ;-)\nCommon gotchas are:\n\nSQL is actually pretty close to language standard, so you may suffer from MySQL's dialect you already know\nMySQL quietly truncates varchars that exceed max length, whereas Pg complains - quick workaround is to have these columns as 'text' instead of 'varchar' and use triggers to truncate long lines\ndouble quotes are used instead of reverse apostrophes\nboolean fields are compared using IS and IS NOT operators, however MySQL-compatible INT(1) with = and <> is still possible\nthere is no REPLACE, use DELETE/INSERT combo\nPg is pretty strict on enforcing foreign keys integrity, so don't forget to use ON DELETE CASCADE on references\nif you use PHP with PDO, remember to pass a parameter to lastInsertId() method - it should be sequence name, which is created usually this way: [tablename]_[primarykeyname]_seq\n\nI hope that helps at least a bit. Have lots of fun playing with Postgres!\n",
"I have done a similar conversion, but for different reasons. It was because we needed better ACID support, and the ability to have web users see the same data they could via other DB tools (one ID for both).\nHere are the things that bit us:\n\nMySQL does not enforce constraints\nas strictly as PostgreSQL. \nThere are different date handling routines. These will need to be manually converted. \nAny code that does not expect ACID\ncompliance may be an issue.\n\nThat said, once it was in place and tested, it was much nicer. With correct locking for safety reasons and heavy concurrent use, PostgreSQL performed better than MySQL. On the things where locking was not needed (read only) the performance was not quite as good, but it was still faster than the network card, so it was not an issue.\nTips:\n\nThe automated scripts in the contrib\ndirectory are a good starting point\nfor your conversion, but will need\nto be touched a little usually.\nI would highly recommend that you\nuse the serializable isolation\nlevel as a default.\nThe pg_autodoc tool is good to\nreally see your data structures and\nhelp find any relationships you\nforgot to define and enforce.\n\n",
"We did a move from a MySQL3 to PostgreSQL 8.2 then 8.3. PostgreSQL has the basic of SQL and a lot more so if your MYSQL do not use fancy MySQL stuff you will be OK.\nFrom my experience, our MySQL database (version 3) doesn't have Foreign Key... PostgreSQL lets you have them, so we had to change that... and it was a good thing and we found some mistake.\nThe other thing that we had to change was the coding (C#) connector that wasn't the same in MySQL. The MySQL one was more stable than the PostgreSQL one. We still have few problems with the PostgreSQL one.\n"
] |
[
27,
13,
3
] |
[] |
[] |
[
"database",
"licensing",
"migration",
"mysql",
"postgresql"
] |
stackoverflow_0000017717_database_licensing_migration_mysql_postgresql.txt
|
Q:
Php $_GET issue
foreach ($_GET as $field => $label)
{
$datarray[]=$_GET[$field];
echo "$_GET[$field]";
echo "<br>";
}
print_r($datarray);
This is the output I am getting. I see the data is there in datarray but when
I echo $_GET[$field]
I only get "Array"
But print_r($datarray) prints all the data. Any idea how I pull those values?
OUTPUT
Array (
[0] => Array (
[0] => Grade1
[1] => ln
[2] => North America
[3] => yuiyyu
[4] => iuy
[5] => uiyui
[6] => yui
[7] => uiy
[8] => 0:0:5
)
)
A:
Use var_export($_GET) to more easily see what kind of array you are getting.
From the output of your script I can see that you have multiple nested arrays. It seems to be something like:
$_GET = array( array( array("Grade1", "ln", "North America", "yuiyyu", "iuy", "uiyui", "yui","uiy","0:0:5")))
so to get those variables out you need something like:
echo $_GET[0][0][0]; // => "Grade1"
A:
EDIT: When I completed your test, here was the final URL:
http://hofstrateach.org/Roberto/process.php?keys=Grade1&keys=Nathan&keys=North%20America&keys=5&keys=3&keys=no&keys=foo&keys=blat&keys=0%3A0%3A24
This is probably a malformed URL. When you pass duplicate keys in a query, PHP makes them an array. The above URL should probably be something like:
http://hofstrateach.org/Roberto/process.php?grade=Grade1&schoolname=Nathan®ion=North%20America&answer[]=5&answer[]=3&answer[]=no&answer[]=foo&answer[]=blat&time=0%3A0%3A24
This will create individual entries for most of the fields, and make $_GET['answer'] be an array of the answers provided by the user.
Bottom line: fix your Flash file.
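With a URL in that corrected shape, reading the values becomes straightforward; a minimal sketch (parameter names taken from the corrected URL above):
echo $_GET['grade'];           // "Grade1"
echo $_GET['region'];          // "North America"
foreach ($_GET['answer'] as $answer) {
    echo $answer . "<br>";     // 5, 3, no, foo, blat
}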
A:
Use <pre> tags before print_r; then you will have a tree printed (or just look at the page source). From this point you will have a clear understanding of how your array is structured and will be able to pull the value you want.
I suggest further reading on the $_GET variable and arrays, for a better understanding of its values.
A:
calling echo on an array will always output "Array".
print_r (from the PHP manual) prints human-readable information about a variable.
A:
Try this:
foreach ($_GET as $field => $label)
{
$datarray[]=$_GET[$field];
echo $_GET[$field]; // you don't really need quotes
echo "With quotes: {$_GET[$field]}"; // but if you want to use them
echo $field; // this is really the same thing as echo $_GET[$field], so
if($label == $_GET[$field]) {
echo "Should always be true<br>";
}
echo "<br>";
}
print_r($datarray);
A:
It's printing just "Array" because when you say
echo "$_GET[$field]";
PHP can't know that you mean $_GET element $field, it sees it as you wanting to print variable $_GET. So, it tries to print it, and of course it's an Array, so that's what you get. Generally, when you want to echo an array element, you'd do it like this:
echo "The foo element of get is: {$_GET['foo']}";
The curly brackets tell PHP that the whole thing is a variable that needs to be interpreted; otherwise it will assume the variable name is $_GET by itself.
In your case though you don't need that, what you need is:
foreach ($_GET as $field => $label)
{
$datarray[] = $label;
}
and if you want to print it, just do
echo $label; // or $_GET[$field], but that's kind of pointless.
The problem was not with your flash file, change it back to how it was; you know it was correct because your $datarray variable contained all the data. Why do you want to extract data from $_GET into another array anyway?
|
Php $_GET issue
|
foreach ($_GET as $field => $label)
{
$datarray[]=$_GET[$field];
echo "$_GET[$field]";
echo "<br>";
}
print_r($datarray);
This is the output I am getting. I see the data is there in datarray but when
I echo $_GET[$field]
I only get "Array"
But print_r($datarray) prints all the data. Any idea how I pull those values?
OUTPUT
Array (
[0] => Array (
[0] => Grade1
[1] => ln
[2] => North America
[3] => yuiyyu
[4] => iuy
[5] => uiyui
[6] => yui
[7] => uiy
[8] => 0:0:5
)
)
|
[
"Use var_export($_GET) to more easily see what kind of array you are getting.\nFrom the output of your script I can see that you have multiple nested arrays. It seems to be something like:\n$_GET = array( array( array(\"Grade1\", \"ln\", \"North America\", \"yuiyyu\", \"iuy\", \"uiyui\", \"yui\",\"uiy\",\"0:0:5\")))\n\nso to get those variables out you need something like:\necho $_GET[0][0][0]; // => \"Grade1\"\n\n",
"EDIT: When I completed your test, here was the final URL:\nhttp://hofstrateach.org/Roberto/process.php?keys=Grade1&keys=Nathan&keys=North%20America&keys=5&keys=3&keys=no&keys=foo&keys=blat&keys=0%3A0%3A24\nThis is probably a malformed URL. When you pass duplicate keys in a query, PHP makes them an array. The above URL should probably be something like:\nhttp://hofstrateach.org/Roberto/process.php?grade=Grade1&schoolname=Nathan®ion=North%20America&answer[]=5&answer[]=3&answer[]=no&answer[]=foo&answer[]=blat&time=0%3A0%3A24\nThis will create individual entries for most of the fields, and make $_GET['answer'] be an array of the answers provided by the user.\nBottom line: fix your Flash file.\n",
"Use <pre> tags before print_r, then you will have a tree printed (or just look at the source. From this point you will have a clear understanding of how your array is and will be able to pull the value you want.\nI suggest further reading on $_GET variable and arrays, for a better understanding of its values\n",
"calling echo on an array will always output \"Array\".\nprint_r (from the PHP manual) prints human-readable information about a variable.\n",
"Try this:\nforeach ($_GET as $field => $label)\n{\n $datarray[]=$_GET[$field];\n\n echo $_GET[$field]; // you don't really need quotes\n\n echo \"With quotes: {$_GET[$field]}\"; // but if you want to use them\n\n echo $field; // this is really the same thing as echo $_GET[$field], so\n\n if($label == $_GET[$field]) {\n echo \"Should always be true<br>\";\n }\n echo \"<br>\";\n}\nprint_r($datarray);\n\n",
"It's printing just \"Array\" because when you say\n echo \"$_GET[$field]\";\n\nPHP can't know that you mean $_GET element $field, it sees it as you wanting to print variable $_GET. So, it tries to print it, and of course it's an Array, so that's what you get. Generally, when you want to echo an array element, you'd do it like this:\necho \"The foo element of get is: {$_GET['foo']}\";\n\nThe curly brackets tell PHP that the whole thing is a variable that needs to be interpreted; otherwise it will assume the variable name is $_GET by itself.\nIn your case though you don't need that, what you need is:\nforeach ($_GET as $field => $label)\n{\n $datarray[] = $label;\n}\n\nand if you want to print it, just do \necho $label; // or $_GET[$field], but that's kind of pointless.\n\nThe problem was not with your flash file, change it back to how it was; you know it was correct because your $dataarray variable contained all the data. Why do you want to extract data from $_GET into another array anyway?\n"
] |
[
1,
1,
0,
0,
0,
0
] |
[
"Perhaps the GET variables are arrays themselves? i.e. http://site.com?var[]=1&var[]=2\n",
"It looks like your GET argument is itself an array. It would be helpful to have the input as well as the output.\n"
] |
[
-1,
-1
] |
[
"arrays",
"foreach",
"get",
"php",
"printing"
] |
stackoverflow_0000083953_arrays_foreach_get_php_printing.txt
|
Q:
How to sort an array by keys in an ascending direction?
Here is the input I am getting from my Flash file
process.php?Q2=898&Aa=Grade1&Tim=0%3A0%3A12&Q1=908&Bb=lkj&Q4=jhj&Q3=08&Cc=North%20America&Q0=1
and in PHP I use this code
foreach ($_GET as $field => $label)
{
$datarray[]=$_GET[$field];
echo "$field :";
echo $_GET[$field];
echo "<br>";
}
I get this output
Q2 :898
Aa :Grade1
Tim :0:0:12
Q1 :908
Bb :lkj
Q4 :jhj
Q3 :08
Cc :North America
Q0 :1
Now my question is: how do I sort it alphabetically, so it should look like this
Aa :Grade1
Bb :lkj
Cc :North America
Q0 :1
Q1 :908
and so on... before I can insert it into the DB.
A:
ksort($_GET);
This should sort the $_GET array by its keys; use krsort for reverse order.
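Applied to the code from the question, a minimal sketch:
ksort($_GET);
foreach ($_GET as $field => $label) {
    echo "$field :$label<br>";
}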
A:
what you're looking for is ksort. Dig the PHP manual! ;)
A:
To get a natural sort by key:
function knatsort(&$karr){
$kkeyarr = array_keys($karr);
natsort($kkeyarr);
$ksortedarr = array();
foreach($kkeyarr as $kcurrkey){
$ksortedarr[$kcurrkey] = $karr[$kcurrkey];
}
$karr = $ksortedarr;
return true;
}
Thanks, PHP Manual!
knatsort($_GET);
foreach ($_GET as $key => $value) {
    echo $key.' - '.$value.'<br/>';
}
|
How to sort an array by keys in an ascending direction?
|
Here is the input I am getting from my Flash file
process.php?Q2=898&Aa=Grade1&Tim=0%3A0%3A12&Q1=908&Bb=lkj&Q4=jhj&Q3=08&Cc=North%20America&Q0=1
and in PHP I use this code
foreach ($_GET as $field => $label)
{
$datarray[]=$_GET[$field];
echo "$field :";
echo $_GET[$field];
echo "<br>";
}
I get this output
Q2 :898
Aa :Grade1
Tim :0:0:12
Q1 :908
Bb :lkj
Q4 :jhj
Q3 :08
Cc :North America
Q0 :1
Now my question is: how do I sort it alphabetically, so it should look like this
Aa :Grade1
Bb :lkj
Cc :North America
Q0 :1
Q1 :908
and so on... before I can insert it into the DB.
|
[
"ksort($_GET);\n\nThis should ksort the $_GET array by it's keys. krsort for reverse order.\n",
"what you're looking for is ksort. Dig the PHP manual! ;)\n",
"To get a natural sort by key:\nfunction knatsort(&$karr){\n $kkeyarr = array_keys($karr);\n natsort($kkeyarr);\n $ksortedarr = array();\n foreach($kkeyarr as $kcurrkey){\n $ksortedarr[$kcurrkey] = $karr[$kcurrkey];\n }\n $karr = $ksortedarr;\n return true;\n}\n\nThanks, PHP Manual!\nforeach ($_GET as $key => $value) {\n echo $key.' - '.$value.'<br/>';\n}\n\n"
] |
[
6,
1,
0
] |
[] |
[] |
[
"arrays",
"php",
"query_string",
"sorting"
] |
stackoverflow_0000085770_arrays_php_query_string_sorting.txt
|
Q:
MVC C# custom MvcRouteHandler - How to?
Does anyone have experiences in providing a custom MvcRouteHandler? In my application I'd like to implement a globalization-pattern like http://mydomain/en/about or http://mydomain/de/about.
As for persistence, I'd like to have a cookie read as soon as a request arrives and, if there is a language setting in this cookie, apply it (so a user arriving at http://mydomain/ would be transferred to http://mydomain/en/ for example). If there is no cookie present, I'd like to get the first language the browser supports, apply this one and store it in this cookie.
I guess this can't be done with the standard routing mechanism MVC provides in its initial project template. In a newsgroup I got the tip to have a look at the MvcRouteHandler and implement my own, but it's hard to find a sample on how to do that.
Any ideas?
A:
I don't believe a custom route handler is required for what you are doing.
For your "globalized" URIs, a regular MVC route, with a constraint that the "locale" parameter must be equal to "en", "de", etc., will do. The constraint will prevent non-globalized URIs from matching the route.
For a "non-globalized" URI, make a "catch-all" route which simply redirects to the default or cookie-set locale URI.
Place the "globalized" route above the "catch-all" route in Global.asax, so that "already-globalized" URIs don't fall through to the redirection.
You would need to make a new route handler if you want a certain URI pattern to trigger something that is not an action on a controller. But I don't think that's what you're dealing with, here.
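A minimal sketch of the two routes described above, using the standard MapRoute API (the route, controller and action names are illustrative, not from the question):
routes.MapRoute(
    "Localized",
    "{locale}/{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = "" },
    new { locale = "en|de" }   // constraint: only these locales match
);

routes.MapRoute(
    "CatchAll",
    "{*path}",                 // anything not already globalized
    new { controller = "Locale", action = "RedirectToLocalized" }
);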
A:
You should be able to do this with ASP.NET MVC's default template, I'm doing something similar. Just build your routes as {language}/{controller}/{action}/{id}
Just set a default route that goes to a controller that checks for the language cookie, and redirects the user based on that cookie.
|
MVC C# custom MvcRouteHandler - How to?
|
Does anyone have experiences in providing a custom MvcRouteHandler? In my application I'd like to implement a globalization-pattern like http://mydomain/en/about or http://mydomain/de/about.
As for persistence, I'd like to have a cookie read as soon as a request arrives and, if there is a language setting in this cookie, apply it (so a user arriving at http://mydomain/ would be transferred to http://mydomain/en/ for example). If there is no cookie present, I'd like to get the first language the browser supports, apply this one and store it in this cookie.
I guess this can't be done with the standard routing mechanism MVC provides in its initial project template. In a newsgroup I got the tip to have a look at the MvcRouteHandler and implement my own, but it's hard to find a sample on how to do that.
Any ideas?
|
[
"I don't believe a custom route handler is required for what you are doing.\nFor your \"globalized\" URIs, a regular MVC route, with a constraint that the \"locale\" parameter must be equal to \"en\", \"de\", etc., will do. The constraint will prevent non-globalized URIs from matching the route.\nFor a \"non-globalized\" URI, make a \"catch-all\" route which simply redirects to the default or cookie-set locale URI.\nPlace the \"globalized\" route above the \"catch-all\" route in Global.asax, so that \"already-globalized\" URIs don't fall through to the redirection.\nYou would need to make a new route handler if you want a certain URI pattern to trigger something that is not an action on a controller. But I don't think that's what you're dealing with, here.\n",
"You should be able to do this with ASP.NET MVC's default template, I'm doing something similar. Just build your routes as {language}/{controller}/{action}/{id}\nJust set a default route that goes to a controller that checks for the language cookie, and redirects the user based on that cookie.\n"
] |
[
2,
0
] |
[] |
[] |
[
"asp.net_mvc",
"c#",
"mvcroutehandler"
] |
stackoverflow_0000085470_asp.net_mvc_c#_mvcroutehandler.txt
|
Q:
How should I write code with unique sections for different versions of .NET
My source code needs to support both .NET version 1.1 and 2.0 ... how do I test for the different versions & what is the best way to deal with this situation.
I'm wondering if I should have the two sections of code inline, in separate classes, methods etc. What do you think?
A:
There are a lot of different options here. Where I work we use #if pragmas but it could also be done with separate assemblies for the separate versions.
Ideally you would at least keep the version-dependent code in separate partial class files and make the correct version available at compile time. I would enforce this if I could go back in time; our code base now has a whole lot of #if pragmas and sometimes it can be hard to manage. The worst part of the whole #if pragma thing is that Visual Studio just ignores anything that won't compile with the current defines, and so it's very easy to check in breaking changes.
NUnit supports both 1.1 and 2.0 and so is a good choice for a test framework. It's not too hard to use something like NAnt to make separate 1.1 and 2.0 builds and then automatically run the NUnit tests.
A:
If you want to do something like this you will need to use preprocessor commands and conditional compilation symbols.
I would use symbols that clearly indicate the version of .NET you are targeting (say NET11 and NET20) and then wrap the relevant code like this:
#if NET11
// .NET 1.1 code
#elif NET20
// .NET 2.0 code
#endif
The reason for doing it this way rather than a simple if/else is an extra layer of protection in case someone forgets to define the symbol.
That being said, you should really drill down to the heart of the reason why you want/need to do this.
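For completeness, a sketch of how such a symbol gets defined outside the IDE with the command-line compiler (file names are placeholders; in Visual Studio the same thing lives under project properties as conditional compilation symbols):
csc /define:NET20 /out:MyApp.exe *.cs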
A:
I would ask WHY you have to maintain two code bases; I would pick one and go with it if there is any chance of doing so.
Trying to keep two code bases in sync, given the number and types of changes, would be very complex, and a build process that builds for either version would be very complex too.
A:
We had this problem and we ended up with a "compatibility layer" where we implemented a single set of interfaces and utility code for .NET v1.1 and v2.0.
Then our installer laid down the right code for the right version. We used NSIS (free!), and they have functions you can call to determine the .NET version.
|
How should I write code with unique sections for different versions of .NET
|
My source code needs to support both .NET version 1.1 and 2.0 ... how do I test for the different versions & what is the best way to deal with this situation.
I'm wondering if I should have the two sections of code inline, in separate classes, methods etc. What do you think?
|
[
"There are a lot of different options here. Where I work we use #if pragmas but it could also be done with separate assemblies for the separate versions. \nIdeally you would at least keep the version dependant code in separate partial class files and make the correct version available at compile time. I would enforce this if I could go back in time, our code base now has a whole lot of #if pragmas and sometimes it can be hard to manage. The worst part of the whole #if pragma thing is that Visual Studio just ignores anything that won't compile with the current defines and so it's very easy to check in breaking changes.\nNUnit supports both 1.1 and 2.0 and so is a good choice for a test framework. It's not too hard to use something like NAnt to make separate 1.1 and 2.0 builds and then automatically run the NUnit tests.\n",
"If you want to do something like this you will need to use preprocessor commands and conditional compilation symbols.\nI would use symbols that clearly indicate the version of .NET you are targeting (say NET11 and NET20) and then wrap the relevant code like this:\n#if NET11\n// .NET 1.1 code\n#elif NET20\n// .NET 2.0 code\n#endif\n\nThe reason for doing it this way rather than a simple if/else is an extra layer of protection in case someone forgets to define the symbol.\nThat being said, you should really drill down to the heart of the reason why you want/need to do this.\n",
"I would be asking the question of WHY you have to maintain two code bases, I would pick one and go with it if there is any chance of it.\nTrying to keep two code bases in sync with the number of changes, and types of changes would be very complex, and a build process to build for either version would be very complex.\n",
"We had this problem and we ended up with a \"compatability layer\" where we implemented a single set of interfaces and utility code for .NET v1.1 and v2.0.\nThen our installer laid down the right code for the right version. We used NSIS (free!), and they have functions you can call to determine the .NET version.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"c_preprocessor"
] |
stackoverflow_0000085773_.net_c#_c_preprocessor.txt
|
Q:
C# component do not refresh when source code updated
I have a solution with many projects. One project contains a few custom components. One of these components is used to display a title on an image. We can change the color of the background and many other things.
The problem is: if I decide to change the default background color of the component or change the position of the text, those changes won't be reflected in the other projects of the solution where the component is used. I have compiled the component's project, and all other projects reference the component by project.
For the moment, what I have to do is remove the component from the other projects one by one and add it back; then all is fine. Do you have a quicker way to do it?
UPDATE
I have added a CheckBox inside that component and it seems that the checkbox shows up everywhere! Fine! But when a property is one the component lets you change (for example the background color), it doesn't change the "default" value; instead it puts the old value as a changed value in the property. So, I see the old value set as if I had manually changed the color in the Properties panel when I haven't...
UPDATE 2
Screenshot: http://img517.imageshack.us/img517/9112/oldonenewoneei0.png
Update 3:
This problem is still here. Just to let people know that I am still curious to find a way.
I have tried a few of your suggestions.
If I clean the whole solution, build only the project that has the custom control, and then build the solution, nothing changes (to test it, I changed the color of the component to yellow; nothing changed): fail.
If I remove the reference, add it back to the project and then rebuild the solution, I can still see the old color in the designer: fail.
I have updated the question with more information and an image (above) for those who want to try to help me.
As you can see, the old "compile" of the component shows the yellow background, but when I insert a new component (from the left toolbar in Visual Studio) I get the new component with the expected WHITE background...
A:
This is most likely due to references.
Your other projects probably copy in a reference to your component project. You'll have to rebuild these other projects for them to re-copy in the referenced component project, if it has changed. It is only updated at build time.
You can somewhat get around this by having them part of the same solution. In that case, you can set up your project dependencies correctly and it should handle things for you mostly automatically. But having everything in the same solution isn't always the right thing to do.
If you already have them part of the same solution or it's not a references problem, it might be due to component serialization. We've run into this quirk a lot when doing custom control development.
A:
My guess is that the designer is smart and remembers the settings for the component as you have it in the designer and thus sees it as the default.
A:
This doesn't sound usual. Right clicking on the solution and hitting "Clean Solution" might help (it will delete all dlls and executables from each project's bin directory, which forces fresh builds to occur)
You might also want to check your build order sequence.
A:
I work on a project that has a similar problem. I have found that if you touch the .NET config file or the assembly information file (depending on your project type), the other projects will then reflect the component change...
I'm not sure why this happens, but this is how I overcome it...
Recently I have switched to building everything via NAnt, and that takes care of the problem altogether.
A:
Sometimes the Visual Designer serializes all your properties in the code-behind, even if they have the default value.
If your component has a default backcolor of Red, and you change the default backcolor to Blue, the components that use your component will change it back to Red.
|
C# component do not refresh when source code updated
|
I have a solution with many projects. One project contains a few custom components. One of these components is used to display a title on an image. We can change the color of the background and many other things.
The problem is: if I decide to change the default background color of the component or change the position of the text, those changes won't be reflected in the other projects of the solution where the component is used. I have compiled the component's project, and all other projects reference the component by project.
For the moment, what I have to do is remove the component from the other projects one by one and add it back; then all is fine. Do you have a quicker way to do it?
UPDATE
I have added a CheckBox inside that component and it seems that the checkbox shows up everywhere! Fine! But when a property is one the component lets you change (for example the background color), it doesn't change the "default" value; instead it puts the old value as a changed value in the property. So, I see the old value set as if I had manually changed the color in the Properties panel when I haven't...
UPDATE 2
Screenshot: http://img517.imageshack.us/img517/9112/oldonenewoneei0.png
Update 3:
This problem is still here. Just to let people know that I am still curious to find a way.
I have tried a few of your suggestions.
If I clean the whole solution, build only the project that has the custom control, and then build the solution, nothing changes (to test it, I changed the color of the component to yellow; nothing changed): fail.
If I remove the reference, add it back to the project and then rebuild the solution, I can still see the old color in the designer: fail.
I have updated the question with more information and an image (above) for those who want to try to help me.
As you can see, the old "compile" of the component shows the yellow background, but when I insert a new component (from the left toolbar in Visual Studio) I get the new component with the expected WHITE background...
|
[
"This is most likely due to references.\nYour other projects probably copy in a reference to your component project. You'll have to rebuild these other projects for them to re-copy in the referenced component project, if it has changed. It is only updated at build time.\nYou can somewhat get around this by having them part of the same solution. In that case, you can set up your project dependencies correctly and it should handle things for you mostly automatically. But having everything in the same solution isn't always the right thing to do.\nIf you already have them part of the same solution or it's not a references problem, it might be due to component serialization. We've run into this quirk a lot when doing custom control development.\n",
"My guess is that the designer is smart and remembers the settings for the component as you have it in the designer and thus sees it as the default.\n",
"This doesn't sound usual. Right clicking on the solution and hitting \"Clean Solution\" might help (it will delete all dlls and executables from each project's bin directory, which forces fresh builds to occur)\nYou might also want to check your build order sequence.\n",
"I work on a project that has a similar problem, I have found that if you touch the .NET config file or assembly information file (depending on your project type). The other projects will then reflect the component change...\nI'm not sure why this happens, but this is how I overcome it...\nRecently I have switch to building everything via Nant, and that takes care of the problem altogether. \n",
"Sometimes the Visual Designer serialize all your properties in the code-behind, even if they have the default value. \nIf your component have a default backcolor of Red, and you change the default backcolor to Blue, the components that use your component will change it back to Red. \n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"custom_component"
] |
stackoverflow_0000085866_.net_c#_custom_component.txt
|
Q:
How do I speed up data retrieval from .NET AD within ColdFusion?
How can I optimize the following code, which currently takes over 2 minutes to retrieve and loop through 800+ records from a pool of over 100K records, returning 6 fields per record (adds approximately 20 seconds per additional field):
<cfset dllPath="C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\System.DirectoryServices.dll" />
<cfset LDAPPath="LDAP://" & arguments.searchPath />
<cfset theLookUp=CreateObject(".NET","System.DirectoryServices.DirectoryEntry", dllPath).init(LDAPPath) />
<cfset theSearch=CreateObject(".NET","System.DirectoryServices.DirectorySearcher", dllPath).init(theLookUp) />
<cfset theSearch.Set_Filter(arguments.theFilter) />
<cfset theObject = theSearch.FindAll() />
<cfloop index="row" from="#startRow#" to="#endRow#">
<cfset QueryAddRow(theQuery) />
<cfloop list="#columnList#" index="col">
<cfloop from="0" to="#theObject.Get_Item(row).Get_Properties().Get_Item(col).Get_Count()-1#" index="item">
<cftry>
<cfset theQuery[col][theQuery.recordCount]=ListAppend(theQuery[col][theQuery.recordCount],theObject.Get_Item(row).Get_Properties().Get_Item(col).Get_Item(item),"|") />
<cfcatch type="any">
</cfcatch>
</cftry>
</cfloop>
</cfloop>
</cfloop>
A:
It's been a long time since I touched CF, but I can give some hints in pseudo-code. For one thing, this expression is extremely inefficient:
#theObject.Get_Item(row).Get_Properties().Get_Item(col).Get_Count()-1#
Take the first part, for example, Get_Item(row) - your code causes CF to retrieve the row and its properties for each iteration of the #columnList# loop; and to top it all, you're doing that TWICE per iteration of columnList (once for the loop and again for the inner cfset). If you think about it, it only needs to retrieve the row once per iteration of the outer loop (from #startRow# to #endRow#). So, in pseudo-code, do this:
for each row between start and end
    cfset props = #theobject.get_item(row).get_properties()#
    for each col in #columnlist#
        cfset currentcol = #props.getitem(col)#
        cfset count = #currentcol.getcount() - 1#
        foreach item from 0 to #count#
            cfset #currentcol.getItem(item)# etc...
Make sense? Every time you enter a loop, cache objects that will be reused in that scope (or child scopes) in a variable. That means you are only grabbing the column object once per iteration of the column loop. All variables defined in outer scopes are available in the inner scopes, as you can see in what I've done above. I know its tempting to cut and paste from previous lines, but don't. It only hurts you in the end.
hope this helps,
Oisin
A:
How large is the list of items for the inner loop?
Switching to an array might be faster if there is a significantly large number of items.
I have implemented this alongside x0n's suggestions...
<cfset dllPath="C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\System.DirectoryServices.dll" />
<cfset LDAPPath="LDAP://" & arguments.searchPath />
<cfset theLookUp=CreateObject(".NET","System.DirectoryServices.DirectoryEntry", dllPath).init(LDAPPath) />
<cfset theSearch=CreateObject(".NET","System.DirectoryServices.DirectorySearcher", dllPath).init(theLookUp) />
<cfset theSearch.Set_Filter(arguments.theFilter) />
<cfset theObject = theSearch.FindAll() />
<cfloop index="row" from="#startRow#" to="#endRow#">
<cfset Props = theObject.get_item(row).get_properties() />
<cfset QueryAddRow(theQuery) />
<cfloop list="#columnList#" index="col">
<cfset CurrentCol = Props.getItem(col) />
<cfset ItemArray = ArrayNew(1)/>
<cfloop from="0" to="#CurrentCol.getcount() - 1#" index="item">
<cftry>
<cfset ArrayAppend( ItemArray , CurrentCol.Get_Item(item) )/>
<cfcatch type="any">
</cfcatch>
</cftry>
</cfloop>
<cfset theQuery[col][theQuery.recordCount] = ArrayToList( ItemArray , '|' )/>
</cfloop>
</cfloop>
A:
Additionally, using a cftry block in each loop is likely slowing this down quite a bit. Unless you are expecting individual rows to fail (and you need to continue from that point), I would suggest a single try/catch block for the entire process. Try/catch is an expensive operation.
A:
I would think that you'd want to stop doing so many evaluations inside of your loops and instead use variables to hold counts, pointers to the col object and to hold your pipe-delim string until you're ready to commit to the query object. If I've done the refactoring correctly, you should notice an improvement if you use the code below:
<cfloop index="row" from="#startRow#" to="#endRow#">
<cfset QueryAddRow(theQuery) />
<cfloop list="#columnList#" index="col">
<cfset PipedVals = "">
<cfset theItem = theObject.Get_Item(row).Get_Properties().Get_Item(col)>
<cfset ColCount = theItem.Get_Count()-1>
<cfloop from="0" to="#ColCount#" index="item">
<cftry>
<cfset PipedVals = ListAppend(PipedVals,theItem.Get_Item(item),"|")>
<cfcatch type="any"></cfcatch>
</cftry>
</cfloop>
    <cfset QuerySetCell(theQuery, col, PipedVals)>
  </cfloop>
</cfloop>
|
How do I speed up data retrieval from .NET AD within ColdFusion?
|
How can I optimize the following code, which currently takes over 2 minutes to retrieve and loop through 800+ records from a pool of over 100K records, returning 6 fields per record (adds approximately 20 seconds per additional field):
<cfset dllPath="C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\System.DirectoryServices.dll" />
<cfset LDAPPath="LDAP://" & arguments.searchPath />
<cfset theLookUp=CreateObject(".NET","System.DirectoryServices.DirectoryEntry", dllPath).init(LDAPPath) />
<cfset theSearch=CreateObject(".NET","System.DirectoryServices.DirectorySearcher", dllPath).init(theLookUp) />
<cfset theSearch.Set_Filter(arguments.theFilter) />
<cfset theObject = theSearch.FindAll() />
<cfloop index="row" from="#startRow#" to="#endRow#">
<cfset QueryAddRow(theQuery) />
<cfloop list="#columnList#" index="col">
<cfloop from="0" to="#theObject.Get_Item(row).Get_Properties().Get_Item(col).Get_Count()-1#" index="item">
<cftry>
<cfset theQuery[col][theQuery.recordCount]=ListAppend(theQuery[col][theQuery.recordCount],theObject.Get_Item(row).Get_Properties().Get_Item(col).Get_Item(item),"|") />
<cfcatch type="any">
</cfcatch>
</cftry>
</cfloop>
</cfloop>
</cfloop>
|
[
"It's been a long time since I touched CF, but I can give some hints in pseudo-code. For one thing, this expression is extremely inefficent:\n#theObject.Get_Item(row).Get_Properties().Get_Item(col).Get_Count()-1#\nTake the first part for example, Get_Item(row) - your code causes CF to go retrieve the row and its properties for each iteration of the #columnList# loop; and to top it all, you're doing that TWICE per iteration of columnlist (once for loop and again for the inner cfset). If you think about it, it only needs to retrieve the row for each iteration of the outer loop (from #sfstart# to #cfend). So, in pseudo-code do this:\n\nfor each row between start and end\n\ncfset props = #theobject.get_item(row).get_properties()#\nfor each col in #columnlist#\n\ncfset currentcol = #props.getitem(col)#\ncfset count = #currentcol.getcount() - 1#\nforeach item from 0 to #count#\n\ncfset #currentcol.getItem(item)# etc...\n\n\n\n\nMake sense? Every time you enter a loop, cache objects that will be reused in that scope (or child scopes) in a variable. That means you are only grabbing the column object once per iteration of the column loop. All variables defined in outer scopes are available in the inner scopes, as you can see in what I've done above. I know its tempting to cut and paste from previous lines, but don't. It only hurts you in the end.\nhope this helps,\nOisin\n",
"How large is the list of items for the inner loop?\nSwitching to an array might be faster if there is a significantly large number of items.\nI have implemented this alongside x0n's suggestions...\n<cfset dllPath=\"C:\\WINDOWS\\Microsoft.NET\\Framework\\v1.1.4322\\System.DirectoryServices.dll\" />\n<cfset LDAPPath=\"LDAP://\" & arguments.searchPath />\n<cfset theLookUp=CreateObject(\".NET\",\"System.DirectoryServices.DirectoryEntry\", dllPath).init(LDAPPath) />\n<cfset theSearch=CreateObject(\".NET\",\"System.DirectoryServices.DirectorySearcher\", dllPath).init(theLookUp) />\n<cfset theSearch.Set_Filter(arguments.theFilter) />\n<cfset theObject = theSearch.FindAll() />\n\n<cfloop index=\"row\" from=\"#startRow#\" to=\"#endRow#\">\n\n <cfset Props = theObject.get_item(row).get_properties() />\n\n <cfset QueryAddRow(theQuery) />\n\n <cfloop list=\"#columnList#\" index=\"col\">\n\n <cfset CurrentCol = Props.getItem(col) />\n\n <cfset ItemArray = ArrayNew(1)/>\n <cfloop from=\"0\" to=\"#CurrentCol.getcount() - 1#\" index=\"item\">\n <cftry>\n <cfset ArrayAppend( ItemArray , CurrentCol.Get_Item(item) )/>\n <cfcatch type=\"any\">\n </cfcatch>\n </cftry>\n </cfloop>\n <cfset theQuery[col][theQuery.recordCount] = ArrayToList( ItemArray , '|' )/>\n\n </cfloop>\n\n</cfloop>\n\n",
"Additionally, using a cftry block in each loop is likely slowing this down quite a bit. Unless you are expecting individual rows to fail (and you need to continue from that point), I would suggest a single try/catch block for the entire process. Try/catch is an expensive operation.\n",
"I would think that you'd want to stop doing so many evaluations inside of your loops and instead use variables to hold counts, pointers to the col object and to hold your pipe-delim string until you're ready to commit to the query object. If I've done the refactoring correctly, you should notice an improvement if you use the code below:\n<cfloop index=\"row\" from=\"#startRow#\" to=\"#endRow#\">\n<cfset QueryAddRow(theQuery) />\n<cfloop list=\"#columnList#\" index=\"col\">\n <cfset PipedVals = \"\">\n <cfset theItem = theObject.Get_Item(row).Get_Properties().Get_Item(col)>\n <cfset ColCount = theItem.Get_Count()-1>\n <cfloop from=\"0\" to=\"#ColCount#\" index=\"item\">\n <cftry>\n <cfset PipedVals = ListAppend(PipedVals,theItem.Get_Item(item),\"|\")>\n <cfcatch type=\"any\"></cfcatch>\n </cftry>\n </cfloop>\n <cfset QuerySetCell(theQuery,col) = PipedVals>\n</cfloop>\n\n\n"
] |
[
2,
2,
1,
0
] |
[] |
[] |
[
".net",
"active_directory",
"coldfusion",
"ldap"
] |
stackoverflow_0000084795_.net_active_directory_coldfusion_ldap.txt
|
Q:
How can I reimplement external pop-up jQuery code in Prototype?
I have this code in jQuery, that I want to reimplement with the prototype library.
// make external links open in popups
// this will apply a window.open() behaviour to all anchor links
// the not() functions iteratively filter out http://www.foo.com
// and http://foo.com so they don't trigger off the pop-ups
jQuery("a[href='http://']").
not("a[href^='http://www.foo.com']").
not("a[href^='http://foo.com']").
addClass('external');
jQuery("a.external").
not('a[rel="lightbox"]').click( function() {
window.open( jQuery(this).attr('href') );
return false;
});
How can you iteratively filter a collection of elements using an equivalent to the not() operators listed here in jQuery?
A:
The filtering can be done using the reject method like so:
$$('a').reject(function(element) { return element.getAttribute("href").match(/http:\/\/(www\.)?foo\.com/); }).invoke("addClassName", "external");
|
How can I reimplement external pop-up jQuery code in Prototype?
|
I have this code in jQuery, that I want to reimplement with the prototype library.
// make external links open in popups
// this will apply a window.open() behaviour to all anchor links
// the not() functions iteratively filter out http://www.foo.com
// and http://foo.com so they don't trigger off the pop-ups
jQuery("a[href='http://']").
not("a[href^='http://www.foo.com']").
not("a[href^='http://foo.com']").
addClass('external');
jQuery("a.external").
not('a[rel="lightbox"]').click( function() {
window.open( jQuery(this).attr('href') );
return false;
});
How can you iteratively filter a collection of elements using an equivalent to the not() operators listed here in jQuery?
|
[
"The filtering can be done using the reject method like so:\n$$('a').reject(function(element) { return element.getAttribute(\"href\").match(/http:\\/\\/(www.|)foo.com/); }).invoke(\"addClassName\", \"external\");\n\n"
] |
[
4
] |
[] |
[] |
[
"javascript",
"jquery",
"popup",
"prototypejs"
] |
stackoverflow_0000085887_javascript_jquery_popup_prototypejs.txt
|
Q:
Multi-user Snippet Manager
Currently, we're using a wiki at work to share insights, tips and information. But somehow, people aren't sharing snippets that way. It's probably too inconvenient to write and too difficult to find snippets there.
So, is there a multi-user/collaborative snippets manager around? Something like Snippely. (Has anyone tried Snippely in multi-user mode?)
Since we're all on the same site, it would probably be best if it used mapped network drives or ODBC instead of its own server process.
Oh, and it has to support Unicode and let us choose any truetype font. We're using the hideous APL language, which uses special characters.
It would be nice if it didn't cost money, so I wouldn't have to convince management to pay for it as well as the other developers to use it.
A:
Pastebin is a common solution to this. Just install somewhere on your network, then paste snippets. http://pastebin.com/
Works well when trying to debug a piece of code, or stack trace also.
A:
There's Snip-it pro ( http://www.snipitpro.com ), I looked at it a while back, and the interface seemed to be pretty horrible. It's 40 bucks / seat, which is not too bad. Last time I was looking for a tool like that I found nothing at all, and I found that it's very hard to get my co-workers to start using snippet libraries - everybody is happy to google it or search their old codebases. These days I use Evernote for all of my own snippeting needs.
|
Multi-user Snippet Manager
|
Currently, we're using a wiki at work to share insights, tips and information. But somehow, people aren't sharing snippets that way. It's probably too inconvenient to write and too difficult to find snippets there.
So, is there a multi-user/collaborative snippets manager around? Something like Snippely. (Has anyone tried Snippely in multi-user mode?)
Since we're all on the same site, it would probably be best if it used mapped network drives or ODBC instead of its own server process.
Oh, and it has to support Unicode and let us choose any truetype font. We're using the hideous APL language, which uses special characters.
It would be nice if it didn't cost money, so I wouldn't have to convince management to pay for it as well as the other developers to use it.
|
[
"Pastebin is a common solution to this. Just install somewhere on your network, then paste snippets. http://pastebin.com/\nWorks well when trying to debug a piece of code, or stack trace also.\n",
"There's Snip-it pro ( http://www.snipitpro.com ), I looked at it a while back, and the interface seemed to be pretty horrible. It's 40 bucks / seat, which is not too bad. Last time I was looking for a tool like that I found nothing at all, and I found that it's very hard to get my co-workers to start using snippet libraries - everybody is happy to google it or search their old codebases. These days I use Evernote for all of my own snippeting needs.\n"
] |
[
1,
0
] |
[] |
[] |
[
"code_snippets",
"collaboration"
] |
stackoverflow_0000081445_code_snippets_collaboration.txt
|
Q:
Linking .Net Assemblies
This is all hypothetical, so please bear with me.
Say I'm writing a tool in C# called Foo. The output is foo.exe. I've found some really great library that I like to use called Bar, which I can reference as bar.dll in my project. When I build my project, I have foo.exe and bar.dll in my output directory. Good so far.
What I'd like to do is link foo.exe and bar.dll so they are one assembly, foo.exe. I would prefer to be able to do this in VS2008, but if I have to resort to a command-line tool like al.exe I don't mind so much.
A:
There's ILMerge. Link
A:
Set up a post-build event under Project Properties:
ilmerge /out:$(TargetDir)foo.exe $(TargetPath) $(TargetDir)bar.dll
A:
Check out the ILMerge tool found here.
A:
Thanks everyone who answered!
I ended up with NuGenUnify which provides a GUI wrapper for ilmerge.
|
Linking .Net Assemblies
|
This is all hypothetical, so please bear with me.
Say I'm writing a tool in C# called Foo. The output is foo.exe. I've found some really great library that I like to use called Bar, which I can reference as bar.dll in my project. When I build my project, I have foo.exe and bar.dll in my output directory. Good so far.
What I'd like to do is link foo.exe and bar.dll so they are one assembly, foo.exe. I would prefer to be able to do this in VS2008, but if I have to resort to a command-line tool like al.exe I don't mind so much.
|
[
"There's ILMerge. Link\n",
"Set up a post-build event under Project Properties: \nilmerge /out:$(TargetDir)foo.exe $(TargetPath) $(TargetDir)bar.dll\n",
"Check out the ILMerge tool found here.\n",
"Thanks everyone who answered!\nI ended up with NuGenUnify which provides a GUI wrapper for ilmerge.\n"
] |
[
10,
5,
2,
1
] |
[] |
[] |
[
".net",
"al.exe",
"assemblies",
"linker",
"visual_studio"
] |
stackoverflow_0000085222_.net_al.exe_assemblies_linker_visual_studio.txt
|
Q:
How to Autogenerate multiple getters/setters or accessors in Visual Studio
Before I start, I know there is this post and it doesn't answer my question: How to generate getters and setters in Visual Studio?
In Visual Studio 2008 there is the ability to auto generate getters and setters (accessors) by right clicking on a private variable -> Refactor -> Encapsulate Field...
This is great for a class that has 2 or 3 methods, but come on MS! When have you ever worked with a class that has a few accessors?
I am looking for a way to generate ALL with a few clicks (Eclipse folks out there will know what I am talking about - you can right click a class and select 'generate accessors'. DONE.). I really don't like spending 20 minutes a class clicking through wizards. I used to have some .NET 1.0 code that would generate classes, but it is long gone and this feature should really be standard for the IDE.
UPDATE: I might mention that I have found Linq to Entities and SQLMetal to be really cool ideas, and way beyond my simple request in the paragraph above.
A:
Sorry, you really need to install Resharper to get approximately the same amount of refactoring support as you are used to in Eclipse.
However, Resharper gives you a dialog very similar to the one you are used to in Eclipse:
A:
I have an "info class generator" application that you can use an excel sheet and it will generate the private members and the public get/set methods.
You can download it for free from my website.
A:
In 2008 I don't bother with Encapsulate Field. I use the new syntax for properties:
public string SomeString { get; set; }
A:
Possibly a macro. There are also addins (like ReSharper, which is great but commercial) capable of doing that quickly.
|
How to Autogenerate multiple getters/setters or accessors in Visual Studio
|
Before I start, I know there is this post and it doesn't answer my question: How to generate getters and setters in Visual Studio?
In Visual Studio 2008 there is the ability to auto generate getters and setters (accessors) by right clicking on a private variable -> Refactor -> Encapsulate Field...
This is great for a class that has 2 or 3 methods, but come on MS! When have you ever worked with a class that has a few accessors?
I am looking for a way to generate ALL with a few clicks (Eclipse folks out there will know what I am talking about - you can right click a class and select 'generate accessors'. DONE.). I really don't like spending 20 minutes a class clicking through wizards. I used to have some .NET 1.0 code that would generate classes, but it is long gone and this feature should really be standard for the IDE.
UPDATE: I might mention that I have found Linq to Entities and SQLMetal to be really cool ideas, and way beyond my simple request in the paragraph above.
|
[
"Sorry, you really need to install Resharper to get approximately the same amount of refactoring support as you are used to in Eclipse.\nHowever, Resharper gives you a dialog very similar to the one you are used to in Eclipse:\n\n\n",
"I have an \"info class generator\" application that you can use an excel sheet and it will generate the private members and the public get/set methods.\nYou can download it for free from my website.\n",
"In 2008 I don't bother with Encapsulate Field. I use the new syntax for properties:\npublic string SomeString { get; set; }\n\n",
"Possibly a macro. There are also addins (like ReSharper, which is great but commercial) capable of doing that quickly.\n"
] |
[
10,
3,
2,
0
] |
[] |
[] |
[
"code_generation",
"ide",
"visual_studio",
"visual_studio_2008"
] |
stackoverflow_0000085928_code_generation_ide_visual_studio_visual_studio_2008.txt
|
Q:
how to sort a flex datagrid according to multiple columns?
I have a datagrid, populated as shown below. When the user clicks on a column header, I would like to sort the rows using a lexicographic sort in which the selected column is used first, then the remaining columns are used in left-to-right order to break any ties. How can I code this?
(I have one answer, which I'll post below, but it has a problem -- I'll be thrilled if somebody can provide a better one!)
Here's the layout:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
layout="absolute" creationComplete="onCreationComplete()">
<mx:Script source="GridCode.as" />
<mx:DataGrid id="theGrid" x="61" y="55" width="466" height="317">
<mx:columns>
<mx:DataGridColumn dataField="A"/>
<mx:DataGridColumn dataField="B"/>
<mx:DataGridColumn dataField="C"/>
</mx:columns>
</mx:DataGrid>
</mx:Application>
And here's the backing code:
import mx.collections.ArrayCollection;
import mx.collections.Sort;
import mx.collections.SortField;
import mx.controls.dataGridClasses.DataGridColumn;
import mx.events.DataGridEvent;
public function onCreationComplete():void
{
var ar:ArrayCollection = new ArrayCollection();
var ob:Object;
for( var i:int=0; i<20; i++ )
{
ob = new Object();
ob["A"] = i;
ob["B"] = i%3;
ob["C"] = i%5;
ar.addItem(ob);
}
this.theGrid.dataProvider = ar;
}
A:
The best answer I've found so far is to capture the headerRelease event when the user clicks:
<mx:DataGrid id="theGrid" x="61" y="55" width="466" height="317"
headerRelease="onHeaderRelease(event)">
The event handler can then apply a sort order to the data:
private var lastIndex:int = -1;
private var desc:Boolean = false;
public function onHeaderRelease(evt:DataGridEvent):void
{
evt.preventDefault();
var srt:Sort = new Sort();
var fields:Array = new Array();
if( evt.columnIndex == lastIndex )
{
desc = !desc;
}
else
{
desc = false;
lastIndex = evt.columnIndex;
}
fields.push( new SortField( evt.dataField, false, desc ) );
if( evt.dataField != "A" )
fields.push( new SortField("A", false, desc) );
if( evt.dataField != "B" )
fields.push( new SortField("B", false, desc) );
if( evt.dataField != "C" )
fields.push( new SortField("C", false, desc) );
srt.fields = fields;
var ar:ArrayCollection = this.theGrid.dataProvider as ArrayCollection;
ar.sort = srt;
ar.refresh();
}
However this approach has a well-known problem, which is that the column headers no longer display little arrows to show the sort direction. This is a side-effect of calling
evt.preventDefault()
however you must make that call or else your custom sort won't be applied.
|
how to sort a flex datagrid according to multiple columns?
|
I have a datagrid, populated as shown below. When the user clicks on a column header, I would like to sort the rows using a lexicographic sort in which the selected column is used first, then the remaining columns are used in left-to-right order to break any ties. How can I code this?
(I have one answer, which I'll post below, but it has a problem -- I'll be thrilled if somebody can provide a better one!)
Here's the layout:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
layout="absolute" creationComplete="onCreationComplete()">
<mx:Script source="GridCode.as" />
<mx:DataGrid id="theGrid" x="61" y="55" width="466" height="317">
<mx:columns>
<mx:DataGridColumn dataField="A"/>
<mx:DataGridColumn dataField="B"/>
<mx:DataGridColumn dataField="C"/>
</mx:columns>
</mx:DataGrid>
</mx:Application>
And here's the backing code:
import mx.collections.ArrayCollection;
import mx.collections.Sort;
import mx.collections.SortField;
import mx.controls.dataGridClasses.DataGridColumn;
import mx.events.DataGridEvent;
public function onCreationComplete():void
{
var ar:ArrayCollection = new ArrayCollection();
var ob:Object;
for( var i:int=0; i<20; i++ )
{
ob = new Object();
ob["A"] = i;
ob["B"] = i%3;
ob["C"] = i%5;
ar.addItem(ob);
}
this.theGrid.dataProvider = ar;
}
|
[
"The best answer I've found so far is to capture the headerRelease event when the user clicks:\n<mx:DataGrid id=\"theGrid\" x=\"61\" y=\"55\" width=\"466\" height=\"317\"\n headerRelease=\"onHeaderRelease(event)\">\n\nThe event handler can then apply a sort order to the data:\nprivate var lastIndex:int = -1;\nprivate var desc:Boolean = false;\n\npublic function onHeaderRelease(evt:DataGridEvent):void\n{\n evt.preventDefault();\n\n var srt:Sort = new Sort();\n var fields:Array = new Array();\n\n if( evt.columnIndex == lastIndex )\n {\n desc = !desc;\n }\n else\n {\n desc = false;\n lastIndex = evt.columnIndex;\n }\n\n fields.push( new SortField( evt.dataField, false, desc ) );\n if( evt.dataField != \"A\" )\n fields.push( new SortField(\"A\", false, desc) );\n if( evt.dataField != \"B\" )\n fields.push( new SortField(\"B\", false, desc) );\n if( evt.dataField != \"C\" )\n fields.push( new SortField(\"C\", false, desc) );\n srt.fields = fields;\n\n var ar:ArrayCollection = this.theGrid.dataProvider as ArrayCollection;\n ar.sort = srt;\n ar.refresh();\n}\n\nHowever this approach has a well-known problem, which is that the column headers no longer display little arrows to show the sort direction. This is a side-effect of calling \n evt.preventDefault()\nhowever you must make that call or else your custom sort won't be applied.\n"
] |
[
12
] |
[] |
[] |
[
"actionscript_3",
"apache_flex",
"datagrid"
] |
stackoverflow_0000085974_actionscript_3_apache_flex_datagrid.txt
|
Q:
Change default port when registering a new SQL 2000 server
I'm trying to register an externally hosted SQL 2000 server through Enterprise Manager which isn't on the default port and I can't see anywhere to change it within Enterprise Manager.
So, the question is, how do I connect to the database if:
I.P. Address is 123.456.789 (example)
Port is 1334
A:
I found this via Google:
You add a comma and the port number to the end of the server name.
So if you want to connect to MySqlServer.MyDomain.com on port 3821, you type...
MySqlServer.MyDomain.com,3821
A:
Rob is correct - I have a SQL 2000 server running on the non-default port on a different instance name and the way I access it is like this:
[ip or dns name]\[instance], [port]
example:
my.server.com\MSSQLSERVER2, 12345
You don't need \\[instance] if you used the default sql server instance when you installed.
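For completeness, here is how the same host,port convention looks from client code. This is a minimal sketch assuming the pyodbc Python package and a hypothetical host, database and login; the key detail is the comma between the server name and the port in the connection string.
import pyodbc  # assumed third-party package

# "host,port" is the same convention Enterprise Manager uses.
conn = pyodbc.connect(
    "DRIVER={SQL Server};"
    "SERVER=my.server.com,1334;"  # comma separates host and port
    "DATABASE=master;"
    "UID=someUser;PWD=somePassword"
)
cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")
print(cursor.fetchone()[0])
conn.close()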
|
Change default port when registering a new SQL 2000 server
|
I'm trying to register an externally hosted SQL 2000 server through Enterprise Manager which isn't on the default port and I can't see anywhere to change it within Enterprise Manager.
So, the question is, how do I connect to the database if:
I.P. Address is 123.456.789 (example)
Port is 1334
|
[
"I found this via Google:\n\nYou add a comma and the port number to the end of the server name.\nSo if you want to connect to MySqlServer.MyDomain.com on port 3821, you type...\nMySqlServer.MyDomain.com,3821\n\n",
"Rob is correct - I have a SQL 2000 server running on the non-default port on a different instance name and the way I access it is like this:\n\n[ip or dns name]\\[instance], [port]\n\nexample:\n\nmy.server.com\\MSSQLSERVER2, 12345\n\nYou don't need \\\\[instance] if you used the default sql server instance when you installed.\n"
] |
[
4,
0
] |
[] |
[] |
[
"port",
"sql_server",
"sql_server_2000"
] |
stackoverflow_0000084992_port_sql_server_sql_server_2000.txt
|
Q:
Sending Excel to user through ASP.NET
I have a web application that is able to open an excel template, push data into a worksheet and send the file to a user. When the file is opened a VBA Macro will refresh a pivot table based on the data that was pushed into the template.
The user receives the standard File Open / Save dialog.
In Internet Explorer (version 6), if the user chooses to save the file, when the file is opened the VBA code runs as expected, however if the user chooses 'Open' then the VBA fails with:
Run-Time error 1004: Cannot open Pivot Table source file.
In all other browsers both open and save work as expected.
It is not in my power to upgrade to a newer version of IE (corporate bureaucracy); is there anything I can do to allow the users to open without first saving?
A:
Newer versions of Excel really don't like running macros automatically. If you really want to generate an Excel file and send that to your users, build the full file on the server using the COM interface to Excel, or some libraries that can read/write XLS files, and then send the completed file.
A:
The option to open or save is a browser selection item, as far as I know it is not possible to override this behavior.
A:
If I had to guess, I'd say it has to do with what zone the file is currently in. It's probably still considered in the "internet zone" when you click Open. VBA shouldn't be running within that zone.
Have a user mark the server website as safe (Control Panel -> Internet Options -> Security -> Trusted Sites -> Sites) and see if that helps.
|
Sending Excel to user through ASP.NET
|
I have a web application that is able to open an excel template, push data into a worksheet and send the file to a user. When the file is opened a VBA Macro will refresh a pivot table based on the data that was pushed into the template.
The user receives the standard File Open / Save dialog.
In Internet Explorer (version 6), if the user chooses to save the file, when the file is opened the VBA code runs as expected, however if the user chooses 'Open' then the VBA fails with:
Run-Time error 1004: Cannot open Pivot Table source file.
In all other browsers both open and save work as expected.
It is not in my power to upgrade to a newer version of IE (corporate bureaucracy); is there anything I can do to allow the users to open without first saving?
|
[
"Newer versions of Excel really don't like running macros automatically. If you really want to generate an Excel file and send that to your users, build the full file on the server using the COM interface to Excel, or some libraries that can read/write XLS files, and then send the completed file.\n",
"The option to open or save is a browser selection item, as far as I know it is not possible to override this behavior.\n",
"If I had to guess, I'd say it has to do with what zone the file is currently in. Its probably still considered in the \"internet zone\" when you click Open. VBA shouldn't be running within that zone.\nHave a user mark the server website as safe (Control Panel -> Internet Options -> Security -> Trusted Sites -> Sites) and see if that helps.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"excel",
"internet_explorer",
"vba"
] |
stackoverflow_0000085807_excel_internet_explorer_vba.txt
|
Q:
What's the best way to learn server RESTful code?
I'm an experienced client application developer (C++/C#), but need to come up to speed quickly on writing server side code to perform RESTful interactions. Specifically, I need to learn how to exchange data with OpenSocial containers via the RESTful API.
A:
The RESTWiki is a very good resource and then there is the classic "How I explained REST to my Wife".
However, don't forget to go read about it directly from the source, it is not as difficult a read as it may first seem.
And I am assuming you will be doing REST over HTTP so this will come in very handy.
Lastly, considering OpenSocial supports the Atom Publishing Protocol, this will be useful.
Enjoy.
A:
RESTful Web Services
A:
I found this to be a good introduction to RESTful web apps, although it doesn't refer to OpenSocial containers.
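To make the HTTP verb-to-operation mapping concrete, here is a minimal sketch of a RESTful resource written with nothing but the Python 3 standard library. The resource name and data are hypothetical, and a real OpenSocial container would add authentication and proper routing; the point is only that GET reads a resource at a URL and PUT creates or replaces it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PEOPLE = {"1": {"id": "1", "displayName": "Example User"}}  # toy data store

class PersonHandler(BaseHTTPRequestHandler):
    def do_GET(self):  # GET /people/<id> reads a resource
        person = PEOPLE.get(self.path.rsplit("/", 1)[-1])
        self.send_response(200 if person else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(person or {}).encode())

    def do_PUT(self):  # PUT /people/<id> creates or replaces it
        length = int(self.headers.get("Content-Length", 0))
        PEOPLE[self.path.rsplit("/", 1)[-1]] = json.loads(self.rfile.read(length))
        self.send_response(204)  # success, no body
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), PersonHandler).serve_forever()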
|
What's the best way to learn server RESTful code?
|
I'm an experienced client application developer (C++/C#), but need to come up to speed quickly on writing server side code to perform RESTful interactions. Specifically, I need to learn how to exchange data with OpenSocial containers via the RESTful API.
|
[
"The RESTWiki is a very good resource and then there is the classic \"How I explained REST to my Wife\".\nHowever, don't forget to go read about it directly from the source, it is not as difficult a read as it may first seem.\nAnd I am assuming you will be doing REST over HTTP so this will come in very handy.\nLastly, considering OpenSocial supports the Atom Publishing Protocol, this will be useful.\nEnjoy.\n",
"RESTful Web Services\n",
"I found this this to be a good introduction to RESTful web apps, although it doesn't refer to OpenSocial containers.\n"
] |
[
5,
3,
0
] |
[] |
[] |
[
"https",
"opensocial",
"rest"
] |
stackoverflow_0000085856_https_opensocial_rest.txt
|
Q:
linking HTMLHelp.lib with x64
I have a VS05 C++ (MFC) project which uses HtmlHelp (function HTMLHelpA, linked from HtmlHelp.lib, which came from HTML Help Workshop v1.4). The 32-bit version compiles and links fine.
The 64-bit version compiles fine, but gets an "unresolved external" error on HTMLHelpA when linking.
So, my question is simple: is there a way to use HTMLHelp in x64?
A:
If you download the latest Windows SDK (6.0A), it contains both x86 and x64 versions of this library.
|
linking HTMLHelp.lib with x64
|
I have a VS05 C++ (MFC) project which uses HtmlHelp (function HTMLHelpA, linked from HtmlHelp.lib, which came from HTML Help Workshop v1.4). The 32-bit version compiles and links fine.
The 64-bit version compiles fine, but gets an "unresolved external" error on HTMLHelpA when linking.
So, my question is simple: is there a way to use HTMLHelp in x64?
|
[
"If you download the latest Windows SDK (6.0A), it contains both x86 and x64 versions of this library.\n"
] |
[
2
] |
[] |
[] |
[
"64_bit",
"chm",
"visual_c++"
] |
stackoverflow_0000085872_64_bit_chm_visual_c++.txt
|
Q:
Strange characters in PHP
This is driving me crazy.
I have this one php file on a test server at work which does not work.. I kept deleting stuff from it till it became
<?
print 'Hello';
?>
it outputs
Hello
if I create a new file and copy / paste the same script to it it works!
Why does this one file give me the strange characters all the time?
A:
That's the BOM (Byte Order Mark) you are seeing.
In your editor, there should be a way to force saving without BOM which will remove the problem.
A:
Found it: File -> Encoding -> UTF-8 with BOM, changed it to UTF-8 :-)
I should have asked before wasting time trying to figure it out :-)
A:
Just in case, here is a list of bytes for BOM
Encoding Representation (hexadecimal)
UTF-8 EF BB BF
UTF-16 (BE) FE FF
UTF-16 (LE) FF FE
UTF-32 (BE) 00 00 FE FF
UTF-32 (LE) FF FE 00 00
UTF-7 2B 2F 76, and one of the following bytes: [ 38 | 39 | 2B | 2F ]†
UTF-1 F7 64 4C
UTF-EBCDIC DD 73 66 73
SCSU 0E FE FF
BOCU-1 FB EE 28 optionally followed by FF†
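If the file has to be cleaned outside the editor, a small script can strip the marker directly. A minimal sketch in Python, assuming a hypothetical file name and using the UTF-8 byte sequence from the table above:
import codecs

path = "hello.php"  # hypothetical file name

with open(path, "rb") as f:
    data = f.read()

if data.startswith(codecs.BOM_UTF8):  # b'\xef\xbb\xbf', per the table
    with open(path, "wb") as f:
        f.write(data[len(codecs.BOM_UTF8):])  # rewrite without the BOM
    print("UTF-8 BOM removed")
else:
    print("no UTF-8 BOM found")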
|
Strange characters in PHP
|
This is driving me crazy.
I have this one php file on a test server at work which does not work.. I kept deleting stuff from it till it became
<?
print 'Hello';
?>
it outputs
Hello
if I create a new file and copy / paste the same script to it it works!
Why does this one file give me the strange characters all the time?
|
[
"That's the BOM (Byte Order Mark) you are seeing.\nIn your editor, there should be a way to force saving without BOM which will remove the problem.\n",
"Found it, file -> encoding -> UTF8 with BOM , changed to to UTF :-)\nI should ahve asked before wasing time trying to figure it out :-)\n",
"Just in case, here is a list of bytes for BOM \n\nEncoding Representation (hexadecimal)\nUTF-8 EF BB BF\nUTF-16 (BE) FE FF\nUTF-16 (LE) FF FE\nUTF-32 (BE) 00 00 FE FF\nUTF-32 (LE) FF FE 00 00\nUTF-7 2B 2F 76, and one of the following bytes: [ 38 | 39 | 2B | 2F ]†\nUTF-1 F7 64 4C\nUTF-EBCDIC DD 73 66 73\nSCSU 0E FE FF\nBOCU-1 FB EE 28 optionally followed by FF†\n\n"
] |
[
16,
3,
2
] |
[] |
[] |
[
"encoding",
"php"
] |
stackoverflow_0000012765_encoding_php.txt
|
Q:
Prototype's Enumerable#pluck in F#?
In JavaScript, using the Prototype library, the following functional construction is possible:
var words = ["aqueous", "strength", "hated", "sesquicentennial", "area"];
words.pluck('length');
//-> [7, 8, 5, 16, 4]
Note that this example code is equivalent to
words.map( function(word) { return word.length; } );
I wondered if something similar is possible in F#:
let words = ["aqueous"; "strength"; "hated";"sesquicentennial"; "area"]
//val words: string list
List.pluck 'Length' words
//int list = [7; 8; 5; 16; 4]
without having to write:
List.map (fun (s:string) -> s.Length) words
This would seem quite useful to me because then you don't have to write functions for every property to access them.
A:
I saw your request on the F# mailing list. Hope I can help.
You could use type extension and reflection to allow this. We simply extend the generic list type with the pluck function. Then we can use pluck() on any list. An unknown property will return a list with the error string as its only contents.
type Microsoft.FSharp.Collections.List<'a> with
member list.pluck property =
try
let prop = typeof<'a>.GetProperty property
[for elm in list -> prop.GetValue(elm, [| |])]
with e->
[box <| "Error: Property '" + property + "'" +
" not found on type '" + typeof<'a>.Name + "'"]
let a = ["aqueous"; "strength"; "hated"; "sesquicentennial"; "area"]
a.pluck "Length"
a.pluck "Unknown"
which produces the follow result in the interactive window:
> a.pluck "Length" ;;
val it : obj list = [7; 8; 5; 16; 4]
> a.pluck "Unknown";;
val it : obj list = ["Error: Property 'Unknown' not found on type 'String'"]
warm regards,
DannyAsher
>
>
>
>
>
NOTE: When using <pre> the angle brackets around <'a> didn't show, though in the preview window it looked fine. The backtick didn't work for me. Had to resort to the colorized version, which is all wrong. I don't think I'll post here again until FSharp syntax is fully supported.
A:
Prototype's pluck takes advantage of the fact that in JavaScript object.method() is the same as object[method].
Unfortunately you can't call String.Length either because it's not a static method. You can however use:
#r "FSharp.PowerPack.dll"
open Microsoft.FSharp.Compatibility
words |> List.map String.length
http://research.microsoft.com/fsharp/manual/FSharp.PowerPack/Microsoft.FSharp.Compatibility.String.html
However, using Compatibility will probably make things more confusing to people looking at your code.
|
Prototype's Enumerable#pluck in F#?
|
In JavaScript, using the Prototype library, the following functional construction is possible:
var words = ["aqueous", "strength", "hated", "sesquicentennial", "area"];
words.pluck('length');
//-> [7, 8, 5, 16, 4]
Note that this example code is equivalent to
words.map( function(word) { return word.length; } );
I wondered if something similar is possible in F#:
let words = ["aqueous"; "strength"; "hated";"sesquicentennial"; "area"]
//val words: string list
List.pluck 'Length' words
//int list = [7; 8; 5; 16; 4]
without having to write:
List.map (fun (s:string) -> s.Length) words
This would seem quite useful to me because then you don't have to write functions for every property to access them.
|
[
"I saw your request on the F# mailing list. Hope I can help. \nYou could use type extension and reflection to allow this. We simple extend the generic list type with the pluck function. Then we can use pluck() on any list. An unknown property will return a list with the error string as its only contents.\ntype Microsoft.FSharp.Collections.List<'a> with\n member list.pluck property = \n try \n let prop = typeof<'a>.GetProperty property \n [for elm in list -> prop.GetValue(elm, [| |])]\n with e-> \n [box <| \"Error: Property '\" + property + \"'\" + \n \" not found on type '\" + typeof<'a>.Name + \"'\"]\n\nlet a = [\"aqueous\"; \"strength\"; \"hated\"; \"sesquicentennial\"; \"area\"]\n\na.pluck \"Length\" \na.pluck \"Unknown\"\n\nwhich produces the follow result in the interactive window:\n\n> a.pluck \"Length\" ;; \nval it : obj list = [7; 8; 5; 16; 4]\n\n> a.pluck \"Unknown\";;\nval it : obj list = [\"Error: Property 'Unknown' not found on type 'String'\"]\n\nwarm regards,\nDannyAsher\n>\n>\n>\n>\n>\nNOTE: When using <pre> the angle brackets around <'a> didn't show though in the preview window it looked fine. The backtick didn't work for me. Had to resort you the colorized version which is all wrong. I don't think I'll post here again until FSharp syntax is fully supported. \n",
"Prototype's pluck takes advantage of that in Javascript object.method() is the same as object[method]. \nUnfortunately you can't call String.Length either because it's not a static method. You can however use:\n#r \"FSharp.PowerPack.dll\" \nopen Microsoft.FSharp.Compatibility\nwords |> List.map String.length \n\nhttp://research.microsoft.com/fsharp/manual/FSharp.PowerPack/Microsoft.FSharp.Compatibility.String.html\nHowever, using Compatibility will probably make things more confusing to people looking at your code.\n"
] |
[
2,
1
] |
[] |
[] |
[
"f#",
"functional_programming",
"javascript",
"prototypejs"
] |
stackoverflow_0000076571_f#_functional_programming_javascript_prototypejs.txt
|
Q:
How to handle errors loading with the Flex Sound class
I am seeing strange behaviour with the flash.media.Sound class in Flex 3.
var sound:Sound = new Sound();
try{
sound.load(new URLRequest("directory/file.mp3"))
} catch(e:IOError){
...
}
However this isn't helping. I'm getting a stream error, and it actually seems to be in the Sound constructor.
Error #2044: Unhandled IOErrorEvent:.
text=Error #2032: Stream Error. at... ]
I saw one example in the Flex docs where they add an event listener for IOErrorEvent, SURELY I don't have to do this, and can simply use try-catch? Can I set a null event listener?
A:
IOError = target file cannot be found (or for some other reason cannot be read). Check your file's path.
Edit: I just realized this may not be your problem, you're just trying to catch the IO error? If so, you can do this:
var sound:Sound = new Sound();
sound.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
sound.load(new URLRequest("directory/file.mp3"));
function ioErrorHandler(event:IOErrorEvent):void {
trace("IO error occurred");
}
A:
You will need to add a listener since the URLRequest is not instantaneous. It will be very fast if you're loading from disk, but you will still need the Event-listener.
There's a good example of how to set this up (Complete with IOErrorEvent handling) in the livedocs.
A:
try...catch only applies for errors that are thrown when that function is called. Any kind of method that involves loading stuff from the network, disk, etc will be asynchronous, that is it doesn't execute right when you call it, but instead it happens sometime shortly after you call it. In that case you DO need the addEventListener in order to catch any errors or events or to know when it's finished loading.
|
How to handle errors loading with the Flex Sound class
|
I am seeing strange behaviour with the flash.media.Sound class in Flex 3.
var sound:Sound = new Sound();
try{
sound.load(new URLRequest("directory/file.mp3"))
} catch(e:IOError){
...
}
However this isn't helping. I'm getting a stream error, and it actually seems to be in the Sound constructor.
Error #2044: Unhandled IOErrorEvent:.
text=Error #2032: Stream Error. at... ]
I saw one example in the Flex docs where they add an event listener for IOErrorEvent, SURELY I don't have to do this, and can simply use try-catch? Can I set a null event listener?
|
[
"IOError = target file cannot be found (or for some other reason cannot be read). Check your file's path.\nEdit: I just realized this may not be your problem, you're just trying to catch the IO error? If so, you can do this:\nvar sound:Sound = new Sound();\nsound.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);\nsound.load(new URLRequest(\"directory/file.mp3\"));\n\nfunction ioErrorHandler(event:IOErrorEvent):void {\n trace(\"IO error occurred\");\n}\n\n",
"You will need to add a listener since the URLRequest is not instantaneous. It will be very fast if you're loading from disk, but you will still need the Event-listener. \nThere's a good example of how to set this up (Complete with IOErrorEvent handling) in the livedocs.\n",
"try...catch only applies for errors that are thrown when that function is called. Any kind of method that involves loading stuff from the network, disk, etc will be asynchronous, that is it doesn't execute right when you call it, but instead it happens sometime shortly after you call it. In that case you DO need the addEventListener in order to catch any errors or events or to know when it's finished loading. \n"
] |
[
5,
1,
1
] |
[] |
[] |
[
"actionscript_3",
"apache_flex",
"flash"
] |
stackoverflow_0000080863_actionscript_3_apache_flex_flash.txt
|
Q:
Super Robust as chrome c++ and portable - tips - help - comments
We are producing portable code (win + macOS) and we are looking at how to make the code more robust, as it crashes every so often... (overflows or bad initializations usually) :-(
I was reading that Google Chrome uses a process for every tab, so if something goes wrong then the program does not crash completely, only that tab. I think that is quite neat, so I might give it a go!
So I was wondering if someone has some tips, help, a reading list, comments, or something that can help me build more robust C++ code (portable is always better).
On the same topic, I was also wondering if there is a portable library for processes (like Boost)?
Well, many thanks.
A:
The Chrome answer is more about failure mitigation and not about code quality. Doing what Chrome is doing is admitting defeat.
Better QA that is more than just programmer testing their own work.
Unit testing
Regression testing
Read up on best practices that other companies use.
To be blunt, if your software is crashing often due to overflows and bad initializations, then you have a very basic programming quality problem that isn't going to be easily fixed. That sounds harsh and mean; that isn't my intent. My point is that the problem with the bad code has to be your primary concern (which I'm sure it is). Things like Chrome's approach or liberal use of exception handling to catch program flaws are only distracting you from the real problem.
A:
I've developed on numerous multi-platform C++ apps (the largest being 1.5M lines of code and running on 7 platforms -- AIX, HP-UX PA-RISC, HP-UX Itanium, Solaris, Linux, Windows, OS X). You actually have two entirely different issues in your post.
Instability. Your code is not stable. Fix it.
Use unit tests to find logic problems before they kill you.
Use debuggers to find out what's causing the crashes if it's not obvious.
Use boost and similar libraries. In particular, the pointer types will help you avoid memory leaks.
Cross-platform coding.
Again, use libraries that are designed for this when possible. Particularly for any GUI bits.
Use standards (e.g. ANSI vs gcc/MSVC, POSIX threads vs Unix-specific thread models, etc) as much as possible, even if it requires a bit more work. Minimizing your platform specific code means less overall work, and fewer APIs to learn.
Isolate, isolate, isolate. Avoid in-line #ifdefs for different platforms as much as possible. Instead, stick platform specific code into its own header/source/class and use your build system and #includes to get the right code. This helps keep the code clean and readable.
Use the C99 integer types if at all possible instead of "long", "int", "short", etc -- otherwise it will bite you when you move from a 32-bit platform to a 64-bit one and longs suddenly change from 4 bytes to 8 bytes. And if that's ever written to the network/disk/etc then you'll run into incompatibility between platforms.
Personally, I'd stabilize the code first (without adding any more features) and then deal with the cross-platform issues, but that's up to you. Note that Visual Studio has an excellent debugger (the code base mentioned above was ported to Windows just for that reason).
A:
You don't mention what the target project is; having a process per-tab does not necessarily mean more "robust" code at all. You should aim to write solid code with tests regardless of portability - just read about writing good C++ code :)
As for the portability section, make sure you are testing on both platforms from day one and ensure that no new code is written until platform-specific problems are solved.
A:
You really, really don't want to do what Chrome is doing, it requires a process manager which is probably WAY overkill for what you want.
You should investigate using smart pointers from Boost or another tool that will provide reference counting or garbage collection for C++.
Alternatively, if you are frequently crashing you might want to perhaps consider writing non-performance critical parts of your application in a scripting language that has C++ bindings.
A:
Scott Meyers' Effective C++ and More Effective C++ are very good, and fun to read.
Steve McConnell's Code Complete is a favorite of many, including Jeff Atwood.
The Boost libraries are probably an excellent choice. One project where I work uses them. I've only used WIN32 threading myself.
A:
I agree with Torlack.
Bad initialization or overflows are signs of poor quality code.
Google did it that way because sometimes, there was no way to control the code that was executed in a page (because of faulty plugins, etc.). So if you're using low quality plug ins (it happens), perhaps the Google solution will be good for you.
But a program without plugins that crashes often is just badly written, or very very complex, or very old (and missing a lot of maintenance time). You must stop the development, and investigate each and every crash. On Windows, compile the modules with PDBs (program databases), and each time it crashes, attach a debugger to it.
You must add internal tests, too. Avoid the pattern:
doSomethingBad(T * t)
{
if(t == NULL) return ;
// do the processing.
}
This is very bad design because the error is there and you just avoid it, this time. But the next function without this guard will crash. Better to crash sooner, closer to the error.
Instead, on Windows (there must be a similar API on MacOS)
doSomethingBad(T * t)
{
if(t == NULL) ::DebugBreak() ; // it will call the debugger
// do the processing.
}
(don't use this code directly... Put it in a define to avoid delivering it to a client...)
You can choose the error API that suits you (exceptions, DebugBreak, assert, etc.), but use it to stop the moment the code knows something's wrong.
Avoid the C API whenever possible. Use C++ idioms (RAII, etc.) and libraries.
Etc..
P.S.: If you use exceptions (which is a good choice), don't hide them inside a catch. You'll only make your problem worse because the error is there, but the program will try to continue and will probably crash sometimes after, and corrupt anything it touches in the mean time.
A:
You can always add exception handling to your program to catch these kinds of faults and ignore them (though the details are platform specific) ... but that is very much a two edged sword. Instead consider having the program catch the exceptions and create dump files for analysis.
If your program has behaved in an unexpected way, what do you know about your internal state? Maybe the routine/thread that crashed has corrupted some key data structure? Maybe if you catch the error and try to continue the user will save whatever they are working on and commit the corruption to disk?
A:
Besides writing more stable code, here's one idea that answers your question.
Whether you are using processes or threads. You can write a small / simple watchdog program. Then your other programs register with that watchdog. If any process dies, or a thread dies, it can be restarted by the watchdog. Of course you'll want to put in some test to make sure you don't keep restarting the same buggy thread. ie: restart it 5 times, then after the 5th, shutdown the whole program and log to file / syslog.
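A minimal sketch of that watchdog idea in Python, with a hypothetical child command; it restarts the child when it dies and gives up after five crashes:
import subprocess
import time

WORKER = ["./my_app"]  # hypothetical child program
MAX_RESTARTS = 5

restarts = 0
while True:
    child = subprocess.Popen(WORKER)
    child.wait()  # block until the child exits
    if child.returncode == 0:
        break  # clean exit, nothing to do
    restarts += 1
    if restarts >= MAX_RESTARTS:
        print("giving up after %d crashes" % restarts)  # log to file/syslog here
        break
    print("child crashed (%d/%d), restarting" % (restarts, MAX_RESTARTS))
    time.sleep(1)  # avoid a tight crash loop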
A:
Build your app with debug symbols, then either add an exception handler or configure Dr Watson to generate crash dumps (run drwtsn32.exe /i to install it as the debugger, without the /i to pop the config dialog). When your app crashes, you can inspect where it went wrong in windbg or visual studio by seeing a callstack and variables.
google for symbol server for more info.
Obviously you can use exception handling to make it more robust and use smart pointers, but fixing the bugs is best.
A:
Build it with the idea that the only way to quit is for the program to crash and that it can crash at any time. When you build it that way, crashing will never/almost never lose any data. I read an article about it a year or two ago. Sadly, I don't have a link to it.
Combine that with some sort of crash dump and have it email you it so you can fix the problem.
A:
I would recommend that you compile up a linux version and run it under Valgrind.
Valgrind will track memory leaks, uninitialized memory reads and many other code problems. I highly recommend it.
A:
After over 15 years of Windows development I recently wrote my first cross-platform C++ app (Windows/Linux). Here's how:
STL
Boost. In particular the filesystem and thread libraries.
A browser based UI. The app 'does' HTTP, with the UI consisting of XHTML/CSS/JavaScript (Ajax style). These resources are embedded in the server code and served to the browser when required.
Copious unit testing. Not quite TDD, but close. This actually changed the way I develop.
I used NetBeans C++ for the Linux build and had a full Linux port in no time at all.
|
Super Robust as chrome c++ and portable - tips - help - comments
|
We are producing portable code (win + macOS) and we are looking at how to make the code more robust, as it crashes every so often... (overflows or bad initializations usually) :-(
I was reading that Google Chrome uses a process for every tab, so if something goes wrong then the program does not crash completely, only that tab. I think that is quite neat, so I might give it a go!
So I was wondering if someone has some tips, help, a reading list, comments, or something that can help me build more robust C++ code (portable is always better).
On the same topic, I was also wondering if there is a portable library for processes (like Boost)?
Well, many thanks.
|
[
"The Chrome answer is more about failure mitigation and not about code quality. Doing what Chrome is doing is admitting defeat.\n\nBetter QA that is more than just programmer testing their own work.\nUnit testing\nRegression testing\nRead up on best practices that other\ncompanies use.\n\nTo be blunt, if your software is crashing often due to overflows and bad initializations, then you have a very basic programming quality problem that isn't going to be easily fixed. That sounds a hash and mean, that isn't my intent. My point is that the problem with the bad code has to be your primary concern (which I'm sure it is). Things like Chrome or liberal use to exception handling to catch program flaw are only distracting you from the real problem.\n",
"I've developed on numerous multi-platform C++ apps (the largest being 1.5M lines of code and running on 7 platforms -- AIX, HP-UX PA-RISC, HP-UX Itanium, Solaris, Linux, Windows, OS X). You actually have two entirely different issues in your post.\n\nInstability. Your code is not stable. Fix it. \n\nUse unit tests to find logic problems before they kill you.\nUse debuggers to find out what's causing the crashes if it's not obvious. \nUse boost and similar libraries. In particular, the pointer types will help you avoid memory leaks.\n\nCross-platform coding.\n\nAgain, use libraries that are designed for this when possible. Particularly for any GUI bits.\nUse standards (e.g. ANSI vs gcc/MSVC, POSIX threads vs Unix-specific thread models, etc) as much as possible, even if it requires a bit more work. Minimizing your platform specific code means less overall work, and fewer APIs to learn.\nIsolate, isolate, isolate. Avoid in-line #ifdefs for different platforms as much as possible. Instead, stick platform specific code into its own header/source/class and use your build system and #includes to get the right code. This helps keep the code clean and readable.\nUse the C99 integer types if at all possible instead of \"long\", \"int\", \"short\", etc -- otherwise it will bite you when you move from a 32-bit platform to a 64-bit one and longs suddenly change from 4 bytes to 8 bytes. And if that's ever written to the network/disk/etc then you'll run into incompatibility between platforms.\n\n\nPersonally, I'd stabilize the code first (without adding any more features) and then deal with the cross-platform issues, but that's up to you. Note that Visual Studio has an excellent debugger (the code base mentioned above was ported to Windows just for that reason).\n",
"You don't mention what the target project is; having a process per-tab does not necessarily mean more \"robust\" code at all. You should aim to write solid code with tests regardless of portability - just read about writing good C++ code :)\nAs for the portability section, make sure you are testing on both platforms from day one and ensure that no new code is written until platform-specific problems are solved.\n",
"You really, really don't want to do what Chrome is doing, it requires a process manager which is probably WAY overkill for what you want.\nYou should investigate using smart pointers from Boost or another tool that will provide reference counting or garbage collection for C++.\nAlternatively, if you are frequently crashing you might want to perhaps consider writing non-performance critical parts of your application in a scripting language that has C++ bindings.\n",
"Scott Meyers' Effective C++ and More Effective C++ are very good, and fun to read.\nSteve McConnell's Code Complete is a favorite of many, including Jeff Atwood.\nThe Boost libraries are probably an excellent choice. One project where I work uses them. I've only used WIN32 threading myself.\n",
"I agree with Torlack.\nBad initialization or overflows are signs of poor quality code.\nGoogle did it that way because sometimes, there was no way to control the code that was executed in a page (because of faulty plugins, etc.). So if you're using low quality plug ins (it happens), perhaps the Google solution will be good for you.\nBut a program without plugins that crashes often is just badly written, or very very complex, or very old (and missing a lot of maintenance time). You must stop the development, and investigate each and every crash. On Windows, compile the modules with PDBs (program databases), and each time it crashes, attach a debugger to it.\nYou must add internal tests, too. Avoid the pattern:\ndoSomethingBad(T * t)\n{\n if(t == NULL) return ;\n\n // do the processing.\n}\n\nThis is very bad design because the error is there, and you just avoid it, this time. But the next function without this guard will crash. Better to crash sooner to be nearer from the error.\nInstead, on Windows (there must be a similar API on MacOS)\ndoSomethingBad(T * t)\n{\n if(t == NULL) ::DebugBreak() ; // it will call the debugger\n\n // do the processing.\n}\n\n(don't use this code directly... Put it in a define to avoid delivering it to a client...)\nYou can choose the error API that suits you (exceptions, DebugBreak, assert, etc.), but use it to stop the moment the code knows something's wrong.\nAvoid the C API whenever possible. Use C++ idioms (RAII, etc.) and libraries.\nEtc..\nP.S.: If you use exceptions (which is a good choice), don't hide them inside a catch. You'll only make your problem worse because the error is there, but the program will try to continue and will probably crash sometimes after, and corrupt anything it touches in the mean time.\n",
"You can always add exception handling to your program to catch these kinds of faults and ignore them (though the details are platform specific) ... but that is very much a two edged sword. Instead consider having the program catch the exceptions and create dump files for analysis.\nIf your program has behaved in an unexpected way, what do you know about your internal state? Maybe the routine/thread that crashed has corrupted some key data structure? Maybe if you catch the error and try to continue the user will save whatever they are working on and commit the corruption to disk?\n",
"Beside writing more stable code, here's one idea that answers your question.\nWhether you are using processes or threads. You can write a small / simple watchdog program. Then your other programs register with that watchdog. If any process dies, or a thread dies, it can be restarted by the watchdog. Of course you'll want to put in some test to make sure you don't keep restarting the same buggy thread. ie: restart it 5 times, then after the 5th, shutdown the whole program and log to file / syslog.\n",
"Build your app with debug symbols, then either add an exception handler or configure Dr Watson to generate crash dumps (run drwtsn32.exe /i to install it as the debugger, without the /i to pop the config dialog). When your app crashes, you can inspect where it went wrong in windbg or visual studio by seeing a callstack and variables.\ngoogle for symbol server for more info.\nObviously you can use exception handling to make it more robust and use smart pointers, but fixing the bugs is best.\n",
"Build it with the idea that the only way to quit is for the program to crash and that it can crash at any time. When you build it that way, crashing will never/almost never lose any data. I read an article about it a year or two ago. Sadly, I don't have a link to it.\nCombine that with some sort of crash dump and have it email you it so you can fix the problem.\n",
"I would recommend that you compile up a linux version and run it under Valgrind.\nValgrind will track memory leaks, uninitialized memory reads and many other code problems. I highly recommend it.\n",
"After over 15 years of Windows development I recently wrote my first cross-platform C++ app (Windows/Linux). Here's how:\n\nSTL\nBoost. In particular the filesystem and thread libraries.\nA browser based UI. The app 'does' HTTP, with the UI consisting of XHTML/CSS/JavaScript (Ajax style). These resources are embedded in the server code and served to the browser when required.\nCopious unit testing. Not quite TDD, but close. This actually changed the way I develop.\n\nI used NetBeans C++ for the Linux build and had a full Linux port in no time at all.\n"
] |
[
5,
5,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"boost",
"c++",
"google_chrome",
"portability",
"robust"
] |
stackoverflow_0000084817_boost_c++_google_chrome_portability_robust.txt
|
Q:
How to best implement simple crash / error reporting?
What would be the best way to implement a simple crash / error reporting mechanism?
Details: my app is cross-platform (mac/windows/linux) and written in Python, so I just need something that will send me a small amount of text, e.g. just a timestamp and a traceback (which I already generate and show in my error dialog).
It would be fine if it could simply email it, but I can't think of a way to do this without including a username and password for the smtp server in the application...
Should I implement a simple web service on the server side and have my app send it an HTTP request with the info? Any better ideas?
A:
The web service is the best way, but there are some caveats:
You should always ask the user if it is ok to send error feedback information.
You should be prepared to fail gracefully if there are network errors. Don't let a failure to report a crash impede recovery!
You should avoid including user identifying or sensitive information unless the user knows (see #1) and you should either use SSL or otherwise protect it. Some jurisdictions impose burdens on you that you might not want to deal with, so it's best to simply not save such information.
Like any web service, make sure your service is not exploitable by miscreants.
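Putting those caveats into practice, here is a minimal sketch of the client side in Python. The report URL is a hypothetical placeholder, the consent check from the first caveat is reduced to a comment, and any network failure is swallowed so reporting can never make a crash worse:
import time
import traceback
import urllib.parse
import urllib.request

REPORT_URL = "https://example.com/crash-report"  # hypothetical endpoint

def report_crash(exc):
    # Only a timestamp and a traceback, no user-identifying data.
    payload = urllib.parse.urlencode({
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "traceback": "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)),
    }).encode()
    try:
        urllib.request.urlopen(REPORT_URL, data=payload, timeout=5)
    except OSError:
        pass  # never let a reporting failure impede recovery

if __name__ == "__main__":
    try:
        1 / 0  # stand-in for the real application work
    except Exception as exc:
        # In a real app, check that the user agreed to send reports first.
        report_crash(exc)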
A:
I can't think of a way to do this without including a username and password for the smtp server in the application...
You only need a username and password for authenticating yourself to a smarthost. You don't need it to send mail directly, you need it to send mail through a relay, e.g. your ISP's mail server. It's perfectly possible to send email without authentication - that's why spam is so hard to stop.
Having said that, some ISPs block outbound traffic on port 25, so the most robust alternative is an HTTP POST, which is unlikely to be blocked by anything. Be sure to pick a URL that you won't feel restricted by later on, or better yet, have the application periodically check for updates, so if you decide to change domains or something, you can push an update in advance.
Security isn't really an issue. You can fairly easily discard junk data, so all that really concerns you is whether or not somebody would go to the trouble of constructing fake tracebacks to mess with you, and that's a very unlikely situation.
As for the payload, PyCrash can help you with that.
A:
The web hit is the way to go, but make sure you pick a good URL - your app will be hitting it for years to come.
A:
PyCrash?
A:
Whether you use SMTP or HTTP to send the data, you need to have a username/password in the application to prevent just anyone from sending random data to you.
With that in mind, I suspect it would be easier to use SMTP rather than HTTP to send the data.
A:
Some kind of simple web service would suffice. You would have to consider security so not just anyone could make requests to your service..
On a larger scale we considered a JMS messaging system. Put a serialized object of data containing the traceback/error message into a queue and consume it every x minutes generating reports/alerts from that data.
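And for completeness, a minimal sketch of the receiving end, again using only the Python standard library; it just appends each report to a log file, and a real deployment would add the validation and abuse protection mentioned above:
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

class ReportHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = urllib.parse.parse_qs(self.rfile.read(length).decode())
        with open("crash_reports.log", "a") as log:
            log.write("%s\n%s\n---\n" % (
                fields.get("timestamp", ["?"])[0],
                fields.get("traceback", [""])[0]))
        self.send_response(204)  # accepted, no body
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), ReportHandler).serve_forever()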
|
How to best implement simple crash / error reporting?
|
What would be the best way to implement a simple crash / error reporting mechanism?
Details: my app is cross-platform (mac/windows/linux) and written in Python, so I just need something that will send me a small amount of text, e.g. just a timestamp and a traceback (which I already generate and show in my error dialog).
It would be fine if it could simply email it, but I can't think of a way to do this without including a username and password for the smtp server in the application...
Should I implement a simple web service on the server side and have my app send it an HTTP request with the info? Any better ideas?
|
[
"The web service is the best way, but there are some caveats:\n\nYou should always ask the user if it is ok to send error feedback information.\nYou should be prepared to fail gracefully if there are network errors. Don't let a failure to report a crash impede recovery!\nYou should avoid including user identifying or sensitive information unless the user knows (see #1) and you should either use SSL or otherwise protect it. Some jurisdictions impose burdens on you that you might not want to deal with, so it's best to simply not save such information.\nLike any web service, make sure your service is not exploitable by miscreants.\n\n",
"\nI can't think of a way to do this without including a username and password for the smtp server in the application...\n\nYou only need a username and password for authenticating yourself to a smarthost. You don't need it to send mail directly, you need it to send mail through a relay, e.g. your ISP's mail server. It's perfectly possible to send email without authentication - that's why spam is so hard to stop.\nHaving said that, some ISPs block outbound traffic on port 25, so the most robust alternative is an HTTP POST, which is unlikely to be blocked by anything. Be sure to pick a URL that you won't feel restricted by later on, or better yet, have the application periodically check for updates, so if you decide to change domains or something, you can push an update in advance.\nSecurity isn't really an issue. You can fairly easily discard junk data, so all that really concerns you is whether or not somebody would go to the trouble of constructing fake tracebacks to mess with you, and that's a very unlikely situation.\nAs for the payload, PyCrash can help you with that.\n",
"The web hit is the way to go, but make sure you pick a good URL - your app will be hitting it for years to come. \n",
"PyCrash?\n",
"Whether you use SMTP or HTTP to send the data, you need to have a username/password in the application to prevent just anyone from sending random data to you.\nWith that in mind, I suspect it would be easier to use SMTP rather than HTTP to send the data.\n",
"Some kind of simple web service would suffice. You would have to consider security so not just anyone could make requests to your service..\nOn a larger scale we considered a JMS messaging system. Put a serialized object of data containing the traceback/error message into a queue and consume it every x minutes generating reports/alerts from that data.\n"
] |
[
6,
3,
1,
1,
0,
0
] |
[] |
[] |
[
"cross_platform",
"error_reporting",
"python"
] |
stackoverflow_0000085985_cross_platform_error_reporting_python.txt
|
Q:
Is there an easy way to change the behavior of a Java/Swing control when it gets focus?
For most GUIs I've used, when a control that contains text gets the focus, the entire contents of the control are selected. This means if you just start typing, you completely replace the former contents.
Example: You have a spin control that is initialized with the value zero. You tab to it and type "1". The value in the control is now 1.
With Swing, this doesn't happen. The text in the control is not selected and the caret appears at one end or another of the existing text. Continuing the above example:
With a Swing JSpinner, when you tab to the spin control, the caret is at the left. You type "1" and the value in the control is now 10.
This drives me (and my users) up a wall, and I'd like to change it. Even more important, I'd like to change it globally so the new behavior applies to JTextField, JPasswordField, JFormattedTextField, JTextArea, JComboBox, JSpinner, and so on. The only way I have found to do this is to add a FocusAdapter to each control and override the focusGained() method to Do The Right Thing[tm].
There's gotta be an easier, and less fragile way. Please?
EDIT: One additional piece of information for this particular case. The form I am working with was generated using Idea's form designer. That means I normally don't actually write the code to create the components. It is possible to tell Idea that you want to create them yourself, but that's a hassle I'd like to avoid.
Motto: All good programmers are basically lazy.
A:
When I've needed this in the past, I've created subclasses of the components I wanted to add "auto-clearing" functionality to, e.g.:
public class AutoClearingTextField extends JTextField {
final FocusListener AUTO_CLEARING_LISTENER = new FocusListener(){
@Override
public void focusLost(FocusEvent e) {
//onFocusLost(e);
}
@Override
public void focusGained(FocusEvent e) {
selectAll();
}
};
public AutoClearingTextField(String string) {
super(string);
addListener();
}
private void addListener() {
addFocusListener(AUTO_CLEARING_LISTENER);
}
}
The biggest problem is that I haven't found a "good" way to get all the standard constructors without writing overrides. Adding them, and forcing a call to addListener is the most general approach I've found.
Another option is to watch for ContainerEvents on a top-level container with a ContainerListener to detect the presence of new widgets, and add a corresponding focus listener based on the widgets that have been added. (eg: if the container event is caused by adding a TextField, then add a focus listener that knows how to select all the text in a TextField, and so on.) If a Container is added, then you need to recursively add the ContainerListener to that new sub-container as well; a rough sketch follows below.
Either way, you won't need to muck about with focus listeners in your actual UI code -- it will all be taken care of at a higher level.
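A rough sketch of that ContainerListener idea (the class name and structure are illustrative, not a drop-in solution):
import java.awt.Component;
import java.awt.Container;
import java.awt.event.ContainerAdapter;
import java.awt.event.ContainerEvent;
import java.awt.event.FocusAdapter;
import java.awt.event.FocusEvent;
import javax.swing.text.JTextComponent;

public final class SelectAllInstaller extends ContainerAdapter {
    @Override
    public void componentAdded(ContainerEvent e) {
        register(e.getChild()); // a new widget appeared: wire it up
    }

    public void register(Component c) {
        if (c instanceof JTextComponent) {
            c.addFocusListener(new FocusAdapter() {
                @Override
                public void focusGained(FocusEvent ev) {
                    ((JTextComponent) ev.getComponent()).selectAll();
                }
            });
        }
        if (c instanceof Container) {
            Container parent = (Container) c;
            parent.addContainerListener(this); // watch for future additions
            for (Component child : parent.getComponents()) {
                register(child); // recurse into existing children
            }
        }
    }
}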
A:
I haven't tried this myself (only dabbled in it a while ago), but you can probably get the current focused component by using:
KeyboardFocusManager (there is a static method getCurrentKeyboardFocusManager())
and adding a PropertyChangeListener to it.
From there, you can find out if the component is a JTextComponent and select all text.
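Something along those lines could look like this (an untested sketch; "permanentFocusOwner" is the property KeyboardFocusManager fires when focus settles on a component):
import java.awt.KeyboardFocusManager;
import java.beans.PropertyChangeEvent;
import java.beans.PropertyChangeListener;
import javax.swing.text.JTextComponent;

// Install once at startup; every text component then selects all on focus.
KeyboardFocusManager.getCurrentKeyboardFocusManager().addPropertyChangeListener(
    "permanentFocusOwner", new PropertyChangeListener() {
        public void propertyChange(PropertyChangeEvent evt) {
            Object owner = evt.getNewValue();
            if (owner instanceof JTextComponent) {
                ((JTextComponent) owner).selectAll();
            }
        }
    });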
A:
A separate class that attaches a FocusListener to the desired text field can be written. All the focus listener would do is call selectAll() on the text widget when it gains the focus.
public class SelectAllListener implements FocusListener {
private static final SelectAllListener INSTANCE = new SelectAllListener();
public void focusLost(FocusEvent e) { }
public void focusGained(FocusEvent e) {
if (e.getSource() instanceof JTextComponent) {
((JTextComponent)e.getSource()).selectAll();
}
};
public static void addSelectAllListener(JTextComponent tc) {
tc.addFocusListener(INSTANCE);
}
public static void removeSelectAllListener(JTextComponent tc) {
tc.removeFocusListener(INSTANCE);
}
}
By accepting a JTextComponent as an argument this behavior can be added to JTextArea, JPasswordField, and all of the other text editing components directly. This also allows the class to add select all to editable combo boxes and JSpinners, where your control over the text editor component may be more limited. Convenience methods can be added:
public static void addSelectAllListener(JSpinner spin) {
if (spin.getEditor() instanceof JTextComponent) {
addSelectAllListener((JTextComponent)spin.getEditor());
}
}
public static void addSelectAllListener(JComboBox combo) {
Component editor = combo.getEditor().getEditorComponent();
if (editor instanceof JTextComponent) {
addSelectAllListener((JTextComponent)editor);
}
}
Also, the remove listener methods are likely unneeded, since the listener contains no exterior references to any other instances, but they can be added to make code reviews go smoother.
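Usage is then one line per widget (the field names here are hypothetical):
SelectAllListener.addSelectAllListener(nameField);       // a JTextField
SelectAllListener.addSelectAllListener(quantitySpinner); // a JSpinner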
A:
After reading the replies so far (Thanks!) I passed the outermost JPanel to the following method:
void addTextFocusSelect(JComponent component){
if(component instanceof JTextComponent){
component.addFocusListener(new FocusAdapter() {
@Override
public void focusGained(FocusEvent event) {
super.focusGained(event);
JTextComponent component = (JTextComponent)event.getComponent();
// a trick I found on JavaRanch.com
// Without this, some components don't honor selectAll
component.setText(component.getText());
component.selectAll();
}
});
}
else
{
for(Component child: component.getComponents()){
if(child instanceof JComponent){
addTextFocusSelect((JComponent) child);
}
}
}
}
It works!
A:
The only way I know is to create a FocusListener and attach it to your component. If you want this FocusListener to be global to all components in your application you might consider using Aspect Oriented Programming (AOP). With AOP it is possible to code it once and apply your focus listener to all components instantiated in your app without having to copy-and-paste the component.addFocusListener(listener) code throughout your application.
Your aspect would have to intercept the creation of a JComponent (or the sub-classes you want to add this behaviour to) and add the focus listener to the newly created instance. The AOP approach is better than copy-and-pasting the FocusListener to your entire code because you keep it all in a single piece of code, and don't create a maintenance nightmare once you decide to change your global behavior like removing the listener for JSpinners.
There are many AOP frameworks out there to choose from. I like JBossAOP since it's 100% pure Java, but you might like to take a look at AspectJ.
|
Is there an easy way to change the behavior of a Java/Swing control when it gets focus?
|
For most GUIs I've used, when a control that contains text gets the focus, the entire contents of the control are selected. This means if you just start typing, you completely replace the former contents.
Example: You have a spin control that is initialized with the value zero. You tab to it and type "1". The value in the control is now 1.
With Swing, this doesn't happen. The text in the control is not selected and the caret appears at one end or another of the existing text. Continuing the above example:
With a Swing JSpinner, when you tab to the spin control, the caret is at the left. You type "1" and the value in the control is now 10.
This drives me (and my users) up a wall, and I'd like to change it. Even more important, I'd like to change it globally so the new behavior applies to JTextField, JPasswordField, JFormattedTextField, JTextArea, JComboBox, JSpinner, and so on. The only way I have found to do this is to add a FocusAdapter to each control and override the focusGained() method to Do The Right Thing[tm].
There's gotta be an easier, and less fragile way. Please?
EDIT: One additional piece of information for this particular case. The form I am working with was generated using Idea's form designer. That means I normally don't actually write the code to create the components. It is possible to tell Idea that you want to create them yourself, but that's a hassle I'd like to avoid.
Motto: All good programmers are basically lazy.
|
[
"When I've needed this in the past, I've created subclasses of the components I wanted to add \"auto-clearing\" functionality too. eg:\npublic class AutoClearingTextField extends JTextField {\n final FocusListener AUTO_CLEARING_LISTENER = new FocusListener(){\n @Override\n public void focusLost(FocusEvent e) {\n //onFocusLost(e);\n }\n\n @Override\n public void focusGained(FocusEvent e) {\n selectAll();\n }\n };\n\n public AutoClearingTextField(String string) {\n super(string);\n addListener();\n }\n\n private void addListener() {\n addFocusListener(AUTO_CLEARING_LISTENER); \n }\n}\n\nThe biggest problem is that I haven't found a \"good\" way to get all the standard constructors without writing overrides. Adding them, and forcing a call to addListener is the most general approach I've found.\nAnother option is to watch for ContainerEvents on a top-level container with a ContainerListeer to detect the presence of new widgets, and add a corresponding focus listener based on the widgets that have been added. (eg: if the container event is caused by adding a TextField, then add a focus listener that knows how to select all the text in a TextField, and so on.) If a Container is added, then you need to recursively add the ContainerListener to that new sub-container as well.\nEither way, you won't need to muck about with focus listeners in your actual UI code -- it will all be taken care of at a higher level.\n",
"I haven't tried this myself (only dabbled in it a while ago), but you can probably get the current focused component by using:\nKeyboardFocusManager (there is a static method getCurrentKeyboardFocusManager())\nan adding a PropertyChangeListener to it.\nFrom there, you can find out if the component is a JTextComponent and select all text.\n",
"A separate class that attaches a FocusListener to the desired text field can be written. All the focus listener would do is call selectAll() on the text widget when it gains the focus.\npublic class SelectAllListener implements FocusListener {\n private static INSTANCE = new SelectAllListener();\n\n public void focusLost(FocusEvent e) { }\n\n public void focusGained(FocusEvent e) {\n if (e.getSource() instanceof JTextComponent) { \n ((JTextComponent)e.getSource()).selectAll();\n }\n };\n\n public static void addSelectAllListener(JTextComponent tc) {\n tc.addFocusListener(INSTANCE);\n }\n\n public static void removeSelectAllListener(JTextComponent tc) {\n tc.removeFocusListener(INSTANCE);\n }\n}\n\nBy accepting a JTextComponent as an argument this behavior can be added to JTextArea, JPasswordField, and all of the other text editing components directly. This also allows the class to add select all to editable combo boxes and JSpinners, where your control over the text editor component may be more limited. Convenience methods can be added:\npublic static void addSelectAllListener(JSpinner spin) {\n if (spin.getEditor() instanceof JTextComponent) {\n addSelectAllListener((JTextComponent)spin.getEditor());\n }\n}\n\npublic static void addSelectAllListener(JComboBox combo) {\n JComponent editor = combo.getEditor().getEditorComponent();\n if (editor instanceof JTextComponent) {\n addSelectAllListener((JTextComponent)editor);\n }\n}\n\nAlso, the remove listener methods are likely unneeded, since the listener contains no exterior references to any other instances, but they can be added to make code reviews go smoother.\n",
"After reading the replies so far (Thanks!) I passed the outermost JPanel to the following method: \nvoid addTextFocusSelect(JComponent component){\n if(component instanceof JTextComponent){\n component.addFocusListener(new FocusAdapter() {\n @Override\n public void focusGained(FocusEvent event) {\n super.focusGained(event);\n JTextComponent component = (JTextComponent)event.getComponent();\n // a trick I found on JavaRanch.com\n // Without this, some components don't honor selectAll\n component.setText(component.getText());\n component.selectAll();\n }\n });\n\n }\n else\n {\n for(Component child: component.getComponents()){\n if(child instanceof JComponent){\n addTextFocusSelect((JComponent) child);\n }\n }\n }\n}\n\nIt works!\n",
"The only way I know is to create a FocusListener and attach it to your component. If you want it this FocusListener to be global to all components in your application you might consider using Aspect Oriented Programming (AOP). With AOP is possible to code it once and apply your focus listener to all components instantiated in your app without having to copy-and-paste the component.addFocusListener(listener) code throughout your application..\nYour aspect would have to intercept the creation of a JComponent (or the sub-classes you want to add this behaviour to) and add the focus listener to the newly created instance. The AOP approach is better than copy-and-pasting the FocusListener to your entire code because you keep it all in a single piece of code, and don't create a maintenance nightmare once you decide to change your global behavior like removing the listener for JSpinners.\nThere are many AOP frameworks out there to choose from. I like JBossAOP since it's 100% pure Java, but you might like to take a look at AspectJ.\n"
] |
[
2,
2,
2,
1,
0
] |
[] |
[] |
[
"java",
"swing",
"user_interface"
] |
stackoverflow_0000066455_java_swing_user_interface.txt
|
Q:
How do I write SELECT FROM myTable WHERE id IN (SELECT...) in Linq?
How do you rewrite this in Linq?
SELECT Id, Name FROM TableA WHERE TableA.Id IN (SELECT xx from TableB INNER JOIN Table C....)
So in plain English, I want to select Id and Name from TableA where TableA's Id is in a result set from a second query.
A:
from a in TableA
where (from b in TableB
join c in TableC on b.id equals c.id
where .. select b.id)
.Contains(a.Id)
select new { a.Id, a.Name }
A:
LINQ supports IN in the form of contains. Think "collection.Contains(id)" instead of "id IN (collection)".
from a in TableA
where (
from b in TableB
join c in TableC
on b.id equals c.id
select b.id
).Contains(a.Id)
select new { a.Id, a.Name }
See also this blog post.
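For completeness, the same query in method syntax (a sketch; db stands in for your data context):
var ids = db.TableB
            .Join(db.TableC, b => b.Id, c => c.Id, (b, c) => b.Id);

var result = db.TableA
               .Where(a => ids.Contains(a.Id))
               .Select(a => new { a.Id, a.Name });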
A:
There is no out-of-the-box support for IN in LINQ. You need to join the two queries.
|
How do I write SELECT FROM myTable WHERE id IN (SELECT...) in Linq?
|
How do you rewrite this in Linq?
SELECT Id, Name FROM TableA WHERE TableA.Id IN (SELECT xx from TableB INNER JOIN Table C....)
So in plain english, I want to select Id and Name from TableA where TableA's Id is in a result set from a second query.
|
[
"from a in TableA \nwhere (from b in TableB \n join c in TableC on b.id equals c.id\n where .. select b.id)\n.Contains(a.Id) \nselect new { a.Id, a.Name }\n\n",
"LINQ supports IN in the form of contains. Think \"collection.Contains(id)\" instead of \"id IN (collection)\".\nfrom a in TableA\nwhere (\n from b in TableB\n join c in TableC\n on b.id equals c.id\n select b.id\n).Contains(TableA.Id)\nselect new { a.Id, a.Name }\n\nSee also this blog post.\n",
"There is no out of box support for IN in LINQ. You need to join 2 queries.\n"
] |
[
10,
4,
1
] |
[] |
[] |
[
"linq",
"linq_to_sql",
"sql"
] |
stackoverflow_0000045634_linq_linq_to_sql_sql.txt
|
Q:
Best way of store only date on datetime field?
Scenario:
A stored procedure receives a DateTime from code (let's say the DateTime.Now value) as a datetime parameter.
The stored procedure needs to store only the date part of the datetime on the row, while preserving all date-related arithmetic for, say, searching over time intervals and doing reports based on dates.
I know there are a couple of ways, but which is better, keeping in mind performance and wasted space?
A:
Business Logic should be handled outside of the proc. The proc's job should be to save the data passed to it. If the requirement is to only store the Date and not the time, then the BL/DL should pass in DateTime.Now.Date (or the equivalent: basically, the Date part of your DateTime object).
If you can't control the code for some reason, there's always convert(varchar(10), @YOURDATETIME, 101)
A:
Store the date with the time set to midnight.
EDIT: I was assuming MS SQL Server.
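A sketch of the usual trick on SQL Server 2005 and earlier: count whole days since day zero, then add them back, which floors the time to 00:00:00.
DECLARE @when datetime;
SET @when = GETDATE();
SELECT DATEADD(day, DATEDIFF(day, 0, @when), 0) AS date_only;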
A:
Essentially you're only going to store the Date part of your DateTime object. This means regardless of how you wish to handle querying the data the Date returned will always be set to 00:00:00.
Time related functions are useless in this scenario (even though your original DateTime object uses them) as your database drops this info.
Date related arithmetics will still apply though you will have to assume a time of midnight for each date returned from the database.
A:
SQL Server 2008 has a date only type (DATE) that does not store the time. Consider upgrading.
http://www.sqlteam.com/article/using-the-date-data-type-in-sql-server-2008
A:
If you're working on Oracle, inside your stored procedure use the TRUNC function on the datetime. This will return ONLY the date portion.
|
Best way of store only date on datetime field?
|
Scenario:
A stored procedure receives a DateTime from code (let's say the DateTime.Now value) as a datetime parameter.
The stored procedure needs to store only the date part of the datetime on the row, while preserving all date-related arithmetic for, say, searching over time intervals and doing reports based on dates.
I know there are a couple of ways, but which is better, keeping in mind performance and wasted space?
|
[
"Business Logic should be handled outside of the proc. The procs jobs should be to save the data passed to it. If the requirment is to only store Date and not time, then the BL/DL should pass in DateTime.Now**.Date** (or the equiv...basically the Date part of your DateTime object).\nIf you can't control the code for some reason, there's always convert(varchar(10), @YOURDATETIME, 101)\n",
"store the date with time = midnight\nEDIT: i was assuming MS SQL Server\n",
"Essentially you're only going to store the Date part of your DateTime object. This means regardless of how you wish to handle querying the data the Date returned will always be set to 00:00:00.\nTime related functions are useless in this scenario (even though your original DateTime object uses them) as your database drops this info.\nDate related arithmetics will still apply though you will have to assume a time of midnight for each date returned from the database.\n",
"SQL Server 2008 has a date only type (DATE) that does not store the time. Consider upgrading.\nhttp://www.sqlteam.com/article/using-the-date-data-type-in-sql-server-2008\n",
"If you're working on Oracle, inside your stored procedure use the TRUNC function on the datetime. This will return ONLY the date portion.\n"
] |
[
4,
0,
0,
0,
0
] |
[] |
[] |
[
"database",
"tsql"
] |
stackoverflow_0000079949_database_tsql.txt
|
Q:
Why am I getting this Objective-C error message: invalid conversion from 'objc_object*'
This error message had me stumped for a while:
invalid conversion from 'objc_object*' to 'int'
The line in question was something like this:
int iResult = [MyUtils utilsMemberFunc:param1,param2];
A:
It doesn't matter what the "to" type is, what is important is that you recognize that this message, in this context, is reporting that the utilsMemberFunc declaration was not found and due to Objective-C's dynamic binding it is assuming it returns an objc_object* rather than the type that utilsMemberFunc was declared to return.
So why isn't it finding the declaration? Because ',' is being used rather than ':' to separate the parameters.
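For illustration, the corrected call uses colons to separate labeled parameters; the second parameter label below ("with:") is hypothetical and must match the actual declaration:
// Declared, say, as: + (int)utilsMemberFunc:(id)param1 with:(id)param2;
int iResult = [MyUtils utilsMemberFunc:param1 with:param2];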
|
Why am I getting this Objective-C error message: invalid conversion from 'objc_object*'
|
This error message had me stumped for a while:
invalid conversion from 'objc_object*' to 'int'
The line in question was something like this:
int iResult = [MyUtils utilsMemberFunc:param1,param2];
|
[
"It doesn't matter what the \"to\" type is, what is important is that you recognize that this message, in this context, is reporting that the utilsMemberFunc declaration was not found and due to Objective-C's dynamic binding it is assuming it returns an objc_object* rather than the type that utilsMemberFunc was declared to return. \nSo why isn't it finding the declaration? Because ',' is being used rather than ':' to separate the parameters. \n"
] |
[
6
] |
[] |
[] |
[
"macos",
"objective_c"
] |
stackoverflow_0000086244_macos_objective_c.txt
|
Q:
Visual Studio Intellisense, c#, no code behind
If I open a file in Design View (web form), I get intellisense for my display code, but not my script code. If I open it with the source code editor I, occasionally, get intellisense within the script tags.
Anyone know how to get intellisense working all of the time for all of my code?
Been living with this one for a long time.
A:
What version are you using? Design view is for human-readable elements, you wouldn't be editing code there and therefore wouldn't need intellisense. If you are not using code-behind, you should only have one <script runat="server"> tag on the page, and you would edit this in Source view. To enable intellisense, add the following on the first line:
<%@ Page Language="C#" %>
If you change it, the tag will be underlined and it will say that you need to close the file and reopen it.
If you are in VS 2008, JavaScript intellisense will be available to you as well. Make sure you specify the language in the <script> tag.
A:
VS2008. So far doing a re-install seems to be the best advice. I am using the <%@ Page Language="C#" MasterPageFile="~/common/masterpages/MasterPage.master" %>. When I say design-view I mean that I right+click on the file and choose "view designer" - this gives me access to the toolbox and tabs for designer, split, and code-view (which is the view I primarily work in). In that mode, all of my <asp: tags get intellisense, but then I lose all intellisense within my <script> tags. I've never been able to have intellisense working both within the <script> tags and within my form.
I should say that when we create a website, we don't do it through file>new>website. I mention this because I wonder if VS might configure a website differently when creating it that way vs. pointing VS to an existing set of directories which contain our website.
A:
Some service packs have broken Intellisense and in the past I have had to either reinstall the product or repair. Try repair first!
A:
I'm not sure if it helps, but I've been working on some code recently which uses a bunch of projects compiled using NMake. For those, there's an option if you right-click on the project and select the NMake option (I guess you might have a web form option in there?). I found intellisense often wouldn't pick everything up unless I set the right include directories:
Right-click on the project.
Select NMake (or whatever it is you 'compile' the form with) from the tree of options on the left of the dialog.
In the pane on the right you'll hopefully see a bunch of options under an 'intellisense' listing.
Make sure the files you're interested in intellisensing are in a directory which is listed in the 'Include search paths' option.
|
Visual Studio Intellisense, c#, no code behind
|
If I open a file in Design View (web form), I get intellisense for my display code, but not my script code. If I open it with the source code editor I, occasionally, get intellisense within the script tags.
Anyone know how to get intellisense working all of the time for all of my code?
Been living with this one for a long time.
|
[
"What version are you using? Design view is for human-readable elements, you wouldn't be editing code there and therefore wouldn't need intellisense. If you are not using code-behind, you should only have one <script runat=\"server\"> tag on the page, and you would edit this in Source view. To enable intellisense, add the following on the first line:\n<%@ Page Language=\"C#\" %>\n\nIf you change it, the tag will be underlined and it will say that you need to close the file and reopen it.\nIf you are in VS 2008, JavaScript intellisense will be available to you as well. Make sure you specify the language in the <script> tag.\n",
"VS2008. So far doing a re-install seems to be the best advice. I am using the <%@ Page Language=\"C#\" MasterPageFile=\"~/common/masterpages/MasterPage.master\" %>. When I say design-view I mean that I right+click on the file and choose \"view designer\" - this gives me access to the toolbox and tabs for designer,split, and code-view (which is the view I primarily work in). In that mode, all of my <asp: tags get intellisense, but then I lose all intellisense within my <script> tags. I've never been able to have intellisense working both within the <script> tags and within my form.\nI should say that when we create a website, we don't do it through file>new>website.. I mention this because I wonder if VS might configure a website differently when creating it that way vs. pointing VS to an existing set of directories which contain our website. \n",
"Some service packs have broken Intellisense and in the past I have had to either reinstall the product or repair. Try repair first!\n",
"I'm not sure if it helps, but I've been working on some code recently which uses a bunch of projects compiled using NMake. For those, there's an option if you right-click on the project and select the NMake option (I guess you might have a web form option in there?). I found intellisense often wouldn't pick everything up unless I set the right include directories:\n\nRight-click on the project.\nSelect NMake (or whatever it is you 'compile' the form with) from the tree of options on the left of the dialog.\nIn the pane on the right you'll hopefully see a bunch of options under an 'intellisense' listing.\nMake sure the files you're interested in intellisensing are in a directory which is listed in the 'Include search paths' option.\n\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"c#",
"intellisense",
"visual_studio"
] |
stackoverflow_0000084968_c#_intellisense_visual_studio.txt
|
Q:
Best resource for learning .NET generics?
I've never used any of the .NET generics in my work, but I understand that they are fairly popular. Does anyone have any good links or book suggestions for learning them? As a bonus, I only vaguely understand what .NET generic collections are and what they do... does anyone have any practical examples of how they might be used to greater advantage than the normal collections in .NET?
A:
The obvious choice..
MSDN C# Generics
A:
http://www.informit.com/articles/article.aspx?p=605369
http://www.codeproject.com/KB/cs/genericcache.aspx
A:
CLR via C# by Jeffrey Richter goes into depth about generics, and is one of the priceless resources every .NET developer should own and read.
A:
If you've ever used C++ templates, then .Net generics are nearly the same thing. They even use a similar <T> syntax.
Even if you don't know any C++ you're probably making this harder than you need to, especially with regard to the collections. They're just like any other collection, but when you create them you supply a type name inside the <>'s so the compiler knows what kind of item they hold.
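A small illustration of the difference with collections (List<T> keeps the element type, ArrayList does not):
using System.Collections;
using System.Collections.Generic;

List<int> typed = new List<int>();
typed.Add(42);            // compile-time type check, no boxing
int a = typed[0];         // no cast needed

ArrayList untyped = new ArrayList();
untyped.Add(42);          // boxed to object
int b = (int)untyped[0];  // cast (and unbox) required
untyped.Add("oops");      // compiles, but would fail at runtime if cast to int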
A:
C# 3.0 in a Nutshell is a fantastic reference book with examples just big enough to grasp the concept without feeling bloated.
A:
I found WROX's Professional .NET 2.0 Generics very useful as it contains lots of real world examples. Generics could be confusing to the beginner but they could be a very useful/powerful/time saving tool in the hands of an experienced developer. Personally, I find .NET Generics most useful in simplifying the defining and use of collections. Also the use of generics could lead to more efficient code as it could minimize the performance hit usually associated with boxing/unboxing type conversions.
A:
It's best to follow some Microsoft training on the subject. If you are looking for books, the following would be ideal:
http://www.microsoft.com/MSPress/books/9469.aspx
|
Best resource for learning .NET generics?
|
I've never used any of the .NET generics in my work, but I understand that they are fairly popular. Does anyone have any good links or book suggestions for learning them? As a bonus, I only vaguely understand what .NET generic collections are and what they do... does anyone have any practical examples of how they might be used to greater advantage than the normal collections in .NET?
|
[
"The obvious choice..\nMSDN C# Generics\n",
"\nhttp://www.informit.com/articles/article.aspx?p=605369\nhttp://www.codeproject.com/KB/cs/genericcache.aspx\n\n",
"CLR via C# by Jeffrey Richter goes into depth about generics, and is one of the priceless resources every .NET developer should own and read.\n",
"If you've ever used C++ templates, then .Net generics are nearly the same thing. They even use a similar <T> syntax.\nEven if you don't know any C++ you're probably making this harder than you need to, especially with regard to the collections. They're just like any other collection, but when you create them you supply a type name inside the <<>'s so the compiler knows what kind of item they hold.\n",
"C# 3.0 in a Nutshell is a fantastic reference book with examples just big enought to grasp the concept without feeling bloated.\n",
"I found WROX's Professional .NET 2.0 Generics very useful as it contains lots of real world examples. Generics could be confusing to the beginner but they could be a very useful/powerful/time saving tool in the hands of an experienced developer. Personally, I find .NET Generics most useful in simplifying the defining and use of collections. Also the use of generics could lead to more efficient code as it could minimize the performance hit usually associated with boxing/unboxing type conversions.\n",
"It's best to follow some Microsoft training on the subject. If you are looking for books, the following would be ideal:\nhttp://www.microsoft.com/MSPress/books/9469.aspx\n"
] |
[
11,
6,
5,
1,
0,
0,
0
] |
[
"My vote is- mostly you can just avoid them :) \nThe main advantage for generics is to save casting, if you end up doing casting internally that doesn't make any sense. If you try to look into this issue you would found that mostly the candidates for generics are really those collections/sets which could create native storage internally for such benefit. Most other components, on the other hand, gain little/no performance, and degrades the flexibility significantly comparing to an interface inheritance implementation.\nIf casting annoys you so much, maybe time for you to consider dynamic languages like IronPython :)\nAnd- if you really come across a scenario you think generic make sense, post it out as another question, the brains here could look together and solve it case by case :)\nUpdate: Yep compiler checking is nice, but check Castle Project's source, you can see many situations where a generic type gets into the way because you can't do casting- making things like IBusinessObject is a lot more flexible than BusinessObject- because you can't cast something inherit from BusinessObject back to BusinessObject and expects to access a function inherited. Usually I saw code end up as BusinessObjectBase -> BusinessObject ->Your actual class. That's why I kinda feel its not always beneficial to use generics- and I was one of those who did abuse such implementation and end up having tons of funny function with generic typing, not nice at all.\nUpdate #2: Boxing/Unboxing just means the requirement to cast the object when you use an abstract type (like object) to store value and use a strong typed value to store it back again (which requires casting), not much difference I can see apart from the collection situation I stated. AND code like this still does boxing:\npublic T GetValue<T>() {\n return (T) ...;\n}\n\nThats the generic abuse I have seen most often. People think they are dealing with native type here, in fact they are not. Instead they just make the casting into generic syntax. What really make sense is this:\npublic class MyList<T>\n{\n private List<T> _list;\n...\npublic T GetValue(int index)\n{\n return _list[index];\n}\n\nThen thats again back to our collection storage. Thats why I said after collection storage I don't see a lot of chance generic helps. \nReferred from this tutorial: \nhttp://en.csharp-online.net/Understanding_Generics—Revisiting_Boxing_and_Unboxing\n"
] |
[
-4
] |
[
".net",
"c#",
"generics"
] |
stackoverflow_0000085147_.net_c#_generics.txt
|
Q:
How to control a web application through email? Or how to run php script by sending an email?
I want to run a web application on PHP and MySQL, using the CakePHP framework. And to keep the threshold for using the site very low, I want to not use the standard login with username/password. (And I don't want to hassle my users with something like OpenID either. Goes to user type.)
So I'm thinking that the users shall be able to log in by sending an email to [email protected] with no subject or content required. And they will get, in reply, an email with a link that will log them in (it will contain a hash). Also I will let the users do some actions without even visiting the site at all, just send an email with [email protected] and the command will be carried out. I will assume that the users and their email providers take care of their email account security and as such there is no need for it on my site.
Now, how do I go from an email being sent to an account that is not read by humans to there being fired off some script (basically a "dummy browser client" calls a URL, and CakePHP will take care of the rest)?
I have never used a cron job before, but I do think I understand their purpose and how they generally work. I can not have the script be called by random people visiting the site, as that solution won't work for several reasons. I think I would like to hear more about the possibility of having the script be run as a response to an email coming in, if anyone has any input at all on that. If it's run as a cron job it would only check every X minutes and users would get a lag in their response (if I understand it correctly).
Since there will be different email addresses for different commands, like [email protected], and I know what to do and how to do it based on the sender email, I don't even need the content, subject or any other headers from the email.
There is a lot of worry about the security of this application. I understand the issues, but without giving away my concept, I don't think it is a big issue for what I am doing. Also, about the usability issue, there really isn't any. It's just going to be login to provide changes on a user's profile if/when they need that, and one other command. And this is the main email and is very easy to remember, and is the outset of this whole concept.
A:
I have used the pop3 php class with great success (there is also a Pear POP3 module).
Using the pop3 class looks something like this:
require ('pop3.php');
$pop3 = new pop3_class();
$pop3->hostname = MAILHOST;
$pop3->Open();
$pop3->Login('[email protected]', 'mypassword');
foreach($pop3->ListMessages("","") as $msgidx => $msgsize)
{
$headers = "";
$body = "";
$pop3->RetrieveMessage($msgidx, $headers, $body, -1);
}
I use it to monitor a POP3 mailbox which feeds into a database.
It gets called by a cronjob which uses wget to call the url to my php script.
*/5 * * * * wget -q --http-user=me --http-passwd=pass 'http://mydomain.com/mail.php' >> /dev/null 2>&1
Edit
I've been thinking about your need to have users send certain site commands by email.
Wouldn't it be easier to have a single address that multiple commands can be sent to rather than having multiple addresses?
I think the security concerns are pretty valid too. Unless the commands are non-destructive or aren't doing anything user-specific, the system will be wide open to anyone who knows how to spoof an email address (which would be everyone :) ).
A:
You'll need some sort of CronJob/Timer Service that checks the Mailbox regularly and then acts on it. Alternatively, check whether the mailserver can run a script when a mail arrives (i.e. see if it's possible to put a spamfilter-script in and "abuse" that functionality to call your script instead).
With pure PHP, you're mostly out of luck as something needs to trigger the script. On a page with a LOT of traffic, you could have your index.php or whatever do the check, but when no one visits your site for quite some time, then the mail will not be sent, and you have to be careful of "race conditions" when multiple people are accessing the script at the same time.
Edit: Just keep one usability flaw in mind: People with Multiple PCs and without an e-Mail Client on every one. For example, I use 4 PCs, but only 1 (my main one) has a Mail Client installed, and I use Webmail to check the other ones. Now, logging in and sending a mail through Webmail is not the greatest usability - in order to use YOUR site, I first have to log in to ANOTHER site, compose a mail through the crappy interface most Webmail tools have and wait for answer. Could as well use OpenID there :-)
A:
If your server allows it you can use a .forward file or Procmail to start a process (php or anything) when a mail arrives to a certain address.
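A minimal ~/.forward sketch, assuming the MTA honors pipe entries (the script path is hypothetical):
"|/usr/local/bin/handle_command.php"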
A:
You don't want to hassle users with OpenID, but you want them to deal with this email scheme. Firstly, email can take a long time to go through. There isn't any guaranteed time that an email will be delivered in. It's not even guaranteed that the email will get there at all. I know things usually are quick, but it's not uncommon to take up to 10 minutes for a round trip to be completed. Also, unless you're encrypting the email, the link you are sending back is sent in the open. That means anybody can use that link to log in. Depending on how secure you want to be, this may or may not be an issue, but it's definitely something to think about. Using a non-standard login method like this is going to be a lot more work than it is probably worth, and I can't really see any advantages to the whole process.
A:
I was also thinking of using procmail to start some script. There is also formail, which might come in handy to change or extract headers. If you have admin access to the mail server, you could also use /etc/aliases and just pipe to your script.
Besides usability issues, you should really think about security - it's actually quite simple to send email with a fake sender address, so I would not rely on it for anything critical.
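For the /etc/aliases route, a sendmail-style entry might look like this (names and paths are hypothetical; run newaliases after editing):
# /etc/aliases
commands: "|/usr/local/bin/handle_command.php"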
A:
I agree with all the security concerns. Your assumption that "the users and their email providers takes care of their email account security" is not correct when it comes to the sender's e-mail address.
But since you specifically asked "how do I go from an email is sent to an account that is not read by humans to there being fired off some script", I recommend using procmail to deliver the incoming e-mail to a script you write.
I would not call a URL. I would have the script perform the work by reading the message sent in on stdin. That way, the script is not accessible to anyone on the web site.
To set this up, the e-mail address you provide to your users will have to be associated with a real user
on the system. In that user's home directory, create a file called ".procmailrc"
In that file, add these two lines:
:0 hb:
| /path/to/program
Where /path/to/program is the full path to the script or program for handling
the incoming message. Then create the script with code something like this:
#!/usr/bin/php
<?php
$fp=fopen('php://stdin','r');
while($line = fgets($fp)) {
[do something with each $line of input here]
}
fclose($fp);
?>
The e-mail message will not remain in the mailbox, so if you want to save or log it, have the script do it.
--
Bruce
A:
I would seriously reconsider this approach. E-mail hasn't got very high reliability. There's all kinds of spamfilters that might intercept e-mails with links thereby rendering the "command" half-finished, not to mention the security risks.
It's very easy to spoof the sender-address on an e-mail. You are basically opening up your system to anyone.
Also instead of a username/password combination you're suddenly requiring the users to remember a list of commands to put in front of an email-address. It would be better to provide them with a username/password and then giving access to a help page.
In other words the usability and security of this scheme scores very low.
I can't really find any advantages to this approach that even comes close to outweighing the massive disadvantages.
A:
One solution to prevent spam: make sure the first line, last line or a specific line contains a certain string, almost like a password, but a full sentence is better.
Only you have the word or words, pretty secure, just remember to delete the mails after use and those that do not have the secret line.
A:
Apart from the security and usability, email delivery can be another problem. Depending on the user's email provider, email delivery can be delayed from a few minutes to a few hours.
A:
There is a really nice educational story on thedailywtf.com on designing software. The posed question should be solved by a proper design, not by techo-woopla.
Alexander, please read the linked story and think gloves, not email-driven webpage browsing.
PHP is not a hammer.
|
How to control a web application through email? Or how to run php script by sending an email?
|
I want to run a web application on PHP and MySQL, using the CakePHP framework. And to keep the threshold for using the site very low, I want to not use the standard login with username/password. (And I don't want to hassle my users with something like OpenID either. Goes to user type.)
So I'm thinking that the users shall be able to log in by sending an email to [email protected] with no subject or content required. And they will get, in reply, an email with a link that will log them in (it will contain a hash). Also I will let the users do some actions without even visiting the site at all, just send an email with [email protected] and the command will be carried out. I will assume that the users and their email providers take care of their email account security and as such there is no need for it on my site.
Now, how do I go from an email being sent to an account that is not read by humans to there being fired off some script (basically a "dummy browser client" calls a URL, and CakePHP will take care of the rest)?
I have never used a cron job before, but I do think I understand their purpose and how they generally work. I can not have the script be called by random people visiting the site, as that solution won't work for several reasons. I think I would like to hear more about the possibility of having the script be run as a response to an email coming in, if anyone has any input at all on that. If it's run as a cron job it would only check every X minutes and users would get a lag in their response (if I understand it correctly).
Since there will be different email addresses for different commands, like [email protected], and I know what to do and how to do it based on the sender email, I don't even need the content, subject or any other headers from the email.
There is a lot of worry about the security of this application. I understand the issues, but without giving away my concept, I don't think it is a big issue for what I am doing. Also, about the usability issue, there really isn't any. It's just going to be login to provide changes on a user's profile if/when they need that, and one other command. And this is the main email and is very easy to remember, and is the outset of this whole concept.
|
[
"I have used the pop3 php class with great success (there is also a Pear POP3 module).\nUsing the pop3 class looks something like this:\nrequire ('pop3.php');\n\n$pop3 = new pop3_class();\n$pop3->hostname = MAILHOST;\n$pop3->Open();\n$pop3->Login('[email protected]', 'mypassword');\n\nforeach($pop3->ListMessages(\"\",\"\") as $msgidx => $msgsize)\n{\n $headers = \"\";\n $body = \"\";\n\n $pop3->RetrieveMessage($msgidx, $headers, $body, -1);\n}\n\nI use it to monitor a POP3 mailbox which feeds into a database. \nIt gets called by a cronjob which uses wget to call the url to my php script.\n*/5 * * * * \"wget -q --http-user=me --http-passwd=pass 'http://mydomain.com/mail.php'\" >> /dev/null 2>&1\n\nEdit\nI've been thinking about your need to have users send certain site commands by email.\nWouldn't it be easier to have a single address that multiple commands can be sent to rather than having multiple addresses?\nI think the security concerns are pretty valid too. Unless the commands are non-destructive or aren't doing anything user-specific, the system will be wide open to anyone who knows how to spoof an email address (which would be everyone :) ).\n",
"You'll need some sort of CronJob/Timer Service that checks the Mailbox regularly and then acts on it. Alternatively, you should check the mailserver if it can run a script when a mail arrives (i.e. see if it's possible to put a spamfilter-script in and \"abuse\" that functionality to call your script instead).\nWith pure PHP, you're mostly out of luck as something needs to trigger the script. On a Pagewith a LOT of traffic, you could have your index.php or whatever do the check, but when no one visits your site for quite some time, then the mail will not be sent, and you have to be careful of \"race conditions\" when multiple people are accessing the script at the same time.\nEdit: Just keep one usability flaw in mind: People with Multiple PCs and without an e-Mail Client on every one. For example, I use 4 PCs, but only 1 (my main one) has a Mail Client installed, and I use Webmail to check the other ones. Now, logging in and sending a mail through Webmail is not the greatest usability - in order to use YOUR site, I first have to log in to ANOTHER site, compose a mail through the crappy interface most Webmail tools have and wait for answer. Could as well use OpenID there :-)\n",
"If your server allows it you can use a .forward file or Procmail to start a process (php or anything) when a mail arrives to a certain address.\n",
"You don't want to hassle users with OpenID, but you want them to deal with this email scheme. Firstly, email can take a long time to go through. There isn't any guaranteed time that an email will be delivered in. It's not even guaranteed that the email will get there at all. I know things usually are quick, but it's not uncommon to take up to 10 minutes for a round trip to be completed. Also, unless you're encrypting the email, the link you are sending back is sent in the open. That means anybody can use that link to log in. Depending one how secure you want to be, this may or may not be an issue, but it's definitely something to think about. Using a non-standard login method like this is going to be a lot more work than it is probably worth, and I can't really see any advantages to the whole process.\n",
"I was also thinking using procmail to start some script. There is also formail, which might come in handy to change or extract headers. If you have admin access to the mail server, you could also use /etc/aliases and just pipe to your script.\nBesides usability issues, you should really think about security - it's actually quite simple to send email with a fake sender address, so I would not rely on it for anything critical.\n",
"I agree with all the security concerns. Your assumption that \"the users and their email providers takes care of their email account security\" is not correct when it comes to the sender's e-mail address.\nBut since you specifically asked \"how do I go from an email is sent to an account that is not read by humans to there being fired off some script\", I recommend using procmail to deliver the incoming e-mail to a script you write. \nI would not call a URL. I would have the script perform the work by reading the message sent in on stdin. That way, the script is not acessible to anyone on the web site.\nTo set this up, the e-mail address you provide to your users will have to be associated with a real user\non the system. In that user's home directory, create a file called \".procmailrc\"\nIn that file, add these two lines:\n:0 hb:\n| /path/to/program\n\nWhere /path/to/program is the full path to the script or program for handling\nthe incoming message. Then create the script with code something like this:\n#!/usr/bin/php\n<?php\n\n$fp=fopen('php://stdin','r');\nwhile($line = fgets($fp)) {\n [do something with each $line of input here]\n}\n\n?>\n\nThe e-mail message will not remain in the mailbox, so if you want to save or log it, have the script do it.\n--\nBruce\n",
"I would seriously reconsider this approach. E-mail hasn't got very high reliability. There's all kinds of spamfilters that might intercept e-mails with links thereby rendering the \"command\" half-finished, not to mention the security risks.\nIt's very easy to spoof the sender-address on an e-mail. You are basically opening up your system to anyone.\nAlso instead of a username/password combination you're suddenly requiring the users to remember a list of commands to put in front of an email-address. It would be better to provide them with a username/password and then giving access to a help page.\nIn other words the usability and security of this scheme scores very low.\nI can't really find any advantages to this approach that even comes close to outweighing the massive disadvantages.\n",
"One solution to prevent spam, make sure the first line, last line or a specific line contains a certain string, almost like a password, but a full sentence is better.\nOnly you have the word or words, pretty secure, just remember to delete the mails after use and those that do not have the secret line.\n",
"Apart from the security and usability email delivery can be another problem. Depending on the user's email provider, email delivery can be delayed from a few minutes to few hours.\n",
"There is a realy nice educational story on thedailywtf.com on designing software. The posed question should be solved by a proper design, not by techo-woopla.\nAlexander, please read the linked story and think gloves, not email-driven webpage browsing.\nPHP is not a hammer.\n"
] |
[
7,
3,
3,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"cakephp",
"email"
] |
stackoverflow_0000046777_cakephp_email.txt
|
Q:
Painting javax.microedition.lcdui.Graphics on LWUIT Component
What would be the best method for getting a custom element (that is using J2ME native Graphics) painted on LWUIT elements?
The custom element is an implementation from a mapping library that paints its content (for example a Google map) to a Graphics object. How would it be possible to paint the result directly on LWUIT elements (at the moment I am trying to paint it on a Component)?
Is the only way to write a wrapper in the LWUIT package that would expose its internal implementation?
Edit:
John: your solution looks like a lot of engineering :P What I ended up using is the following wrapper:
package com.sun.lwuit;
public class ImageWrapper {
private final Image image;
public ImageWrapper(final Image lwuitBuffer) {
this.image = lwuitBuffer;
}
public javax.microedition.lcdui.Graphics getGraphics() {
return image.getGraphics().getGraphics();
}
}
Now I can get the 'native' Graphics element from LWUIT, paint on it (effectively painting on the LWUIT image), and use the image to paint on a component.
And it still looks like a hack :)
But the real problem is 50kB of code overhead, even after obfuscation. But this is an issue for another post :)
/JaanusSiim
A:
I do not think any hacking is necessary. You can subclass the LWUIT Component class and then you can paint whatever you want onto the graphics context of the component. You do not get the native lcdui.Graphics object but an object with the same interface that is easy to use.
If you really need to pass an lcdui.Graphics to some underlying library to display its output then I would suggest this:
Somewhere in your component code (do only when the component contents really need to be changed):
private Image buffer = null; // keep this
int[] bufferArray = new int[desiredWidth * desiredHeight];
javax.microedition.lcdui.Image bufferImage =
    javax.microedition.lcdui.Image.createImage(desiredWidth, desiredHeight);
thirdPartyComponent.paint(bufferImage.getGraphics());
// scanlength (the third argument) must equal the image width
bufferImage.getRGB(bufferArray, 0, desiredWidth, 0, 0, desiredWidth, desiredHeight);
bufferImage = null; // no longer needed
buffer = Image.createImage(bufferArray, desiredWidth, desiredHeight);
In the component paint(g) method:
g.drawImage(buffer, 0, 0);
By doing the hack you did, you are losing portability, and also since you are exposing an implementation-private object you might break other things.
Hope this helps.
A:
Based on the javadoc for LWUIT and J2ME and guessing that the custom J2ME class is a Canvas it looks like you would have to:
Subclass LWUIT's Component class wrapping the custom J2ME component
Override the paint() method of the LWUIT Component
Subclass the J2ME Graphics class wrapping the LWUIT Graphics class and pass all the method calls through
Pass in the wrapped J2ME Graphics implementation to the custom J2ME component's paint method
That third step is an ugly one. Check on the LWUIT mailing list to see if anyone has done this before. From the published APIs I don't see another way to do it.
Edit: The hack added in the question looks better than my hack for an Image. What I have may be better for a general case, but I don't know either LWUIT or J2ME well enough to really say that.
|
Painting javax.microedition.lcdui.Graphics on LWUIT Component
|
What would be the best method for getting a custom element (that is using J2ME native Graphics) painted on LWUIT elements?
The custom element is an implementation from a mapping library that paints its content (for example a Google map) to a Graphics object. How would it be possible to paint the result directly on LWUIT elements (at the moment I am trying to paint it on a Component)?
Is the only way to write a wrapper in the LWUIT package that would expose its internal implementation?
Edit:
John: your solution looks like a lot of engineering :P What I ended up using is the following wrapper:
package com.sun.lwuit;
public class ImageWrapper {
private final Image image;
public ImageWrapper(final Image lwuitBuffer) {
this.image = lwuitBuffer;
}
public javax.microedition.lcdui.Graphics getGraphics() {
return image.getGraphics().getGraphics();
}
}
Now I can get the 'native' Graphics element from LWUIT. Paint on it, effectively painting on an LWUIT image. And I can use the image to paint on a component.
And it still looks like a hack :)
But the real problem is 50kB of code overhead, even after obfuscation. But this is an issue for another post :)
/JaanusSiim
|
[
"I do not think any hacking is necessary. You can subclass the LWTUI Component class and then you can pain whatever you want on to the graphic context of the component. You do not get the native lcdui.Graphics object but an object with a same interface that is easy to use.\nIf you really need to pass a lcdui.Graphics to some underlying library to display its output then I would suggest this:\nSomewhere in your component code (do only when the component contents really need to be changed):\nprivate Image buffer = null; // keep this\n\nint[] bufferArray = new int[desiredWidth * desiredHeight];\njavax.microedition.lcdui.Image bufferImage = \n Image.createEmptyImage(desiredWidth, desiredHeight);\nthirPartyComponent.paint(bufferImage.getGraphics());\nbufferImage.getRGB(bufferArray,0,1,0,0,desiredWidth, desiredHeight);\nbufferImage = null; //no longer needed\nbuffer = Image.createImage(bufferArray, desiredWidth, desiredHeight);\n\nIn the component paint(g) method:\ng.drawImage(0,0, buffer);\n\nBy doing the hack you did you are losing portablity and also sice you are exposing implementation private object you might also break other things.\nHope this helps.\n",
"Based on the javadoc for LWUIT and J2ME and guessing that the custom J2ME class is a Canvas it looks like you would have to:\n\nSubclass LWUIT's Component class wrapping the custom J2ME component\nOverride the paint() method of the LWUIT Component\nSubclass the J2ME Graphics class wrapping the LWUIT Graphics class and pass all the method calls through\nPass in the wrapped J2ME Graphics implementation to the custom J2ME component's paint method\n\nThat third step is an ugly one. Check on the LWUIT mailing list to see if anyone has dome this before. From the published APIs I don't see another way to do it. \nEdit: The hack added in the question looks better than my hack for an Image. What I have may be better for a general case, but I don't know either LWUIT or J2ME well enough to really say that. \n"
] |
[
2,
0
] |
[] |
[] |
[
"java",
"java_me",
"lwuit"
] |
stackoverflow_0000023372_java_java_me_lwuit.txt
|
Q:
RightFax Esoteric error message in .NET 1.1
I have a problem with the RightFax component Interop.RFCOMAPILib.dll, version 1.0.0.0, using VB.NET 1.1.
It works in several environments, but not in Production.
The exception it returns contains only this message: "?".
How can I solve it? I couldn't find any solution in manuals or on the internet.
A:
I remember RightFax and the lack of information on the internet. This is no answer to your question, but one could switch to the built-in Windows 2003 fax server with a dedicated ISDN fax board; it has automatic mail forwarding.
|
RightFax Esoteric error message in .NET 1.1
|
I have a problem with the RightFax component Interop.RFCOMAPILib.dll, version 1.0.0.0, using VB.NET 1.1.
It works in several environments, but not in Production.
The exception it returns contains only this message: "?".
How can I solve it? I couldn't find any solution in manuals or on the internet.
|
[
"remembers rightfax and lack of information on the internet. this is no answer to your question but one could switch to the builtin windows 2003 fax server with dedicated isdn fax board, has automatic mailforwarding\n"
] |
[
0
] |
[] |
[] |
[
".net_1.1",
"asp.net",
"rightfax",
"vb.net"
] |
stackoverflow_0000084916_.net_1.1_asp.net_rightfax_vb.net.txt
|
Q:
Multiple Forms and a Single Update,Will it work?
I need to make an application in .NET CF with different/single forms and a lot of drawing/animation on each form. I would prefer to have a single update function [my own, for state management and so on] so that I can manage the different states, and so that my [J2ME gaming code] will work without many changes. I have come up with some possible scenarios. Which one will work best?
Have a single form and add/delete the controls manually, then use any of the game-looping tricks.
Create different forms with controls and call Update() and Application.DoEvents() in the main thread: while(isAppRunning) { Update(); Application.DoEvents(); }
Create an update-paint loop on each of the forms as required.
Any other ideas.
Please give me suggestions regarding this.
A:
If it's a game then I'd drop most of the forms and work with the bare essentials; work off a bitmap if possible and render that by either overriding the main form's paint method or a control that resides within it (perhaps a panel). That will give you better performance.
The main issue is that the Compact Framework isn't really designed for a lot of UI fun: you don't get double-buffering for free like in the full framework, proper transparency is a bitch to do with WinForm controls, and if you hold onto the UI thread for a little too long you'll get serious rendering glitches. Hell, you might even get those if you do too much on background threads! :O
You're never going to get optimal performance from explicitly calling Application.DoEvents; my rule of thumb is to only use that when trouble-shooting or writing little hacks in the UI.
It might be worth sticking the game on a background thread and then calling .Invoke on the control to marshal back to the main UI thread to update your display leaving the UI with plenty of time to respond while also handling user input.
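A minimal sketch of that idea, assuming a hypothetical GameForm with illustrative method names (not from the original answer; the Compact Framework only offers the plain Invoke(Delegate) overload, hence the EventHandler):
using System;
using System.Threading;
using System.Windows.Forms;

public class GameForm : Form
{
    private volatile bool running = true;

    // Call once after the form is shown.
    public void StartLoop()
    {
        Thread worker = new Thread(new ThreadStart(GameLoop));
        worker.IsBackground = true;
        worker.Start();
    }

    private void GameLoop()
    {
        while (running)
        {
            UpdateState();                          // game logic stays off the UI thread
            Invoke(new EventHandler(RepaintFrame)); // marshal the repaint back to the UI thread
            Thread.Sleep(40);                       // crude frame pacing, roughly 25 fps
        }
    }

    private void RepaintFrame(object sender, EventArgs e)
    {
        Invalidate(); // triggers OnPaint, where the buffered bitmap would be drawn
    }

    private void UpdateState()
    {
        // advance animation/game state here
    }
}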
User input is another reason I avoid normal WinForm controls: as mobile devices generally don't have many keys, it's very useful to be able to remap them, so I generally avoid things like TextBoxes that have preset key events/responses.
I'd also avoid using different forms, as showing a new form can introduce a subtle pause; I generally swap out controls on a main form to avoid this issue when writing business software.
At the end of the day it's probably worth experimenting with various techniques to see what works out best. Also see if you can get any tips from people who develop games on CF, as I generally only do business software.
HTH!
|
Multiple Forms and a Single Update,Will it work?
|
I need to make an application in .NET CF with different/single forms and a lot of drawing/animation on each form. I would prefer to have a single update function [my own, for state management and so on] so that I can manage the different states, and so that my [J2ME gaming code] will work without many changes. I have come up with some possible scenarios. Which one will work best?
Have a single form and add/delete the controls manually, then use any of the game-looping tricks.
Create different forms with controls and call Update() and Application.DoEvents() in the main thread: while(isAppRunning) { Update(); Application.DoEvents(); }
Create an update-paint loop on each of the forms as required.
Any other ideas.
Please give me suggestions regarding this.
|
[
"If its a game then i'd drop most of the forms and work with the bare essentials, work off a bitmap if possible and render that by either overriding the main form's paint method or a control that resides within it (perhaps a panel). That will give you better performance.\nThe main issue is that the compact framework isn't really designed for a lot of UI fun you don't get double-buffering for free like in full framework, proper transparency is a bitch to do with WinForm controls and if you hold onto to the UI thread for a little too long you'll get serious rendering glitches. Hell you might even get those if you do too much on background threads! :O\nYou're never going to get optimal performance from explicitly calling Application.DoEvents, my rule of thumb is to only use that when trouble-shooting or writing little hacks in the UI.\nIt might be worth sticking the game on a background thread and then calling .Invoke on the control to marshal back to the main UI thread to update your display leaving the UI with plenty of time to respond while also handling user input.\nUser input is another reason I avoid normal winform controls, as mobile devices generally don't have many keys it's very useful to be able to remap them so I generally avoid things like TextBoxes that have preset key events/responses.\nI'd also avoid using different forms as showing a new form can provide a subtle pause, I generally swap out controls to a main form to avoid this issue when writing business software.\nAt the end of the day it's probably worth experimenting with various techniques to see what works out for the best. Also see if you can get any tips from people who develop games on CF as I generally only do business software.\nHTH!\n"
] |
[
1
] |
[] |
[] |
[
".net",
"c#",
"compact_framework"
] |
stackoverflow_0000085925_.net_c#_compact_framework.txt
|
Q:
How do I disable a button cell in a WinForms DataGrid?
I have a WinForms application with a DataGridView control and a column of DataGridViewButtonCell cells within that. When I click on one of these buttons, it starts a background task, and I'd like to disable the buttons until that task completes.
I can disable the DataGridView control, but it gives no visual indication that the buttons are disabled. I want the user to see that the buttons are disabled, and to notice that the task has finished when the buttons are enabled again.
Bonus points for a method that allows me to disable the buttons individually, so I can leave one of the buttons enabled while the task runs. (Note that I can't actually give out bonus points.)
A:
Here's the best solution I've found so far. This MSDN article gives the source code for a cell class that adds an Enabled property.
It works reasonably well, but there are two gotchas:
You have to invalidate the grid after setting the Enabled property on any cells. It shows that in the sample code, but I missed it.
It's only a visual change, setting the Enabled property doesn't actually enable or disable the button. The user can still click on it. I could check the enabled property before executing the click event, but it also seemed to be messing up the appearance when the user clicked on it. Instead, I just disabled the entire grid. That works alright for me, but I'd prefer a method that allows me to disable some buttons without disabling the entire grid.
There's a similar sample in the DataGridView FAQ.
A:
You could give this a try (a rough sketch in code follows the steps):
When you click on the cell...
Check to see if the process with the current row identifier is running from a class-level list; if so, exit the cell click event.
Store the row identifier in the class-level list of running processes.
Change the button text to "Running..." or something appropriate.
Attach a basic RunWorkerCompleted event handler to your process (explained shortly).
Call backgroundWorker.RunWorkerAsync(rowIdentifier).
In the DoWork event handler...
Set e.Result = e.Argument (or create an object that will return both the argument and your desired result)
In the RunWorkerCompleted event handler...
Remove the row identifier from the running processes list (e.Result is the identifier).
Change the button text from "Running..." to "Ready"
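Putting those steps together could look something like this rough sketch (the grid field, the button column index, and the list of running rows are illustrative assumptions, not part of the original steps):
using System.Collections.Generic;
using System.ComponentModel;
using System.Windows.Forms;

public class GridForm : Form
{
    private const int ButtonColumn = 0;                        // assumed index of the button column
    private readonly List<int> runningRows = new List<int>();  // row identifiers currently running
    private DataGridView grid;                                 // assumed to be set up elsewhere

    private void grid_CellClick(object sender, DataGridViewCellEventArgs e)
    {
        if (e.ColumnIndex != ButtonColumn || runningRows.Contains(e.RowIndex))
            return;                                            // already running: ignore the click

        runningRows.Add(e.RowIndex);                           // remember the row
        grid.Rows[e.RowIndex].Cells[ButtonColumn].Value = "Running...";

        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += delegate(object s, DoWorkEventArgs args)
        {
            // ... the long-running task goes here ...
            args.Result = args.Argument;                       // hand the row identifier back
        };
        worker.RunWorkerCompleted += delegate(object s, RunWorkerCompletedEventArgs args)
        {
            int row = (int)args.Result;
            runningRows.Remove(row);                           // forget the finished row
            grid.Rows[row].Cells[ButtonColumn].Value = "Ready";
        };
        worker.RunWorkerAsync(e.RowIndex);                     // pass the row identifier
    }
}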
|
How do I disable a button cell in a WinForms DataGrid?
|
I have a WinForms application with a DataGridView control and a column of DataGridViewButtonCell cells within that. When I click on one of these buttons, it starts a background task, and I'd like to disable the buttons until that task completes.
I can disable the DataGridView control, but it gives no visual indication that the buttons are disabled. I want the user to see that the buttons are disabled, and to notice that the task has finished when the buttons are enabled again.
Bonus points for a method that allows me to disable the buttons individually, so I can leave one of the buttons enabled while the task runs. (Note that I can't actually give out bonus points.)
|
[
"Here's the best solution I've found so far. This MSDN article gives the source code for a cell class that adds an Enabled property. \nIt works reasonably well, but there are two gotchas:\n\nYou have to invalidate the grid after setting the Enabled property on any cells. It shows that in the sample code, but I missed it.\nIt's only a visual change, setting the Enabled property doesn't actually enable or disable the button. The user can still click on it. I could check the enabled property before executing the click event, but it also seemed to be messing up the appearance when the user clicked on it. Instead, I just disabled the entire grid. That works alright for me, but I'd prefer a method that allows me to disable some buttons without disabling the entire grid.\n\nThere's a similar sample in the DataGridView FAQ.\n",
"You could give this a try:\nWhen you click on the cell...\n\nCheck to see if the process with the current row identifier is running from a class-level list; if so, exit the cell click event.\nStore the row identifier in the class-level list of running processes.\nChange the button text to \"Running...\" or something appropriate.\nAttach a basic RunWorkerCompleted event handler to your process (explained shortly).\nCall backgroundWorker.RunWorkerAsync(rowIdentifier).\n\nIn the DoWork event handler...\n\nSet e.Result = e.Argument (or create an object that will return both the argument and your desired result)\n\nIn the RunWorkerCompleted event hanlder...\n\nRemove the row identifier from the running processes list (e.Result is the identifier).\nChange the button text from \"Running...\" to \"Ready\" \n\n"
] |
[
3,
1
] |
[] |
[] |
[
"c#",
"user_interface",
"winforms"
] |
stackoverflow_0000086096_c#_user_interface_winforms.txt
|
Q:
How to conditionally enable actions in C# ASP.NET website
Using a configuration file, I want to be able to turn on and off things like (third-party) logging and using a cache in a C# website. The solution should not be restricted to logging and caching in particular but be more general, so I can use it for other things as well.
I have a configuration XML file in which I can assert that logging and caching should be turned on or off (it could also be in the Web.config, that's not the point right now), which will result in, for example, a bool logging and a bool caching that are true or false.
The question is about this part:
What I can do is prepend every logging/caching related statement with if (logging) and if (caching).
What is a better way of programming this? Is there also a programming term for this kind of problem? Maybe attributes are also a way to go?
A:
Why not just use the web.config and the System.Configuration functionality that already exists?
Your web app is going to parse web.config on every page load anyway, so the overhead involved in having yet another XML config file seems overkill when you can just define your own section on the existing configuration.
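As an illustration, the flags could live in the standard <appSettings> section and be read like this (a sketch; the key names are made up):
using System.Configuration;

public static class FeatureFlags
{
    // Assumes Web.config contains:
    //   <appSettings>
    //     <add key="EnableLogging" value="true"/>
    //     <add key="EnableCaching" value="false"/>
    //   </appSettings>
    public static bool LoggingEnabled
    {
        get { return ConfigurationManager.AppSettings["EnableLogging"] == "true"; }
    }

    public static bool CachingEnabled
    {
        get { return ConfigurationManager.AppSettings["EnableCaching"] == "true"; }
    }
}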
A:
I'm curious what kind of logging/caching statements you have. If you have some class that is doing WriteLog or StoreCache or whatever... why not just put the if(logging) in the WriteLog method? It seems like if you put all of your logging/caching related methods into one class, and that class knew whether logging/caching was on, then you could save yourself a bunch of if statements at each call site.
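A sketch of that suggestion, with illustrative names:
public static class Log
{
    public static bool Enabled; // set once at startup from configuration

    public static void WriteLog(string message)
    {
        if (!Enabled)
            return; // the single, centralized check

        // ... forward to the real (third-party) logger here ...
    }
}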
A:
You could check out the Microsoft Enterprise Library. It features stuff like logging and caching. The logging is made easy by the fact you always include the logging code but the actual logging beneath it is controlled by the settings.
http://msdn.microsoft.com/en-us/library/cc467894.aspx
You can find other cool stuff in the patterns and practices group.
A:
Consult http://msdn.microsoft.com/en-us/library/ms178606.aspx for specifics regarding configuring cache.
A:
I agree with foxxtrot; you want to use the web.config and add in an appsetting or two to hold the values.
Then for the implementation of the checking, yes, simply use an if to see if you need to do the action. I highly recommend centralizing your logging classes to prevent duplication of code.
A:
You could use a dependency injection container and have it load different logging and caching objects based on configuration. If you wanted to enable logging, you would specify an active Logging object/provider in config; if you wanted to then disable it, you could have the DI inject a "dummy" logging provider that did not log anything but returned right away.
I would lean toward a simpler design such as the one proposed by @foxxtrot, but runtime swapping out of utility components is one of the things that DI can do for you that is kind of nice.
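The "dummy" provider described here is essentially a null object; as a sketch (ILogger and NullLogger are illustrative names, not from the post):
public interface ILogger
{
    void Write(string message);
}

public class NullLogger : ILogger
{
    public void Write(string message)
    {
        // intentionally does nothing; injected when logging is disabled
    }
}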
|
How to conditionally enable actions in C# ASP.NET website
|
Using a configuration file I want to enable myself to turn on and off things like (third party) logging and using a cache in a C# website. The solution should not be restricted to logging and caching in particular but more general, so I can use it for other things as well.
I have a configuration xml file in which I can assert that logging and caching should be turned on or off (it could also be in the Web.Config, that's not the point right now) which will result in for example a bool logging and a bool caching that are true or false.
The question is about this part:
What I can do is prepend every logging/caching related statement with if (logging) and if (caching).
What is better way of programming this? Is there also a programming term for this kind of problem? Maybe attributes are also a way to go?
|
[
"Why not just use the web.config and the System.Configuration functionality that already exists?\nYour web app is going to parse web.config on every page load anyway, so the overhead involved in having yet another XML config file seems overkill when you can just define your own section on the existing configuration.\n",
"I'm curious what kind of logging/caching statements you have? If you have some class that is doing WriteLog or StoreCahce or whatever... why not just put the if(logging) in the WriteLog method. It seems like if you put all of your logging caching related methods into once class and that class knew whether logging/caching was on, then you could save your self a bunch of If statements at each instance.\n",
"You could check out the Microsoft Enterprise Library. It features stuff like logging and caching. The logging is made easy by the fact you always include the logging code but the actual logging beneath it is controlled by the settings.\nhttp://msdn.microsoft.com/en-us/library/cc467894.aspx\nYou can find other cool stuff in the patterns and practices group.\n",
"Consult http://msdn.microsoft.com/en-us/library/ms178606.aspx for specifics regarding configuring cache.\n",
"I agree with foxxtrot, you want to use the web.config and add in a appsetting or two to hold the values.\nThen for the implementation on checking, yes, simply use an if to see if you need to do the action. I highly recommend centralizing your logging classes to prevent duplication of code.\n",
"You could use a dependency injection container and have it load different logging and caching objects based on configuration. If you wanted to enable logging, you would specify an active Logging object/provider in config; if you wanted to then disable it, you could have the DI inject a \"dummy\" logging provider that did not log anything but returned right away.\nI would lean toward a simpler design such as the one proposed by @foxxtrot, but runtime swapping out of utility components is one of the things that DI can do for you that is kind of nice.\n"
] |
[
6,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"c#"
] |
stackoverflow_0000086204_c#.txt
|
Q:
SharePoint - How do I insert new items using the list web service?
I have a list with 2 text fields, and a choice field. How do I use the Lists.asmx web service to insert a new item? I can make a web reference to the lists.asmx service, so you can assume that this is known.
I would like a complete example including code and the XML for the CAML query. Ideally the sample would use C#.
A:
Using the Lists web service to insert an item into a SharePoint list can indeed be tricky. Since this method is of the form XML in, XML out, it can be hard to get the parameters right.
First you should take a look at the list definition. It can be retrieved with the method GetList(), as shown below:
XmlNode listXml = sharePointLists.GetList(listName);
File.WriteAllText("listdefinition.xml", listXml.OuterXml);
Important here are the names of the fields and their data types. Field names will not necessarily be the same as the ones you see in the SharePoint GUI. A good example is the Title field, which is used for the first field of the list.
Now that you know that, you can create the query to go to SharePoint. An example:
<Batch OnError="Continue">
<Method ID="1" Cmd="New">
<Field Name="Title">Abcdef</Field>
<Field Name="Project_x0020_code">999050</Field>
<Field Name="Status">Open</Field>
</Method>
</Batch>
The Batch element is the root element of the XML. Inside you can put different Methods. These should get a unique ID (which is used to report errors back to you) and a command, which can for instance be "New" or "Update". Inside the Method, you put Field elements that specify the value for each field. For instance, the Title field gets the value "Abcdef". Be careful to use the exact name as it is returned by GetList().
To execute the query on SharePoint, use the UpdateListItems() method:
XmlNode result = sharePointLists.UpdateListItems(listDefinition.Name, updates);
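For completeness, the updates node could be built in code like this (a sketch; sharePointLists is the web-reference proxy, and the field names come from the example above):
using System.Xml;

XmlDocument doc = new XmlDocument();
XmlElement updates = doc.CreateElement("Batch");
updates.SetAttribute("OnError", "Continue");
updates.InnerXml =
    "<Method ID=\"1\" Cmd=\"New\">" +
    "<Field Name=\"Title\">Abcdef</Field>" +
    "<Field Name=\"Project_x0020_code\">999050</Field>" +
    "<Field Name=\"Status\">Open</Field>" +
    "</Method>";

XmlNode result = sharePointLists.UpdateListItems(listName, updates);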
The return value is an XML fragment containing the status of each update. For instance:
<Results xmlns="http://schemas.microsoft.com/sharepoint/soap/">
<Result ID="1,New">
<ErrorCode>0x00000000</ErrorCode>
<z:row ows_ContentTypeId="0x010036F3F587127F1A44B8BA3FEFED4733C6"
ows_Title="Abcdef"
ows_Project_x0020_code="999050"
ows_Status="Open"
ows_LinkTitleNoMenu="Abcdef"
ows_LinkTitle="Abcdef"
ows_ID="1005"
...
xmlns:z="#RowsetSchema" />
</Result>
</Results>
You can parse this and look at the ErrorCode to see which methods failed.
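Parsing can be as simple as walking the Result nodes (a sketch; the namespace URI is the one visible in the response above):
XmlNamespaceManager ns = new XmlNamespaceManager(result.OwnerDocument.NameTable);
ns.AddNamespace("sp", "http://schemas.microsoft.com/sharepoint/soap/");

foreach (XmlNode node in result.SelectNodes("sp:Result", ns))
{
    string errorCode = node.SelectSingleNode("sp:ErrorCode", ns).InnerText;
    if (errorCode != "0x00000000") // anything else signals a failed method
        Console.WriteLine("Method " + node.Attributes["ID"].Value + " failed: " + errorCode);
}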
In practice I have created a wrapper class that takes care of all the dirty details for me. Unfortunately this is owned by my employer so I cannot share it with you.
This wrapper class is part of an internal utility that is used to retrieve information from our project database and post it to SharePoint. Since it was developed during company time, I'm not allowed to post it here.
|
SharePoint - How do I insert new items using the list web service?
|
I have a list with 2 text fields, and a choice field. How do I use the Lists.asmx web service to insert a new item? I can make a web reference to the lists.asmx service, so you can assume that this is known.
I would like a complete example including code and the XML for the CAML query. Ideally the sample would use C#.
|
[
"Using the Lists web service to insert item into a SharePoint list can indeed be tricky. Since this method is of the form: XML in, XML out, it can be hard to get the parameters right. \nFirst you should take a look at the list definition. It can be retrieved with the method GetList(), as shown below:\nXmlNode listXml = sharePointLists.GetList(listName);\nFile.WriteAllText(\"listdefinition.xml\", listXml.OuterXml);\n\nImportant here are the names of the fields and their data types. Field names will never be the same as the ones you see in the SharePoint GUI. A good example is the Title field which is used for the first field of the list.\nNow that you know that, you can create the query to go to SharePoint. An example:\n<Batch OnError=\"Continue\">\n <Method ID=\"1\" Cmd=\"New\">\n <Field Name=\"Title\">Abcdef</Field>\n <Field Name=\"Project_x0020_code\">999050</Field>\n <Field Name=\"Status\">Open</Field> \n </Method>\n</Batch>\n\nThe Batch element is the root element of the XML. Inside you can put different Methods. These should get a unique ID (which is used to report errors back to you) and a command, which can for instance be \"New\" or \"Update\". Inside the Method, you put Field elements that specify the value for each field. For instance, the Title field gets the value \"Abcdef\". Be careful to use the exact name as it is returned by GetList().\nTo execute the query on SharePoint, use the UpdateListItems() method:\nXmlNode result = sharePointLists.UpdateListItems(listDefinition.Name, updates);\n\nThe return value is an XML fragment containing the status of each update. For instance:\n<Results xmlns=\"http://schemas.microsoft.com/sharepoint/soap/\">\n <Result ID=\"1,New\">\n <ErrorCode>0x00000000</ErrorCode>\n <z:row ows_ContentTypeId=\"0x010036F3F587127F1A44B8BA3FEFED4733C6\" \n ows_Title=\"Abcdef\" \n ows_Project_x0020_code=\"999050\" \n ows_Status=\"Open\" \n ows_LinkTitleNoMenu=\"Abcdef\" \n ows_LinkTitle=\"Abcdef\" \n ows_ID=\"1005\" \n ... \n xmlns:z=\"#RowsetSchema\" />\n </Result>\n</Results>\n\nYou can parse this and look at the ErrorCode to see which methods failed.\nIn practice I have created a wrapper class that takes care of all the dirty details for me. Unfortunately this is owned by my employer so I cannot share it with you.\nThis wrapper class is part of an internal utility that is used to retrieve information from our project database and post it to SharePoint. Since it was developed during company time, I'm not allowed to post it here.\n"
] |
[
16
] |
[] |
[] |
[
"c#",
"service",
"sharepoint",
"xml"
] |
stackoverflow_0000085392_c#_service_sharepoint_xml.txt
|
Q:
Why is setInterval calling a function with random arguments?
So, I am seeing a curious problem. If I have a function
// counter wraps around to beginning eventually, omitted for clarity.
var counter;
function cycleCharts(chartId) {
// chartId should be undefined when called from setInterval
console.log('chartId: ' + chartId);
if(typeof chartId == 'undefined' || chartId < 0) {
next = counter++;
}
else {
next = chartId;
}
// ... do stuff to display the next chart
}
This function can be called explicitly by user action, in which case chartId is passed in as an argument, and the selected chart is shown; or it can be in autoplay mode, in which case it's called by a setInterval which is initialized by the following:
var cycleId = setInterval(cycleCharts, 10000);
The odd thing is, I'm actually seeing the cycleCharts() get a chartId argument even when it's called from setInterval! The setInterval doesn't even have any parameters to pass along to the cycleCharts function, so I'm very baffled as to why chartId is not undefined when cycleCharts is called from the setInterval.
A:
setInterval is feeding cycleCharts actual timing data (so one can work out how late it actually ran and use that to produce a less stilted response, mostly practical in animation).
You want:
var cycleId = setInterval(function(){ cycleCharts(); }, 10000);
(This behavior may not be standardized, so don't rely on it too heavily.)
A:
It tells you how many milliseconds late the callback is called.
A:
var cycleId = setInterval(cycleCharts, 10000, 4242);
From the third parameter and onwards, they get passed into the function, so in my example you send 4242 as the chartId. I know it might not be the answer to the question you posed, but it might be the solution to your problem. I think the value it gets is just random from whatever lies on the stack at the time of passing/calling the method.
|
Why is setInterval calling a function with random arguments?
|
So, I am seeing a curious problem. If I have a function
// counter wraps around to beginning eventually, omitted for clarity.
var counter;
function cycleCharts(chartId) {
// chartId should be undefined when called from setInterval
console.log('chartId: ' + chartId);
if(typeof chartId == 'undefined' || chartId < 0) {
next = counter++;
}
else {
next = chartId;
}
// ... do stuff to display the next chart
}
This function can be called explicitly by user action, in which case chartId is passed in as an argument, and the selected chart is shown; or it can be in autoplay mode, in which case it's called by a setInterval which is initialized by the following:
var cycleId = setInterval(cycleCharts, 10000);
The odd thing is, I'm actually seeing the cycleCharts() get a chartId argument even when it's called from setInterval! The setInterval doesn't even have any parameters to pass along to the cycleCharts function, so I'm very baffled as to why chartId is not undefined when cycleCharts is called from the setInterval.
|
[
"setInterval is feeding cycleCharts actual timing data ( so one can work out the actual time it ran and use to produce a less stilted response, mostly practical in animation )\nyou want \n var cycleId = setInterval(function(){ cycleCharts(); }, 10000); \n\n( this behavior may not be standardized, so don't rely on it too heavily ) \n",
"It tells you how many milliseconds late the callback is called.\n",
"var cycleId = setInterval(cycleCharts, 10000, 4242);\nFrom the third parameter and onwards - they get passed into the function so in my example you send 4242 as the chartId. I know it might not be the answer to the question you posed, but it might the the solution to your problem? I think the value it gets is just random from whatever lies on the stack at the time of passing/calling the method.\n"
] |
[
4,
3,
1
] |
[] |
[] |
[
"firefox",
"javascript"
] |
stackoverflow_0000086269_firefox_javascript.txt
|
Q:
Beginner Digital Synth
I'm looking into writing an audio synthesizer in Java, and was wondering if anybody has any advice or good resources for writing such a program. I'm looking for info on generating raw sound waves, how to output them into a usable form (playing over speakers), as well as general theory on the topic. Thanks guys.
A:
This problem is basically about mapping functions to arrays of numbers. A language that supports first-class functions would come in really handy here.
Check out
http://www.harmony-central.com/Computer/Programming and
http://www.developer.com/java/other/article.php/3071021 for some Java-related info.
If you don't know the basic concepts of encoding sound data, then read http://en.wikipedia.org/wiki/Sampling_rate
The canonical WAVE format is very simple, see http://www.lightlink.com/tjweber/StripWav/Canon.html. A header (first 44 bytes) + the wave-data. You don't need any library to implement that.
In C/C++, the corresponding data structure would look something like this:
typedef struct _WAVstruct
{
char headertag[4];
unsigned int remnantlength;
char fileid[4];
char fmtchunktag[4];
unsigned int fmtlength;
unsigned short fmttag;
unsigned short channels;
unsigned int samplerate;
unsigned int bypse;
unsigned short ba;
unsigned short bipsa;
char datatag[4];
unsigned int datalength;
void* data; //<--- that's where the raw sound-data goes
}* WAVstruct;
I'm not sure about Java. I guess you'll have to substitute "struct" with "class" and "void* data" with "char[] data" or "short[] data" or "int[] data", corresponding to the number of bits per sample, as defined in the field bipsa.
To fill it with data, you would use something like that in C/C++:
int data2WAVstruct(unsigned short channels, unsigned short bipsa, unsigned int samplerate, unsigned int datalength, void* data, WAVstruct result)
{
result->headertag[0] = 'R';
result->headertag[1] = 'I';
result->headertag[2] = 'F';
result->headertag[3] = 'F';
result->remnantlength = 44 + datalength - 8;
result->fileid[0] = 'W';
result->fileid[1] = 'A';
result->fileid[2] = 'V';
result->fileid[3] = 'E';
result->fmtchunktag[0] = 'f';
result->fmtchunktag[1] = 'm';
result->fmtchunktag[2] = 't';
result->fmtchunktag[3] = ' ';
result->fmtlength = 0x00000010;
result->fmttag = 1;
result->channels = channels;
result->samplerate = samplerate;
result->bipsa = bipsa;
result->ba = channels*bipsa / 8;
result->bypse = samplerate*result->ba;
result->datatag[0] = 'd';
result->datatag[1] = 'a';
result->datatag[2] = 't';
result->datatag[3] = 'a';
result->datalength = datalength;
result->data = data; // <--- that's were the data comes in
return 0; // an error code, not implemented, yet ...; in Java: return result
}
Again, I'm not sure about Java but the conversion should be straightforward if you convert the void-pointer to an array corresponding to the bitrate.
Then simply write the entire structure to a file to get a playable wave file.
A:
Check out Frinika. It's a full-featured music workstation implemented in Java (open source). Using the API, you can run midi events through the synthesizer, read the raw sound output, and write it to a WAV file (see source code link below).
Additional information:
Frinika Developer Area
Source code for midi renderer tool
A:
While studying for my degree, my dissertation project was the creation of a Java based modular synthesizer, and the University at which I studied saw fit to make my report publicly available:
A Software Based Modular Synthesiser in Java
A:
I don't know if that helps, but if you can use MIDI for anything, you should check out JFugue.
|
Beginner Digital Synth
|
I'm looking into writing an audio synthesizer in Java, and was wondering if anybody has any advice or good resources for writing such a program. I'm looking for info on generating raw sound waves, how to output them into a usable form (playing over speakers), as well as general theory on the topic. Thanks guys.
|
[
"\nThis problem is basically about mapping functions to arrays of numbers. A language that supports first-class functions would come in really handy here.\nCheck out\nhttp://www.harmony-central.com/Computer/Programming and\nhttp://www.developer.com/java/other/article.php/3071021 for some Java-related info.\nIf you don't know the basic concepts of encoding sound data, then read http://en.wikipedia.org/wiki/Sampling_rate\nThe canonical WAVE format is very simple, see http://www.lightlink.com/tjweber/StripWav/Canon.html. A header (first 44 bytes) + the wave-data. You don't need any library to implement that.\n\nIn C/C++, the corresponding data structure would look something like this:\ntypedef struct _WAVstruct\n{\n char headertag[4];\n unsigned int remnantlength;\n char fileid[4];\n\n char fmtchunktag[4];\n unsigned int fmtlength;\n unsigned short fmttag;\n unsigned short channels;\n unsigned int samplerate;\n unsigned int bypse;\n unsigned short ba;\n unsigned short bipsa;\n\n char datatag[4];\n unsigned int datalength;\n\n void* data; //<--- that's where the raw sound-data goes\n}* WAVstruct;\n\nI'm not sure about Java. I guess you'll have to substitute \"struct\" with \"class\" and \"void* data\" with \"char[] data\" or \"short[] data\" or \"int[] data\", corresponding to the number of bits per sample, as defined in the field bipsa.\nTo fill it with data, you would use something like that in C/C++:\nint data2WAVstruct(unsigned short channels, unsigned short bipsa, unsigned int samplerate, unsigned int datalength, void* data, WAVstruct result)\n{\n result->headertag[0] = 'R';\n result->headertag[1] = 'I';\n result->headertag[2] = 'F';\n result->headertag[3] = 'F';\n result->remnantlength = 44 + datalength - 8;\n result->fileid[0] = 'W';\n result->fileid[1] = 'A';\n result->fileid[2] = 'V';\n result->fileid[3] = 'E';\n\n result->fmtchunktag[0] = 'f';\n result->fmtchunktag[1] = 'm'; \n result->fmtchunktag[2] = 't';\n result->fmtchunktag[3] = ' ';\n result->fmtlength = 0x00000010;\n result->fmttag = 1;\n result->channels = channels;\n result->samplerate = samplerate;\n result->bipsa = bipsa;\n result->ba = channels*bipsa / 8;\n result->bypse = samplerate*result->ba;\n\n result->datatag[0] = 'd';\n result->datatag[1] = 'a';\n result->datatag[2] = 't';\n result->datatag[3] = 'a';\n result->datalength = datalength;\n\n result->data = data; // <--- that's were the data comes in\n\n return 0; // an error code, not implemented, yet ...; in Java: return result\n}\n\nAgain, I'm not sure about Java but the conversion should be straightforward if you convert the void-pointer to an array corresponding to the bitrate. \nThen simply write the entire structure to a file to get a playable wave file.\n",
"Check out Frinika. It's a full-featured music workstation implemented in Java (open source). Using the API, you can run midi events through the synthesizer, read the raw sound output, and write it to a WAV file (see source code link below).\nAdditional information:\n\nFrinika Developer Area\nSource code for midi renderer tool\n\n",
"While studying for my degree, my dissertation project was the creation of a Java based modular synthesizer, and the University at which I studied saw fit to make my report publicly available:\nA Software Based Modular Synthesiser in Java\n",
"I dont't know if that helps, but if you can use MIDI for anything, you should check out JFuge.\n"
] |
[
6,
2,
2,
1
] |
[] |
[] |
[
"java",
"synthesizer"
] |
stackoverflow_0000036567_java_synthesizer.txt
|
Q:
'Looser' typing in C# by casting down the inheritance tree
The question I want to ask is thus:
Is casting down the inheritance tree (i.e. towards a more specialised class) from inside an abstract class excusable, or even a good thing, or is it always a poor choice with better options available?
Now, the example of why I think it can be used for good.
I recently implemented Bencoding from the BitTorrent protocol in C#. A simple enough problem, how to represent the data. I chose to do it this way,
We have an abstract BItem class, which provides some basic functionality, including the static BItem Decode(string) that is used to decode a Bencoded string into the necessary structure.
There are also four derived classes, BString, BInteger, BList and BDictionary, representing the four different data types that can be encoded. Now, here is the tricky part. BList and BDictionary have this[int] and this[string] accessors respectively to allow access to the array-like qualities of these data types.
The potentially horrific part is coming now:
BDictionary torrent = (BDictionary) BItem.DecodeFile("my.torrent");
int filelength = (BInteger)((BDictionary)((BList)((BDictionary)
torrent["info"])["files"])[0])["length"];
Well, you get the picture... Ouch, that's hard on the eyes, not to mention the brain. So, I introduced something extra into the abstract class:
public BItem this[int index]
{
get { return ((BList)this)[index]; }
}
public BItem this[string index]
{
get { return ((BDictionary)this)[index]; }
}
Now we could rewrite that old code as:
BDictionary torrent = (BDictionary)BItem.DecodeFile("my.torrent");
int filelength = (BInteger)torrent["info"]["files"][0]["length"];
Wow, hey presto, MUCH more readable code. But did I just sell part of my soul by baking knowledge of the subclasses into the abstract class?
EDIT: In response to some of the answers coming in, you're completely off track for this particular question, since the structure is variable; for instance my example of torrent["info"]["files"][0]["length"] is valid, but so is torrent["announce-list"][0][0], and both would be in 90% of torrent files out there. Generics isn't the way to go, with this problem at least :(. Have a click through to the spec I linked, it's only 4 small dot-points large.
A:
I think I would make the this[int] and this[string] accessors virtual and override them in BList/BDictionary. Classes where the accessors do not make sense should throw a NotSupportedException() (perhaps via a default implementation in BItem).
That makes your code work in the same way and gives you a more readable error in case you should write
(BInteger)torrent["info"][0]["files"]["length"];
by mistake.
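A sketch of that layout, simplified from the question's classes:
using System;
using System.Collections.Generic;

public abstract class BItem
{
    public virtual BItem this[int index]
    {
        get { throw new NotSupportedException("This BItem is not a BList."); }
    }

    public virtual BItem this[string key]
    {
        get { throw new NotSupportedException("This BItem is not a BDictionary."); }
    }
}

public class BList : BItem
{
    private readonly List<BItem> items = new List<BItem>();

    public override BItem this[int index]
    {
        get { return items[index]; } // only lists support numeric indexing
    }
}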
A:
You really should not access any derived classes from the base class, as it pretty much breaks the idea of OOP. Readability certainly goes a long way, but I wouldn't trade it for reusability. Consider the case when you'll need to add another subclass: you'll also need to update the base class accordingly.
A:
If file length is something you retrieve often, why not implement a property in the BDictionary (?) class... so that you code becomes:
BDictionary torrent = BItem.DecodeFile("my.torrent");
int filelength = torrent.FileLength;
That way the implementation details are hidden from the user.
A:
The way I see it, not all BItems are collections, thus not all BItems have indexers, so the indexer shouldn't be in BItem. I would derive another abstract class from BItem, let's name it BCollection, and put the indexers there, something like:
abstract class BCollection : BItem {
public BItem this[int index] {get;}
public BItem this[string index] {get;}
}
and make BList and BDictionary inherit from BCollection.
Or you could go the extra mile and make BCollection a generic class.
A:
My recommendation would be to introduce more abstractions. I find it confusing that a BItem has a DecodeFile() which returns a BDictionary. This may be a reasonable thing to do in the torrent domain, I don't know.
However, I would find an api like the following more reasonable:
BFile torrent = BFile.DecodeFile("my.torrent");
int filelength = torrent.Length;
A:
Did you consider parsing a simple "path" so you could write it this way:
BDictionary torrent = BItem.DecodeFile("my.torrent");
int filelength = (int)torrent.Fetch("info.files.0.length");
Perhaps not the best way, but the readability increases (a little).
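A sketch of such a Fetch, assuming the base-class indexers discussed in this question (numeric segments index a BList, anything else a BDictionary):
// Inside BItem:
public BItem Fetch(string path)
{
    BItem current = this;
    foreach (string segment in path.Split('.'))
    {
        int index;
        if (int.TryParse(segment, out index))
            current = current[index];    // numeric segment: treat as a BList index
        else
            current = current[segment];  // anything else: treat as a BDictionary key
    }
    return current;
}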
A:
If you have complete control of your codebase and your thought-process, by all means do.
If not, you'll regret this the day some new person injects a BItem derivation that you didn't see coming into your BList or BDictionary.
If you have to do this, at least wrap it (control access to the list) in a class which has strongly typed method signatures.
BString GetString(BInteger);
SetString(BInteger, BString);
Accept and return BStrings even though you internally store it in a BList of BItems. (let me split before I make my 2 B or not 2 B)
A:
Hmm. I would actually argue that the first line of code is more readable than the second: it takes a little longer to figure out what's going on in it, but it's more apparent that you're treating objects as BList or BDictionary. Applying the methods to the abstract class hides that detail, which can make it harder to figure out what your method is actually doing.
A:
If you introduce generics, you can avoid casting.
class DecodedTorrent : BDictionary<BDictionary<BList<BDictionary<BInteger>>>>
{
}
DecodedTorrent torrent = BItem.DecodeFile("mytorrent");
int x = torrent["info"]["files"][0]["length"];
Hmm, but that probably won't work, as the types may depend on the path you take through the structure.
A:
Is it just me
BDictionary torrent = BItem.DecodeFile("my.torrent");
int filelength = (BInteger)((BDictionary)((BList)((BDictionary) torrent["info"])["files"])[0])["length"];
You don't need the BDictionary cast 'torrent' is declared as a BDictionary
public BItem this[int index] { get { return ((BList)this)[index]; } }
public BItem this[string index] { get { return ((BDictionary)this)[index]; } }
These don't achieve the desired result, as the return type is still the abstract version, so you still have to cast.
The rewritten code would have to be
BDictionary torrent = BItem.DecodeFile("my.torrent");
int filelength = (BInteger)((BList)((BDictionary)torrent["info"]["files"])[0])["length"];
Which is just as bad as the first lot.
|
'Looser' typing in C# by casting down the inheritance tree
|
The question I want to ask is thus:
Is casting down the inheritance tree (i.e. towards a more specialised class) from inside an abstract class excusable, or even a good thing, or is it always a poor choice with better options available?
Now, the example of why I think it can be used for good.
I recently implemented Bencoding from the BitTorrent protocol in C#. A simple enough problem, how to represent the data. I chose to do it this way,
We have an abstract BItem class, which provides some basic functionality, including the static BItem Decode(string) that is used to decode a Bencoded string into the necessary structure.
There are also four derived classes, BString, BInteger, BList and BDictionary, representing the four different data types that can be encoded. Now, here is the tricky part. BList and BDictionary have this[int] and this[string] accessors respectively to allow access to the array-like qualities of these data types.
The potentially horrific part is coming now:
BDictionary torrent = (BDictionary) BItem.DecodeFile("my.torrent");
int filelength = (BInteger)((BDictionary)((BList)((BDictionary)
torrent["info"])["files"])[0])["length"];
Well, you get the picture... Ouch, that's hard on the eyes, not to mention the brain. So, I introduced something extra into the abstract class:
public BItem this[int index]
{
get { return ((BList)this)[index]; }
}
public BItem this[string index]
{
get { return ((BDictionary)this)[index]; }
}
Now we could rewrite that old code as:
BDictionary torrent = (BDictionary)BItem.DecodeFile("my.torrent");
int filelength = (BInteger)torrent["info"]["files"][0]["length"];
Wow, hey presto, MUCH more readable code. But did I just sell part of my soul by baking knowledge of the subclasses into the abstract class?
EDIT: In response to some of the answers coming in, you're completely off track for this particular question, since the structure is variable; for instance my example of torrent["info"]["files"][0]["length"] is valid, but so is torrent["announce-list"][0][0], and both would be in 90% of torrent files out there. Generics isn't the way to go, with this problem at least :(. Have a click through to the spec I linked, it's only 4 small dot-points large.
|
[
"I think I would make the this[int] and this[string] accessors virtual and override them in BList/BDictionary. Classes where the accessors does not make sense should cast a NotSupportedException() (perhaps by having a default implementation in BItem).\nThat makes your code work in the same way and gives you a more readable error in case you should write\n (BInteger)torrent[\"info\"][0][\"files\"][\"length\"];\n\nby mistake.\n",
"You really should not access any derived classes from the base class as it pretty much breaks the idea of OOP. Readibility certainly goes a long way, but I wouldn't trade it for reusability. Consider the case when you'll need to add another subclass - you'll also need to update the base class accordingly.\n",
"If file length is something you retrieve often, why not implement a property in the BDictionary (?) class... so that you code becomes:\nBDictionary torrent = BItem.DecodeFile(\"my.torrent\");\nint filelength = torrent.FileLength;\n\nThat way the implementation details are hidden from the user.\n",
"The way I see it, not all BItems are collections, thus not all BItems have indexers, so the indexer shouldn't be in BItem. I would derive another abstract class from BItem, let's name it BCollection, and put the indexers there, something like:\nabstract class BCollection : BItem {\n\n public BItem this[int index] {get;}\n public BItem this[string index] {get;}\n}\n\nand make BList and BDictionary inherit from BCollection.\nOr you could go the extra mile and make BCollection a generic class.\n",
"My recommendation would be to introduce more abstractions. I find it confusing that a BItem has a DecodeFile() which returns a BDictionary. This may be a reasonable thing to do in the torrent domain, I don't know.\nHowever, I would find an api like the following more reasonable:\nBFile torrent = BFile.DecodeFile(\"my.torrent\");\nint filelength = torrent.Length;\n\n",
"Did you concider parsing a simple \"path\" so you could write it this way:\n\nBDictionary torrent = BItem.DecodeFile(\"my.torrent\");\nint filelength = (int)torrent.Fetch(\"info.files.0.length\");\n\nPerhaps not the best way, but the readability increases(a little)\n",
"\nIf you have complete control of your codebase and your thought-process, by all means do.\nIf not, you'll regret this the day some new person injects a BItem derivation that you didn't see coming into your BList or BDictionary.\n\nIf you have to do this, atleast wrap it (control access to the list) in a class which has strongly typed method signatures. \nBString GetString(BInteger);\nSetString(BInteger, BString);\n\nAccept and return BStrings even though you internally store it in a BList of BItems. (let me split before I make my 2 B or not 2 B)\n",
"Hmm. I would actually argue that the first line of coded is more readable than the second - it takes a little longer to figure out what's going on it, but its more apparant that you're treating objects as BList or BDictionary. Applying the methods to the abstract class hides that detail, which can make it harder to figure out what your method is actually doing.\n",
"If you introduce generics, you can avoid casting.\nclass DecodedTorrent : BDictionary<BDictionary<BList<BDictionary<BInteger>>>>\n{\n}\n\n\nDecodedTorrent torrent = BItem.DecodeFile(\"mytorrent\");\nint x = torrent[\"info\"][\"files\"][0][\"length\"];\n\nHmm, but that probably won't work, as the types may depend on the path you take through the structure.\n",
"Is it just me\nBDictionary torrent = BItem.DecodeFile(\"my.torrent\");int filelength = (BInteger)((BDictionary)((BList)((BDictionary) torrent[\"info\"])[\"files\"])[0])[\"length\"];\n\nYou don't need the BDictionary cast 'torrent' is declared as a BDictionary\npublic BItem this[int index]{ get { return ((BList)this)[index]; }}public BItem this[string index]{ get { return ((BDictionary)this)[index]; }}\n\nThese don't acheive the desired result as the return type is still the abstrat version, so you still have to cast.\nThe rewritten code would have to be\nBDictionary torrent = BItem.DecodeFile(\"my.torrent\");int filelength = (BInteger)((BList)((BDictionary)torrent[\"info\"][\"files\"])[0])[\"length\"];\n\nWhich is the just as bad as the first lot\n"
] |
[
5,
3,
1,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"bittorrent",
"c#",
"data_structures",
"inheritance"
] |
stackoverflow_0000081972_bittorrent_c#_data_structures_inheritance.txt
|
Q:
Inline displayed blocks form a single word in IE
The problem is that I have several "h2" tags that have a display:inline attribute, and on Microsoft's wonderful browsers the space between them doesn't appear. Is there a workaround?
I know there is a "non-breaking space" in HTML but I was wondering if one can make a space that may be a "breaking space".
--- edit ---
The website is http://newstoday.ro and the behaviour is in the footer. If the site is opened in IE the list is continuous, even though there is a space between the words. Please don't comment on the rest of the code, as I am just the plumber in this situation. Also, the headings are a must, as the client thinks they are better for SEO.
A:
I can't think of a rationale for why you're wanting h2's to display inline. In fact, why would you want two headers to read together? Think of the way it should be read. Do you want it to read:
"Header one header two"
or:
"Header One"
"Header Two"
If it's the first way, then it's probably your HTML that's messed up. If it's the second, then you should probably think of its positioning rather than changing its behavior, and utilize other CSS methods like float and position.
A:
You can just use a regular space, but add "margin-right: _px" to the h2 CSS definition to adjust the spacing between tags. Negative values are allowed too.
A:
Have you tried setting the "margin" property? Not sure if that directly applies to your question.
A:
Throwing an &nbsp; in there seems to create a space:
<html>
<head><title>Blah</title></head>
<body>
<h2 style="display:inline;">Something</h2>
&nbsp;
<h2 style="display:inline;">Something Else</h2>
</body>
</html>
In this example, you actually end up with 2 spaces, so you might want to eliminate the whitespace between the tags and the &nbsp; if you require only one space. Another option would be to add a left/right margin to the header element.
A:
Apply the style margin: 0 0.5em to both headers - adjust 0.5 to suit (maybe 0.25 or 0.75 is better; also the first 0 is top/bottom margin, adjust as relevant).
Note: Since you want a character space, you want em not px as suggested earlier.
Complete example code...
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Inline Header Example</title>
<style type="text/css">
h2
{
display: inline;
margin: 0 0.5em;
}
</style>
</head>
<body>
<h2>First Header</h2>
<h2>Second Header</h2>
</body>
</html>
|
Inline displayed blocks form a single word in IE
|
The problem is that I have several "h2" tags that have a display:inline attribute, and on Microsoft's wonderful browsers the space between them doesn't appear. Is there a workaround?
I know there is a "non-breaking space" in HTML but I was wondering if one can make a space that may be a "breaking space".
--- edit ---
The website is http://newstoday.ro and the behaviour is in the footer. If the site is opened in IE the list is continuous, even though there is a space between the words. Please don't comment on the rest of the code, as I am just the plumber in this situation. Also, the headings are a must, as the client thinks they are better for SEO.
|
[
"I can't think of a rationale for why you're wanting h2's to display inline. In fact, why would you want two headers to read together? Think of the way it should be read. Do you want it to read:\n\"Header one header two\"\nor:\n\"Header One\"\n\"Header Two\"\nIf it's the first way, then it's probably your HTML that's messed up. If it's the second, then you should probably think of it's positioning rather than changing it's behavior and utilize other css methods like float and position.\n",
"You can just use a regular space, but add \"margin-right:_ px\" to the h2 css definition to adjust the spacing between tags. Negative values are allowed too.\n",
"Have you tried setting the \"margin\" property? Not sure if that directly applies to your question.\n",
"Throwing an in there seems to create a space:\n<html>\n <head><title>Blah</title></head>\n <body>\n <h2 style=\"display:inline;\">Something</h2>\n \n <h2 style=\"display:inline;\">Something Else</h2>\n </body>\n</html>\n\nIn this example, you actually end up with 2 spaces, so you might want to eliminate whitespace between the tags and the if you require only one space. Another option would be to add a left/right margin to the header element.\n",
"Apply the style margin: 0 0.5em to both headers - adjust 0.5 to suit (maybe 0.25 or 0.75 is better; also the first 0 is top/bottom margin, adjust as relevant).\nNote: Since you want a character space, you want em not px as suggested earlier.\n\nComplete example code...\n<!DOCTYPE html>\n<html>\n <head>\n <meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\"/> \n <title>Inline Header Example</title>\n <style type=\"text/css\">\n h2\n {\n display: inline;\n margin: 0 0.5em;\n }\n </style>\n </head>\n <body>\n <h2>First Header</h2>\n <h2>Second Header</h2>\n </body>\n</html>\n\n"
] |
[
2,
0,
0,
0,
0
] |
[
"Well the braking space is just a space, you know a \" \" without the quotes...\n",
"The answer is that it's not possible. You mean you want text that's in a larger block of text to flow just like the rest of it, as if the tag were < strong > instead of of < h2 >\nSince h2 is a block level element no matter how you style it, some browsers (cough) will choke on your attempt to flow it inline with other text. \n"
] |
[
-1,
-1
] |
[
"css",
"html",
"internet_explorer"
] |
stackoverflow_0000082981_css_html_internet_explorer.txt
|
Q:
How to access custom fields from the global class in a webhandler?
I added some custom fields (public booleans) to the global class in global.asax.cs which are initialized during the Application_Start event. How do I access them in a webhandler (ashx)? Or is it better to save them in the Application state object?
A:
You would probably need to access the class as the type that your Global.asax.cs is rather than the type it is inheriting from.
I believe it is more common to just use the Application State object for application wide variables.
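For example, a handler could read a flag that Application_Start stored in Application state (a sketch; the key name is illustrative):
using System.Web;

public class MyHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Set in Global.asax.cs Application_Start with:
        //   Application["LoggingEnabled"] = true;
        bool loggingEnabled = (bool)context.Application["LoggingEnabled"];
        context.Response.Write(loggingEnabled);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}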
A:
Have you tried ((Global)Application).PublicBooleanField?
|
How to access custom fields from the global class in a webhandler?
|
I added some custom fields (public booleans) to the global class in global.asax.cs which are initialized during the Application_Start event. How do I access them in a webhandler (ashx)? Or is it better to save them in the Application state object?
|
[
"You would probably need to access the class as the type that your Global.asax.cs is rather than the type it is inheriting from.\nI believe it is more common to just use the Application State object for application wide variables.\n",
"have you tried ((global)Application).PublicBooleanField ? \n"
] |
[
2,
0
] |
[] |
[] |
[
"asp.net",
"c#"
] |
stackoverflow_0000083749_asp.net_c#.txt
|
Q:
What is the most effective tool you've used to track changes in a CVS repository?
I'm in Quality Assurance and use Fisheye to track checkins to CVS. What other options do people use?
We have tens of thousands of files and have plans for migrating to Team Foundation Server's code management tool 'at some point'. When we do that, there will be lots of information that will be available.
A:
ViewVC provides a nice web interface to CVS (or SVN) and is reasonably easy to setup. It does not provide the same functionality as fisheye, however. I haven't tried the integration w/ a SQL DB backend though, I believe that will add some fisheye-like capabilities.
CVSTrac also provides a web interface, wiki, ticket system, and other features. I haven't set it up on our repository, but it does provide some fisheye-like features as well.
A:
You could have a mail sent to you at each commit... Look into the CVS Book.
|
What is the most effective tool you've used to track changes in a CVS repository?
|
I'm in Quality Assurance and use Fisheye to track checkins to CVS. What other options do people use?
We have tens of thousands of files and have plans for migrating to Team Foundation Server's code management tool 'at some point'. When we do that, there will be lots of information that will be available.
|
[
"ViewVC provides a nice web interface to CVS (or SVN) and is reasonably easy to setup. It does not provide the same functionality as fisheye, however. I haven't tried the integration w/ a SQL DB backend though, I believe that will add some fisheye-like capabilities.\nCVSTrac also provides a web interface, wiki, ticket system, and other features. I haven't set it up on our repository, but it does provide some fisheye-like features as well.\n",
"You could have a mail sent to you at each commit... Look into the CVS Book.\n"
] |
[
2,
0
] |
[
"Sorry, this doesn't help with CVS, but I'd recommend switching to subversion, which is designed to be a CVS replacement. Then you can use trac to follow checkins, as well as manage change tickets and documentation. It was well worth the effort in my own projects.\nBut if you have to use CVS, there's always CVSweb\n"
] |
[
-1
] |
[
"cvs",
"version_control"
] |
stackoverflow_0000086302_cvs_version_control.txt
|
Q:
OpenID providers - what stops malicious providers?
So I like the OpenID idea. I support it on my site, and use it wherever it's possible (like here!). But I am not clear about one thing.
A site that supports OpenID basically accepts any OpenID provider out there, right? How does that work with sites that want to reduce bot-signups? What's to stop a malicious OpenID provider from setting up unlimited bot IDs automatically?
I have some ideas, and will post them as a possible answer, but I was wondering if anyone can see something obvious that I've missed?
A:
You have confused two different things - identification and authorization. Just because you know who somebody is, it doesn't mean you have to automatically give them permission to do anything. Simon Willison covers this nicely in An OpenID is not an account! More discussion on whitelisting is available in Social whitelisting with OpenID.
A:
The short answer to your question is, "It doesn't." OpenID deliberately provides only a mechanism for having a centralized authentication site; it's up to you to decide which OpenID providers you personally consider acceptable. For example, Microsoft recently decided to allow OpenID on its Healthvault site only from a select few providers. A company may decide only to allow OpenID logins from its LDAP-backed access point, a government agency may only accept OpenIDs from biometrics-backed sites, and a blog might only accept TypePad due to their intense spam vetting.
There seems to be a lot of confusion over OpenID. Its original goal was simply to provide a standard login mechanism so that, when I need a secure login mechanism, I can select from any or all OpenID providers to handle that for me. Allowing anyone anywhere to set up their own trusted OpenID provider was never the goal. Doing the second effectively is impossible—after all, even with encryption, there's no reason you can't set up your own provider to securely lie and say it's authenticating whomever you want. Having a single, standardized login mechanism is itself already a great step forward.
A:
OpenId isn't much more than the username and password a user selects when registering for your site. You don't rely on the OpenId framework to weed out bots; your registration system should still be doing that.
A:
Possible solution - you can still ask new IDs to pass a CAPTCHA test. Just like bots can sign up with fake/multiple email addresses to any site, but fail the "verification" step there as well.
Or are we going to have to start maintaining provider blacklists? Those won't really work very well, given how trivially easy it is to set up a new provider.
A:
As far as I can tell, OpenID addresses only identification, not authorization. Stopping bots is a matter of authorization.
A:
Notice that unlike conventional "per site" logins, OpenID gives you an identity that potentially transcends individual sites. Better yet, this identity is even a URI so it's perfect for using with RDF to exchange or query arbitrary metadata about the identity.
You can do a few things with an OpenID that you can't do with a conventional username from a new user.
Firstly you can do some simple whitelist operations. If *.bigcorp.example are OpenIDs from Big Corp employees and you know Big Corp aren't spammers, then you can whitelist those OpenIDs. This ought to work well for sites that are semi-closed, maybe it's a social site for current and past employees.
Better though, you can make inferences from the other places that specific OpenID has been used. Suppose you have a map of OpenIDs to reputation values from Stackoverflow.com. When someone shows up at your web forum with an OpenID, you can see if they have decent reputation at Stackoverflow and skip the CAPTCHA or probationary period for those users.
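A minimal sketch of that whitelist check in Python (the trusted-provider set is hypothetical):
from urllib.parse import urlparse

# Hypothetical whitelist of providers we trust not to mint bot identities.
TRUSTED_PROVIDERS = {"bigcorp.example", "stackoverflow.com"}

def requires_captcha(openid_url):
    """Skip the CAPTCHA only for identities served by a whitelisted provider."""
    host = urlparse(openid_url).hostname or ""
    trusted = any(host == p or host.endswith("." + p) for p in TRUSTED_PROVIDERS)
    return not trusted

print(requires_captcha("https://alice.bigcorp.example/"))   # False: whitelisted
print(requires_captcha("https://bots-r-us.example/id/42"))  # True: unknown provider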
|
OpenID providers - what stops malicious providers?
|
So I like the OpenID idea. I support it on my site, and use it wherever it's possible (like here!). But I am not clear about one thing.
A site that supports OpenID basically accepts any OpenID provider out there, right? How does that work with sites that want to reduce bot-signups? What's to stop a malicious OpenID provider from setting up unlimited bot IDs automatically?
I have some ideas, and will post them as a possible answer, but I was wondering if anyone can see something obvious that I've missed?
|
[
"You have confused two different things - identification and authorization. Just because you know who somebody is, it doesn't mean you have to automatically give them permission to do anything. Simon Willison covers this nicely in An OpenID is not an account! More discussion on whitelisting is available in Social whitelisting with OpenID.\n",
"The short answer to your question is, \"It doesn't.\" OpenID deliberately provides only a mechanism for having a centralized authentication site; it's up to you to decide which OpenID providers you personally consider acceptable. For example, Microsoft recently decided to allow OpenID on its Healthvault site only from a select few providers. A company may decide only to allow OpenID logins from its LDAP-backed access point, a government agency may only accept OpenIDs from biometrics-backed sites, and a blog might only accept TypePad due to their intense spam vetting.\nThere seems to be a lot of confusion over OpenID. Its original goal was simply to provide a standard login mechanism so that, when I need a secure login mechanism, I can select from any or all OpenID providers to handle that for me. Allowing anyone anywhere to set up their own trusted OpenID provider was never the goal. Doing the second effectively is impossible—after all, even with encryption, there's no reason you can't set up your own provider to securely lie and say it's authenticating whomever you want. Having a single, standardized login mechanism is itself already a great step forward.\n",
"OpenId isn't much more than the username and password a user selects when registering for your site. You don't rely on the OpenId framework to weed out bots; your registration system should still be doing that.\n",
"Possible solution - you can still ask new IDs to pass a CAPTCHA test. Just like bots can sign up with fake/multiple email addresses to any site, but fail the \"verification\" step there as well.\nOr are we going to have to start maintaining provider blacklists? Those won't really work very well, given how trivially easy it is to set up a new provider.\n",
"As far as I can tell, OpenID addresses only identification, not authorization. Stopping bots is a matter of authorization.\n",
"Notice that unlike conventional \"per site\" logins, OpenID gives you an identity that potentially transcends individual sites. Better yet, this identity is even a URI so its perfect for using with RDF to exchange or query arbitrary metadata about the identity.\nYou can do a few things with an OpenID that you can't do with a conventional username from a new user.\nFirstly you can do some simple whitelist operations. If *.bigcorp.example are OpenIDs from Big Corp employees and you know Big Corp aren't spammers, then you can whitelist those OpenIDs. This ought to work well for sites that are semi-closed, maybe it's a social site for current and past employees.\nBetter though, you can make inferences from the other places that specific OpenID has been used. Suppose you have a map of OpenIDs to reputation values from Stackoverflow.com. When someone shows up at your web forum with an OpenID, you can see if they have decent reputation at Stackoverflow and skip the CAPTCHA or probationary period for those users.\n"
] |
[
13,
9,
3,
2,
2,
2
] |
[] |
[] |
[
"openid",
"security"
] |
stackoverflow_0000086090_openid_security.txt
|
Q:
AJAX - How to Pass value back to server
First time working with UpdatePanels in .NET.
I have an updatepanel with a trigger pointed to an event on a FormView control. The UpdatePanel holds a ListView with related data from a separate database.
When the UpdatePanel refreshes, it needs values from the FormView control so that on the server it can use them to query the database.
For the life of me, I can't figure out how to get those values. The event I'm triggering from has them, but I want the updatepanel to refresh asynchronously. How do I pass values to the load event on the panel?
Googled this ad nauseum and can't seem to get to an answer here. A link or an explanation would be immensely helpful..
Jeff
A:
Make a JavaScript function that collects the pieces of form data and then sends that data to an ASHX handler. The ASHX handler will do some work, and can reply with a response.
This is an example I made that calls a database to populate a grid using AJAX calls. There are better libraries for doing AJAX (prototype, ExtJS, etc), but this is the raw deal. (I know this can be refactored to be even cleaner, but you can get the idea well enough)
Works like this...
User enters text in the search box,
User clicks search button,
JavaScript gets form data,
javascript makes ajax call to ASHX,
ASHX receives request,
ASHX queries database,
ASHX parses the response into JSON/Javascript array,
ASHX sends response,
Javascript receives response,
javascript Eval()'s response to object,
javascript iterates object properties and fills grid
The html will look like this...
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
<title>Untitled Page</title>
<script type="text/javascript" src="AjaxHelper.js"></script>
</head>
<body>
<form id="form1" runat="server">
<div>
<asp:TextBox ID="txtSearchValue" runat="server"></asp:TextBox>
<input id="btnSearch" type="button" value="Search by partial full name" onclick="doSearch()"/>
         <igtbl:ultrawebgrid id="uwgUsers" runat="server">
             <%-- Infragistics grid configuration omitted --%>
         </igtbl:ultrawebgrid>
</div>
</form>
</body>
</html>
The script that fires on click will look like this...
//this is tied to the button click. It takes care of input cleanup and calling the AJAX method
function doSearch(){
var eleVal;
var eleBtn;
eleVal = document.getElementById('txtSearchValue').value;
eleBtn = document.getElementById('btnSearch');
eleVal = trim(eleVal);
if (eleVal.length > 0) {
eleBtn.value = 'Searching...';
eleBtn.disabled = true;
refreshGridData(eleVal);
}
else {
alert("Please enter a value to search with. Unabated searches are not permitted.");
}
}
//This is the function that will go out and get the data and call load the Grid on AJAX call
//return.
function refreshGridData(searchString){
if (searchString =='undefined'){
searchString = "";
}
var xhr;
var gridData;
var url;
url = "DefaultHandler.ashx?partialUserFullName=" + escape(searchString);
xhr = GetXMLHttpRequestObject();
xhr.onreadystatechange = function() {
if (xhr.readyState==4) {
gridData = eval(xhr.responseText);
if (gridData.length > 0) {
//clear and fill the grid
clearAndPopulateGrid(gridData);
}
else {
//display appropriate message
}
} //if (xhr.readyState==4) {
} //xhr.onreadystatechange = function() {
xhr.open("GET", url, true);
xhr.send(null);
}
//this does the grid clearing and population, and enables the search button when complete.
function clearAndPopulateGrid(jsonObject) {
var grid = igtbl_getGridById('uwgUsers');
var eleBtn;
eleBtn = document.getElementById('btnSearch');
//clear the rows
for (x = grid.Rows.length - 1; x >= 0; x--) {
grid.Rows.remove(x, false);
}
//add the new ones
for (x = 0; x < jsonObject.length; x++) {
var newRow = igtbl_addNew(grid.Id, 0, false, false);
//the cells should not be referenced by index value, so a name lookup should be implemented
newRow.getCell(0).setValue(jsonObject[x][0]);
newRow.getCell(1).setValue(jsonObject[x][1]);
newRow.getCell(2).setValue(jsonObject[x][2]);
}
grid = null;
eleBtn.disabled = false;
eleBtn.value = "Search by partial full name";
}
// this function will return the XMLHttpRequest Object for the current browser
function GetXMLHttpRequestObject() {
var XHR; //the object to return
var ua = navigator.userAgent.toLowerCase(); //gets the useragent text
try
{
//determine the browser type
if (!window.ActiveXObject)
{ //Non IE Browsers
XHR = new XMLHttpRequest();
}
else
{
if (ua.indexOf('msie 5') == -1)
{ //IE 6.x and up
XHR = new ActiveXObject("Msxml2.XMLHTTP");
}
else
{ //IE 5.x
XHR = new ActiveXObject("Microsoft.XMLHTTP");
}
} //end if (!window.ActiveXObject)
if (XHR == null)
{
throw "Unable to instantiate the XMLHTTPRequest object.";
}
}
catch (e)
{
alert("This browser does not appear to support AJAX functionality. error: " + e.name
+ " description: " + e.message);
}
return XHR;
} //end function GetXMLHttpRequestObject()
function trim(stringToTrim){
return stringToTrim.replace(/^\s\s*/, '').replace(/\s\s*$/, '');
}
And the ashx handler looks like this....
Imports System.Web
Imports System.Web.Services
Imports System.Data
Imports System.Data.SqlClient
Public Class DefaultHandler
Implements System.Web.IHttpHandler
Private Const CONN_STRING As String = "Data Source=;Initial Catalog=;User ID=;Password=;"
Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
context.Response.ContentType = "text/plain"
context.Response.Expires = -1
Dim strPartialUserName As String
Dim strReturnValue As String = String.Empty
If context.Request.QueryString("partialUserFullName") Is Nothing = False Then
strPartialUserName = context.Request.QueryString("partialUserFullName").ToString()
If String.IsNullOrEmpty(strPartialUserName) = False Then
strReturnValue = SearchAndReturnJSResult(strPartialUserName)
End If
End If
context.Response.Write(strReturnValue)
End Sub
Private Function SearchAndReturnJSResult(ByVal partialUserName As String) As String
Dim strReturnValue As New StringBuilder()
Dim conn As SqlConnection
Dim strSQL As New StringBuilder()
Dim objParam As SqlParameter
Dim da As SqlDataAdapter
Dim ds As New DataSet()
Dim dr As DataRow
'define sql
strSQL.Append(" SELECT ")
strSQL.Append(" [id] ")
strSQL.Append(" ,([first_name] + ' ' + [last_name]) ")
strSQL.Append(" ,[email] ")
strSQL.Append(" FROM [person] (NOLOCK) ")
strSQL.Append(" WHERE [last_name] LIKE @lastName")
'clean up the partial user name for use in a like search
If partialUserName.EndsWith("%", StringComparison.InvariantCultureIgnoreCase) = False Then
partialUserName = partialUserName & "%"
End If
If partialUserName.StartsWith("%", StringComparison.InvariantCultureIgnoreCase) = False Then
partialUserName = partialUserName.Insert(0, "%")
End If
'create the sql parameter... parameterized queries perform far better on repeatable
'operations
objParam = New SqlParameter("@lastName", SqlDbType.VarChar, 100)
objParam.Value = partialUserName
conn = New SqlConnection(CONN_STRING)
da = New SqlDataAdapter(strSQL.ToString(), conn)
da.SelectCommand.Parameters.Add(objParam)
Try 'to get a dataset.
da.Fill(ds)
Catch sqlex As SqlException
'Throw an appropriate exception if you can add details that will help understand the problem.
Throw New DataException("Unable to retrieve the results from the user search.", sqlex)
Finally
If conn.State = ConnectionState.Open Then
conn.Close()
End If
conn.Dispose()
da.Dispose()
End Try
'make sure we have a return value
If ds Is Nothing OrElse ds.Tables(0) Is Nothing OrElse ds.Tables(0).Rows.Count <= 0 Then
Return String.Empty
End If
'This converts the table into JS array.
strReturnValue.Append("[")
For Each dr In ds.Tables(0).Rows
strReturnValue.Append("['" & CStr(dr("username")) & "','" & CStr(dr("userfullname")) & "','" & CStr(dr("useremail")) & "'],")
Next
strReturnValue.Remove(strReturnValue.Length - 1, 1)
strReturnValue.Append("]")
'de-allocate what can be deallocated. Setting to Nothing for smaller types may
'incur performance hit because of a forced allocation to nothing before they are deallocated
'by garbage collection.
ds.Dispose()
strSQL.Length = 0
Return strReturnValue.ToString()
End Function
ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable
Get
Return False
End Get
End Property
End Class
A:
Try:
...looking in the Request and Response.
...setting a breakpoint on the Load() method and querying Me or this in the watch or immediate window to see if the values you want are maybe just not where you are expecting them.
...putting a For Each ctl As Control In Me.Controls loop and inspecting each control that is iterated to see if you are even getting the controls you expect (see the sketch below).
...checking whether it is in Sender or EventArgs.
Also try NOT using UpdatePanels... They can often cause more trouble than they are worth. It may be faster and less of a headache to use regular AJAX to get it done.
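A quick sketch of that control-inspection idea in VB.NET (the recursive helper and its call site are hypothetical):
' Walk the control tree and trace what actually arrived with the postback.
Private Sub DumpControls(ByVal parent As Control, ByVal depth As Integer)
    For Each ctl As Control In parent.Controls
        Trace.Write(New String(" "c, depth * 2) & ctl.GetType().Name & " : " & ctl.ID)
        DumpControls(ctl, depth + 1)
    Next
End Sub

' Call DumpControls(Me, 0) from the page Load handler, then read the output
' with tracing enabled (trace="true" in the @ Page directive).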
A:
If you are working with an UpdatePanel just make sure that both controls are inside the panel and it will work as desired.
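For example, a minimal sketch of that layout (the IDs and event name here are hypothetical):
<asp:UpdatePanel ID="upRelated" runat="server" UpdateMode="Conditional">
    <ContentTemplate>
        <asp:FormView ID="fvMain" runat="server" OnItemCommand="fvMain_ItemCommand">
            <%-- item templates omitted --%>
        </asp:FormView>
        <asp:ListView ID="lvRelated" runat="server">
            <%-- layout/item templates omitted --%>
        </asp:ListView>
    </ContentTemplate>
</asp:UpdatePanel>
Because the FormView posts back inside the same panel, its current values reach the server-side handler, which can query the second database and rebind the ListView before the partial render.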
|
AJAX - How to Pass value back to server
|
First time working with UpdatePanels in .NET.
I have an updatepanel with a trigger pointed to an event on a FormView control. The UpdatePanel holds a ListView with related data from a separate database.
When the UpdatePanel refreshes, it needs values from the FormView control so that on the server it can use them to query the database.
For the life of me, I can't figure out how to get those values. The event I'm triggering from has them, but I want the updatepanel to refresh asynchronously. How do I pass values to the load event on the panel?
Googled this ad nauseum and can't seem to get to an answer here. A link or an explanation would be immensely helpful..
Jeff
|
[
"make a javascript function that will collect the pieces of form data, and then sends that data to an ASHX handler. the ASHX handler will do some work, and can reply with a response.\nThis is an example I made that calls a database to populate a grid using AJAX calls. There are better libraries for doing AJAX (prototype, ExtJS, etc), but this is the raw deal. (I know this can be refactored to be even cleaner, but you can get the idea well enough)\nWorks like this...\n\nUser enters text in the search box, \nUser clicks search button, \nJavaScript gets form data, \njavascript makes ajax call to ASHX, \nASHX receives request, \nASHX queries database,\nASHX parses the response into JSON/Javascript array, \nASHX sends response,\nJavascript receives response, \njavascript Eval()'s response to object,\njavascript iterates object properties and fills grid\n\nThe html will look like this... \n<html xmlns=\"http://www.w3.org/1999/xhtml\" >\n<head runat=\"server\">\n <title>Untitled Page</title>\n <script type=\"text/javascript\" src=\"AjaxHelper.js\"></script>\n</head>\n<body>\n <form id=\"form1\" runat=\"server\">\n <div>\n <asp:TextBox ID=\"txtSearchValue\" runat=\"server\"></asp:TextBox>\n <input id=\"btnSearch\" type=\"button\" value=\"Search by partial full name\" onclick=\"doSearch()\"/>\n\n <igtbl:ultrawebgrid id=\"uwgUsers\" runat=\"server\" \n//infragistics grid crap\n </igtbl:ultrawebgrid>--%>\n </div>\n </form>\n</body>\n</html>\n\nThe script that fires on click will look like this...\n//this is tied to the button click. It takes care of input cleanup and calling the AJAX method\nfunction doSearch(){\n var eleVal; \n var eleBtn;\n eleVal = document.getElementById('txtSearchValue').value;\n eleBtn = document.getElementById('btnSearch');\n eleVal = trim(eleVal);\n if (eleVal.length > 0) {\n eleBtn.value = 'Searching...';\n eleBtn.disabled = true;\n refreshGridData(eleVal);\n }\n else {\n alert(\"Please enter a value to search with. 
Unabated searches are not permitted.\");\n }\n}\n\n//This is the function that will go out and get the data and call load the Grid on AJAX call \n//return.\nfunction refreshGridData(searchString){\n\n if (searchString =='undefined'){\n searchString = \"\";\n }\n\n var xhr; \n var gridData;\n var url;\n\n url = \"DefaultHandler.ashx?partialUserFullName=\" + escape(searchString);\n xhr = GetXMLHttpRequestObject();\n\n xhr.onreadystatechange = function() {\n if (xhr.readystate==4) {\n gridData = eval(xhr.responseText);\n if (gridData.length > 0) {\n //clear and fill the grid\n clearAndPopulateGrid(gridData);\n }\n else {\n //display appropriate message\n }\n } //if (xhr.readystate==4) {\n } //xhr.onreadystatechange = function() {\n\n xhr.open(\"GET\", url, true);\n xhr.send(null);\n}\n\n//this does the grid clearing and population, and enables the search button when complete.\nfunction clearAndPopulateGrid(jsonObject) {\n\n var grid = igtbl_getGridById('uwgUsers');\n var eleBtn;\n eleBtn = document.getElementById('btnSearch');\n\n //clear the rows\n for (x = grid.Rows.length; x >= 0; x--) {\n grid.Rows.remove(x, false);\n }\n\n //add the new ones\n for (x = 0; x < jsonObject.length; x++) {\n var newRow = igtbl_addNew(grid.Id, 0, false, false);\n //the cells should not be referenced by index value, so a name lookup should be implemented\n newRow.getCell(0).setValue(jsonObject[x][1]); \n newRow.getCell(1).setValue(jsonObject[x][2]);\n newRow.getCell(2).setValue(jsonObject[x][3]);\n }\n\n grid = null;\n\n eleBtn.disabled = false;\n eleBtn.value = \"Search by partial full name\";\n}\n\n\n// this function will return the XMLHttpRequest Object for the current browser\nfunction GetXMLHttpRequestObject() {\n\n var XHR; //the object to return\n var ua = navigator.userAgent.toLowerCase(); //gets the useragent text\n try\n {\n //determine the browser type\n if (!window.ActiveXObject)\n { //Non IE Browsers\n XHR = new XMLHttpRequest(); \n }\n else \n {\n if (ua.indexOf('msie 5') == -1)\n { //IE 5.x\n XHR = new ActiveXObject(\"Msxml2.XMLHTTP\");\n }\n else\n { //IE 6.x and up \n XHR = new ActiveXObject(\"Microsoft.XMLHTTP\"); \n }\n } //end if (!window.ActiveXObject)\n\n if (XHR == null)\n {\n throw \"Unable to instantiate the XMLHTTPRequest object.\";\n }\n }\n catch (e)\n {\n alert(\"This browser does not appear to support AJAX functionality. 
error: \" + e.name\n + \" description: \" + e.message);\n }\n return XHR;\n} //end function GetXMLHttpRequestObject()\n\nfunction trim(stringToTrim){\n return stringToTrim.replace(/^\\s\\s*/, '').replace(/\\s\\s*$/, '');\n}\n\nAnd the ashx handler looks like this....\nImports System.Web\nImports System.Web.Services\nImports System.Data\nImports System.Data.SqlClient\n\nPublic Class DefaultHandler\n Implements System.Web.IHttpHandler\n\n Private Const CONN_STRING As String = \"Data Source=;Initial Catalog=;User ID=;Password=;\"\n\n Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest\n\n context.Response.ContentType = \"text/plain\"\n context.Response.Expires = -1\n\n Dim strPartialUserName As String\n Dim strReturnValue As String = String.Empty\n\n If context.Request.QueryString(\"partialUserFullName\") Is Nothing = False Then\n strPartialUserName = context.Request.QueryString(\"partialUserFullName\").ToString()\n\n If String.IsNullOrEmpty(strPartialUserName) = False Then\n strReturnValue = SearchAndReturnJSResult(strPartialUserName)\n End If\n End If\n\n context.Response.Write(strReturnValue)\n\n End Sub\n\n\n Private Function SearchAndReturnJSResult(ByVal partialUserName As String) As String\n\n Dim strReturnValue As New StringBuilder()\n Dim conn As SqlConnection\n Dim strSQL As New StringBuilder()\n Dim objParam As SqlParameter\n Dim da As SqlDataAdapter\n Dim ds As New DataSet()\n Dim dr As DataRow\n\n 'define sql\n strSQL.Append(\" SELECT \")\n strSQL.Append(\" [id] \")\n strSQL.Append(\" ,([first_name] + ' ' + [last_name]) \")\n strSQL.Append(\" ,[email] \")\n strSQL.Append(\" FROM [person] (NOLOCK) \")\n strSQL.Append(\" WHERE [last_name] LIKE @lastName\")\n\n 'clean up the partial user name for use in a like search\n If partialUserName.EndsWith(\"%\", StringComparison.InvariantCultureIgnoreCase) = False Then\n partialUserName = partialUserName & \"%\"\n End If\n\n If partialUserName.StartsWith(\"%\", StringComparison.InvariantCultureIgnoreCase) = False Then\n partialUserName = partialUserName.Insert(0, \"%\")\n End If\n\n 'create the oledb parameter... parameterized queries perform far better on repeatable\n 'operations\n objParam = New SqlParameter(\"@lastName\", SqlDbType.VarChar, 100)\n objParam.Value = partialUserName\n\n conn = New SqlConnection(CONN_STRING)\n da = New SqlDataAdapter(strSQL.ToString(), conn)\n da.SelectCommand.Parameters.Add(objParam)\n\n Try 'to get a dataset. \n da.Fill(ds)\n Catch sqlex As SqlException\n 'Throw an appropriate exception if you can add details that will help understand the problem.\n Throw New DataException(\"Unable to retrieve the results from the user search.\", sqlex)\n Finally\n If conn.State = ConnectionState.Open Then\n conn.Close()\n End If\n conn.Dispose()\n da.Dispose()\n End Try\n\n 'make sure we have a return value\n If ds Is Nothing OrElse ds.Tables(0) Is Nothing OrElse ds.Tables(0).Rows.Count <= 0 Then\n Return String.Empty\n End If\n\n 'This converts the table into JS array. \n strReturnValue.Append(\"[\")\n\n For Each dr In ds.Tables(0).Rows\n strReturnValue.Append(\"['\" & CStr(dr(\"username\")) & \"','\" & CStr(dr(\"userfullname\")) & \"','\" & CStr(dr(\"useremail\")) & \"'],\")\n Next\n\n strReturnValue.Remove(strReturnValue.Length - 1, 1)\n strReturnValue.Append(\"]\")\n\n 'de-allocate what can be deallocated. 
Setting to Nothing for smaller types may\n 'incur performance hit because of a forced allocation to nothing before they are deallocated\n 'by garbage collection.\n ds.Dispose()\n strSQL.Length = 0\n\n Return strReturnValue.ToString()\n\n End Function\n\n\n ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable\n Get\n Return False\n End Get\n End Property\n\nEnd Class\n\n",
"Try \n\n...looking in the Request and\nResponse. \n...setting a breakpoint\non the Load() method and query Me or\nthis in the watch or immediate\nwindow to see if the values you want\nare maybe just not where you are\nexpecting them? \n...Put a (For Each ctl as Control in Me/This.Controls)\nand inspecting each control that is iterated and see if you are even getting the controls you expect.\n... its not in Sender or EventArgs? \n\n\nTry NOT using Update panels.... They can often cause more trouble than they are worth. It may be faster and less headache to use regular AJAX to get it done. \n",
"If you are working with an UpdatePanel just make sure that both controls are inside the panel and it will work as desired.\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"asp.net_ajax",
"asynchronous"
] |
stackoverflow_0000085500_asp.net_ajax_asynchronous.txt
|
Q:
How to Update to Revision using Subclipse SVN plugin?
In subclipse, the Team > Update menu option performs an "svn update -r HEAD".
I want to run "svn update -r [revision number]" but can't find a menu option which will let me update to anything besides the HEAD revision.
A:
It is the "Replace With" menu option. It is not under "Team", but on the same level.
A:
Subclipse used to prompt but users complained. We did not want to add two update options. The easiest way to do it is just Team > Switch and do not change the URL. Switch and update are the same code paths within Subversion. If you do not change the URL it is just behaving like update and the Switch dialog exposes all the options available.
|
How to Update to Revision using Subclipse SVN plugin?
|
In subclipse, the Team > Update menu option performs an "svn update -r HEAD".
I want to run "svn update -r [revision number]" but can't find a menu option which will let me update to anything besides the HEAD revision.
|
[
"It is the \"Replace With\" menu option. It is not under \"Team\", but on the same level.\n",
"Subclipse used to prompt but users complained. We did not want to add two update options. The easiest way to do it is just Team > Switch and do not change the URL. Switch and update are the same code paths within Subversion. If you do not change the URL it is just behaving like update and the Switch dialog exposes all the options available.\n"
] |
[
10,
6
] |
[] |
[] |
[
"eclipse",
"subclipse",
"svn"
] |
stackoverflow_0000083443_eclipse_subclipse_svn.txt
|
Q:
Firing COM events in C++ - Synchronous or asynchronous?
I have an ActiveX control written using the MS ATL library and I am firing events via pDispatch->Invoke(..., DISPATCH_METHOD). The control will be used by a .NET client and my question is this - is the firing of the event a synchronous or asynchronous call? My concern is that, if synchronous, the application that handles the event could cause performance issues unless it returns immediately.
A:
It is synchronous from the point of view of the component generating the event. The control's thread of execution will call out into the receivers code and things are out of its control at that point.
Clients receiving the events must make sure they return quickly. If they need to do some significant amount of work then they should schedule this asynchronously. For example by posting a windows message, or using a separate thread.
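For example, an ATL sink might defer the real work like this (a sketch; the class name, window handle, and private message are hypothetical):
const UINT WM_APP_HANDLE_EVENT = WM_APP + 1;  // hypothetical private message

STDMETHODIMP CEventSink::OnSomethingHappened(LONG value)
{
    // Queue the work for the sink's own window and return immediately,
    // so the control's thread is not blocked by the handler.
    ::PostMessage(m_hWnd, WM_APP_HANDLE_EVENT, static_cast<WPARAM>(value), 0);
    return S_OK;
}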
|
Firing COM events in C++ - Synchronous or asynchronous?
|
I have an ActiveX control written using the MS ATL library and I am firing events via pDispatch->Invoke(..., DISPATCH_METHOD). The control will be used by a .NET client and my question is this - is the firing of the event a synchronous or asynchronous call? My concern is that, if synchronous, the application that handles the event could cause performance issues unless it returns immediately.
|
[
"It is synchronous from the point of view of the component generating the event. The control's thread of execution will call out into the receivers code and things are out of its control at that point.\nClients receiving the events must make sure they return quickly. If they need to do some significant amount of work then they should schedule this asynchronously. For example by posting a windows message, or using a separate thread.\n"
] |
[
4
] |
[] |
[] |
[
"activex",
"atl",
"c++",
"com"
] |
stackoverflow_0000086474_activex_atl_c++_com.txt
|
Q:
How do I use RegisterClientScriptBlock to register JavaScript?
ASP.NET 2.0 provides the ClientScript.RegisterClientScriptBlock() method for registering JavaScript in an ASP.NET Page.
The issue I'm having is passing the script when it's located in another directory. Specifically, the following syntax does not work:
ClientScript.RegisterClientScriptBlock(this.GetType(), "scriptName", "../dir/subdir/scriptName.js", true);
Instead of dropping the code into the page like this page says it should, it instead displays ../dir/subdir/script.js. My question is this:
Has anyone dealt with this before, and found a way to drop in the javascript in a separate file? Am I going about this the wrong way?
A:
What you're after is:
ClientScript.RegisterClientScriptInclude(this.GetType(), "scriptName", "../dir/subdir/scriptName.js")
A:
use: ClientScript.RegisterClientScriptInclude(key, url);
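Putting that together in the page's code-behind (a sketch; the key is arbitrary, and ResolveUrl keeps the app-relative path correct from any page):
protected void Page_Load(object sender, EventArgs e)
{
    // Emits a <script src="..."></script> include instead of
    // writing the path into the page as literal text.
    ClientScript.RegisterClientScriptInclude(
        this.GetType(),
        "scriptName",                               // unique key for this include
        ResolveUrl("~/dir/subdir/scriptName.js"));  // app-relative path
}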
|
How do I use RegisterClientScriptBlock to register JavaScript?
|
ASP.NET 2.0 provides the ClientScript.RegisterClientScriptBlock() method for registering JavaScript in an ASP.NET Page.
The issue I'm having is passing the script when it's located in another directory. Specifically, the following syntax does not work:
ClientScript.RegisterClientScriptBlock(this.GetType(), "scriptName", "../dir/subdir/scriptName.js", true);
Instead of dropping the code into the page like this page says it should, it instead displays ../dir/subdir/script.js. My question is this:
Has anyone dealt with this before, and found a way to drop in the javascript in a separate file? Am I going about this the wrong way?
|
[
"What you're after is: \nClientScript.RegisterClientScriptInclude(this.GetType(), \"scriptName\", \"../dir/subdir/scriptName.js\")\n\n",
"use: ClientScript.RegisterClientScriptInclude(key, url);\n"
] |
[
5,
2
] |
[
"Your script value has to be a full script, so put in the following for your script value.\n<script type='text/javascript' src='yourpathhere'></script>\n\n"
] |
[
-1
] |
[
"asp.net",
"javascript"
] |
stackoverflow_0000086491_asp.net_javascript.txt
|
Q:
When is this VB6 member variable destroyed?
Suppose I have a class module clsMyClass with an object as a member variable. Listed below are two complete implementations of this very simple class.
Implementation 1:
Dim oObj As New clsObject
Implementation 2:
Dim oObj As clsObject
Private Sub Class_Initialize()
Set oObj = New clsObject
End Sub
Private Sub Class_Terminate()
Set oObj = Nothing
End Sub
Is there any functional difference between these two? In particular, is the lifetime of oObj the same?
A:
In implementation 1 the clsObject will not get instantiated until it is used. If it is never used, then the clsObject.Class_Initialize event will never fire.
In implementation 2, the clsObject instance will be created at the same time that the clsMyClass is instantiated. The clsObject.Class_Initialize will always be executed if clsMyClass is created.
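You can watch this happen with a Debug.Print in clsObject's initializer (a sketch; DoSomething is a hypothetical method on clsObject):
' In clsObject:
Private Sub Class_Initialize()
    Debug.Print "clsObject created"
End Sub

' In the client code, with implementation 1:
Dim oObj As New clsObject   ' declared, but nothing prints yet
oObj.DoSomething            ' the instance is auto-created here, on first use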
A:
If in implementation 1 the declaration is inside the class and not a sub, yes the scope is the same for both examples.
A:
The object variable will be destroyed as soon as there are no more references to it; VB6 uses COM reference counting rather than garbage collection, so this happens deterministically. In your two examples, assuming the scope of oObj is the same, there is no difference in when your object will be destroyed.
|
When is this VB6 member variable destroyed?
|
Suppose I have a class module clsMyClass with an object as a member variable. Listed below are two complete implementations of this very simple class.
Implementation 1:
Dim oObj As New clsObject
Implementation 2:
Dim oObj As clsObject
Private Sub Class_Initialize()
Set oObj = New clsObject
End Sub
Private Sub Class_Terminate()
Set oObj = Nothing
End Sub
Is there any functional difference between these two? In particular, is the lifetime of oObj the same?
|
[
"In implementation 1 the clsObject will not get instantiated until it is used. If it is never used, then the clsObject.Class_Initialize event will never fire. \nIn implementation 2, the clsObject instance will be created at the same time that the clsMyClass is instantiated. The clsObject.Class_Initialize will always be executed if clsMyClass is created.\n",
"If in implementation 1 the declaration is inside the class and not a sub, yes the scope is the same for both examples.\n",
"The object variable will be destroyed whenever garbage collection determines there are no more references to said object. So in your two examples, assuming the scope of clsObject is the same, there is no difference in when your object will be destroyed.\n"
] |
[
5,
0,
0
] |
[] |
[] |
[
"vb6"
] |
stackoverflow_0000086365_vb6.txt
|
Q:
Is there any good tool for working on Database apart from Toad which requires license and DBVisualiser not supportive
Is there any good tool for working on Database apart from Toad, which requires a license, and DBVisualiser, which is not supportive?
A:
I'd recommend Oracle SQL Developer. It has all the functionality you need for development and maintenance.
A:
Try fabForce Database Designer; it is GPL too.
http://fabforce.net/dbdesigner4/
A:
I am using the free version of WinSQL.
I am also looking at SQLDeveloper from Oracle. I have heard it mentionned in the same breath as Toad.
In a pinch, I find that Microsoft Query, which comes with Excel, is more than adequate.
A:
I've always used Golden 32 from Benthic - it's simple, cheap, and has the most common features I need (queries, viewing relationships, editing data). It's Oracle only. Aqua Data Studio from AquaFold is another option that supports multiple databases and has a ton of features, but it is much more expensive. I used it for a while but kept going back to Golden because of how fast and easy to use it was.
A:
I find SQuirrel SQL Client is an excellent tool. It requires Java so depending on how you feel about that it may be a good thing or a bad thing. I find it's nice because I work on Windows, Linux and OS X and it works great on all platforms. If you work only on one platform you might be able to find better clients for that particular platform.
It has plug-ins for a lot of different databases, allows in table editing, has a data modelling module, auto-completion in the SQL editor and lots of other nifty features.
http://www.squirrelsql.org/
SQuirrel is currently released under GNU Lesser General Public License.
|
Is there any good tool for working on Database apart from Toad which requires license and DBVisualiser not supportive
|
Is there any good tool for working on Database apart from Toad, which requires a license, and DBVisualiser, which is not supportive?
|
[
"i'd recommend oracle SQLdeveloper. it has all the functionality you need for developing and maintanice\n",
"Try fabForce Database Designer; it is GPL too.\nhttp://fabforce.net/dbdesigner4/\n",
"I am using the free version of WinSQL.\nI am also looking at SQLDeveloper from Oracle. I have heard it mentionned in the same breath as Toad.\nIn a pinch, I find that Microsoft Query, which comes with Excel, is more than adequate. \n",
"I've always used Golden 32 from Benthic - it's simple, cheap, and has the most common features I need (queries, viewing relationships, editing data). It's Oracle only. Aqua Data Studio from AquaFold is another option that supports multiple databases and has a ton of features, but it much more expensive. I used it for a while but kept going back to Golden because of how fast and easy to use it was.\n",
"I find SQuirrel SQL Client is an excellent tool. It requires Java so depending on how you feel about that it may be a good thing or a bad thing. I find it's nice because I work on Windows, Linux and OS X and it works great on all platforms. If you work only on one platform you might be able to find better clients for that particular platform.\nIt has plug-ins for a lot of different databases, allows in table editing, has a data modelling module, auto-completion in the SQL editor and lots of other nifty features.\nhttp://www.squirrelsql.org/\nSQuirrel is currently released under GNU Lesser General Public License.\n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
"database",
"sql"
] |
stackoverflow_0000080544_database_sql.txt
|
Q:
Search for host with MAC-address using Python
I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas?
A:
You need ARP. Python's standard library doesn't include any code for that, so you either need to call an external program (your OS may have an 'arp' utility) or you need to build the packets yourself (possibly with a tool like Scapy).
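A minimal sketch of the external-program route (output formats differ per OS, so the arp -n flag and the regex below are assumptions that fit Linux):
import re
import subprocess

def mac_for_ip(ip):
    """Look up ip in the local ARP cache; return the MAC string or None."""
    output = subprocess.check_output(["arp", "-n", ip], text=True)
    match = re.search(r"(([0-9a-fA-F]{2}[:-]){5}[0-9a-fA-F]{2})", output)
    return match.group(1) if match else None

# The cache only holds hosts you've talked to recently, so ping first if needed.
print(mac_for_ip("192.168.1.1"))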
A:
I don't think there is a built in way to get it from Python itself.
My question is, how are you getting the IP information from your network?
To get it from your local machine you could parse ifconfig (unix) or ipconfig (windows) with little difficulty.
A:
If you want a pure Python solution, you can take a look at Scapy to craft packets (you need to send ARP request, and inspect replies). Or if you don't mind invoking external program, you can use arping (on Un*x systems, I don't know of a Windows equivalent).
A:
It seems that there is not a native way of doing this with Python. Your best bet would be to parse the output of "ipconfig /all" on Windows, or "ifconfig" on Linux. Consider using os.popen() with some regexps.
A:
Depends on your platform. If you're using *nix, you can use the 'arp' command to look up the mac address for a given IP (assuming IPv4) address. If that doesn't work, you could ping the address and then look, or if you have access to the raw network (using BPF or some other mechanism), you could send your own ARP packets (but that is probably overkill).
A:
You would want to parse the output of 'arp', but the kernel ARP cache will only contain those IP address(es) if those hosts have communicated with the host where the Python script is running.
ifconfig can be used to display the MAC addresses of local interfaces, but not those on the LAN.
A:
Mark Pilgrim describes how to do this on Windows for the current machine with the Netbios module here. You can get the Netbios module as part of the Win32 package available at python.org. Unfortunately at the moment I cannot find the docs on the module.
|
Search for host with MAC-address using Python
|
I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas?
|
[
"You need ARP. Python's standard library doesn't include any code for that, so you either need to call an external program (your OS may have an 'arp' utility) or you need to build the packets yourself (possibly with a tool like Scapy.\n",
"I don't think there is a built in way to get it from Python itself. \nMy question is, how are you getting the IP information from your network?\nTo get it from your local machine you could parse ifconfig (unix) or ipconfig (windows) with little difficulty.\n",
"If you want a pure Python solution, you can take a look at Scapy to craft packets (you need to send ARP request, and inspect replies). Or if you don't mind invoking external program, you can use arping (on Un*x systems, I don't know of a Windows equivalent).\n",
"It seems that there is not a native way of doing this with Python. Your best bet would be to parse the output of \"ipconfig /all\" on Windows, or \"ifconfig\" on Linux. Consider using os.popen() with some regexps.\n",
"Depends on your platform. If you're using *nix, you can use the 'arp' command to look up the mac address for a given IP (assuming IPv4) address. If that doesn't work, you could ping the address and then look, or if you have access to the raw network (using BPF or some other mechanism), you could send your own ARP packets (but that is probably overkill).\n",
"You would want to parse the output of 'arp', but the kernel ARP cache will only contain those IP address(es) if those hosts have communicated with the host where the Python script is running.\nifconfig can be used to display the MAC addresses of local interfaces, but not those on the LAN.\n",
"Mark Pilgrim describes how to do this on Windows for the current machine with the Netbios module here. You can get the Netbios module as part of the Win32 package available at python.org. Unfortunately at the moment I cannot find the docs on the module.\n"
] |
[
13,
1,
1,
1,
0,
0,
0
] |
[
"as python was not meant to deal with OS-specific issues (it's supposed to be interpreted and cross platform), i would execute an external command to do so:\nin unix the command is ifconfig\nif you execute it as a pipe you get the desired result:\nimport os\nmyPipe = os.popen2(\"/sbin/ifconfig\",\"a\")\nprint(myPipe[1].read())\n\n"
] |
[
-1
] |
[
"network_programming",
"python"
] |
stackoverflow_0000085577_network_programming_python.txt
|
Q:
How to autocomplete at the KornShell command line with the vi editor
In the KornShell (ksh) on AIX UNIX Version 5.3 with the editor mode set to vi using:
set -o vi
What are the key-strokes at the shell command line to autocomplete a file or directory name?
A:
ESC\ works fine on AIX4.2 at least. One thing I noticed is that it only autocompletes to the unique part of the file name.
So if you have the files x.txt, x171go and x171stop, the following will happen:
Press keys:    Command line is:
x              x
<ESC>\         x
1              x1
<ESC>\         x171
g<ESC>\        x171go
A:
Extending the other answers: <ESC>* will list all matching files on the command line. Then you can use the standard vi editing commands to remove the ones you don't care about. So to add to the above table:
<ESC><shift-8>    x.txt x171go x171stop
Then use backspace to get rid of the last two, or hit <ESC> again and use h or b to go backwards and dw to delete the ones you don't want.
|
How to autocomplete at the KornShell command line with the vi editor
|
In the KornShell (ksh) on AIX UNIX Version 5.3 with the editor mode set to vi using:
set -o vi
What are the key-strokes at the shell command line to autocomplete a file or directory name?
|
[
"ESC\\ works fine on AIX4.2 at least. One thing I noticed is that it only autocompletes to the unique part of the file name.\nSo if you have the files x.txt, x171go and x171stop, the following will happen:\nPress keys: Command line is:\nx x\n<ESC>\\ x\n1 x1\n<ESC>\\ x171\ng<ESC>\\ x171go\n\n",
"Extending the other answers: <ESC>* will list all matching files on the command line. Then you can use the standard vi editing commands to remove the ones you don't care about. So to add to the above table:\n<ESC><shift-8> x.txt x171 x171go\n\nThen use backspace to go get rid of the last two, or hit <ESC> again and use the h or b to go backwards and dw to delete the ones you don't want.\n"
] |
[
12,
3
] |
[] |
[] |
[
"aix",
"ksh",
"shell",
"unix",
"vi"
] |
stackoverflow_0000081022_aix_ksh_shell_unix_vi.txt
|
Q:
How do I implement an OpenID server in Rails?
I see a similar question for Ubuntu, but I'm interested in hosting my own OpenID provider through my Rails-based site that already has an identity and authentication system in place.
Note that I'm not looking for the delegate method to use the site as an OpenID.
What's the best way to do this properly?
A:
This "No Shit Guide To Supporting OpenID In Your Applications"
seems to be a step-by-step tutorial for what you want to do.
A:
Railscasts episode 68 OpenID authentication describes how to do exactly this. It's about a year old, so you may have to do some stuff differently. I'd also strongly go for either an updated or newer OpenID plugin (the link for the one in the video is labeled "outdated").
Err, wait, that is to support OpenID authentication in a Rails application you are writing, not to run an OpenID endpoint in Rails. Here is a guide to implementing an OpenID server/endpoint in Rails pretty much from scratch. gem install openid-server might be easier, but you'll learn more implementing it yourself, and the code is pretty simple.
A:
This reminds me that the overview docs for ruby-openid server are still missing. But you can see the example, and until the docs are ported over, see the docs for the python implementation which follows the same object model.
|
How do I implement an OpenID server in Rails?
|
I see a similar question for Ubuntu, but I'm interested in hosting my own OpenID provider through my Rails-based site that already has an identity and authentication system in place.
Note that I'm not looking for the delegate method to use the site as an OpenID.
What's the best way to do this properly?
|
[
"This \"No Shit Guide To Supporting OpenID In Your Applications\"\nseems to be a step-by-step tutorial for what you want to do.\n",
"Railscasts episode 68 OpenID authentication describes how to do exactly this. It's about a year old, so you may have to do some stuff differently. I'd also strongly for either an updated or newer OpenID plugin (the link for the one in the video is labeled \"outdated\").\nErr, wait, that is to support OpenID authentication in a Rails application you are writing, not to have run an OpenID endpoint in rails.. Here is a guide to implimenting an OpenID server/endpoint in Rails pretty-much form scratch.. gem install openid-server might be easier, but you'll learn more implementing it yourself, and the code is pretty simple.\n",
"This reminds me that the overview docs for ruby-openid server are still missing. But you can see the example, and until the docs are ported over, see the docs for the python implementation which follows the same object model.\n"
] |
[
5,
4,
1
] |
[] |
[] |
[
"openid",
"ruby_on_rails"
] |
stackoverflow_0000045277_openid_ruby_on_rails.txt
|
Q:
What is the best way of adding in regularly used blocks of code when marking up in TextMate?
Caveat: I'm relatively new to coding as well as TextMate, so apologies if there is an obvious answer I'm missing here.
I do a lot of HTML/CSS markup, there are certain patterns that I use a lot, for example, forms, navigation menus etc. What I would like is a way to store those patterns and insert them quickly when I need them.
Is there a way to do this using TextMate?
A:
You can do this very easily in TextMate using Snippets. Just add a new snippet in the bundle editor, and set up how you want to trigger it. You can set a key shortcut, or have it pop up when you hit Tab after a certain word/pattern.
There are many things you can do with them—in your case, it would probably be very useful to set so-called "placeholders" in your snippets, which are the parts that change every time (e.g. the names of the fields in the form). Then, as soon as you insert the snippet, you can hit Tab to move between these.
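For example, a small form snippet with placeholders might look like this (a sketch; bind it to a tab trigger such as form):
<form action="${1:/submit}" method="${2:post}">
    <label for="${3:field}">${4:Label}</label>
    <input type="text" name="$3" id="$3" />
    <input type="submit" value="${5:Send}" />
</form>$0
Tab moves through the numbered placeholders in order, mirrored fields like $3 update together, and $0 is where the caret lands at the end.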
A:
Along with the links provided above, I think you'll find this screencast useful. It gives a run through of some of the tools TextMate's HTML bundle already provides.
It's probably slightly off-topic though, but worth a look nonetheless.
A:
As mentioned prior, snippets are what you are looking for.
For reference look here:
http://manual.macromates.com/en/snippets
http://screenflicker.com/mike/code/div-snippets/
|
What is the best way of adding in regularly used blocks of code when marking up in TextMate?
|
Caveat: I'm relatively new to coding as well as TextMate, so apologies if there is an obvious answer I'm missing here.
I do a lot of HTML/CSS markup, there are certain patterns that I use a lot, for example, forms, navigation menus etc. What I would like is a way to store those patterns and insert them quickly when I need them.
Is there a way to do this using TextMate?
|
[
"You can do this very easily in TextMate using Snippets. Just add a new snippet in the bundle editor, and set up how you want to trigger it. You can set a key shortcut, or have it pop up when you hit Tab after a certain word/pattern.\nThere are many things you can do with them—in your case, it would probably be very useful to set so-called \"placeholders\" in your snippets, which are the parts that change every time (e.g. the names of the fields in the form). Then, as soon as you insert the snippet, you can hit Tab to move between these.\n",
"Along with the links provided above, I think you'll find this screencast useful. It gives a run through of some of the tools TextMate's HTML bundle already provides. \nIt's probably slightly off-topic though, but worth a look nonetheless.\n",
"As mentioned prior snippets are what you are looking for.\nFor reference look here: \nhttp://manual.macromates.com/en/snippets\nhttp://screenflicker.com/mike/code/div-snippets/\n"
] |
[
6,
2,
1
] |
[] |
[] |
[
"design_patterns",
"html",
"markup",
"textmate"
] |
stackoverflow_0000043947_design_patterns_html_markup_textmate.txt
|