content stringlengths 86-88.9k | title stringlengths 0-150 | question stringlengths 1-35.8k | answers list | answers_scores list | non_answers list | non_answers_scores list | tags list | name stringlengths 30-130 |
---|---|---|---|---|---|---|---|---|
Q:
How to stop CAS security demands from a FullTrust assembly
I have a FullTrust assembly, Assembly A, which calls a 3rd party component, Assembly B. Is there any way I can, via A.dll.config or in A's code, prevent any CAS demands from propagating up the stack to Assembly B, which does not have FullTrust?
I do not want to alter the machine's security policy, if possible.
A:
You could create a separate appdomain, using the sandboxing API in 2.0. MSDN explains it far better than I can. Of course then you're marshalling over appdomains; but if you want isolation that's the price you have to pay.
|
How to stop CAS security demands from a FullTrust assembly
|
I have a FullTrust assembly, Assembly A, which calls a 3rd party component, Assembly B. Is there any way I can, via A.dll.config or in A's code, prevent any CAS demands from propagating up the stack to Assembly B, which does not have FullTrust?
I do not want to alter the machine's security policy, if possible.
|
[
"You could create a separate appdomain, using the sandboxing API in 2.0. MSDN explains it far better than I can. Of course then you're marshalling over appdomains; but if you want isolation that's the price you have to pay.\n"
] |
[
0
] |
[] |
[] |
[
".net",
"cas",
"full_trust"
] |
stackoverflow_0000075909_.net_cas_full_trust.txt
|
Q:
Database design for a booking application e.g. hotel
I've built one, but I'm convinced it's wrong.
I had a table for customer details, and another table with a record for each date of the stay (i.e. a week's holiday would have seven records).
Is there a better way?
I code in PHP with MySQL
A:
Here you go
I found it at this page:
A list of free database models.
WARNING: Currently (November '11), Google is reporting that site as containing malware: http://safebrowsing.clients.google.com/safebrowsing/diagnostic?client=Firefox&hl=en-US&site=http://www.databaseanswers.org/data_models/hotels/hotel_reservations_popkin.htm
A:
I work in the travel industry and have worked on a number of different PMSs (property management systems). The last one I designed had the row-per-guest-per-night approach and it is the best approach I've come across yet.
Quite often in the industry there are particular pieces of information attached to each night of the stay. For example you need to know the rate for each night of the stay at the time the booking was made. The guest may also move room over the duration of their stay.
Performance-wise it's quicker to do an equals lookup than a range in MySQL, so the startdate/enddate approach would be slower. To do a lookup for a range of dates do "where date in (dates)", as sketched below.
Roughly the schema I used is:
Bookings (id, main-guest-id, arrivaltime, departime,...)
BookingGuests (id, guest-id)
BookingGuestNights (date, room, rate)
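To make the per-night lookup concrete, here is a minimal sketch of the kind of "where date in (dates)" availability check described above. It is written in Python against SQLite purely for illustration (the question uses MySQL, whose drivers use %s placeholders instead of ?), and any table or column names beyond the rough schema above are assumptions.

import sqlite3
from datetime import date, timedelta

def stay_nights(check_in: date, num_nights: int) -> list:
    """Expand a stay into its individual night dates."""
    return [check_in + timedelta(days=n) for n in range(num_nights)]

def room_is_free(conn: sqlite3.Connection, room_id: int,
                 check_in: date, num_nights: int) -> bool:
    """True if no BookingGuestNights row exists for the room on any requested night."""
    wanted = [d.isoformat() for d in stay_nights(check_in, num_nights)]
    placeholders = ",".join("?" for _ in wanted)  # literal "?" marks, safe to build this way
    sql = ("SELECT COUNT(*) FROM BookingGuestNights "
           f"WHERE room = ? AND date IN ({placeholders})")
    (taken,) = conn.execute(sql, [room_id, *wanted]).fetchone()
    return taken == 0

Because every night is its own row, the availability check is a straight equality/IN lookup rather than a range scan, which is the performance point being made here.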
A:
Some questions you need to ask yourself:
Is there a reason you need a record for each day of the stay?
Could you not just have a table for the stay and have an arrival date and either a number of nights or a departure date?
Are there specific bits of data that differ from day to day relating to one customer's stay?
A:
Some things that may break your model. These may not be a problem, but you should check with your client to see if they may occur.
Less than 1 day stays (short midday stays are common at some business hotels, for example)
Late check-outs/early check-ins. If you are just measuring the nights, and not dates/times, you may find it hard to arrange for these, or see potential clashes. One of our clients wanted a four hour gap, not always 10am-2pm.
A:
Wow, thanks for all the answers.
I had thought long and hard about the schema, and went with a record-per-night approach after trying the other way and having difficulty converting it to HTML.
I used CodeIgniter with the built-in Calendar class to display the booking info. Checking if a date was available was easier this way (at least after trying), so I went with it. But I'm convinced that it's not the best way, which is why I asked the question.
And thanks for the DB answers link, too.
Best,
Mei
A:
What's wrong with that? Logging each date that the customer is staying allows for what I'd imagine are fairly standard reports, such as being able to display the number of booked rooms on any given day.
A:
The answer heavily depends on your requirements... But I would expect only storing a record with the start and stop date for their stay is needed. If you explain your question more, we can give you more details.
A:
A tuple-per-day is a bit overkill, I think. A few columns on a "stay" table should suffice.
stay.check_in_time_scheduled
stay.check_in_time_actual
stay.check_out_time_scheduled
stay.check_out_time_actual
A:
Is creating a record for each day a person stays necessary? It should only be necessary if each day is significant; otherwise have a Customer/Guest table to contain the customer details, and a Booking table to contain bookings for guests. The Booking table would contain room, start date, end date, guest (or guests), etc.
If you need to record other things such as activities paid for, or meals, add those in other tables as required.
A:
One possible way to reduce the number of entries for each stay is to store the time-frame, e.g. start date and end date. I need to know the operations you run against the data to give more specific advice.
Generally speaking, if you need to check how many customers are staying on a given date you can do so with a stored procedure.
For some specific operations your design might be good. Even if that's the case I would still hold a "visits" table linking a customer to a unique stay, and a "days-of-visit" table where I would resolve each client's stay to its days.
Asaf.
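For comparison, here is a minimal sketch of the given-date occupancy check described above, using the start/end-date design. Again it is Python against SQLite purely for illustration, and the "visits" table and its arrival/departure column names are assumptions.

import sqlite3
from datetime import date

def guests_staying_on(conn: sqlite3.Connection, day: date) -> int:
    """Count visits whose [arrival, departure) date range covers the given day."""
    sql = "SELECT COUNT(*) FROM visits WHERE arrival <= ? AND departure > ?"
    (count,) = conn.execute(sql, (day.isoformat(), day.isoformat())).fetchone()
    return count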
A:
You're trading off database size with query simplicity (and probably performance)
Your current model gives simple queries, as it's pretty easy to query for the number of guests, vacancies in room X on night n, and so on, but the database size will increase fairly rapidly.
Moving to a start/stop or start/num nights model will make for some ... interesting queries at times :)
So a lot of the choice is to do with your SQL skill level :)
A:
I don't care for the schema in the diagram. It's rather ugly.
Schema Abstract
Table: Visit
The Visit table contains one row for each night stayed in a hotel.
Note: Visit contains
ixVisit
ixCustomer
dt
sNote
Table: Customer
ixCustomer
sFirstName
sLastName
Table: Stay
The Stay table includes one row that describes the entire visit. It is updated every time Visit is updated.
ixStay
dtArrive
dtLeave
sNote
Notes
A web app is two things: SELECT actions and CRUD actions. Most web apps are 99% SELECT, and 1% CRUD. Normalization tends to help CRUD much more than SELECT. You might look at my schema and panic, but it's fast. You will have to do a small amount of extra work for any CRUD activity, but your SELECTS will be so much faster because all of your SELECTS can hit the Stay table.
I like how Jeff Atwood puts it: "Normalize until it hurts, denormalize until it works"
For a website used by a busy hotel manager, how well it works is just as important as how fast it works.
|
Database design for a booking application e.g. hotel
|
I've built one, but I'm convinced it's wrong.
I had a table for customer details, and another table with a record for each date of the stay (i.e. a week's holiday would have seven records).
Is there a better way?
I code in PHP with MySQL
|
[
"Here you go\nI found it at this page:\nA list of free database models.\nWARNING: Currently (November '11), Google is reporting that site as containing malware: http://safebrowsing.clients.google.com/safebrowsing/diagnostic?client=Firefox&hl=en-US&site=http://www.databaseanswers.org/data_models/hotels/hotel_reservations_popkin.htm\n",
"I work in the travel industry and have worked on a number of different PMS's. The last one I designed had the row per guest per night approach and it is the best approach I've come across yet.\nQuite often in the industry there are particular pieces of information to each night of the stay. For example you need to know the rate for each night of the stay at the time the booking was made. The guest may also move room over the duration of their stay.\nPerformance wise it's quicker to do an equals lookup than a range in MySQL, so the startdate/enddate approach would be slower. To do a lookup for a range of dates do \"where date in (dates)\".\nRoughly the schema I used is:\nBookings (id, main-guest-id, arrivaltime, departime,...)\n\nBookingGuests (id, guest-id)\n\nBookingGuestNights (date, room, rate)\n\n",
"Some questions you need to ask yourself:\n\nIs there a reason you need a record for each day of the stay?\nCould you not just have a table for the stay and have an arrival date and either a number of nights or a departure date?\nIs there specific bits of data that differ from day to day relating to one customer's stay?\n\n",
"Some things that may break your model. These may not be a problem, but you should check with your client to see if they may occur.\n\nLess than 1 day stays (short midday stays are common at some business hotels, for example)\nLate check-outs/early check-ins. If you are just measuring the nights, and not dates/times, you may find it hard to arrange for these, or see potential clashes. One of our clients wanted a four hour gap, not always 10am-2pm.\n\n",
"Wow, thanks for all the answers. \nI had thought long and hard about the schema, and went with a record=night approach after trying the other way and having difficulty in converting to html.\nI used CodeIgniter with the built in Calendar Class to display the booking info. Checking if a date was available was easier this way (at least after trying), so I went with it. But I'm convinced that it's not the best way, which is why I asked the question.\nAnd thanks for the DB answers link, too.\nBest,\nMei\n",
"What's wrong with that? logging each date that the customer is staying allows for what I'd imagine are fairly standard reports such as being able to display the number of booked rooms on any given day.\n",
"The answer heavily depends on your requirements... But I would expect only storing a record with the start and stop date for their stay is needed. If you explain your question more, we can give you more details.\n",
"A tuple-per-day is a bit overkill, I think. A few columns on a \"stay\" table should suffice.\nstay.check_in_time_scheduled\nstay.check_in_time_actual\nstay.check_out_time_scheduled\nstay.check_out_time_actual\n\n",
"Is creating a record for each day a person stays neccessary? It should only be neccessary if each day is significant, otherwise have a Customer/Guest table to contain the customer details, a Booking table to contain bookings for guests. Booking table would contain room, start date, end date, guest (or guests), etc.\nIf you need to record other things such as activities paid for, or meals, add those in other tables as required.\n",
"One possible way to reduce the number of entries for each stay is, store the time-frame e.g. start-date and end-date. I need to know the operations you run against the data to give a more specific advice.\nGenerally speaking, if you need to check how many customers are staying on a given date you can do so with a stored procedure. \nFor some specific operations your design might be good. Even if that's the case I would still hold a \"visits\" table linking a customer to a unique stay, and a \"days-of-visit\" table where I would resolve each client's stay to its days.\nAsaf.\n",
"You're trading off database size with query simplicity (and probably performance)\nYour current model gives simple queries, as its pretty easy to query for number of guests, vacancies in room X on night n, and so on, but the database size will increase fairly rapidly.\nMoving to a start/stop or start/num nights model will make for some ... interesting queries at times :)\nSo a lot of the choice is to do with your SQL skill level :)\n",
"I don't care for the schema in the diagram. It's rather ugly.\nSchema Abstract\nTable: Visit\nThe Visit table contains one row for each night stayed in a hotel.\nNote: Visit contains\n\nixVisit\nixCusomer\ndt\nsNote\n\nTable: Customer\n\nixCustomer\nsFirstName\nsLastName\n\nTable: Stay\nThe Stay table includes one row that describes the entire visit. It is updated everytime Visit is udpated.\n\nixStay\ndtArrive\ndtLeave\nsNote\n\nNotes\nA web app is two things: SELECT actions and CRUD actions. Most web apps are 99% SELECT, and 1% CRUD. Normalization tends to help CRUD much more than SELECT. You might look at my schema and panic, but it's fast. You will have to do a small amount of extra work for any CRUD activity, but your SELECTS will be so much faster because all of your SELECTS can hit the Stay table.\nI like how Jeff Atwood puts it: \"Normalize until it hurts, denormalize until it works\"\nFor a website used by a busy hotel manager, how well it works is just as important as how fast it works.\n"
] |
[
6,
3,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"mysql",
"php"
] |
stackoverflow_0000067421_mysql_php.txt
|
Q:
How do you measure downstream bandwidth per user?
In a Linux-based system (specifically, Ubuntu Server 8.04), how can you measure downstream bandwidth on a per-user basis? Optimally, I would like a method that provides data directly instead of having to run another process and parse its output. I have a technique for measuring per-user upstream bandwidth by setting up one iptables filter per user and checking their counters at regular intervals, but this doesn't seem to be supported for downstream connections, which I assume is because iptables checks packets before they are routed to a process.
[edit 20080916 153434 EST] By "per-user", I mean literal accounts on the system. That is, any account with a real POSIX UID owning actual running processes. So, for Ubuntu Server 8.04, measurements would include values for root, www-data, my own account, etc.
A:
I don't see any obvious way to do this as implemented, but I'd expect it would be something you could do with a custom ip_conntrack module. You'd capture the uid when first creating the conntrack entry, then apply that same uid in both directions.
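For reference, the upstream-side accounting the question already describes (one iptables rule per UID, counters polled at intervals) can be sketched roughly as below; the owner match only sees locally generated traffic, which is exactly why the same trick does not cover downstream. This is a Python sketch with an assumed chain layout and output parsing, not part of the answer above.

import subprocess

def add_counter_rule(uid: int) -> None:
    """Append an OUTPUT rule that only counts traffic generated by this UID."""
    # A rule with no -j target never changes packet fate; it just counts.
    subprocess.run(["iptables", "-A", "OUTPUT", "-m", "owner",
                    "--uid-owner", str(uid)], check=True)

def read_counters() -> list:
    """Return (packets, bytes) for each OUTPUT rule, in rule order."""
    out = subprocess.run(["iptables", "-nvxL", "OUTPUT"], check=True,
                         capture_output=True, text=True).stdout
    counters = []
    for line in out.splitlines()[2:]:  # skip the chain header and column header
        fields = line.split()
        if len(fields) >= 2 and fields[0].isdigit():
            counters.append((int(fields[0]), int(fields[1])))
    return counters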
|
How do you measure downstream bandwidth per user?
|
In a Linux-based system (specifically, Ubuntu Server 8.04), how can you measure downstream bandwidth on a per-user basis? Optimally, I would like a method that provides data directly instead of having to run another process and parse its output. I have a technique for measuring per-user upstream bandwidth by setting up one iptables filter per user and checking their counters at regular intervals, but this doesn't seem to be supported for downstream connections, which I assume is because iptables checks packets before they are routed to a process.
[edit 20080916 153434 EST] By "per-user", I mean literal accounts on the system. That is, any account with a real POSIX UID owning actual running processes. So, for Ubuntu Server 8.04, measurements would include values for root, www-data, my own account, etc.
|
[
"I don't see any obvious way to do this as implemented, but I'd expect it would be something you could do with a custom ip_conntrack module. You'd capture the uid when first creating the conntrack entry, then apply that same uid in both directions.\n"
] |
[
1
] |
[] |
[] |
[
"bandwidth",
"inbound",
"linux"
] |
stackoverflow_0000076029_bandwidth_inbound_linux.txt
|
Q:
How can I save some JavaScript state information back to my server onUnload?
I have an ExtJS grid on a web page and I'd like to save some of its state information back to the server when the user leaves the page.
Can I do this with an Ajax request onUnload?
If not, what's a better solution?
A:
You can use an Ajax request, but be sure to make it a synchronous request rather than an asynchronous one. Alternatively, simply save state whenever the user makes a change; this also protects the data if the user's browser crashes.
A:
There's an answer above that says to use a synchronous ajax call, and that is the best case scenario. The problem is that unload doesn't work everywhere. If you look here you'll find some tricks to help you get unload events in safari... You could also use Google Gears to save content user side for situations where the user will be coming back, but the only fully safe way to keep that information is to continuously send it as long as the user is on the page or making changes.
A:
You could also set a cookie using javascript on unload. I think the advantage ajax has over cookies is that you have the data available to you for reporting and the user (if logged in) can utilise the data across different machines.
The disadvantage of using ajax is that it might slow down the actual closing of the browser window, which could be annoying if the server is slow to respond.
A:
It depends on how the user leaves the page.
If there is a 'logoff' button in your GUI, you can trigger an ajax request when the user clicks on this button.
Otherwise I do not think it is a good idea to make a request in the onUnload. As said earlier you would have to make a synchronous request...
An alternative to the cookie solution would be a hidden text field. This is a technique usually used by tools such as RSH that deal with the history issues that come with Ajax.
|
How can I save some JavaScript state information back to my server onUnload?
|
I have an ExtJS grid on a web page and I'd like to save some of its state information back to the server when the user leaves the page.
Can I do this with an Ajax request onUnload?
If not, what's a better solution?
|
[
"You can use an Ajax request, but be sure to make it a synchronous request rather than an asychronous one. Alternatively, simply save state whenever the user makes a change, this also protects the data if the user's browser crashes.\n",
"There's an answer above that says to use a synchronous ajax call, and that is the best case scenario. The problem is that unload doesn't work everywhere. If you look here you'll find some tricks to help you get unload events in safari... You could also use Google Gears to save content user side for situations where the user will be coming back, but the only fully safe way to keep that information is to continuously send it as long as the user is on the page or making changes.\n",
"You could also set a cookie using javascript on unload. I think the advantage ajax has over cookies is that you have the data available to you for reporting and the user (if logged in) can utilise the data across different machines.\nThe disadvantage of using ajax is that it might slow down the actual closing of the browser window, which could be annoying if the server is slow to respond.\n",
"It depends on how the user leaves the page.\nIf there is a 'logoff' button in your GUI, you can trigger an ajax request when the user clicks on this button.\nOtherwise I do not think it is a good idea to make a request in the onUnload. As said earlier you would have to make a synchronous request...\nAn alternative to the cookie solution would be an hidden text field. This is a technique usually used by tools such as RSH that deal with history issues that come with ajax.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"ajax",
"extjs",
"javascript"
] |
stackoverflow_0000073335_ajax_extjs_javascript.txt
|
Q:
Open Source Database Plugin For Eclipse?
Does anyone know of a good open source plugin for database querying and exploring within Eclipse?
The active Database Exploring plugin within Eclipse is really geared around being associated with a Java project, while I am just trying to run ad-hoc queries and explore the schema. I am effectively looking for just a common, quick querying tool without the overhead of having to create a code project. I have found a couple of open source database plugins for Eclipse, but these have not seen active development in over a year.
Any suggestions?
A:
I use SQL Explorer.
It comes as an Eclipse plugin or standalone.
http://eclipsesql.sourceforge.net/
A:
I use Quantum DB, and it seems to work quite well.
http://quantum.sourceforge.net/
|
Open Source Database Plugin For Eclipse?
|
Does anyone know of a good open source plugin for database querying and exploring within Eclipse?
The active Database Exploring plugin within Eclipse is really geared around being associated with a Java project, while I am just trying to run ad-hoc queries and explore the schema. I am effectively looking for just a common, quick querying tool without the overhead of having to create a code project. I have found a couple of open source database plugins for Eclipse, but these have not seen active development in over a year.
Any suggestions?
|
[
"I use SQL Explorer.\nIt comes as an Eclipse plugin or standalone.\nhttp://eclipsesql.sourceforge.net/\n",
"I use Quantum DB, and it seems to work quite well. \nhttp://quantum.sourceforge.net/\n"
] |
[
1,
1
] |
[] |
[] |
[
"database",
"eclipse",
"ide"
] |
stackoverflow_0000075270_database_eclipse_ide.txt
|
Q:
What do you call the tags in Subversion and CVS that add automatic content?
Things like $log$ and $version$ which add data upon check-in to the file. I'm interested in seeing the other ones and what information they can provide, but I can't get much info unless I know what they are called.
A:
Both Subversion and CVS call them Keywords.
Have a look in the SVN manual here (scroll down to svn:keywords) or here for CVS.
A:
In SVN these are simply called "properties". You can read about them in the SVN book:
http://svnbook.red-bean.com/en/1.8/svn.advanced.props.html
Err, so, are they called properties or keywords? Oh, I see. In SVN you can associate arbitrary metadata, called "properties", with versioned files; some of the properties you can set are to set up keyword substitution in the files themselves.
A:
These are Keyword substitutions. The link to SVNBook 1.8 is here: http://svnbook.red-bean.com/en/1.8/svn.advanced.props.special.keywords.html.
Subversion's built-in keywords are:
Date / LastChangedDate
Revision / Rev / LastChangedRevision
Author / LastChangedBy
HeadURL / URL
Id
The keywords are case sensitive, and remember to surround them with $.
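As a small illustration (using a Python source file only because keyword expansion works in any text file), the anchors go in the file wrapped in $ signs and stay inert until the svn:keywords property is set on that file:

# Enable expansion first, e.g.:  svn propset svn:keywords "Id Revision Author Date HeadURL" example.py
# $Id$
# $Revision$
# $Author$
# $Date$
# $HeadURL$
# After the next commit, Subversion rewrites each anchor in place,
# e.g. $Revision$ becomes something like $Revision: 1234 $.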
|
What do you call the tags in Subversion and CVS that add automatic content?
|
Things like $log$ and $version$ which add data upon check-in to the file. I'm interested in seeing the other ones and what information they can provide, but I can't get much info unless I know what they are called.
|
[
"Both Subversion and CVS call them Keywords.\nHave a look in the SVN manual here (scroll down to svn:keywords) or here for CVS.\n",
"In SVN these are simply called \"properties\". You can read about them in the SVN book:\nhttp://svnbook.red-bean.com/en/1.8/svn.advanced.props.html\n\nErr, so, are they called properties or keywords? Oh, I see. In SVN you can associate arbitrary metadata, called \"properties\", with versioned files; some of the properties you can set are to set up keyword substitution in the files themselves.\n",
"These are Keyword substitutions. The link to SVNBook 1.8 is here: http://svnbook.red-bean.com/en/1.8/svn.advanced.props.special.keywords.html.\nSubversion's built-in keywords are:\n\nDate / LastChangedDate\nRevision / Rev / LastChangedRevision\nAuthor / LastChangedBy\nHeadURL / URL\nId\n\nThe keywords are case sensitive, and remember to surround them with $.\n"
] |
[
7,
1,
0
] |
[] |
[] |
[
"cvs",
"svn",
"tags"
] |
stackoverflow_0000039770_cvs_svn_tags.txt
|
Q:
In JSTL/JSP, given a java.util.Date, how do I find the next day?
On a JSTL/JSP page, I have a java.util.Date object from my application. I need to find the day after the day specified by that object. I can use <jsp:scriptlet> to drop into Java and use java.util.Calendar to do the necessary calculations, but this feels clumsy and inelegant to me.
Is there some way to use JSP or JSTL tags to achieve this end without having to switch into full-on Java, or is the latter the only way to accomplish this?
A:
I'm not a fan of putting java code in your jsp.
I'd use a static method and a taglib to accomplish this.
Just my idea though. There are many ways to solve this problem.
public static Date addDay(Date date){
//TODO you may want to check for a null date and handle it.
Calendar cal = Calendar.getInstance();
cal.setTime (date);
cal.add (Calendar.DATE, 1);
return cal.getTime();
}
functions.tld
<?xml version="1.0" encoding="UTF-8" ?>
<taglib xmlns="http://java.sun.com/xml/ns/j2ee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-jsptaglibrary_2_0.xsd"
version="2.0">
<description>functions library</description>
<display-name>functions</display-name>
<tlib-version>1.1</tlib-version>
<short-name>xfn</short-name>
<uri>http://yourdomain/functions.tld</uri>
<function>
<description>
Adds 1 day to a date.
</description>
<name>addDay</name>
<function-class>Functions</function-class>
<function-signature>java.util.Date addDay(java.util.Date)</function-signature>
<example>
${xfn:addDay(date)}
</example>
</function>
</taglib>
A:
While this does not answer your initial question, you could perhaps eliminate the hassle of going through java.util.Calendar by doing this:
// Date d given
d.setTime(d.getTime()+86400000);
A:
You have to either use a scriptlet or write your own tag. For the record, using Calendar would look like this:
Calendar cal = Calendar.getInstance();
cal.setTime (date);
cal.add (Calendar.DATE, 1);
date = cal.getTime ();
Truly horrible.
A:
Unfortunately there is no tag in the standard JSP/JSTL libraries that I know of that would allow you to do this date calculation.
The simplest, and most inelegant, solution is to just use some scriptlet code to do the calculation. You've already stated that you think this is a clunky solution, and I agree with you. I would probably write a custom JSP taglib to get this if I were you.
A:
In general, I think JSPs should not have data logic. They should get all the data they need to display from the Controller and all their logic should be about HOW the data is displayed, not WHAT is displayed. This is usually a lot simpler and a lot less code/XML than adding a custom tag.
And if there isn't any re-use happening, is a tiny scriptlet really that much worse than the taglib XML?
|
In JSTL/JSP, given a java.util.Date, how do I find the next day?
|
On a JSTL/JSP page, I have a java.util.Date object from my application. I need to find the day after the day specified by that object. I can use <jsp:scriptlet> to drop into Java and use java.util.Calendar to do the necessary calculations, but this feels clumsy and inelegant to me.
Is there some way to use JSP or JSTL tags to achieve this end without having to switch into full-on Java, or is the latter the only way to accomplish this?
|
[
"I'm not a fan of putting java code in your jsp.\nI'd use a static method and a taglib to accomplish this.\nJust my idea though. There are many ways to solve this problem.\npublic static Date addDay(Date date){\n //TODO you may want to check for a null date and handle it.\n Calendar cal = Calendar.getInstance();\n cal.setTime (date);\n cal.add (Calendar.DATE, 1);\n return cal.getTime();\n}\n\nfunctions.tld\n<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<taglib xmlns=\"http://java.sun.com/xml/ns/j2ee\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-jsptaglibrary_2_0.xsd\"\n version=\"2.0\">\n <description>functions library</description>\n <display-name>functions</display-name>\n <tlib-version>1.1</tlib-version>\n <short-name>xfn</short-name>\n <uri>http://yourdomain/functions.tld</uri>\n <function>\n <description>\n Adds 1 day to a date.\n </description>\n <name>addDay</name>\n <function-class>Functions</function-class>\n <function-signature>java.util.Date addDay(java.util.Date)</function-signature>\n <example>\n ${xfn:addDay(date)}\n </example>\n </function>\n</taglib>\n\n",
"While this does not answer your initial question, you could perhaps eliminate the hassle of going through java.util.Calendar by doing this:\n// Date d given\nd.setTime(d.getTime()+86400000);\n\n",
"You have to either use a scriptlet or write your own tag. For the record, using Calendar would look like this:\nCalendar cal = Calendar.getInstance();\ncal.setTime (date);\ncal.add (Calendar.DATE, 1);\ndate = cal.getTime ();\n\nTruly horrible.\n",
"Unfortunately there is no tag in the standard JSP/JSTL libraries that I know of that would allow you to do this date calculation.\nThe simplest, and most inelegant, solution is to just use some scriptlet code to do the calculation. You've already stated that you think this is a clunky solution, and I agree with you. I would probably write a custom JSP taglib to get this if I were you.\n",
"In general, I think JSPs should not have data logic. They should get all the data they need to display from the Controller and all their logic should be about HOW the data is displayed, not WHAT is displayed. This is usually a lot simpler and a lot less code/XML than adding a custom tag.\nAnd if there isn't any re-use happening, is a tiny scriptlet really that much worse than the taglib XML?\n"
] |
[
6,
2,
2,
1,
1
] |
[] |
[] |
[
"java",
"jsp",
"jstl"
] |
stackoverflow_0000074248_java_jsp_jstl.txt
|
Q:
HQL querying columns in a set
Is it possible to reach the individual columns of table2 using HQL with a configuration like this?
<hibernate-mapping>
<class table="table1">
<set name="table2" table="table2" lazy="true" cascade="all">
<key column="result_id"/>
<many-to-many column="group_id"/>
</set>
</class>
</hibernate-mapping>
A:
They're just properties of table1's table2 property.
select t1.table2.property1, t1.table2.property2, ... from table1 as t1
You might have to join, like so
select t2.property1, t2.property2, ...
from table1 as t1
inner join t1.table2 as t2
Here's the relevant part of the hibernate doc.
A:
You can query on them, but you can't make it part of the where clause. E.g.,
select t1.table2.x from table1 as t1
would work, but
select t1 from table1 as t1 where t1.table2.x = foo
would not.
A:
Let's say table2 has a column "color varchar(128)" and this column is properly mapped to Hibernate.
You should be able to do something like this:
from table1 where table2.color = 'red'
This will return all table1 rows that are linked to a table2 row whose color column is 'red'. Note that in your Hibernate mapping, your set has the same name as the table it references. The above query uses the name of the set, not the name of the table.
|
HQL querying columns in a set
|
Is it possible to reach the individual columns of table2 using HQL with a configuration like this?
<hibernate-mapping>
<class table="table1">
<set name="table2" table="table2" lazy="true" cascade="all">
<key column="result_id"/>
<many-to-many column="group_id"/>
</set>
</class>
</hibernate-mapping>
|
[
"They're just properties of table1's table2 property.\nselect t1.table2.property1, t1.table2.property2, ... from table1 as t1\n\nYou might have to join, like so\nselect t2.property1, t2.property2, ... \n from table1 as t1\n inner join t1.table2 as t2\n\nHere's the relevant part of the hibernate doc.\n",
"You can query on them, but you can't make it part of the where clause. E.g.,\nselect t1.table2.x from table1 as t1\n\nwould work, but\nselect t1 from table1 as t1 where t1.table2.x = foo\n\nwould not.\n",
"Let's say table2 has a column \"color varchar(128)\" and this column is properly mapped to Hibernate.\nYou should be able to do something like this:\nfrom table1 where table2.color = 'red'\n\nThis will return all table1 rows that are linked to a table2 row whose color column is 'red'. Note that in your Hibernate mapping, your set has the same name as the table it references. The above query uses the name of the set, not the name of the table.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"hibernate",
"hql",
"java"
] |
stackoverflow_0000075245_hibernate_hql_java.txt
|
Q:
Why does the Eclipse code formatter break in a Javadoc @see tag?
I'm using Eclipse 3.4 and have configured the Java code formatter with all of the options on the Comments tab enabled. The problem is that when I format a document comment that contains:
* @see <a href="test.html">test</a>
the code formatter inserts a space in the closing HTML, breaking it:
* @see <a href="test.html">test< /a>
Why? How do I stop this happening?
This is not fixed by disabling any of the options on the Comments tab, such as Format HTML tags. The only work-around I found is to disable Javadoc formatting completely by disabling both the Enable Javadoc comment formatting and Enable block comment formatting options, which means I then have to format comment blocks manually.
A:
I can only assume it's a bug in Eclipse. It only happens with @see tags, and it happens for all 3 built-in code formatter settings.
There are some interesting bugs reported already in the neighbourhood, but I couldn't find this specific one. See for example a search for @see in the Eclipse Bugzilla.
A:
Strict XML specifications require that the self closing tags should have a space before the closing slash like so:
<gcServer enabled="true" /> <!-- note the space just after "true" -->
I can only assume, like Bart said, that there is a bug in Eclipse's reformatter that thinks the closing tag is actually a self-closing tag. Another idea: Can you verify that your a tags are balanced (i.e. no unclosed tags higher up in the document)?
A:
This could be a bug in Eclipse 3.4. I'm using 3.3 (M20080221-1800), and do not observe this behavior.
|
Why does the Eclipse code formatter break in a Javadoc @see tag?
|
I'm using Eclipse 3.4 and have configured the Java code formatter with all of the options on the Comments tab enabled. The problem is that when I format a document comment that contains:
* @see <a href="test.html">test</a>
the code formatter inserts a space in the closing HTML, breaking it:
* @see <a href="test.html">test< /a>
Why? How do I stop this happening?
This is not fixed by disabling any of the options on the Comments tab, such as Format HTML tags. The only work-around I found is to disable Javadoc formatting completely by disabling both the Enable Javadoc comment formatting and Enable block comment formatting options, which means I then have to format comment blocks manually.
|
[
"I can only assume it's a bug in Eclipse. It only happens with @see tags, it happens also for all 3 builtin code formatter settings.\nThere are some interesting bugs reported already in the neighbourhood, but I couldn't find this specific one. See for example a search for @see in the Eclipse Bugzilla.\n",
"Strict XML specifications require that the self closing tags should have a space before the closing slash like so:\n<gcServer enabled=\"true\" /> <!-- note the space just after \"true\" -->\n\nI can only assume, like Bart said, that there is a bug in Eclipse's reformatter that thinks the closing tag is actually a self-closing tag. Another idea: Can you verify that your a tags are balanced (i.e. no unclosed tags higher up in the document)?\n",
"This could be a bug in Eclipse 3.4. I'm using 3.3 (M20080221-1800), and do not observe this behavior. \n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"eclipse",
"eclipse_3.4",
"java",
"javadoc"
] |
stackoverflow_0000045414_eclipse_eclipse_3.4_java_javadoc.txt
|
Q:
Deleting certain classes on running an external tool in eclipse?
I've set up an external tool (sablecc) in Eclipse (3.4) that generates a bunch of classes in the current project. I need to run this tool and regenerate these classes fairly frequently. This means that every time I want to run sablecc, I have to manually delete the packages/classes that sablecc creates in order to ensure that I don't have conflicts between the old and new generated classes. Is there some easy way to automate this from within Eclipse or otherwise?
A:
Not sure if I understand your point right; I suppose you need to delete the old classes before running sablecc because some of them might not be created in the new run.
It is probably best to write a short Ant build.xml with a target that first removes the classes (Ant delete task) and then runs sablecc (Ant exec task). It is also possible to preset Eclipse so that it refreshes the workspace after Ant finishes.
Put the build.xml anywhere in the project, right-click, Run As/Ant Build.
Just for the sake of clean style, you could then call sablecc with its Ant task (implemented by org.sablecc.ant.taskdef), instead of running it externally in a new process.
A:
You can tell Eclipse to refresh the workspace (or parts of it) after an external tool has been run. This should force Eclipse to detect any new/deleted classes.
A:
JesperE is referring to the option Refresh->Refresh resources on completion in your external tools configuration for running sablecc.
|
Deleting certain classes on running an external tool in eclipse?
|
I've set an external tool (sablecc) in eclipse (3.4) that generates a bunch of classes in the current project. I need to run this tool and regenerate these classes fairly frequently. This means that every time I want to run sablecc, I have to manually delete the packages/classes that sablecc creates in order to ensure that I don't have conflicts between the old and new generated classes. Is there some easy way to automate this from within eclipse or otherwise?
|
[
"Not sure if I understand your point right, I suppose you need to delete old classes before running sablecc because some of them would not be eventually created in new run.\nIt is probably best to write short Ant build.xml with the target, which first removes the classes (Ant delete task) and then runs sablecc (Ant exec task). It is also possible to preset eclipse so that it refreshes workspace after Ant finishes.\nPut the build.xml anywhere to project, right click, Run As/Ant Build.\nJust for the sake of the clean style, you could then call sablecc with its Ant task (implemented by org.sablecc.ant.taskdef), instead of running it externally in new process.\n",
"You can tell Eclipse to refresh the workspace (or parts of it) after an external tool has been run. This should force Eclipse to detect any new/deleted classes.\n",
"JesperE is referring to the option Refresh->Refresh resources on completion in your external tools configuration for running sablecc.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"eclipse",
"sablecc"
] |
stackoverflow_0000075411_eclipse_sablecc.txt
|
Q:
How to convert a FLV file recorded with Red5 / FMS to MP3?
I'm looking for a way to extract the audio part of a FLV file.
I'm recording from the user's microphone and the audio is encoded using the Nellymoser Asao Codec. This is the default codec and there's no way to change this.
A:
FFmpeg is the way to go!
It worked for me with SVN Rev 14277.
The command I used is: ffmpeg -i source.flv -vn -f mp3 destination.mp3
GOTCHA:
If you get this error message: Unsupported audio codec (n),
check the FLV Spec in the Audio Tags section.
FFmpeg can decode n=6 (Nellymoser).
But for n=4 (Nellymoser 8-kHz mono) and n=5 (Nellymoser 16-kHz mono) it doesn't work.
To fix this, use the default microphone rate when recording your streams; otherwise FFmpeg is unable to decode them.
Hope this helps!
A:
This isn't an exact answer, but some relevant notes I've made from investigating FLV files for a business requirement.
Most FLV audio is encoded in the MP3 format, meaning you can extract it directly from the FLV container. If the FLV was created from someone recording from their microphone, the audio is encoded with the Nellymoser Asao codec, which is proprietary (IIRC).
I'd check out libavcodec, which handles FLV/MP3/Nellymoser natively, and should let you get to the audio.
A:
I'm currently using FFmpeg version SVN-r12665 for this, with no problems (the console version, without any wrapper library). There are some caveats to using console applications from non-console .NET environments, but it's all fairly straightforward. Using the libavcodec DLL directly is much more cumbersome.
A:
I was going to recommend this: http://code.google.com/hosting/takenDown?project=nelly2pcm&notice=7281.
But it's been taken down. Glad I got a copy first :-)
|
How to convert a FLV file recorded with Red5 / FMS to MP3?
|
I'm looking for a way to extract the audio part of a FLV file.
I'm recording from the user's microphone and the audio is encoded using the Nellymoser Asao Codec. This is the default codec and there's no way to change this.
|
[
"ffMpeg is the way to go !\nIt worked for me with SVN Rev 14277.\nThe command I used is : ffmpeg -i source.flv -nv -f mp3 destination.mp3\nGOTCHA :\nIf you get this error message : Unsupported audio codec (n), \ncheck the FLV Spec in the Audio Tags section. \nffMpeg can decode n=6 (Nellymoser).\nBut for n=4 (Nellymoser 8-kHz mono) and n=5 (Nellymoser 16-kHz mono) it doesn't work.\nTo fix this use the default microphone rate when recording your streams, overwise ffMpeg is unable to decode them.\nHope this helps !\n",
"This isn't an exact answer, but some relevant notes I've made from investigating FLV files for a business requirement.\nMost FLV audio is encoded in the MP3 format, meaning you can extract it directly from the FLV container. If the FLV was created from someone recording from their microphone, the audio is encoded with the Nellymoser Asao codec, which is proprietary (IIRC). \nI'd check out libavcodec, which handles FLV/MP3/Nellymoser natively, and should let you get to the audio.\n",
"I'm currently using FFmpeg version SVN-r12665 for this, with no problems (the console version, without any wrapper library). There are some caveats to using console applications from non-console .NET environments, but it's all fairly straightforward. Using the libavcodec DLL directly is much more cumbersome.\n",
"I was going to recommend this: http://code.google.com/hosting/takenDown?project=nelly2pcm¬ice=7281.\nBut its been taken down. Glad I got a copy first :-)\n"
] |
[
6,
2,
1,
0
] |
[] |
[] |
[
"flv",
"mp3"
] |
stackoverflow_0000067647_flv_mp3.txt
|
Q:
How do you place a textbox object over a specific Cell when automating Excel?
We are automating Excel using VB.Net, and trying to place multiple lines of text on an Excel worksheet that we can set to not print. Between these we would have printable reports.
We can do this if we add textbox objects, and set the print object setting to false. (If you have another way, please direct me)
The code to add a textbox is:
ActiveSheet.Shapes.AddTextbox(msoTextOrientationHorizontal, 145.5, 227.25, 304.5, 21#)
but the positioning is in points. We need a way to place it over a specific cell, and size it with the cell. How can we find out where to put it when we just know which cell to put it over?
A:
If you have the cell name or position, you can do:
With ActiveSheet
.Shapes.AddTextbox msoTextOrientationHorizontal, .Cells(3,2).Left, .Cells(3,2).Top, .Cells(3,2).Width, .Cells(3,2).Height
End With
This will add a textbox over cell B3. When B3 is resized, the textbox is also.
A:
When you copy & paste a textbox, Excel will place the new textbox over whichever cell is currently selected. So you can achieve this very easily by simply using the VBA copy & paste commands. This can be particularly useful if you are going to be using a lot of very similar textboxes, as you are effectively creating a textbox template.
|
How do you place a textbox object over a specific Cell when automating Excel?
|
We are automating Excel using VB.Net, and trying to place multiple lines of text on an Excel worksheet that we can set to not print. Between these we would have printable reports.
We can do this if we add textbox objects, and set the print object setting to false. (If you have another way, please direct me)
The code to add a textbox is:
ActiveSheet.Shapes.AddTextbox(msoTextOrientationHorizontal, 145.5, 227.25, 304.5, 21#)
but the positioning is in points. We need a way to place it over a specific cell, and size it with the cell. How can we find out where to put it when we just know which cell to put it over?
|
[
"If you have the cell name or position, you can do:\nWith ActiveSheet\n .Shapes.AddTextbox msoTextOrientationHorizontal, .Cells(3,2).Left, .Cells(3,2).Top, .Cells(3,2).Width, .Cells(3,2).Height\nEnd With\n\nThis will add a textbox over cell B3. When B3 is resized, the textbox is also.\n",
"When you copy & paste a textbox, Excel will place the new textbox over whichever cell is currently selected. So you can achieve this very easily by simply using the VBA copy & paste commands. This can be particularly useful if you are going to be using a lot of very similar textboxes, as you are effectively creating a textbox template.\n"
] |
[
9,
0
] |
[] |
[] |
[
"automation",
"excel"
] |
stackoverflow_0000066934_automation_excel.txt
|
Q:
How do you reliably get the Quick Launch folder in XP and Vista?
We need to reliably get the Quick Launch folder for both All and Current users under both Vista and XP. I'm developing in C++, but this is probably more of a general Windows API question.
For reference, here is code to get the Application Data folder under both systems:
HRESULT hres;
CString basePath;
hres = SHGetSpecialFolderPath(this->GetSafeHwnd(), basePath.GetBuffer(MAX_PATH), CSIDL_APPDATA, FALSE);
basePath.ReleaseBuffer();
I suspect this is just a matter of knowing which sub-folder Microsoft uses.
Under Windows XP, the app data subfolder is:
Microsoft\Internet Explorer\Quick Launch
Under Vista, it appears that the sub-folder has been changed to:
Roaming\Microsoft\Internet Explorer\Quick Launch
but I'd like to make sure that this is the correct way to determine the correct location.
Finding the correct way to determine this location is quite important, as relying on hard coded folder names almost always breaks as you move into international installs, etc... The fact that the folder is named 'Roaming' in Vista makes me wonder if there is some special handling related to that folder (akin to the Local Settings folder under XP).
EDIT:
The following msdn article: http://msdn.microsoft.com/en-us/library/bb762494.aspx indicates that CSIDL_APPDATA has an equivalent ID of FOLDERID_RoamingAppData, which does seem to support StocksR's assertion that CSIDL_APPDATA does return C:\Users\xxxx\AppData\Roaming, so it should be possible to use the same relative path for CSIDL_APPDATA to get to quick launch (\Microsoft\Internet Explorer\Quick Launch).
So the following algorithm is correct per MS:
HRESULT hres;
CString basePath;
hres = SHGetSpecialFolderPath(this->GetSafeHwnd(), basePath.GetBuffer(MAX_PATH), CSIDL_APPDATA, FALSE);
basePath.ReleaseBuffer();
CString qlPath = basePath + "\\Microsoft\\Internet Explorer\\Quick Launch";
it would also be a good idea to check hres to ensure that the call to SHGetSpecialFolderPath was successful.
A:
AppData on Vista refers to C:\Users\xxxx\AppData\Roaming, not the C:\Users\xxxx\AppData folder itself.
Also, this article http://www.microsoft.com/technet/scriptcenter/resources/qanda/sept05/hey0901.mspx on a Microsoft site implies that you simply have to use the path relative to the AppData folder.
A:
Great question!
Whatever you do, don't give in to the temptation to dig into the registry to find this info!
Also, we must resist the temptation to hard-code some path, even partially. If we get the special AppData path and then simply append a string onto the end, this may break under non-US installs of the software where the folder name is localized to that language. E.g. GetSpecialFolderPath(APP_DATA) + "\\Fonts" will not work on non-English versions of Windows.
Hopefully someone has the proper answer to your question; I'm curious to know it myself!
|
How do you reliably get the Quick Launch folder in XP and Vista?
|
We need to reliably get the Quick Launch folder for both All and Current users under both Vista and XP. I'm developing in C++, but this is probably more of a general Windows API question.
For reference, here is code to get the Application Data folder under both systems:
HRESULT hres;
CString basePath;
hres = SHGetSpecialFolderPath(this->GetSafeHwnd(), basePath.GetBuffer(MAX_PATH), CSIDL_APPDATA, FALSE);
basePath.ReleaseBuffer();
I suspect this is just a matter of knowing which sub-folder Microsoft uses.
Under Windows XP, the app data subfolder is:
Microsoft\Internet Explorer\Quick Launch
Under Vista, it appears that the sub-folder has been changed to:
Roaming\Microsoft\Internet Explorer\Quick Launch
but I'd like to make sure that this is the correct way to determine the correct location.
Finding the correct way to determine this location is quite important, as relying on hard coded folder names almost always breaks as you move into international installs, etc... The fact that the folder is named 'Roaming' in Vista makes me wonder if there is some special handling related to that folder (akin to the Local Settings folder under XP).
EDIT:
The following msdn article: http://msdn.microsoft.com/en-us/library/bb762494.aspx indicates that CSIDL_APPDATA has an equivalent ID of FOLDERID_RoamingAppData, which does seem to support StocksR's assertion that CSIDL_APPDATA does return C:\Users\xxxx\AppData\Roaming, so it should be possible to use the same relative path for CSIDL_APPDATA to get to quick launch (\Microsoft\Internet Explorer\Quick Launch).
So the following algorithm is correct per MS:
HRESULT hres;
CString basePath;
hres = SHGetSpecialFolderPath(this->GetSafeHwnd(), basePath.GetBuffer(MAX_PATH), CSIDL_APPDATA, FALSE);
basePath.ReleaseBuffer();
CString qlPath = basePath + "\\Microsoft\\Internet Explorer\\Quick Launch";
it would also be a good idea to check hres to ensure that the call to SHGetSpecialFolderPath was successful.
|
[
"AppData on vista refers to C:\\Users\\xxxx\\AppData\\Roaming not the C:\\Users\\xxxx\\AppData folder it's self.\nAlso this artical http://www.microsoft.com/technet/scriptcenter/resources/qanda/sept05/hey0901.mspx on a microsoft site implies that you simply have to use the path relative to the appdata folder \n",
"Great question! \nWhatever you do, don't give into the temptation to dig into the registry to find this info!\nAlso, we must resist the temptation to hard code some path, even partially. If we get the special AppData path, then simply append a string onto the end, this may break under non-US installs of the software where the folder name is localized to that language. E.g. GetSpecialFolderPath(APP_DATA) + \"\\\\Fonts\" will not work on non-English versions of Windows.\nHopefully someone has the proper answer to your question; I'm curious to know it myself!\n"
] |
[
2,
1
] |
[] |
[] |
[
"visual_c++",
"windows"
] |
stackoverflow_0000076080_visual_c++_windows.txt
|
Q:
IIS Authentication across servers
My production environment involves a pair of IIS 6 web servers, one running legacy .NET 1.1 applications and the other running .NET 2.0 applications. We cannot install .NET 2.0 alongside 1.1 on the same machine because it is a tightly-regulated 'Validated System' and would present a bureaucratic nightmare to revalidate.
Websites on both servers use Basic Authentication against Active Directory user accounts.
Is it possible for a web application on the 1.1 server to securely redirect a user to a page served on the 2.0 server, without requiring users to re-authenticate?
A:
No, because you're not using cookies for authentication in that scenario, so ScaleOvenStove's link won't help.
Basic authentication sends the login information in the HTTP headers with every request, but it's the browser that does this; when it sees a new server, it prompts for the password again.
(Or indeed, as suggested, change the authentication on both systems to support single sign-on.)
A:
In order to achieve this you could implement a single sign-on solution.
This solution would have one server be your master authentication server. This server would be responsible for authentication and creating a cookie for the user. When you redirect to the other server (on the same domain), check to see if the authentication cookie created by the authentication server exists and, if it exists and has valid data, automatically log the user in. Make sure that you set the domain on the forms authentication ticket and cookie, and then both servers which exist on the same domain will be able to access this cookie.
I would google single sign on asp.net. There's a number of ways to achieve it, but it's definitely achievable.
|
IIS Authentication across servers
|
My production environment involves a pair of IIS 6 web servers, one running legacy .NET 1.1 applications and the other running .NET 2.0 applications. We cannot install .NET 2.0 alongside 1.1 on the same machine because it is a tightly-regulated 'Validated System' and would present a bureaucratic nightmare to revalidate.
Websites on both servers use Basic Authentication against Active Directory user accounts.
Is it possible for a web application on the 1.1 server to securely redirect a user to a page served on the 2.0 server, without requiring users to re-authenticate?
|
[
"No, because you're not using cookies for authentication in that scenario, so ScaleOvenStove's link won't help.\nBasic authentication sends the login information in the HTTP headers with every request, but it's the browser that does this, when it sees a new server, new password request.\n(Or indeed as suggested change the authentication on both systems to support single signon)\n",
"In order to achieve this you could implement a single sign-on solution. \nThis solution would have one server be your master authentication server. This server would be responsible for authentication and creating a cookie for the user. When you redirect to the other server (on the same domain) check to see if the authentication cookie exists that was created by the authentication server, and if it exists, and has valid data, auto login the user. Make sure that you set the domain on the forms authentication ticket and cookie, and then both servers which exist on the same domain will be able to access this cookie.\nI would google single sign on asp.net. There's a number of ways to achieve it, but it's definitely achievable.\n"
] |
[
1,
0
] |
[
"yes, check out here\nhttp://weblogs.asp.net/scottgu/archive/2005/12/10/432851.aspx\n"
] |
[
-1
] |
[
".net_1.1",
".net_2.0",
"asp.net",
"iis_6",
"single_sign_on"
] |
stackoverflow_0000076258_.net_1.1_.net_2.0_asp.net_iis_6_single_sign_on.txt
|
Q:
Production, Test, Developer Environments vs Security
What are current practices for enabling developers to build systems that contain private data? Can anyone point to a "best practices" guide for that sort of thing?
We have a Catch-22 here in that developers need to write applications that go against systems that have data that is considered "private." The IT administration would like for us developers to not have access to the data (i.e. provide a schema or data structure, but not the data itself), whereas most developers (myself included) would like to have access to the production data, since not having a representative dataset can lead to bad assumptions (e.g. the format of data) and bugs later on.
Does anyone have any formalized "best practices" for this type of thing? Especially official guidelines from some "BigCo" (e.g. Microsoft, IBM) might help, since that is needed to convince management.
A:
My view of the world may be different, as I'm based in the UK, but for the past 20-odd years, I've worked primarily in the public sector on systems handling sensitive data.
The rules are **completely** cut-and-dried. No production data is allowed on the development estate.
As a fundamental principle, we do not want to be responsible for the loss of sensitive data. The users are perfectly good at that, themselves.
Within the past 12 months, my wife has moved from the same regime to one in the private sector where they allow developers access to production data and she's horrified by it. The legal implications (in the UK, at least) can be severe.
Developers don't **need** access to production data. It's simply laziness. Define and create test data to exercise defined test cases (including edge cases) and don't rely on the random-esque nature of production data.
If you **must** use production data (i.e. you manage to convince someone who doesn't know any better that it's acceptable), ensure the data is anonymised **before** it reaches the development estate.
A:
Often times, a subset of sanitized data will be provided that is representative of the private data, but not the private data itself.
A:
At my company, we started using Red Gate's data generator to generate test data. There is a bit of setup, but you can use the tools to generate very usable test data. Yes, I would prefer to use live production data, but it's not feasible (especially if you need to consider HIPAA). It uses regex for each column and allows you to use look-up tables for related tables.
A:
At MediumCo, we strip proprietary data out of our production data in Test and Dev. It has hurt us a little in the past to not have exactly-representative data, but the clients have asked about this point before, and it's usually not an issue, as the environments are populated with a lot of fake proprietary data.
A:
I don't have any best practices paper or anything. But I would think that if you're developing out of an environment that is as protected as the environment that hosts the data in production, there wouldn't be a lot of argument to be made against it.
That is, if your production database is in a datacenter hosted and controlled and secured by your IT staff, if you have a development database that lives in the exact same scenario and doesn't offer any new ways to access the information - you would be in pretty good shape. As an added token of good will - it might be nice to offer to allow anyone worried about security a chance to do some kind of penetration test to ensure that you're telling the truth about security.
The other side of this, of course, is the analysis of the cost for not using the data: that is, it will lead to buggier code, which will cost $xxxxxx.xx in development time vs. virtually no cost to allow a small subset of your development team access to said data.
A:
To avoid the need to manually sanitise/anonymise data, you could use random text replacement - to replace every alphanumeric character in each text field with a random alphanumeric. This:
keeps the data similar in length, size etc. from the developer's point of view
does not cause problems with character sets
leaves date and number fields untouched, which allows for accurate testing with respect to date ranges and quantities
will satisfy most privacy requirements
If you wanted to go a little further you could run random number-for-number replacement on telephone numbers and zip codes, while using alphanumeric replacement on other text fields.
Having an automated replacement script allows you to get up-to-date data dumps from the live system regularly, so your tests are up-to-date with respect to the size and variability of the data in practice.
It does mean that a small number of operations will not be realistic (e.g. indexing on name fields, which in real life are clustered around common letters) but these should be limited.
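A minimal sketch of such a replacement pass, written in C# and meant to be run over text columns only (the case handling and random source here are assumptions, not part of the suggestion above):
using System;
using System.Linq;

static class Anonymizer
{
    private static readonly Random Rng = new Random();

    // Replace every letter with a random letter (preserving case) and every digit with a
    // random digit, leaving punctuation and whitespace alone so lengths and layout stay
    // realistic. Date and numeric columns are simply never passed through this function.
    public static string Scramble(string value)
    {
        if (string.IsNullOrEmpty(value)) return value;
        char[] result = value.Select(c =>
        {
            if (char.IsUpper(c)) return (char)('A' + Rng.Next(26));
            if (char.IsLower(c)) return (char)('a' + Rng.Next(26));
            if (char.IsDigit(c)) return (char)('0' + Rng.Next(10));
            return c;
        }).ToArray();
        return new string(result);
    }
}
So "John Smith, 42 High St" might come back as something like "Qkze Wbhtn, 17 Vqrp Lk" - same shape and length, no real data.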
|
Production, Test, Developer Environments vs Security
|
What are current practices for enabling developers to build systems that contain private data? Can anyone point to a "best practices" guide for that sort of thing?
We have a Catch-22 here in that developers need to write applications that go against systems that have data that is considered "private." The IT administration would like for us developers to not have access to the data (ie. provide a schema or data structure, but not data itself) whereas most developers (myself included) would like to have access to the production data since not having a representative dataset can lead to bad assumptions (eg. the format of data) and bugs later on.
Does anyone have any formalized "best practices" for this type of thing? Official guidelines from some "BigCo" (e.g. Microsoft, IBM) would especially help, since they are needed to convince management.
|
[
"My view of the world may be different, as I'm based in the UK, but for the past 20-odd years, I've worked primarily in the public sector on systems handling sensitive data.\nThe rules are **completely** cut-and-dried. No production data is allowed on the development estate.\nAs a fundamental principle, we do not want to be responsible for the loss of sensitive data. The users are perfectly good at that, themselves.\nWithin the past 12 months, my wife has moved from the same regime to one in the private sector where they allow developers access to production data and she's horrified by it. The legal implications (in the UK, at least) can be severe.\nDevelopers don't **need** access to production data. It's simply laziness. Define and create test data to exercise defined test cases (including edge cases) and don't rely on the random-esque nature of production data.\nIf you **must** use production data (i.e. you manage to convince someone who doesn't know any better that it's acceptable), ensure the data is anonymised **before** it reaches the development estate.\n",
"Often times, a subset of sanitized data will be provided that is representative of the private data, but not the private data itself.\n",
"At my company, we started using Red-gate's data generator to generate test data. There is a bit of setup, but you can use the tools to generate very usable test data. Yes, I would prefer to use live production data, but it's not feasible (especially if you need to consider in HIPAA). It uses regex for each column and allows you to use look-up table's for related tables.\n",
"At MediumCo, we strip proprietary data out of our production data in Test and Dev. It has hurt us a little in the past to not have exactly-representative data, but the clients have asked about this point before, and it's usually not an issue, as the environments are populated with a lot of fake proprietary data.\n",
"I don't have any best practices paper or anything. But I would think that if you're developing out of an environment that is as protected as the environment that hosts the data in production, there wouldn't be a lot of argument to be made against it.\nThat is, if your production database is in a datacenter hosted and controlled and secured by your IT staff, if you have a development database that lives in the exact same scenario and doesn't offer any new ways to access the information - you would be in pretty good shape. As an added token of good will - it might be nice to offer to allow anyone worried about security a chance to do some kind of penetration test to ensure that you're telling the truth about security.\nThe other side of this, of course, is the analysis of the cost for not using the data: that is, it will lead to buggier code, which will cost $xxxxxx.xx in development time vs. virtually no cost to allow a small subset of your development team access to said data.\n",
"To avoid the need to manually sanitise/anonymise data, you could use random text replacement - to replace every alphanumeric character in each text field with a random alphanumeric. This:\n\nkeeps the data similar in length, size etc. from the developer's point of view\ndoes not cause problems with character sets\nleaves date and number fields untouched, which allows for accurate testing with respect to date ranges and quantities\nwill satisfy most privacy requirements\n\nIf you wanted to go a little further you could run random number-for-number replacement on telephone numbers and zip codes, while using alphanumeric replacement on other text fields.\nHaving an automated replacement script allows you to get up-to-date data dumps from the live system regularly, so your tests are up-to-date with respect to the size and variability of the data in practice.\nIt does mean that a small number of operations will not be realistic (e.g. indexing on name fields, which in real life are clustered around common letters) but these should be limited.\n"
] |
[
5,
3,
2,
1,
0,
0
] |
[] |
[] |
[
"security"
] |
stackoverflow_0000076210_security.txt
|
Q:
How can I debug a process (1.exe) running under another process (2.exe)?
1.exe doesn't give enough time for me to launch the IDE and attach 1.exe to the debugger to break into.
A:
I would suggest taking the same approach as with NT services in this case. They will also start and usually not give you enough time to attach the debugger for the start-up routines.
Details are described here: http://www.debuginfo.com/articles/debugstartup.html
In short you add a registry entry for the second exe:
HKLM\Software\Microsoft\Windows
NT\CurrentVersion\Image File Execution
Options\2.exe Debugger =
"c:\progs\msvs\common7\ide\devenv.exe
/debugexe" (REG_SZ)
Change the c:\progs\msvs\ part to match your settings.
Hope that helps.
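For example, the same entry can be created from a command prompt (one line; the devenv.exe path is only an illustration and should match your own Visual Studio installation):
reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\2.exe" /v Debugger /t REG_SZ /d "c:\progs\msvs\common7\ide\devenv.exe /debugexe"
Remember to delete the Debugger value again afterwards, otherwise every launch of that exe will keep starting the debugger.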
A:
I assume you have the source to 1.exe (if you're debugging it), then just insert a statement near the beginning that will cause it to hang around long enough to attach a debugger. ( getch() if you're desperate and it's not interactive. )
After the attach, just skip to the next statement and let it go.
A:
You could put in some preprocessor commands for debug builds - just remember to build your release in release mode:
#if DEBUG
Thread.Sleep(10000);
#endif
A:
How is 1.exe launched? If you can launch it using CreateProcess(), you can start the process in a suspended state, attach the debugger, then release the new process.
A:
If you are willing to consider a debugger other than Visual Studio, WinDBG can auto-debug child processes (native code only).
A:
You did not mention what language you are using. But if you're using C# or VB.NET you can add Debugger.Break() (or Stop in VB) to trigger the prompt to attach a debugger to the process.
Or, as mentioned above, just use something like Console.ReadLine() or MessageBox.Show() to pause the process at startup until you can attach a debugger to it.
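A small startup guard along these lines (C#, debug builds only) buys you the time to attach; Debugger.IsAttached and Debugger.Break() are in System.Diagnostics:
#if DEBUG
// Put this at the top of Main() in 1.exe: wait until a debugger attaches, then break.
while (!System.Diagnostics.Debugger.IsAttached)
    System.Threading.Thread.Sleep(100);
System.Diagnostics.Debugger.Break();
#endif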
|
How can I debug a process (1.exe) running under another process (2.exe)?
|
1.exe doesn't give enough time for me to launch the IDE and attach 1.exe to the debugger to break into.
|
[
"I would suggest taking the same approach as with NT services in this case. They will also start and usually not give you enough time to attach the debugger for the start-up routines.\nDetails are described here: http://www.debuginfo.com/articles/debugstartup.html\nIn short you add a registry entry for the second exe:\n\nHKLM\\Software\\Microsoft\\Windows\n NT\\CurrentVersion\\Image File Execution\n Options\\2.exe Debugger =\n \"c:\\progs\\msvs\\common7\\ide\\devenv.exe\n /debugexe\" (REG_SZ)\n\nChange the c:\\progrs\\msms\\ to match your settings.\nHope that helps.\n",
"I assume you have the source to 1.exe (if you're debugging it), then just insert a statement near the beginning that will cause it to hang around long enough to attach a debugger. ( getch() if you're desperate and it's not interactive. )\nAfter the attach, just skip to the next statement and let it go.\n",
"You could put in some preprocessor commands for debug builds - just remember to build your release in release mode:\n#ifdef DEBUG\nThread.Sleep(10000);\n#endif\n\n",
"How is 1.exe launched? If you can launch it using CreateProcess(), you can start the process in a suspended state, attach the debugger, then release the new process.\n",
"If you are willing to consider a debugger other than Visual Studio, WinDBG can auto-debug child processes (native code only).\n",
"You did not mention what language you are using. But if you using C# or VB.NET you can add Debug.Break() or Stop to trigger the prompt to attach debugger to the process.\nOr as mentioned above just use something like Console.Readline() or MessageBox.Show() to pause starting of process untill you can attach debugger to it.\n"
] |
[
6,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"debugging",
"visual_studio"
] |
stackoverflow_0000075763_debugging_visual_studio.txt
|
Q:
Using xsd to generate XML in .net
I'm working in a .NET application where we need to generate XML files on the fly based on the dataset retrieved from the db. The XML schema should be based on a provided XSD. I would like to know whether there is any way to bind or associate a dataset or each datarow with the XSD. I don't know whether it can be done at all, or I may be thinking about the usage of XSDs from the wrong perspective. If I'm wrong, please correct me and let me know of the best way to associate data retrieved from the db with a predefined schema. Thanks.
Update: If my perspective on XSD is wrong, please shed some light on how XSDs are used (or perhaps point me to some useful links).
A:
Use the schema document as a parameter to the command line xsd.exe program included with visual studio to generate class files or typed datasets that you can include in your project/solution. These classes or datasets can be serialized to xml and will conform to the schema document you used to create them.
The only problem with this is that it's not dynamic: you can't wait until runtime to get the schema files. But there's nothing built in that supports this otherwise.
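As a rough sketch (the schema and class names here are placeholders): generate the classes once from the Visual Studio command prompt, then serialize your populated objects with XmlSerializer so the output conforms to the XSD.
xsd.exe ProductCatalog.xsd /classes

// C#: fill the generated ProductCatalog instance from your DataSet/DataRows, then serialize it.
System.Xml.Serialization.XmlSerializer serializer =
    new System.Xml.Serialization.XmlSerializer(typeof(ProductCatalog));
using (System.IO.StreamWriter writer = System.IO.File.CreateText("output.xml"))
{
    serializer.Serialize(writer, catalog); // catalog is the populated ProductCatalog object
}
Use /dataset instead of /classes if you would rather work with a typed DataSet filled straight from the database.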
A:
In addition to the solution suggested by Joel Coehoorn -- generate typed datasets or business entities from XSD -- let me add a couple of other approaches:
If you use a database that supports XML type like Oracle or MS SQL Server, you can construct XML right in your SQL queries and retrieve XML directly from the database bypassing population of dataset.
In case your database schema is not directly mapped to the given XSD, i.e. you already have a typed dataset or a set of XML-serializable business objects and those objects are serialized into XML that doesn't conform to XSD you're provided with, then you can use XSLT to transform your XML to another XML document which will be compliant with the given XSD.
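For the second approach, a minimal sketch of applying such a transform in .NET (the file names are placeholders):
// Transform the XML you already produce into the shape required by the provided XSD.
System.Xml.Xsl.XslCompiledTransform xslt = new System.Xml.Xsl.XslCompiledTransform();
xslt.Load("MapToProvidedSchema.xslt");
xslt.Transform("current-output.xml", "xsd-compliant.xml");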
|
Using xsd to generate XML in .net
|
I'm working in a .NET application where we need to generate XML files on the fly based on the dataset retrieved from the db. The XML schema should be based on a provided XSD. I would like to know whether there is any way to bind or associate a dataset or each datarow with the XSD. I don't know whether it can be done at all, or I may be thinking about the usage of XSDs from the wrong perspective. If I'm wrong, please correct me and let me know of the best way to associate data retrieved from the db with a predefined schema. Thanks.
Update: If my perspective on XSD is wrong, please shed some light on how XSDs are used (or perhaps point me to some useful links).
|
[
"Use the schema document as a parameter to the command line xsd.exe program included with visual studio to generate class files or typed datasets that you can include in your project/solution. These classes or datasets can be serialized to xml and will conform to the schema document you used to create them.\nThe only problem with this is that it's not dynamic: you can't wait until runtime to get the schema files. But there's nothing built in that supports this otherwise.\n",
"In addition to the solution suggested by Joel Coehoorn -- generate typed datasets or business entities from XSD -- let me add a couple of other approaches:\n\nIf you use a database that supports XML type like Oracle or MS SQL Server, you can construct XML right in your SQL queries and retrieve XML directly from the database bypassing population of dataset.\nIn case your database schema is not directly mapped to the given XSD, i.e. you already have a typed dataset or a set of XML-serializable business objects and those objects are serialized into XML that doesn't conform to XSD you're provided with, then you can use XSLT to transform your XML to another XML document which will be compliant with the given XSD.\n\n"
] |
[
3,
1
] |
[] |
[] |
[
".net",
"xml",
"xsd"
] |
stackoverflow_0000075935_.net_xml_xsd.txt
|
Q:
mmc could not create the snap in error when launching internet information services
I am getting this error when trying to run Internet Information Services on a virtual machine running Windows XP. Anyone have any idea?
A:
I was able to get the solution from somewhere else and here it is:
regsvr32 %windir%\system32\inetsrv\inetmgr.dll
|
mmc could not create the snap in error when launching internet information services
|
I am getting this error when trying to run Internet Information Services on a virtual machine running Windows XP. Anyone have any idea?
|
[
"I was able to get the solution from somewhere else and here it is:\nregsvr32 %windir%\\system32\\inetsrv\\inetmgr.dll\n\n"
] |
[
15
] |
[] |
[] |
[
"iis",
"iis_5"
] |
stackoverflow_0000075428_iis_iis_5.txt
|
Q:
Access to auto increment identity field after SQL insert in Java
Any advice on how to read the auto-incrementing identity field assigned to a newly created record after a call to java.sql.Statement.executeUpdate?
I know how to do this in SQL for several DB platforms, but would like to know what database independent interfaces exist in java.sql to do this, and any input on people's experience with this across DB platforms.
A:
The following snibblet of code should do ya':
PreparedStatement stmt = conn.prepareStatement(sql,
Statement.RETURN_GENERATED_KEYS);
// ...
ResultSet res = stmt.getGeneratedKeys();
while (res.next())
System.out.println("Generated key: " + res.getInt(1));
This is known to work on the following databases
Derby
MySQL
SQL Server
For databases where it doesn't work (HSQLDB, Oracle, PostgreSQL, etc), you will need to futz with database-specific tricks. For example, on PostgreSQL you would make a call to SELECT NEXTVAL(...) for the sequence in question.
Note that the parameters for executeUpdate(...) are analogous.
A:
ResultSet keys = statement.getGeneratedKeys();
Later, just iterate over ResultSet.
A:
I've always had to make a second call using query after the insert.
You could use an ORM like hibernate. I think it does this stuff for you.
A:
@ScArcher2 : I agree, Hibernate needs to make a second call to get the newly generated identity UNLESS an advanced generator strategy is used (sequence, hilo...)
A:
@ScArcher2
Making a second call is extremely dangerous. The process of INSERTing and selecting the resultant auto-generated keys must be atomic, otherwise you may receive inconsistent results on the key select. Consider two asynchronous INSERTs where they both complete before either has a chance to select the generated keys. Which process gets which list of keys? Most cross-database ORMs have to do annoying things like in-process thread locking in order to keep results deterministic. This is not something you want to do by hand, especially if you are using a database which does support atomic generated key retrieval (HSQLDB is the only one I know of which does not).
|
Access to auto increment identity field after SQL insert in Java
|
Any advice on how to read the auto-incrementing identity field assigned to a newly created record after a call to java.sql.Statement.executeUpdate?
I know how to do this in SQL for several DB platforms, but would like to know what database independent interfaces exist in java.sql to do this, and any input on people's experience with this across DB platforms.
|
[
"The following snibblet of code should do ya':\nPreparedStatement stmt = conn.prepareStatement(sql, \n Statement.RETURN_GENERATED_KEYS);\n// ...\n\nResultSet res = stmt.getGeneratedKeys();\nwhile (res.next())\n System.out.println(\"Generated key: \" + res.getInt(1));\n\nThis is known to work on the following databases\n\nDerby\nMySQL\nSQL Server\n\nFor databases where it doesn't work (HSQLDB, Oracle, PostgreSQL, etc), you will need to futz with database-specific tricks. For example, on PostgreSQL you would make a call to SELECT NEXTVAL(...) for the sequence in question.\nNote that the parameters for executeUpdate(...) are analogous.\n",
"ResultSet keys = statement.getGeneratedKeys();\n\nLater, just iterate over ResultSet.\n",
"I've always had to make a second call using query after the insert.\nYou could use an ORM like hibernate. I think it does this stuff for you.\n",
"@ScArcher2 : I agree, Hibernate needs to make a second call to get the newly generated identity UNLESS an advanced generator strategy is used (sequence, hilo...)\n",
"@ScArcher2\nMaking a second call is extremely dangerous. The process of INSERTing and selecting the resultant auto-generated keys must be atomic, otherwise you may receive inconsistent results on the key select. Consider two asynchronous INSERTs where they both complete before either has a chance to select the generated keys. Which process gets which list of keys? Most cross-database ORMs have to do annoying things like in-process thread locking in order to keep results deterministic. This is not something you want to do by hand, especially if you are using a database which does support atomic generated key retrieval (HSQLDB is the only one I know of which does not).\n"
] |
[
24,
3,
0,
0,
0
] |
[] |
[] |
[
"java",
"sql"
] |
stackoverflow_0000076254_java_sql.txt
|
Q:
Grabbing Users with a specific value in their profile
I'm using membership and roles for authentication in my VB.NET application. We have about 5 roles in the application, with certain roles filling out a specific profile value. For example, the role is store and the profile value is the store number. Obviously if you work for headquarters you don't have a store number, so I don't care about it. Each store can also have more than 1 employee.
I need to get the users for a specific store number, meaning I would only want the users that belong to store number 101 to show up in that list. The way that we are doing this now is going through all the users and adding the users that fit the criteria into a sorted list. This works, but the problem is when you start passing about 3,000 users or so. It just becomes too slow to be any good.
How would you guys do it differently? I really don't want to write a custom stored procedure or change the underlying classes, because I'm afraid of it all breaking on a later version of .NET if they change membership and roles.
A:
This is really the sort of thing you want to filter on in SQL. I don't think there is any trick to get around doing a linear scan of your data and get the results you want.
If doing this in SQL isn't an option then maybe you can avoid creating a second list and just sort your main user array and have the display only display the ones you care about. That would save the memory copy time at least.
A:
You're using the built-in .NET role manager that saves to a SQL Server instance, I take it? What format are your user objects in when you're currently looking at them to evaluate the criteria? If you post a code sample I have an idea...
A:
Public Shared Function LoadALLUsersInRole(ByVal Code As Integer, ByVal Role As String) As ArrayList
Dim pb As ProfileBase
Dim usersArrayList As New ArrayList
Dim i As Integer
Dim AllUsersInRole() As String = Roles.GetUsersInRole(Role)
For i = 0 To AllUsersInRole.Length - 1
pb = ProfileBase.Create(AllUsersInRole(i), True)
'Check to see if the current user in the collect belongs to this Store.
If CType(pb.GetPropertyValue("Store.Code"), Integer) = Code Then
usersArrayList.Add(AllUsersInRole(i))
End If
pb = Nothing
Next
Return usersArrayList
End Function
That is the example code of how I'm doing it. The reason that I don't want to do it on the SQL side is that I would have a huge dependency on membership and roles not changing.
|
Grabbing Users with a specific value in their profile
|
I'm using membership and roles for authentication in my VB.NET application. We have about 5 roles in the application, with certain roles filling out a specific profile value. For example, the role is store and the profile value is the store number. Obviously if you work for headquarters you don't have a store number, so I don't care about it. Each store can also have more than 1 employee.
I need to get the users for a specific store number, meaning I would only want the users that belong to store number 101 to show up in that list. The way that we are doing this now is going through all the users and adding the users that fit the criteria into a sorted list. This works, but the problem is when you start passing about 3,000 users or so. It just becomes too slow to be any good.
How would you guys do it differently? I really don't want to write a custom stored procedure or change the underlying classes, because I'm afraid of it all breaking on a later version of .NET if they change membership and roles.
|
[
"This is really the sort of thing you want to filter on in SQL. I don't think there is any trick to get around doing a linear scan of your data and get the results you want.\nIf doing this in SQL isn't an option then maybe you can avoid creating a second list and just sort your main user array and have the display only display the ones you care about. That would save the memory copy time at least.\n",
"You're using the built-in .NET role manager that saves to a SQL Server instance I take it? What format are your user object in when you're currently looking at them to evaluate the criteria? If you post a code sample I have an idea...\n",
" Public Shared Function LoadALLUsersInRole(ByVal Code As Integer, ByVal Role As String) As ArrayList\n Dim pb As ProfileBase\n Dim usersArrayList As New ArrayList\n Dim i As Integer\n Dim AllUsersInRole() As String = Roles.GetUsersInRole(Role)\n\n For i = 0 To AllUsersInRole.Length - 1\n\n pb = ProfileBase.Create(AllUsersInRole(i), True)\n\n 'Check to see if the current user in the collect belongs to this Store.\n If CType(pb.GetPropertyValue(\"Store.Code\"), Integer) = Code Then \n usersArrayList.Add(AllUsersInRole(i)) \n End If\n pb = Nothing\n Next\n\n Return usersArrayList\n End Function\n\nThat is the example code of how I'm doing it. The reason that I don't want to do it on the SOL side is that I would have a huge dependency on the fact that membership and roles doesn't change.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
".net",
"membership",
"roles"
] |
stackoverflow_0000075712_.net_membership_roles.txt
|
Q:
How to cycle through delimited tokens with a Regular Expression?
How can I create a regular expression that will grab delimited text from a string? For example, given a string like
text ###token1### text text ###token2### text text
I want a regex that will pull out ###token1###. Yes, I do want the delimiter as well. By adding another group, I can get both:
(###(.+?)###)
A:
/###(.+?)###/
if you want the ###'s then you need
/(###.+?###)/
The ? means non-greedy; if you didn't have the ?, then it would grab too much.
e.g. '###token1### text text ###token2###' would all get grabbed.
My initial answer had a * instead of a +. * means 0 or more. + means 1 or more. * was wrong because that would allow ###### as a valid thing to find.
For playing around with regular expressions. I highly recommend http://www.weitz.de/regex-coach/ for windows. You can type in the string you want and your regular expression and see what it's actually doing.
Your selected text will be stored in \1 or $1 depending on where you are using your regular expression.
A:
In Perl, you actually want something like this:
$text = 'text ###token1### text text ###token2### text text';
while($text =~ m/###(.+?)###/g) {
print $1, "\n";
}
Which will give you each token in turn within the while loop. The (.+?) ensures that you get the shortest bit between the delimiters, preventing it from thinking the token is 'token1### text text ###token2'.
Or, if you just want to save them, not loop immediately:
@tokens = $text =~ m/###(.+?)###/g;
A:
Assuming you want to match ###token2### as well...
/###.+###/
A:
Use () and \x. A naive example that assumes the text within the tokens is always delimited by #:
text (#+.+#+) text text (#+.+#+) text text
The stuff in the () can then be grabbed by using \1 and \2 (\1 for the first set, \2 for the second in the replacement expression (assuming you're doing a search/replace in an editor). For example, the replacement expression could be:
token1: \1, token2: \2
For the above example, that should produce:
token1: ###token1###, token2: ###token2###
If you're using a regexp library in a program, you'd presumably call a function to get at the contents first and second token, which you've indicated with the ()s around them.
A:
Well, when you are using delimiters like this, basically you grab the first one, then anything that does not match the ending delimiter, followed by the ending delimiter. A special caution: in cases like the example above, [^#] would not work for checking that the end delimiter is not there, since a single # would cause the regex to fail (i.e. "###foo#bar###"). In the case above, the regex to parse it would be the following, assuming empty tokens are allowed (if not, change * to +):
###([^#]|#[^#]|##[^#])*###
|
How to cycle through delimited tokens with a Regular Expression?
|
How can I create a regular expression that will grab delimited text from a string? For example, given a string like
text ###token1### text text ###token2### text text
I want a regex that will pull out ###token1###. Yes, I do want the delimiter as well. By adding another group, I can get both:
(###(.+?)###)
|
[
"/###(.+?)###/\n\nif you want the ###'s then you need\n/(###.+?###)/\n\nthe ? means non greedy, if you didn't have the ?, then it would grab too much. \ne.g. '###token1### text text ###token2###' would all get grabbed. \nMy initial answer had a * instead of a +. * means 0 or more. + means 1 or more. * was wrong because that would allow ###### as a valid thing to find. \nFor playing around with regular expressions. I highly recommend http://www.weitz.de/regex-coach/ for windows. You can type in the string you want and your regular expression and see what it's actually doing. \nYour selected text will be stored in \\1 or $1 depending on where you are using your regular expression.\n",
"In Perl, you actually want something like this:\n$text = 'text ###token1### text text ###token2### text text';\n\nwhile($text =~ m/###(.+?)###/g) {\n print $1, \"\\n\";\n}\n\nWhich will give you each token in turn within the while loop. The (.*?) ensures that you get the shortest bit between the delimiters, preventing it from thinking the token is 'token1### text text ###token2'.\nOr, if you just want to save them, not loop immediately:\n@tokens = $text =~ m/###(.+?)###/g;\n\n",
"Assuming you want to match ###token2### as well...\n/###.+###/\n\n",
"Use () and \\x. A naive example that assumes the text within the tokens is always delimited by #:\ntext (#+.+#+) text text (#+.+#+) text text\n\nThe stuff in the () can then be grabbed by using \\1 and \\2 (\\1 for the first set, \\2 for the second in the replacement expression (assuming you're doing a search/replace in an editor). For example, the replacement expression could be:\ntoken1: \\1, token2: \\2\n\nFor the above example, that should produce:\ntoken1: ###token1###, token2: ###token2###\n\nIf you're using a regexp library in a program, you'd presumably call a function to get at the contents first and second token, which you've indicated with the ()s around them.\n",
"Well when you are using delimiters such as this basically you just grab the first one then anything that does not match the ending delimiter followed by the ending delimiter. A special caution should be that in cases as the example above [^#] would not work as checking to ensure the end delimiter is not there since a singe # would cause the regex to fail (ie. \"###foo#bar###). In the case above the regex to parse it would be the following assuming empty tokens are allowed (if not, change * to +):\n###([^#]|#[^#]|##[^#])*###\n"
] |
[
4,
1,
0,
0,
0
] |
[] |
[] |
[
"regex"
] |
stackoverflow_0000076411_regex.txt
|
Q:
Variable binding in PHP ADOdb
Does ADOdb do data sanitation or escaping within the same functionality by default? Or am I just confusing it with Code Igniter's built-in processes?
Does binding variables to parameters in ADOdb for PHP prevent SQL injection in any way?
A:
yes, you pass the array of parameters.
$rs = $db->Execute('select * from table where val=?', array('10'));
Rest of their docs can be found here:
A:
Correct - bound parameters are not vulnerable to SQL injection attacks.
|
Variable binding in PHP ADOdb
|
Does ADOdb do data sanitation or escaping within the same functionality by default? Or am I just confusing it with Code Igniter's built-in processes?
Does binding variables to parameters in ADOdb for PHP prevent SQL injection in any way?
|
[
"yes, you pass the array of parameters.\n$rs = $db->Execute('select * from table where val=?', array('10'));\n\nRest of their docs can be found here:\n",
"Correct - bound parameters are not vulnerable to SQL injection attacks.\n"
] |
[
4,
2
] |
[] |
[] |
[
"adodb_php",
"binding",
"parameters",
"php",
"sql_injection"
] |
stackoverflow_0000076359_adodb_php_binding_parameters_php_sql_injection.txt
|
Q:
Is there an inverse function of *SysUtils.Format* in Delphi
Has anyone written an 'UnFormat' routine for Delphi?
What I'm imagining is the inverse of SysUtils.Format and looks something like this
UnFormat('a number %n and another %n',[float1, float2]);
So you could unpack a string into a series of variables using format strings.
I've looked at the 'Format' routine in SysUtils, but I've never used assembly so it is meaningless to me.
A:
This is called scanf in C; I've made a Delphi look-alike for it:
function ScanFormat(const Input, Format: string; Args: array of Pointer): Integer;
var
InputOffset: Integer;
FormatOffset: Integer;
InputChar: Char;
FormatChar: Char;
function _GetInputChar: Char;
begin
if InputOffset <= Length(Input) then
begin
Result := Input[InputOffset];
Inc(InputOffset);
end
else
Result := #0;
end;
function _PeekFormatChar: Char;
begin
if FormatOffset <= Length(Format) then
Result := Format[FormatOffset]
else
Result := #0;
end;
function _GetFormatChar: Char;
begin
Result := _PeekFormatChar;
if Result <> #0 then
Inc(FormatOffset);
end;
function _ScanInputString(const Arg: Pointer = nil): string;
var
EndChar: Char;
begin
Result := '';
EndChar := _PeekFormatChar;
InputChar := _GetInputChar;
while (InputChar > ' ')
and (InputChar <> EndChar) do
begin
Result := Result + InputChar;
InputChar := _GetInputChar;
end;
if InputChar <> #0 then
Dec(InputOffset);
if Assigned(Arg) then
PString(Arg)^ := Result;
end;
function _ScanInputInteger(const Arg: Pointer): Boolean;
var
Value: string;
begin
Value := _ScanInputString;
Result := TryStrToInt(Value, {out} PInteger(Arg)^);
end;
procedure _Raise;
begin
raise EConvertError.CreateFmt('Unknown ScanFormat character : "%s"!', [FormatChar]);
end;
begin
Result := 0;
InputOffset := 1;
FormatOffset := 1;
FormatChar := _GetFormatChar;
while FormatChar <> #0 do
begin
if FormatChar <> '%' then
begin
InputChar := _GetInputChar;
if (InputChar = #0)
or (FormatChar <> InputChar) then
Exit;
end
else
begin
FormatChar := _GetFormatChar;
case FormatChar of
'%':
if _GetInputChar <> '%' then
Exit;
's':
begin
_ScanInputString(Args[Result]);
Inc(Result);
end;
'd', 'u':
begin
if not _ScanInputInteger(Args[Result]) then
Exit;
Inc(Result);
end;
else
_Raise;
end;
end;
FormatChar := _GetFormatChar;
end;
end;
A:
I know it tends to scare people, but you could write a simple function to do this using regular expressions
'a number (.*?) and another (.*?)
If you are worried about reg expressions take a look at www.regexbuddy.com and you'll never look back.
A:
I tend to take care of this using a simple parser. I have two functions: one is called NumStringParts, which returns the number of "parts" in a string with a specific delimiter (in your case above, the space), and GetStringPart returns the specific part from a string with a specific delimiter. Both of these routines have been used since my Turbo Pascal days in many a project.
function NumStringParts(SourceStr,Delimiter:String):Integer;
var
offset : integer;
curnum : integer;
begin
curnum := 1;
offset := 1;
while (offset <> 0) do
begin
Offset := Pos(Delimiter,SourceStr);
if Offset <> 0 then
begin
Inc(CurNum);
Delete(SourceStr,1,(Offset-1)+Length(Delimiter));
end;
end;
result := CurNum;
end;
function GetStringPart(SourceStr,Delimiter:String;Num:Integer):string;
var
offset : integer;
CurNum : integer;
CurPart : String;
begin
CurNum := 1;
Offset := 1;
While (CurNum <= Num) and (Offset <> 0) do
begin
Offset := Pos(Delimiter,SourceStr);
if Offset <> 0 then
begin
CurPart := Copy(SourceStr,1,Offset-1);
Delete(SourceStr,1,(Offset-1)+Length(Delimiter));
Inc(CurNum)
end
else
CurPart := SourceStr;
end;
if CurNum >= Num then
Result := CurPart
else
Result := '';
end;
Example of usage:
var
st : string;
f1,f2 : double;
begin
st := 'a number 12.35 and another 13.415';
ShowMessage('Total String parts = '+IntToStr(NumStringParts(st,#32)));
f1 := StrToFloatDef(GetStringPart(st,#32,3),0.0);
f2 := StrToFloatDef(GetStringPart(st,#32,6),0.0);
ShowMessage('Float 1 = '+FloatToStr(F1)+' and Float 2 = '+FloatToStr(F2));
end;
These routines work wonders for simple or strict comma delimited strings too. These routines work wonderfully in Delphi 2009/2010.
|
Is there an inverse function of *SysUtils.Format* in Delphi
|
Has anyone written an 'UnFormat' routine for Delphi?
What I'm imagining is the inverse of SysUtils.Format and looks something like this
UnFormat('a number %n and another %n',[float1, float2]);
So you could unpack a string into a series of variables using format strings.
I've looked at the 'Format' routine in SysUtils, but I've never used assembly so it is meaningless to me.
|
[
"This is called scanf in C, I've made a Delphi look-a-like for this :\nfunction ScanFormat(const Input, Format: string; Args: array of Pointer): Integer;\nvar\n InputOffset: Integer;\n FormatOffset: Integer;\n InputChar: Char;\n FormatChar: Char;\n\n function _GetInputChar: Char;\n begin\n if InputOffset <= Length(Input) then\n begin\n Result := Input[InputOffset];\n Inc(InputOffset);\n end\n else\n Result := #0;\n end;\n\n function _PeekFormatChar: Char;\n begin\n if FormatOffset <= Length(Format) then\n Result := Format[FormatOffset]\n else\n Result := #0;\n end;\n\n function _GetFormatChar: Char;\n begin\n Result := _PeekFormatChar;\n if Result <> #0 then\n Inc(FormatOffset);\n end;\n\n function _ScanInputString(const Arg: Pointer = nil): string;\n var\n EndChar: Char;\n begin\n Result := '';\n EndChar := _PeekFormatChar;\n InputChar := _GetInputChar;\n while (InputChar > ' ')\n and (InputChar <> EndChar) do\n begin\n Result := Result + InputChar;\n InputChar := _GetInputChar;\n end;\n\n if InputChar <> #0 then\n Dec(InputOffset);\n\n if Assigned(Arg) then\n PString(Arg)^ := Result;\n end;\n\n function _ScanInputInteger(const Arg: Pointer): Boolean;\n var\n Value: string;\n begin\n Value := _ScanInputString;\n Result := TryStrToInt(Value, {out} PInteger(Arg)^);\n end;\n\n procedure _Raise;\n begin\n raise EConvertError.CreateFmt('Unknown ScanFormat character : \"%s\"!', [FormatChar]);\n end;\n\nbegin\n Result := 0;\n InputOffset := 1;\n FormatOffset := 1;\n FormatChar := _GetFormatChar;\n while FormatChar <> #0 do\n begin\n if FormatChar <> '%' then\n begin\n InputChar := _GetInputChar;\n if (InputChar = #0)\n or (FormatChar <> InputChar) then\n Exit;\n end\n else\n begin\n FormatChar := _GetFormatChar;\n case FormatChar of\n '%':\n if _GetInputChar <> '%' then\n Exit;\n 's':\n begin\n _ScanInputString(Args[Result]);\n Inc(Result);\n end;\n 'd', 'u':\n begin\n if not _ScanInputInteger(Args[Result]) then\n Exit;\n\n Inc(Result);\n end;\n else\n _Raise;\n end;\n end;\n\n FormatChar := _GetFormatChar;\n end;\nend;\n\n",
"I know it tends to scare people, but you could write a simple function to do this using regular expressions \n'a number (.*?) and another (.*?)\n\nIf you are worried about reg expressions take a look at www.regexbuddy.com and you'll never look back.\n",
"I tend to take care of this using a simple parser. I have two functions, one is called NumStringParts which returns the number of \"parts\" in a string with a specific delimiter (in your case above the space) and GetStrPart returns the specific part from a string with a specific delimiter. Both of these routines have been used since my Turbo Pascal days in many a project.\nfunction NumStringParts(SourceStr,Delimiter:String):Integer;\nvar\n offset : integer;\n curnum : integer;\nbegin\n curnum := 1;\n offset := 1;\n while (offset <> 0) do\n begin\n Offset := Pos(Delimiter,SourceStr);\n if Offset <> 0 then\n begin\n Inc(CurNum);\n Delete(SourceStr,1,(Offset-1)+Length(Delimiter));\n end;\n end;\n result := CurNum;\nend;\n\nfunction GetStringPart(SourceStr,Delimiter:String;Num:Integer):string;\nvar\n offset : integer;\n CurNum : integer;\n CurPart : String;\nbegin\n CurNum := 1;\n Offset := 1;\n While (CurNum <= Num) and (Offset <> 0) do\n begin\n Offset := Pos(Delimiter,SourceStr);\n if Offset <> 0 then\n begin\n CurPart := Copy(SourceStr,1,Offset-1);\n Delete(SourceStr,1,(Offset-1)+Length(Delimiter));\n Inc(CurNum)\n end\n else\n CurPart := SourceStr;\n end;\n if CurNum >= Num then\n Result := CurPart\n else\n Result := '';\nend;\n\nExample of usage:\n var\n st : string;\n f1,f2 : double; \n begin\n st := 'a number 12.35 and another 13.415';\n ShowMessage('Total String parts = '+IntToStr(NumStringParts(st,#32)));\n f1 := StrToFloatDef(GetStringPart(st,#32,3),0.0);\n f2 := StrToFloatDef(GetStringPart(st,#32,6),0.0);\n ShowMessage('Float 1 = '+FloatToStr(F1)+' and Float 2 = '+FloatToStr(F2)); \n end; \n\nThese routines work wonders for simple or strict comma delimited strings too. These routines work wonderfully in Delphi 2009/2010.\n"
] |
[
12,
4,
1
] |
[] |
[] |
[
"delphi",
"function",
"scanf"
] |
stackoverflow_0000072672_delphi_function_scanf.txt
|
Q:
How do I get an attribute value when using XSLT with unknown namespace?
I am receiving a 3rd party feed of which I cannot be certain of the namespace so I am currently having to use the local-name() function in my XSLT to get the element values. However I need to get an attribute from one such element and I don't know how to do this when the namespaces are unknown (hence need for local-name() function).
N.B. I am using .net 2.0 to process the XSLT
Here is a sample of the XML:
<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<id>some id</id>
<title>some title</title>
<updated>2008-09-11T15:53:31+01:00</updated>
<link rel="self" href="http://www.somefeedurl.co.uk" />
<author>
<name>some author</name>
<uri>http://someuri.co.uk</uri>
</author>
<generator uri="http://aardvarkmedia.co.uk/">AardvarkMedia script</generator>
<entry>
<id>http://soemaddress.co.uk/branded3/80406</id>
<title type="html">My Ttile</title>
<link rel="alternate" href="http://www.someurl.co.uk" />
<updated>2008-02-13T00:00:00+01:00</updated>
<published>2002-09-11T14:16:20+01:00</published>
<category term="mycategorytext" label="restaurant">Test</category>
<content type="xhtml">
<div xmlns="http://www.w3.org/1999/xhtml">
<div class="vcard">
<p class="fn org">some title</p>
<p class="adr">
<abbr class="type" title="POSTAL" />
<span class="street-address">54 Some Street</span>
,
<span class="locality" />
,
<span class="country-name">UK</span>
</p>
<p class="tel">
<span class="value">0123456789</span>
</p>
<div class="geo">
<span class="latitude">51.99999</span>
,
<span class="longitude">-0.123456</span>
</div>
<p class="note">
<span class="type">Review</span>
<span class="value">Some content</span>
</p>
<p class="note">
<span class="type">Overall rating</span>
<span class="value">8</span>
</p>
</div>
</div>
</content>
<category term="cuisine-54" label="Spanish" />
<Point xmlns="http://www.w3.org/2003/01/geo/wgs84_pos#">
<lat>51.123456789</lat>
<long>-0.11111111</long>
</Point>
</entry>
</feed>
This is XSLT
<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:wgs="http://www.w3.org/2003/01/geo/wgs84_pos#" exclude-result-prefixes="atom wgs">
<xsl:output method="xml" indent="yes"/>
<xsl:key name="uniqueVenuesKey" match="entry" use="id"/>
<xsl:key name="uniqueCategoriesKey" match="entry" use="category/@term"/>
<xsl:template match="/">
<locations>
<!-- Get all unique venues -->
<xsl:for-each select="/*[local-name()='feed']/*[local-name()='entry']">
<xsl:variable name="CurrentVenueKey" select="*[local-name()='id']" ></xsl:variable>
<xsl:variable name="CurrentVenueName" select="*[local-name()='title']" ></xsl:variable>
<xsl:variable name="CurrentVenueAddress1" select="*[local-name()='content']/*[local-name()='div']/*[local-name()='div']/*[local-name()='p'][@class='adr']/*[local-name()='span'][@class='street-address']" ></xsl:variable>
<xsl:variable name="CurrentVenueCity" select="*[local-name()='content']/*[local-name()='div']/*[local-name()='div']/*[local-name()='p'][@class='adr']/*[local-name()='span'][@class='locality']" ></xsl:variable>
<xsl:variable name="CurrentVenuePostcode" select="*[local-name()='postcode']" ></xsl:variable>
<xsl:variable name="CurrentVenueTelephone" select="*[local-name()='telephone']" ></xsl:variable>
<xsl:variable name="CurrentVenueLat" select="*[local-name()='Point']/*[local-name()='lat']" ></xsl:variable>
<xsl:variable name="CurrentVenueLong" select="*[local-name()='Point']/*[local-name()='long']" ></xsl:variable>
<xsl:variable name="CurrentCategory" select="WHATDOIPUTHERE"></xsl:variable>
<location>
<locationName>
<xsl:value-of select = "$CurrentVenueName" />
</locationName>
<category>
<xsl:value-of select = "$CurrentCategory" />
</category>
<description>
<xsl:value-of select = "$CurrentVenueName" />
</description>
<venueAddress>
<streetName>
<xsl:value-of select = "$CurrentVenueAddress1" />
</streetName>
<town>
<xsl:value-of select = "$CurrentVenueCity" />
</town>
<postcode>
<xsl:value-of select = "$CurrentVenuePostcode" />
</postcode>
<wgs84_latitude>
<xsl:value-of select = "$CurrentVenueLat" />
</wgs84_latitude>
<wgs84_longitude>
<xsl:value-of select = "$CurrentVenueLong" />
</wgs84_longitude>
</venueAddress>
<venuePhone>
<phonenumber>
<xsl:value-of select = "$CurrentVenueTelephone" />
</phonenumber>
</venuePhone>
</location>
</xsl:for-each>
</locations>
</xsl:template>
</xsl:stylesheet>
I'm trying to replace the $CurrentCategory variable with the appropriate code to display mycategorytext
A:
I don't have an XSLT editor here, but have you tried using
*[local-name()='category']/@*[local-name()='term']
A:
According to http://www.w3.org/TR/2006/REC-xml-names-20060816/#scoping-defaulting
"Default namespace declarations do not apply directly to attribute names; the interpretation of unprefixed attributes is determined by the element on which they appear."
This means that your attributes aren't in a namespace. Just use "@term".
Just to be a bit clearer, there is no need for using local-name() to solve this problem.
The conventional way to deal with it would be to declare a prefix for the atom namespace in your XSLT, and then use that in your xpath queries.
You have already got this declaration on your stylesheet element (xmlns:atom="http://www.w3.org/2005/Atom"), so all that remains is to use it.
As I have already explained, the attribute is not affected by the default namespace, so your code would look like this (assuming that you were to add "xmlns:xhtml='http://www.w3.org/1999/xhtml'"):
<xsl:for-each select="/atom:feed/atom:entry">
<xsl:variable name="CurrentVenueKey" select="atom:id" />
<xsl:variable name="CurrentVenueName" select="atom:title" />
<xsl:variable name="CurrentVenueAddress1"
select="atom:content/xhtml:div/xhtml:div/xhtml:p[@class='adr']/xhtml:span[@class='street-address']" />
<xsl:variable name="CurrentVenueCity"
        select="atom:content/xhtml:div/xhtml:div/xhtml:p[@class='adr']/xhtml:span[@class='locality']" />
...
<xsl:variable name="CurrentCategory" select="atom:category/@term" />
.....
local-name() can be very useful if you really don't know the structure of the XML you are transforming, but in this case, if you receive anything other than what you're expecting, it will break in any case.
A:
I'm not really sure why you have to use local-name(), but if you share a little more info as to what XSLT processor you are using along with the language, I'll bet that can be figured out. I say this b/c you should be able to do something like:
<xsl:stylesheet xmlns="http://www.w3.org/2005/Atom" ..>
<xsl:template match="feed">
<xsl:apply-templates />
</xsl:template>
<xsl:template match="entry">
...
<xsl:variable name="current-category" select="category/@term" />
...
</xsl:template>
The two things I'm hoping help you out are the xmlns declaration at the top without a prefix, which sets the default namespace so you don't have to use namespace prefixes. Likewise, you could declare 'xmlns:a="http://www.w3.org/2005/Atom"' and then do 'select="a:feed"'. The other thing to notice is using '@term', which selects attributes. If you wanted to match on any attribute, '@*' works just like it would for elements.
Again, depending on the processor, there might be other helpful tools at your disposal so if you can provide a little more information it might help. Also, the XSL mailing list might another helpful resource.
|
How do I get an attribute value when using XSLT with unknown namespace?
|
I am receiving a 3rd party feed of which I cannot be certain of the namespace so I am currently having to use the local-name() function in my XSLT to get the element values. However I need to get an attribute from one such element and I don't know how to do this when the namespaces are unknown (hence need for local-name() function).
N.B. I am using .net 2.0 to process the XSLT
Here is a sample of the XML:
<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<id>some id</id>
<title>some title</title>
<updated>2008-09-11T15:53:31+01:00</updated>
<link rel="self" href="http://www.somefeedurl.co.uk" />
<author>
<name>some author</name>
<uri>http://someuri.co.uk</uri>
</author>
<generator uri="http://aardvarkmedia.co.uk/">AardvarkMedia script</generator>
<entry>
<id>http://soemaddress.co.uk/branded3/80406</id>
<title type="html">My Ttile</title>
<link rel="alternate" href="http://www.someurl.co.uk" />
<updated>2008-02-13T00:00:00+01:00</updated>
<published>2002-09-11T14:16:20+01:00</published>
<category term="mycategorytext" label="restaurant">Test</category>
<content type="xhtml">
<div xmlns="http://www.w3.org/1999/xhtml">
<div class="vcard">
<p class="fn org">some title</p>
<p class="adr">
<abbr class="type" title="POSTAL" />
<span class="street-address">54 Some Street</span>
,
<span class="locality" />
,
<span class="country-name">UK</span>
</p>
<p class="tel">
<span class="value">0123456789</span>
</p>
<div class="geo">
<span class="latitude">51.99999</span>
,
<span class="longitude">-0.123456</span>
</div>
<p class="note">
<span class="type">Review</span>
<span class="value">Some content</span>
</p>
<p class="note">
<span class="type">Overall rating</span>
<span class="value">8</span>
</p>
</div>
</div>
</content>
<category term="cuisine-54" label="Spanish" />
<Point xmlns="http://www.w3.org/2003/01/geo/wgs84_pos#">
<lat>51.123456789</lat>
<long>-0.11111111</long>
</Point>
</entry>
</feed>
This is XSLT
<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:wgs="http://www.w3.org/2003/01/geo/wgs84_pos#" exclude-result-prefixes="atom wgs">
<xsl:output method="xml" indent="yes"/>
<xsl:key name="uniqueVenuesKey" match="entry" use="id"/>
<xsl:key name="uniqueCategoriesKey" match="entry" use="category/@term"/>
<xsl:template match="/">
<locations>
<!-- Get all unique venues -->
<xsl:for-each select="/*[local-name()='feed']/*[local-name()='entry']">
<xsl:variable name="CurrentVenueKey" select="*[local-name()='id']" ></xsl:variable>
<xsl:variable name="CurrentVenueName" select="*[local-name()='title']" ></xsl:variable>
<xsl:variable name="CurrentVenueAddress1" select="*[local-name()='content']/*[local-name()='div']/*[local-name()='div']/*[local-name()='p'][@class='adr']/*[local-name()='span'][@class='street-address']" ></xsl:variable>
<xsl:variable name="CurrentVenueCity" select="*[local-name()='content']/*[local-name()='div']/*[local-name()='div']/*[local-name()='p'][@class='adr']/*[local-name()='span'][@class='locality']" ></xsl:variable>
<xsl:variable name="CurrentVenuePostcode" select="*[local-name()='postcode']" ></xsl:variable>
<xsl:variable name="CurrentVenueTelephone" select="*[local-name()='telephone']" ></xsl:variable>
<xsl:variable name="CurrentVenueLat" select="*[local-name()='Point']/*[local-name()='lat']" ></xsl:variable>
<xsl:variable name="CurrentVenueLong" select="*[local-name()='Point']/*[local-name()='long']" ></xsl:variable>
<xsl:variable name="CurrentCategory" select="WHATDOIPUTHERE"></xsl:variable>
<location>
<locationName>
<xsl:value-of select = "$CurrentVenueName" />
</locationName>
<category>
<xsl:value-of select = "$CurrentCategory" />
</category>
<description>
<xsl:value-of select = "$CurrentVenueName" />
</description>
<venueAddress>
<streetName>
<xsl:value-of select = "$CurrentVenueAddress1" />
</streetName>
<town>
<xsl:value-of select = "$CurrentVenueCity" />
</town>
<postcode>
<xsl:value-of select = "$CurrentVenuePostcode" />
</postcode>
<wgs84_latitude>
<xsl:value-of select = "$CurrentVenueLat" />
</wgs84_latitude>
<wgs84_longitude>
<xsl:value-of select = "$CurrentVenueLong" />
</wgs84_longitude>
</venueAddress>
<venuePhone>
<phonenumber>
<xsl:value-of select = "$CurrentVenueTelephone" />
</phonenumber>
</venuePhone>
</location>
</xsl:for-each>
</locations>
</xsl:template>
</xsl:stylesheet>
I'm trying to replace the $CurrentCategory variable with the appropriate code to display mycategorytext
|
[
"I don't have an XSLT editor here, but have you tried using\n*[local-name()='category']/@*[local-name()='term']\n\n",
"According to http://www.w3.org/TR/2006/REC-xml-names-20060816/#scoping-defaulting\n\"Default namespace declarations do not apply directly to attribute names; the interpretation of unprefixed attributes is determined by the element on which they appear.\"\nThis means that your attributes aren't in a namespace. Just use \"@term\". \nJust to be a bit clearer, there is no need for using local-name() to solve this problem. \nThe conventional way to deal with it would be to declare a prefix for the atom namespace in your XSLT, and then use that in your xpath queries. \nYou have already got this declaration on your stylesheet element (xmlns:atom=\"http://www.w3.org/2005/Atom\"), so all that remains is to use it. \nAs I have already explained, the attribute is not affected by the default namespace, so your code would look like this (assuming that you were to add \"xmlns:xhtml='http://www.w3.org/1999/xhtml'\"): \n <xsl:for-each select=\"/atom:feed/atom:entry\">\n <xsl:variable name=\"CurrentVenueKey\" select=\"atom:id\" />\n <xsl:variable name=\"CurrentVenueName\" select=\"atom:title\" />\n <xsl:variable name=\"CurrentVenueAddress1\" \n select=\"atom:content/xhtml:div/xhtml:div/xhtml:p[@class='adr']/xhtml:span[@class='street-address']\" />\n <xsl:variable name=\"CurrentVenueCity\" \n select=\"atom:content/xhtml:div/xhtml:div'/xhtml:p[@class='adr']/xhtml:span[@class='locality'] />\n...\n <xsl:variable name=\"CurrentCategory\" select=\"atom:category/@term\" />\n\n..... \n\nlocal-name() can be very useful if you really don't know the structure of the XML you are transforming, but in this case, if you receive anything other than what you're expecting, it will break in any case. \n",
"I'm not really sure why you have to use local-name(), but if you share a little more info as to what xslt processor you are using along with the language, I'll be that can be figured out. I say this b/c you should be able to do something like:\n<xsl:stylesheet xmlns=\"http://www.w3.org/2005/Atom\" ..>\n\n<xsl:template match=\"feed\">\n <xsl:apply-templates />\n</xsl:template>\n\n<xsl:template match=\"entry\">\n ... \n <xsl:variable name=\"current-category\" select=\"category/@term\" />\n ...\n</xsl:template>\n\nThe two things I'm hoping help you out are the xmlns declaration at the top without a prefix. That sets the default namespace so you don't have to use the namespace prefixes. Likewise, you could call do 'xmlns:a=\"http://www.w3.org/2005/Atom\"' and then do 'select=\"a:feed\"'. The other thing to notice is using the '@term' which selects attributes. If you wanted to match on any attribute '@*' works just like it would for elements. \nAgain, depending on the processor, there might be other helpful tools at your disposal so if you can provide a little more information it might help. Also, the XSL mailing list might another helpful resource.\n"
] |
[
13,
2,
0
] |
[] |
[] |
[
"namespaces",
"xml",
"xslt"
] |
stackoverflow_0000076204_namespaces_xml_xslt.txt
|
Q:
Globalization architecture
I need to store products for an e-commerce solution in a database. Each product should have descriptive information, such as name, description etc.
I need any product to be localized to x number of languages.
What I've done so far is to make any column that should be localized an nvarchar(MAX), and then I store an XML string like this:
<cultures>
<culture code="en-us">Super fast laptop</culture>
<culture code="da-dk">Super hurtig bærbar</culture>
</cultures>
And when I load it from the database, into my business logic objects, I parse the XML string to a Dictionary<string, string> where the key is the culture/language code.
So when I want to display the name of a product, I do this:
lblName.Text = product.Name["en-us"];
Does anyone have a better solution?
A:
You should store the current language somewhere (in a singleton, for instance) and in the product.Name property use the language setting to get the correct string. This way you only have to write the language specific code once for each field rather than thinking about languages everywhere the field is used.
For example, assuming your singleton is defined in the Localizer class that stores an enum corresponding to the current language:
public class Product
{
private idType id;
public string Name
{
get
{
return Localizer.Instance.GetLocalString(id, "Name");
}
}
}
Where GetLocalString looks something like:
public string GetLocalString(idType objectId, string fieldName)
{
switch (_currentLanguage)
{
case Language.English:
// db access code to retrieve your string, may need to include the table
// the object is in (e.g. "Products" "Orders" etc.)
                return db.GetValue(objectId, fieldName, "en-us");
            default:
                throw new NotSupportedException("Unsupported language");
        }
}
A:
Rob Conery's MVC Storefront webcast series has a video on this issue (he gets to the database around 5:30). He stores a list of cultures, and then has a Product table for non-localized data and a ProductCultureDetail table for localized text.
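For illustration, a minimal sketch of the two tables that approach describes might look like this (table and column names here are made up for the example, not taken from the webcast):
CREATE TABLE Product
(
    ProductID INT IDENTITY PRIMARY KEY,
    Sku       NVARCHAR(50) NOT NULL,
    Price     MONEY        NOT NULL
);

CREATE TABLE ProductCultureDetail
(
    ProductID   INT           NOT NULL REFERENCES Product(ProductID),
    CultureCode NVARCHAR(10)  NOT NULL,   -- e.g. 'en-us', 'da-dk'
    Name        NVARCHAR(200) NOT NULL,
    Description NVARCHAR(MAX) NULL,
    PRIMARY KEY (ProductID, CultureCode)
);

Looking up the right text then becomes a join filtered on the culture code instead of parsing XML out of a column.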
A:
resource files
A:
This is basically the approach we took with Microsoft Commerce Server 2002. Yeah indexed views will help your performance.
|
Globalization architecture
|
I need to store products for an e-commerce solution in a database. Each product should have descriptive information, such as name, description etc.
I need any product to be localized to x number of languages.
What I've done so far, is to make any column that should be localized and nvarchar(MAX) and then i store an XML string like this:
<cultures>
<culture code="en-us">Super fast laptop</culture>
<culture code="da-dk">Super hurtig bærbar</culture>
</cultures>
And when I load it from the database, into my business logic objects, I parse the XML string to a Dictionary<string, string> where the key is the culture/language code.
So when I want to display the name of a product, I do this:
lblName.Text = product.Name["en-us"];
Does anyone have a better solution?
|
[
"You should store the current language somewhere (in a singleton, for instance) and in the product.Name property use the language setting to get the correct string. This way you only have to write the language specific code once for each field rather than thinking about languages everywhere the field is used. \nFor example, assuming your singleton is defined in the Localizer class that stores an enum corresponding to the current language:\npublic class Product\n{\n private idType id;\n public string Name\n {\n get\n {\n return Localizer.Instance.GetLocalString(id, \"Name\");\n }\n }\n}\n\nWhere GetLocalString looks something like:\n public string GetLocalString(idType objectId, string fieldName)\n {\n switch (_currentLanguage)\n {\n case Language.English:\n // db access code to retrieve your string, may need to include the table\n // the object is in (e.g. \"Products\" \"Orders\" etc.)\n db.GetValue(objectId, fieldName, \"en-us\");\n break;\n }\n }\n\n",
"Rob Conery's MVC Storefront webcast series has a video on this issue (he gets to the database around 5:30). He stores a list of cultures, and then has a Product table for non-localized data and a ProductCultureDetail table for localized text.\n",
"resource files\n",
"This is basically the approach we took with Microsoft Commerce Server 2002. Yeah indexed views will help your performance.\n"
] |
[
2,
1,
1,
0
] |
[] |
[] |
[
"architecture",
"c#",
"globalization",
"localization"
] |
stackoverflow_0000028577_architecture_c#_globalization_localization.txt
|
Q:
How to put a .NET application in the system tray when minimized?
Can anyone please suggest a good code example of VB.NET/C# code to put the application in the system tray when minimized?
A:
Add a NotifyIcon control to your form, then use the following code:
private void frm_main_Resize(object sender, EventArgs e)
{
if (this.WindowState == FormWindowState.Minimized)
{
this.ShowInTaskbar = false;
this.Hide();
notifyIcon1.Visible = true;
}
}
private void notifyIcon1_MouseDoubleClick(object sender, MouseEventArgs e)
{
this.Show();
this.WindowState = FormWindowState.Normal;
this.ShowInTaskbar = true;
notifyIcon1.Visible = false;
}
You may not need to set the ShowInTaskbar property.
A:
You can leverage a built in control called NotifyIcon. This creates a tray icon when shown. @Phillip has a code example that is somewhat complete.
There is a gotcha though:
You must override your applications main form Dispose method to call Dispose on NotifyIcon, otherwise it will stay in your tray after application exits.
public void Form_Dispose(object sender, EventArgs e)
{
if (this.Disposing)
notifyIcon1.Dispose();
}
Something like that.
A:
You can do this by adding a NotifyIcon to your form and handling the form's resize event. To get back from the tray handle the NotifyIcon's double-click event.
If you want to add a little animation you can do this too...
1) Add the following module:
Module AnimatedMinimizeToTray
Structure RECT
Public left As Integer
Public top As Integer
Public right As Integer
Public bottom As Integer
End Structure
Structure APPBARDATA
Public cbSize As Integer
Public hWnd As IntPtr
Public uCallbackMessage As Integer
Public uEdge As ABEdge
Public rc As RECT
Public lParam As IntPtr
End Structure
Enum ABMsg
ABM_NEW = 0
ABM_REMOVE = 1
ABM_QUERYPOS = 2
ABM_SETPOS = 3
ABM_GETSTATE = 4
ABM_GETTASKBARPOS = 5
ABM_ACTIVATE = 6
ABM_GETAUTOHIDEBAR = 7
ABM_SETAUTOHIDEBAR = 8
ABM_WINDOWPOSCHANGED = 9
ABM_SETSTATE = 10
End Enum
Enum ABNotify
ABN_STATECHANGE = 0
ABN_POSCHANGED
ABN_FULLSCREENAPP
ABN_WINDOWARRANGE
End Enum
Enum ABEdge
ABE_LEFT = 0
ABE_TOP
ABE_RIGHT
ABE_BOTTOM
End Enum
Public Declare Function SHAppBarMessage Lib "shell32.dll" Alias "SHAppBarMessage" (ByVal dwMessage As Integer, ByRef pData As APPBARDATA) As Integer
Public Const ABM_GETTASKBARPOS As Integer = &H5&
Public Const WM_SYSCOMMAND As Integer = &H112
Public Const SC_MINIMIZE As Integer = &HF020
Public Sub AnimateWindow(ByVal ToTray As Boolean, ByRef frm As Form, ByRef icon As NotifyIcon)
' get the screen dimensions
Dim screenRect As Rectangle = Screen.GetBounds(frm.Location)
' figure out where the taskbar is (and consequently the tray)
Dim destPoint As Point
Dim BarData As APPBARDATA
BarData.cbSize = System.Runtime.InteropServices.Marshal.SizeOf(BarData)
SHAppBarMessage(ABMsg.ABM_GETTASKBARPOS, BarData)
Select Case BarData.uEdge
Case ABEdge.ABE_BOTTOM, ABEdge.ABE_RIGHT
' Tray is to the Bottom Right
destPoint = New Point(screenRect.Width, screenRect.Height)
Case ABEdge.ABE_LEFT
' Tray is to the Bottom Left
destPoint = New Point(0, screenRect.Height)
Case ABEdge.ABE_TOP
' Tray is to the Top Right
destPoint = New Point(screenRect.Width, 0)
End Select
' setup our loop based on the direction
Dim a, b, s As Single
If ToTray Then
a = 0
b = 1
s = 0.05
Else
a = 1
b = 0
s = -0.05
End If
' "animate" the window
Dim curPoint As Point, curSize As Size
Dim startPoint As Point = frm.Location
Dim dWidth As Integer = destPoint.X - startPoint.X
Dim dHeight As Integer = destPoint.Y - startPoint.Y
Dim startWidth As Integer = frm.Width
Dim startHeight As Integer = frm.Height
Dim i As Single
For i = a To b Step s
curPoint = New Point(startPoint.X + i * dWidth, startPoint.Y + i * dHeight)
curSize = New Size((1 - i) * startWidth, (1 - i) * startHeight)
ControlPaint.DrawReversibleFrame(New Rectangle(curPoint, curSize), frm.BackColor, FrameStyle.Thick)
System.Threading.Thread.Sleep(15)
ControlPaint.DrawReversibleFrame(New Rectangle(curPoint, curSize), frm.BackColor, FrameStyle.Thick)
Next
If ToTray Then
' hide the form and show the notifyicon
frm.Hide()
icon.Visible = True
Else
' hide the notifyicon and show the form
icon.Visible = False
frm.Show()
End If
End Sub
End Module
2) Add a NotifyIcon to your form and add the following:
Protected Overrides Sub WndProc(ByRef m As System.Windows.Forms.Message)
If m.Msg = WM_SYSCOMMAND AndAlso m.WParam.ToInt32() = SC_MINIMIZE Then
AnimateWindow(True, Me, NotifyIcon1)
Exit Sub
End If
MyBase.WndProc(m)
End Sub
Private Sub NotifyIcon1_DoubleClick(ByVal sender As Object, ByVal e As System.EventArgs) Handles NotifyIcon1.DoubleClick
AnimateWindow(False, Me, NotifyIcon1)
End Sub
|
How to put a .NET application in the system tray when minimized?
|
Can anyone please suggest a good code example of VB.NET/C# code to put the application in the system tray when minimized?
|
[
"Add a NotifyIcon control to your form, then use the following code:\n private void frm_main_Resize(object sender, EventArgs e)\n {\n if (this.WindowState == FormWindowState.Minimized)\n {\n this.ShowInTaskbar = false;\n this.Hide();\n notifyIcon1.Visible = true;\n }\n }\n\n private void notifyIcon1_MouseDoubleClick(object sender, MouseEventArgs e)\n {\n this.Show();\n this.WindowState = FormWindowState.Normal;\n this.ShowInTaskbar = true;\n notifyIcon1.Visible = false;\n }\n\nYou may not need to set the ShowInTaskbar property.\n",
"You can leverage a built in control called NotifyIcon. This creates a tray icon when shown. @Phillip has a code example that is somewhat complete.\nThere is a gotcha though:\nYou must override your applications main form Dispose method to call Dispose on NotifyIcon, otherwise it will stay in your tray after application exits.\npublic void Form_Dispose(object sender, EventArgs e)\n{\n if (this.Disposing)\n notifyIcon1.Dispose();\n}\n\nSomething like that.\n",
"You can do this by adding a NotifyIcon to your form and handling the form's resize event. To get back from the tray handle the NotifyIcon's double-click event.\nIf you want to add a little animation you can do this too...\n1) Add the following module:\nModule AnimatedMinimizeToTray\nStructure RECT\n Public left As Integer\n Public top As Integer\n Public right As Integer\n Public bottom As Integer\nEnd Structure\n\nStructure APPBARDATA\n Public cbSize As Integer\n Public hWnd As IntPtr\n Public uCallbackMessage As Integer\n Public uEdge As ABEdge\n Public rc As RECT\n Public lParam As IntPtr\nEnd Structure\n\nEnum ABMsg\n ABM_NEW = 0\n ABM_REMOVE = 1\n ABM_QUERYPOS = 2\n ABM_SETPOS = 3\n ABM_GETSTATE = 4\n ABM_GETTASKBARPOS = 5\n ABM_ACTIVATE = 6\n ABM_GETAUTOHIDEBAR = 7\n ABM_SETAUTOHIDEBAR = 8\n ABM_WINDOWPOSCHANGED = 9\n ABM_SETSTATE = 10\nEnd Enum\n\nEnum ABNotify\n ABN_STATECHANGE = 0\n ABN_POSCHANGED\n ABN_FULLSCREENAPP\n ABN_WINDOWARRANGE\nEnd Enum\n\nEnum ABEdge\n ABE_LEFT = 0\n ABE_TOP\n ABE_RIGHT\n ABE_BOTTOM\nEnd Enum\n\nPublic Declare Function SHAppBarMessage Lib \"shell32.dll\" Alias \"SHAppBarMessage\" (ByVal dwMessage As Integer, ByRef pData As APPBARDATA) As Integer\nPublic Const ABM_GETTASKBARPOS As Integer = &H5&\nPublic Const WM_SYSCOMMAND As Integer = &H112\nPublic Const SC_MINIMIZE As Integer = &HF020\n\nPublic Sub AnimateWindow(ByVal ToTray As Boolean, ByRef frm As Form, ByRef icon As NotifyIcon)\n ' get the screen dimensions\n Dim screenRect As Rectangle = Screen.GetBounds(frm.Location)\n\n ' figure out where the taskbar is (and consequently the tray)\n Dim destPoint As Point\n Dim BarData As APPBARDATA\n BarData.cbSize = System.Runtime.InteropServices.Marshal.SizeOf(BarData)\n SHAppBarMessage(ABMsg.ABM_GETTASKBARPOS, BarData)\n Select Case BarData.uEdge\n Case ABEdge.ABE_BOTTOM, ABEdge.ABE_RIGHT\n ' Tray is to the Bottom Right\n destPoint = New Point(screenRect.Width, screenRect.Height)\n\n Case ABEdge.ABE_LEFT\n ' Tray is to the Bottom Left\n destPoint = New Point(0, screenRect.Height)\n\n Case ABEdge.ABE_TOP\n ' Tray is to the Top Right\n destPoint = New Point(screenRect.Width, 0)\n\n End Select\n\n ' setup our loop based on the direction\n Dim a, b, s As Single\n If ToTray Then\n a = 0\n b = 1\n s = 0.05\n Else\n a = 1\n b = 0\n s = -0.05\n End If\n\n ' \"animate\" the window\n Dim curPoint As Point, curSize As Size\n Dim startPoint As Point = frm.Location\n Dim dWidth As Integer = destPoint.X - startPoint.X\n Dim dHeight As Integer = destPoint.Y - startPoint.Y\n Dim startWidth As Integer = frm.Width\n Dim startHeight As Integer = frm.Height\n Dim i As Single\n For i = a To b Step s\n curPoint = New Point(startPoint.X + i * dWidth, startPoint.Y + i * dHeight)\n curSize = New Size((1 - i) * startWidth, (1 - i) * startHeight)\n ControlPaint.DrawReversibleFrame(New Rectangle(curPoint, curSize), frm.BackColor, FrameStyle.Thick)\n System.Threading.Thread.Sleep(15)\n ControlPaint.DrawReversibleFrame(New Rectangle(curPoint, curSize), frm.BackColor, FrameStyle.Thick)\n Next\n\n\n If ToTray Then\n ' hide the form and show the notifyicon\n frm.Hide()\n icon.Visible = True\n Else\n ' hide the notifyicon and show the form\n icon.Visible = False\n frm.Show()\n End If\n\nEnd Sub\nEnd Module\n\n2) Add a NotifyIcon to your form an add the following:\nProtected Overrides Sub WndProc(ByRef m As System.Windows.Forms.Message)\n If m.Msg = WM_SYSCOMMAND AndAlso m.WParam.ToInt32() = SC_MINIMIZE Then\n AnimateWindow(True, Me, NotifyIcon1)\n Exit Sub\n End If\n MyBase.WndProc(m)\nEnd 
Sub\n\nPrivate Sub NotifyIcon1_DoubleClick(ByVal sender As Object, ByVal e As System.EventArgs) Handles NotifyIcon1.DoubleClick\n AnimateWindow(False, Me, NotifyIcon1)\nEnd Sub\n\n"
] |
[
18,
2,
0
] |
[] |
[] |
[
".net",
"system",
"system_tray"
] |
stackoverflow_0000076079_.net_system_system_tray.txt
|
Q:
What are the major new 'core' features of MS SQL Server 2008?
I'm looking mainly at things like new SQL syntax, new kinds of locking, new capabilities etc. Not so much in the surrounding services like data warehousing and reports...
A:
There's a great article on the new T-SQL features here (by SQL guru Itzik Ben-Gan). It covers
Declaring and initializing variables
Compound assignment operators
Table value constructor support through the VALUES clause
Enhancements to the CONVERT function
New date and time data types and functions
Large UDTs (GEOMETRY and GEOGRAPHY)
The HIERARCHYID data type
Table types and table-valued parameters
The MERGE statement, grouping sets enhancements
DDL trigger enhancements
Sparse columns
Filtered indexes
Large CLR user-defined aggregates
Multi-input CLR user-defined aggregates
The ORDER option for CLR table-valued functions
Object dependencies
Change data capture
Collation alignment with Microsoft® Windows®
Deprecation
A:
Filestream blob storage is the biggest bonus to me
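For anyone who hasn't seen it yet, a minimal sketch of a FILESTREAM table looks like this (it assumes FILESTREAM has been enabled on the instance and a FILESTREAM filegroup has already been added to the database and marked as the default; the names are illustrative):
CREATE TABLE dbo.Documents
(
    DocumentID UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    FileName   NVARCHAR(260)  NOT NULL,
    Content    VARBINARY(MAX) FILESTREAM NULL   -- stored in the file system, managed by SQL Server
);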
A:
New separate types for Date and Time, instead of just Datetime
New geographic types for latitude/longitude
Change Data Capture is pretty neat if you're doing anything where auditing is important
Configuration Servers, for maintaining multiple databases.
That's what caught my attention at the Heroes Happen Here launch back in April.
A:
HotAdd CPU. http://msdn.microsoft.com/en-us/library/bb964703.aspx
A:
Did you check the whitepaper on the website?
SQL Server 2008 Overview.
I cannot recall off the top of my head, but it at least has a nice database-to-object linking functionality. They have geospatial types too, if you need to use those.
A:
white paper on SQL Server 2008
This should cover most of the new features. I noticed the new date time data types and new security features.
A:
Page compression sounds really nice to me. Haven't used it yet, though.
http://sqlblog.com/blogs/linchi_shea/archive/2008/05/11/sql-server-2008-page-compression-compression-ratios-from-real-world-databases.aspx
A:
Sparse indexing for those with lots of NULLs. Also the DATETIME2 data type that a lot of people have been waiting for, covering 0001-01-01 through 9999-12-31.
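A quick sketch of those two features together (column names are just examples):
CREATE TABLE dbo.Events
(
    EventID    INT IDENTITY PRIMARY KEY,
    OccurredAt DATETIME2(7)  NOT NULL,      -- accepts 0001-01-01 through 9999-12-31
    RareNote   NVARCHAR(200) SPARSE NULL    -- cheap to store when the column is mostly NULL
);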
|
What are the major new 'core' features of MS SQL Server 2008?
|
I'm looking mainly at things like new SQL syntax, new kinds of locking, new capabilities etc. Not so much in the surrounding services like data warehousing and reports...
|
[
"There's a great article on the new T-SQL features here (by SQL guru Itzik Ben-Gan). It covers\n\nDeclaring and initializing variables\nCompound assignment operators\nTable value constructor support through the VALUES clause\nEnhancements to the CONVERT function\nNew date and time data types and functions\nLarge UDTs (GEOMETRY and GEOGRAPHY)\nThe HIERARCHYID data type\nTable types and table-valued parameters\nThe MERGE statement, grouping sets enhancements\nDDL trigger enhancements\nSparse columns\nFiltered indexes\nLarge CLR user-defined aggregates\nMulti-input CLR user-defined aggregates\nThe ORDER option for CLR table-valued functions\nObject dependencies\nChange data capture\nCollation alignment with Microsoft® Windows®\nDeprecation\n\n",
"Filestream blob storage is the biggest bonus to me\n",
"\nNew separate types for Date and Time, instead of just Datetime\nNew geographic types for lattitude/longitude\nChange Data Capture is pretty neat if you're doing anything where auditing is important\nConfiguration Servers, for maintaining multiple databases.\n\nThat's what caught my attention at the Heroes Happen Here launch back in April.\n",
"HotAdd CPU. http://msdn.microsoft.com/en-us/library/bb964703.aspx\n",
"Did you check the whitepaper on the website? \n SQL Server 2008 Overview. \nI cannot recall off the top of my head, but it atleast has a nice database to object linking functionality. They have geospatial types too, if you need to use those.\n",
"white paper on SQL Server 2008\nThis should cover most of the new features. I noticed the new date time data types and new security features.\n",
"Page compressiong sounds really nice to me. Haven't used it yet, though.\nhttp://sqlblog.com/blogs/linchi_shea/archive/2008/05/11/sql-server-2008-page-compression-compression-ratios-from-real-world-databases.aspx\n",
"Sparse indexing for those with lots of NULLs. Also the DATETIME2 data type that a lot of people have been waiting for 0001-01-01 through 9999-12-31.\n"
] |
[
4,
3,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"sql_server"
] |
stackoverflow_0000075487_sql_server.txt
|
Q:
Nant and changing file properties (read-only to writable)
As part of the Nant copy task, I would like to change the properties of the files in the target location. For instance make the files "read-write" from "read-only". How would I do this?
A:
Use the <attrib> task. For example, to make the file "test.txt" read/write, you would use
<attrib file="test.txt" readonly="false"/>
A:
Also, for a list of files, the command is:
<attrib readonly="false">
<fileset basedir="mydirectory">
<include name="**"/>
</fileset>
</attrib>
|
Nant and changing file properties (read-only to writable)
|
As part of the Nant copy task, I would like to change the properties of the files in the target location. For instance make the files "read-write" from "read-only". How would I do this?
|
[
"Use the <attrib> task. For example, to make the file \"test.txt\" read/write, you would use\n<attrib file=\"test.txt\" readonly=\"false\"/>\n\n",
"Also, for a list of files, the command is:\n<attrib readonly=\"false\">\n <fileset basedir=\"mydirectory\">\n <include name=\"**\"/>\n </fileset>\n</attrib>\n\n"
] |
[
10,
6
] |
[] |
[] |
[
"nant"
] |
stackoverflow_0000075441_nant.txt
|
Q:
How to configure IIS7 to allow zip file uploads using classic asp?
I recently installed Windows 2008 Server to replace a crashed hard drive on a web server with a variety of web pages including several classic ASP applications. One of these makes extensive use of file uploads using a com tool that has worked for several years.
More information:
My users did not provide good information in that very small zips (65K) work once I tested it myself, but larger ones do not. I did not test for the cut-off, but 365K fails. And it is not only zip files after all. A 700K doc file failed also. ErrorCode 800a0035.
A:
Someone named Anthony Jones in microsoft.public.inetserver.asp.general provided the answer as follows:
In IIS7 IIS manager click in the web
site and double click the ASP icon in
the features view. Expand Limits
Properties and modify the Maximum
Requesting Entity Body Limit.
To which I replied:
That did the trick. And it was so easy. You have no idea how many things I tried that did not work.
I think there may be a second part though. One of the things I had done was to change the
setting in applicationhost.config from:
<sectionGroup name="system.webServer">
<section name="asp" overrideModeDefault="Deny" />
to
<section name="asp" overrideModeDefault="Allow" />
After I made your change and tested it, I changed the above to Deny just on general principles of not fixing what was not broken. The website immediately stopped working until I changed it back to Allow.
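For reference, the same limit can also be set directly in configuration (applicationHost.config, or a site-level web.config once the override above is set to Allow). This is only a sketch; the 10485760 value (10 MB) is an example, not a recommendation:
<system.webServer>
  <asp>
    <!-- maxRequestEntityAllowed is in bytes -->
    <limits maxRequestEntityAllowed="10485760" />
  </asp>
</system.webServer>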
A:
There is a size limit that you will probably need to set - what's the 500 error?
|
How to configure IIS7 to allow zip file uploads using classic asp?
|
I recently installed Windows 2008 Server to replace a crashed hard drive on a web server with a variety of web pages including several classic ASP applications. One of these makes extensive use of file uploads using a com tool that has worked for several years.
More information:
My users did not provide good information in that very small zips (65K) work once I tested it myself, but larger ones do not. I did not test for the cut-off, but 365K fails. And it is not only zip files after all. A 700K doc file failed also. ErrorCode 800a0035.
|
[
"Soneone named Anthony Jones in microsoft.public.inetserver.asp.general provided the answer as follows:\n\nIn IIS7 IIS manager click in the web\n site and double click the ASP icon in \n the features view. Expand Limits\n Properties and modify the Maximum \n Requesting Entity Body Limit.\n\nTo which I replied:\nThat did the trick. And it was so easy. You have no idea how many things I tried that did not work.\nI think there may be a second part though. One of the things I had done was to change the \nsetting in applicationhost.config from:\n <sectionGroup name=\"system.webServer\">\n <section name=\"asp\" overrideModeDefault=\"Deny\" />\n\nto\n <section name=\"asp\" overrideModeDefault=\"Allow\" />\n\nAfter I made your change and tested it, I changed the above to Deny just on general principals of not fixing what was not broken. The website immediately stopped working until I changed it back to Allow.\n",
"There is a size limit that you will probably need to set - what's the 500 error?\n"
] |
[
1,
0
] |
[] |
[] |
[
"asp_classic",
"iis",
"iis_7"
] |
stackoverflow_0000069788_asp_classic_iis_iis_7.txt
|
Q:
How to get Emacs to unwrap a block of code?
Say I have a line in an emacs buffer that looks like this:
foo -option1 value1 -option2 value2 -option3 value3 \
-option4 value4 ...
I want it to look like this:
foo -option1 value1 \
-option2 value2 \
-option3 value3 \
-option4 value4 \
...
I want each option/value pair on a separate line. I also want those subsequent lines indented appropriately according to mode rather than to add a fixed amount of whitespace. I would prefer that the code work on the current block, stopping at the first non-blank line or line that does not contain an option/value pair though I could settle for it working on a selected region.
Anybody know of an elisp function to do this?
A:
Nobody had what I was looking for so I decided to dust off my elisp manual and do it myself. This seems to work well enough, though the output isn't precisely what I asked for. In this version the first option goes on a line by itself instead of staying on the first line like in my original question.
(defun tcl-multiline-options ()
"spread option/value pairs across multiple lines with continuation characters"
(interactive)
(save-excursion
(tcl-join-continuations)
(beginning-of-line)
(while (re-search-forward " -[^ ]+ +" (line-end-position) t)
(goto-char (match-beginning 0))
(insert " \\\n")
(goto-char (+(match-end 0) 3))
(indent-according-to-mode)
(forward-sexp))))
(defun tcl-join-continuations ()
"join multiple continuation lines into a single physical line"
(interactive)
(while (progn (end-of-line) (char-equal (char-before) ?\\))
(forward-line 1))
(while (save-excursion (end-of-line 0) (char-equal (char-before) ?\\))
(end-of-line 0)
(delete-char -1)
(delete-char 1)
(fixup-whitespace)))
A:
In this case I would use a macro. You can start recording a macro with C-x (, and stop recording it with C-x ). When you want to replay the macro type C-x e.
In this case, I would type, C-a C-x ( C-s v a l u e C-f C-f \ RET SPC SPC SPC SPC C-x )
That would record a macro that searches for "value", moves forward 2, inserts a slash and newline, and finally spaces the new line over to line up. Then you could repeat this macro a few times.
EDIT: I just realized your literal text may not be as easy to search as "value1". You could also search for spaces and cycle through the hits. For example, hitting C-s a few times after the first match to skip over some of the matches.
Note: Since your example is "ad-hoc" this solution will be too. Often you use macros when you need an ad-hoc solution. One way to make the macro apply more consistently is to put the original statement all on one line (can also be done by a macro or manually).
EDIT: Thanks for the comment about ( versus C-(, you were right my mistake!
A:
Personally, I do stuff like this all the time.
But I don't write a function to do it unless I'll be doing it
every day for a year.
You can easily do it with query-replace, like this:
m-x (query-replace " -option" "^Q^J -option")
I say ^Q^J as that is what you'll type to quote a newline and put it in
the string.
Then just press 'y' for the strings to replace, and 'n' to skip the weird
corner cases you'd find.
Another workhorse function is query-replace-regexp that can do
replacements of regular expressions.
and also grep-query-replace, which will perform query-replace by parsing
the output of a grep command. This is useful because you can search
for "foo" in 100 files, then do the query-replace on each occurrence
skipping from file to file.
A:
Your mode may support this already. In C mode and Makefile mode, at least, M-q (fill-paragraph) will insert line continuations in the fill-column and wrap your lines.
What mode are you editing this in?
|
How to get Emacs to unwrap a block of code?
|
Say I have a line in an emacs buffer that looks like this:
foo -option1 value1 -option2 value2 -option3 value3 \
-option4 value4 ...
I want it to look like this:
foo -option1 value1 \
-option2 value2 \
-option3 value3 \
-option4 value4 \
...
I want each option/value pair on a separate line. I also want those subsequent lines indented appropriately according to mode rather than to add a fixed amount of whitespace. I would prefer that the code work on the current block, stopping at the first non-blank line or line that does not contain an option/value pair though I could settle for it working on a selected region.
Anybody know of an elisp function to do this?
|
[
"Nobody had what I was looking for so I decided to dust off my elisp manual and do it myself. This seems to work well enough, though the output isn't precisely what I asked for. In this version the first option goes on a line by itself instead of staying on the first line like in my original question.\n(defun tcl-multiline-options ()\n \"spread option/value pairs across multiple lines with continuation characters\"\n (interactive)\n (save-excursion\n (tcl-join-continuations)\n (beginning-of-line)\n (while (re-search-forward \" -[^ ]+ +\" (line-end-position) t)\n (goto-char (match-beginning 0))\n (insert \" \\\\\\n\")\n (goto-char (+(match-end 0) 3))\n (indent-according-to-mode)\n (forward-sexp))))\n\n(defun tcl-join-continuations ()\n \"join multiple continuation lines into a single physical line\"\n (interactive)\n (while (progn (end-of-line) (char-equal (char-before) ?\\\\))\n (forward-line 1))\n (while (save-excursion (end-of-line 0) (char-equal (char-before) ?\\\\))\n (end-of-line 0)\n (delete-char -1)\n (delete-char 1)\n (fixup-whitespace)))\n\n",
"In this case I would use a macro. You can start recording a macro with C-x (, and stop recording it with C-x ). When you want to replay the macro type C-x e.\nIn this case, I would type, C-a C-x ( C-s v a l u e C-f C-f \\ RET SPC SPC SPC SPC C-x )\nThat would record a macro that searches for \"value\", moves forward 2, inserts a slash and newline, and finally spaces the new line over to line up. Then you could repeat this macro a few times.\nEDIT: I just realized, your literal text may not be as easy to search as \"value1\". You could also search for spaces and cycle through the hits. For example, hitting, C-s a few times after the first match to skip over some of the matches.\nNote: Since your example is \"ad-hoc\" this solution will be too. Often you use macros when you need an ad-hoc solution. One way to make the macro apply more consistently is to put the original statement all on one line (can also be done by a macro or manually).\nEDIT: Thanks for the comment about ( versus C-(, you were right my mistake!\n",
"Personally, I do stuff like this all the time.\nBut I don't write a function to do it unless I'll be doing it\nevery day for a year.\nYou can easily do it with query-replace, like this:\nm-x (query-replace \" -option\" \"^Q^J -option\")\nI say ^Q^J as that is what you'll type to quote a newline and put it in\nthe string.\nThen just press 'y' for the strings to replace, and 'n' to skip the wierd\ncorner cases you'd find.\nAnother workhorse function is query-replace-regexp that can do\nreplacements of regular expressions.\nand also grep-query-replace, which will perform query-replace by parsing\nthe output of a grep command. This is useful because you can search\nfor \"foo\" in 100 files, then do the query-replace on each occurrence\nskipping from file to file.\n",
"Your mode may support this already. In C mode and Makefile mode, at least, M-q (fill-paragraph) will insert line continuations in the fill-column and wrap your lines.\nWhat mode are you editing this in?\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"emacs"
] |
stackoverflow_0000068993_emacs.txt
|
Q:
Rhino Mocks: Is there any way to verify a constraint on an object property's property?
If I have
class ObjA {
public ObjB B;
}
class ObjB {
public bool Val;
}
and
class ObjectToMock {
public DoSomething(ObjA obj){...}
}
Is there any way to define an expectation that not only will DoSomething get called but that obj.B.Val == true?
I have tried
Expect.Call(delegate {
mockObj.DoSomething(null);
}).Constraints(new PropertyIs("B.Val", true));
but it seems to fail no matter what the value is.
A:
You can try using Is.Matching() and providing a predicate constraint (moved out-of line for clarity):
 Predicate<ObjA> nestedBValIsTrue = delegate(ObjA a) { return a.B.Val == true; };
Expect.Call( delegate {mockobj.DoSomething(null);})
.Constraints( Is.Matching(nestedBValIsTrue));
|
Rhino Mocks: Is there any way to verify a constraint on an object property's property?
|
If I have
class ObjA {
public ObjB B;
}
class ObjB {
public bool Val;
}
and
class ObjectToMock {
public DoSomething(ObjA obj){...}
}
Is there any way to define an expectation that not only will DoSomething get called but that obj.B.Val == true?
I have tried
Expect.Call(delegate {
mockObj.DoSomething(null);
}).Constraints(new PropertyIs("B.Val", true));
but it seems to fail no matter what the value is.
|
[
"You can try using Is.Matching() and providing a predicate constraint (moved out-of line for clarity):\n Predicate nestedBValIsTrue = delegate(ObjA a) { return a.B.Val == true;};\n Expect.Call( delegate {mockobj.DoSomething(null);})\n .Constraints( Is.Matching(nestedBValIsTrue));\n\n"
] |
[
2
] |
[] |
[] |
[
".net",
"rhino_mocks"
] |
stackoverflow_0000076584_.net_rhino_mocks.txt
|
Q:
Can I alter how types are resolved and instantiated in .NET?
In some languages you can override the "new" keyword to control how types are instantiated. You can't do this directly in .NET. However, I was wondering if there is a way to, say, handle a "Type not found" exception and manually resolve a type before whoever "new"ed up that type blows up?
I'm using a serializer that reads in an xml-based file and instantiates types described within it. I don't have any control over the serializer, but I'd like to interact with the process, hopefully without writing my own appdomain host.
Please don't suggest alternative serialization methods.
A:
You can attach an event handler to AppDomain.CurrentDomain.AssemblyResolve to take part in the process.
Your EventHandler should return the assembly that is responsible for the type passed in the ResolveEventArgs.
You can read more about it at MSDN
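A minimal sketch of wiring that up (the plug-in folder and the null fallback are purely illustrative, not part of any particular serializer):
// somewhere early in application startup (needs using System; System.IO; System.Reflection;)
AppDomain.CurrentDomain.AssemblyResolve +=
    delegate(object sender, ResolveEventArgs args)
    {
        // args.Name is the full name of the assembly the runtime could not find
        string file = new AssemblyName(args.Name).Name + ".dll";
        string path = Path.Combine(@"C:\MyApp\Plugins", file);   // assumed folder, for illustration only
        return File.Exists(path) ? Assembly.LoadFrom(path) : null;
    };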
A:
There's also the AppDomain.TypeResolve event that you can override.
A:
select isn't broken discusses how to look at it differently - the fault may be in your design not your tooling.
I think that trying to get "new" to do something else is going to be the wrong approach.
Think of why operator overloading has to be used with caution - it's counter-intuitive and hard to debug when there are hidden changes in the language semantics.
Step back and look at the design in a larger context, try to find a more sensible way to solve the problem.
|
Can I alter how types are resolved and instantiated in .NET?
|
In some languages you can override the "new" keyword to control how types are instantiated. You can't do this directly in .NET. However, I was wondering if there is a way to, say, handle a "Type not found" exception and manually resolve a type before whoever "new"ed up that type blows up?
I'm using a serializer that reads in an xml-based file and instantiates types described within it. I don't have any control over the serializer, but I'd like to interact with the process, hopefully without writing my own appdomain host.
Please don't suggest alternative serialization methods.
|
[
"You can attach an event handler to AppDomain.CurrentDomain.AssemblyResolve to take part in the process.\nYour EventHandler should return the assembly that is responsible for the type passed in the ResolveEventArgs.\nYou can read more about it at MSDN\n",
"There's also the AppDomain.TypeResolve event that you can override.\n",
"select isn't broken discusses how to look at it differently - the fault may be in your design not your tooling.\nI think that trying to get \"new\" to do something else is going to be the wrong approach.\nThink of why operator overloading has to be used with caution - it's counter-intuitive and hard to debug when there are hidden changes in the language semantics.\nStep back and look at the design in a larger context, try to find a more sensible way to solve the problem.\n"
] |
[
5,
1,
1
] |
[
"You should check out Reflection and the Activator class. They will allow you to create objects from strings. Granted, the object has to be in one of the assemblies that you have access to.\n"
] |
[
-1
] |
[
".net",
"new_operator"
] |
stackoverflow_0000076864_.net_new_operator.txt
|
Q:
Which open-source C++ database GUI project should I help with?
I am looking for an open-source project involving c++ GUI(s) working with a database. I have not done it before, and am looking for a way to get my feet wet. Which can I work on?
A:
How about this one http://sourceforge.net/projects/sqlitebrowser/:
SQLite Database browser is a light GUI editor for SQLite databases, built on top of QT. The main goal of the project is to allow non-technical users to create, modify and edit SQLite databases using a set of wizards and a spreadsheet-like interface.
A:
Do a project you can get involved in and passionate about. Hopefully a product you use every day.
A:
Anything that you like and feel that you can contribute to.
A:
In my brief experience contributing to an open-source project, I found two points keep me contributing:
Great people - the other people contributing were fun to collaborate with and hang out with (virtually).
Project you care about - doesn't really matter which project as long as the its goals are something you want to spend your free time working on.
A:
Sourceforge has a help wanted page: http://sourceforge.net/people/
browse the postings to see if a project is in your expertise or find one that sound interesting...
And let me be the first to say thank you for being willing to contribute your time and knowledge to the open source movement.
|
Which open-source C++ database GUI project should I help with?
|
I am looking for an open-source project involving c++ GUI(s) working with a database. I have not done it before, and am looking for a way to get my feet wet. Which can I work on?
|
[
"How about this one http://sourceforge.net/projects/sqlitebrowser/:\n\nSQLite Database browser is a light GUI editor for SQLite databases, built on top of QT. The main goal of the project is to allow non-technical users to create, modify and edit SQLite databases using a set of wizards and a spreadsheet-like interface.\n\n",
"Do a project you can get involved in and passionate about. Hopefully a product you use every day. \n",
"Anything that you like and feel that you can contribute to.\n",
"In my brief experience contributing to an open-source project, I found two points keep me contributing:\n\nGreat people - the other people contributing were fun to collaborate with and hang out with (virtually).\nProject you care about - doesn't really matter which project as long as the its goals are something you want to spend your free time working on.\n\n",
"Sourceforge has a help wanted page: http://sourceforge.net/people/\nbrowse the postings to see if a project is in your expertise or find one that sound interesting...\nAnd let me be the first to say thank you for being willing to contribute your time and knowlede to the open source movement.\n"
] |
[
2,
2,
1,
1,
0
] |
[] |
[] |
[
"c++",
"database",
"open_source",
"qt",
"qt4"
] |
stackoverflow_0000077013_c++_database_open_source_qt_qt4.txt
|
Q:
Find minimal necessary java classpath
Is there a tool to detect unneeded jar-files?
For instance say that I have myapp.jar, which I can launch with a classpath containing hibernate.jar, junit.jar and easymock.jar. But actually it will work fine using only hibernate.jar, since the code that calls junit.jar is not reachable.
I realize that reflection might complicate things, but I could live with a tool that ignored reflection. Except for that it seems like a relatively simple problem to solve.
If there is no such tool, what is best practices for deciding which dependencies are needed? It seems to me that it must be a common problem.
A:
This is not possible in a system that might use reflection.
That said, a static analysis tool could do a pretty good job if you don't use ANY reflection.
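As one example of such a static analysis tool, jdeps (the dependency analyser shipped with JDK 8 and later) can list which classes on the classpath your jar actually references; this is just a sketch, with the jar names taken from the question:
jdeps -verbose:class -cp lib/hibernate.jar:lib/junit.jar:lib/easymock.jar myapp.jar
Jars that never appear on the right-hand side of the output are candidates for removal, with the usual caveat about reflection.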
A:
Have you taken a look at Dependency Finder?
http://depfind.sourceforge.net/
A handy list of most of the other available Java dependency tools is also available on that site.
A:
I have used
http://code.google.com/p/jarjar/
and found it to be pretty good.
Also, you will find out if you have broken any reflection easily if you have a good set of unit/acceptance tests :).
A:
Something to add to Bill K's reply: you might not use reflection at all, but the JARs you are using might. I remember encountering something like that with xalan & xerces, where a ClassNotFoundException has been thrown at runtime.
|
Find minimal necessary java classpath
|
Is there a tool to detect unneeded jar-files?
For instance say that I have myapp.jar, which I can launch with a classpath containing hibernate.jar, junit.jar and easymock.jar. But actually it will work fine using only hibernate.jar, since the code that calls junit.jar is not reachable.
I realize that reflection might complicate things, but I could live with a tool that ignored reflection. Except for that it seems like a relatively simple problem to solve.
If there is no such tool, what is best practices for deciding which dependencies are needed? It seems to me that it must be a common problem.
|
[
"This is not possible in a system that might use reflection.\nThat said, a static analysis tool could do a pretty good job if you don't use ANY reflection.\n",
"Have you taken a look at Dependency Finder?\nhttp://depfind.sourceforge.net/\nA handy list of most of the other available Java dependency tools is also available on that site.\n",
"I have used \nhttp://code.google.com/p/jarjar/ \nand found it to be pretty good.\nAlso, you will find out if you have broken any reflection easily if you have a good set of unit/acceptance tests :).\n",
"Something to add to Bill K's reply: you might not use reflection at all, but the JARs you are using might. I remember encountering something like that with xalan & xerces, where a ClassNotFoundException has been thrown at runtime.\n"
] |
[
3,
2,
1,
0
] |
[] |
[] |
[
"dependencies",
"jar",
"java"
] |
stackoverflow_0000076291_dependencies_jar_java.txt
|
Q:
Cleanest way to implement collapsable entries in a table generated via asp:Repeater?
Before anyone suggests scrapping the table tags altogether, I'm just modifying this part of a very large system, so it really wouldn't be wise for me to revise the table structure (the app is filled with similar tables).
This is a webapp in C# .NET - data comes in from a webservice and is displayed onscreen in a table. The table's rows are generated with asp:Repeaters, so that the rows alternate colors nicely. The table previously held one item of data per row. Now, essentially, the table has sub-headers... The first row is the date, the second row shows a line of data, and all the next rows are data rows until data of a new date comes in, in which case there will be another sub-header row.
At first I thought I could cheat a little and do this pretty easily to keep the current repeater structure- I just need to feed some cells the empty string so that no data appears in them. Now, however, we're considering one of those +/- collapsers next to each date, so that they can collapse all the data. My mind immediately went to hiding rows when a button is pressed... but I don't know how to hide rows from the code behind unless the row has a unique id, and I'm not sure if you can do that with repeaters.
I hope I've expressed the problem well. I'm sure I'll find a way TBH but I just saw this site on slashdot and thought I'd give it a whirl :)
A:
When you build the row in the databinding event, you can add a unique identifier using, say, the id of the data field or something else that makes it unique.
Then you could use a client side method to expand collapse if you want to fill it with data in the beginning, toggling the style.display setting in Javascript for the table row element.
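A rough sketch of that idea, with made-up control and field names (trData, ItemDate) and assuming the repeater is bound to a DataView: the code-behind tags each row with a per-date CSS class in ItemDataBound, and a small client-side function can then toggle style.display for every row carrying that class.
// needs System.Web.UI.WebControls, System.Web.UI.HtmlControls and System.Data;
// assumes <tr id="trData" runat="server"> inside the ItemTemplate
protected void rptItems_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
    {
        DataRowView row = (DataRowView)e.Item.DataItem;
        HtmlTableRow tr = (HtmlTableRow)e.Item.FindControl("trData");
        // group marker the +/- button can use to show or hide every row for this date
        tr.Attributes["class"] = "grp-" + ((DateTime)row["ItemDate"]).ToString("yyyyMMdd");
    }
}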
A:
just wrap the contents of the item template in an asp:Panel, then you have a unique id. Then throw in some jQuery for some spice ;)
edit: just noticed that you are using a table. put the id on the row. then toggle it.
|
Cleanest way to implement collapsable entries in a table generated via asp:Repeater?
|
Before anyone suggests scrapping the table tags altogether, I'm just modifying this part of a very large system, so it really wouldn't be wise for me to revise the table structure (the app is filled with similar tables).
This is a webapp in C# .NET - data comes in from a webservice and is displayed onscreen in a table. The table's rows are generated with asp:Repeaters, so that the rows alternate colors nicely. The table previously held one item of data per row. Now, essentially, the table has sub-headers... The first row is the date, the second row shows a line of data, and all the next rows are data rows until data of a new date comes in, in which case there will be another sub-header row.
At first I thought I could cheat a little and do this pretty easily to keep the current repeater structure- I just need to feed some cells the empty string so that no data appears in them. Now, however, we're considering one of those +/- collapsers next to each date, so that they can collapse all the data. My mind immediately went to hiding rows when a button is pressed... but I don't know how to hide rows from the code behind unless the row has a unique id, and I'm not sure if you can do that with repeaters.
I hope I've expressed the problem well. I'm sure I'll find a way TBH but I just saw this site on slashdot and thought I'd give it a whirl :)
|
[
"When you build the row in the databinding event, you can add in a unique identifier using say the id of the data field or something else that you use to make it unique.\nThen you could use a client side method to expand collapse if you want to fill it with data in the beginning, toggling the style.display setting in Javascript for the table row element.\n",
"just wrap the contents of the item template in an asp:Panel, then you have you have a unique id. Then throw in some jquery for some spice ;)\nedit: just noticed that you are using a table. put the id on the row. then toggle it.\n"
] |
[
1,
0
] |
[] |
[] |
[
"c#",
"repeater"
] |
stackoverflow_0000077082_c#_repeater.txt
|
Q:
Premature Redo Log Switching in Oracle RAC
What are the possible causes of premature redo log switching in Oracle other than reaching the specified file size and executing ALTER SYSTEM SWITCH LOGFILE?
We have a situation where some (but not all) of our nodes are prematurely switching redo log files before filling up. This happens every 5 - 15 minutes and the size of the logs in each case varies wildly (from 15% - 100% of the specified size).
A:
This article says that it behaves differently in RAC.
In a parallel server environment, the
LGWR process in each instance holds a
KK instance lock on its own thread.
The id2 field identifies the thread
number. This lock is used to trigger
forced log switches from remote
instances. A log switch is forced
whenever the current SCN for a thread
falls behind the force SCN recorded in
the database entry section of the
controlfile. The force SCN is one more
than the highest high SCN of any log
file reused in any thread.
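To check whether that is what you are seeing, it can help to compare switch times and how full each log actually was across the threads. A sketch using the standard v$archived_log view (assumes the database is in archivelog mode):
-- How often each RAC thread is switching, and how much of each log was used
SELECT thread#, sequence#, first_time,
       ROUND(blocks * block_size / 1024 / 1024) AS mb_used
FROM   v$archived_log
WHERE  first_time > SYSDATE - 1
ORDER  BY first_time;
If the small switches on one node line up with full-size switches on another, the forced-switch behaviour described above is the likely cause.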
|
Premature Redo Log Switching in Oracle RAC
|
What are the possible causes of premature redo log switching in Oracle other than reaching the specified file size and executing ALTER SYSTEM SWITCH LOGFILE?
We have a situation where some (but not all) of our nodes are prematurely switching redo log files before filling up. This happens every 5 - 15 minutes and the size of the logs in each case varies wildly (from 15% - 100% of the specified size).
|
[
"This article says that it behaves differently in RAC.\n\nIn a parallel server environment, the\n LGWR process in each instance holds a\n KK instance lock on its own thread.\n The id2 field identifies the thread\n number. This lock is used to trigger\n forced log switches from remote\n instances. A log switch is forced\n whenever the current SCN for a thread\n falls behind the force SCN recorded in\n the database entry section of the\n controlfile. The force SCN is one more\n than the highest high SCN of any log\n file reused in any thread.\n\n"
] |
[
1
] |
[] |
[] |
[
"database",
"oracle",
"sql"
] |
stackoverflow_0000073607_database_oracle_sql.txt
|
Q:
Delegating a task in and getting notified when it completes (in C#)
Conceptually, I would like to accomplish the following but have had trouble understand how to code it properly in C#:
SomeMethod { // Member of AClass{}
DoSomething;
Start WorkerMethod() from BClass in another thread;
DoSomethingElse;
}
Then, when WorkerMethod() is complete, run this:
void SomeOtherMethod() // Also member of AClass{}
{ ... }
Can anyone please give an example of that?
A:
The BackgroundWorker class was added to .NET 2.0 for this exact purpose.
In a nutshell you do:
BackgroundWorker worker = new BackgroundWorker();
worker.DoWork += delegate { myBClass.DoHardWork(); };
worker.RunWorkerCompleted += new RunWorkerCompletedEventHandler(SomeOtherMethod);
worker.RunWorkerAsync();
You can also add fancy stuff like cancellation and progress reporting if you want :)
A:
In .Net 2 the BackgroundWorker was introduced, this makes running async operations really easy:
BackgroundWorker bw = new BackgroundWorker { WorkerReportsProgress = true };
bw.DoWork += (sender, e) =>
{
//what happens here must not touch the form
//as it's in a different thread
};
bw.ProgressChanged += ( sender, e ) =>
{
//update progress bars here
};
bw.RunWorkerCompleted += (sender, e) =>
{
//now you're back in the UI thread you can update the form
//remember to dispose of bw now
};
bw.RunWorkerAsync();
In .Net 1 you have to use threads.
A:
You have to use AsyncCallBacks. You can use AsyncCallBacks to specify a delegate to a method, and then specify CallBack Methods that get called once the execution of the target method completes.
Here is a small Example, run and see it for yourself.
class Program
{
public delegate void AsyncMethodCaller();
public static void WorkerMethod()
{
Console.WriteLine("I am the first method that is called.");
Thread.Sleep(5000);
Console.WriteLine("Exiting from WorkerMethod.");
}
public static void SomeOtherMethod(IAsyncResult result)
{
Console.WriteLine("I am called after the Worker Method completes.");
}
static void Main(string[] args)
{
AsyncMethodCaller asyncCaller = new AsyncMethodCaller(WorkerMethod);
AsyncCallback callBack = new AsyncCallback(SomeOtherMethod);
IAsyncResult result = asyncCaller.BeginInvoke(callBack, null);
Console.WriteLine("Worker method has been called.");
Console.WriteLine("Waiting for all invocations to complete.");
Console.Read();
}
}
A:
Although there are several possibilities here, I would use a delegate, asynchronously called using BeginInvoke method.
Warning : don't forget to always call EndInvoke on the IAsyncResult to avoid eventual memory leaks, as described in this article.
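A minimal sketch of that, assuming WorkerMethod is a parameterless void method on a BClass instance called bObject (MethodInvoker from System.Windows.Forms is just a convenient built-in void delegate; any matching delegate type works):
MethodInvoker worker = new MethodInvoker(bObject.WorkerMethod);
worker.BeginInvoke(delegate(IAsyncResult ar)
{
    worker.EndInvoke(ar);   // always pair BeginInvoke with EndInvoke
    SomeOtherMethod();      // note: this runs on a thread-pool thread, not the caller's thread
}, null);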
A:
Check out BackgroundWorker.
A:
Use Async Delegates:
// Method that does the real work
public int SomeMethod(int someInput)
{
Thread.Sleep(20);
Console.WriteLine("Processed input : {0}", someInput);
return someInput+1;
}
// Method that will be called after work is complete
public void EndSomeOtherMethod(IAsyncResult result)
{
SomeMethodDelegate myDelegate = result.AsyncState as SomeMethodDelegate;
// obtain the result
int resultVal = myDelegate.EndInvoke(result);
Console.WriteLine("Returned output : {0}", resultVal);
}
// Define a delegate
delegate int SomeMethodDelegate(int someInput);
SomeMethodDelegate someMethodDelegate = SomeMethod;
// Call the method that does the real work
// Give the method name that must be called once the work is completed.
someMethodDelegate.BeginInvoke(10, // Input parameter to SomeMethod()
EndSomeOtherMethod, // Callback Method
someMethodDelegate); // AsyncState
A:
Ok, I'm unsure of how you want to go about this. From your example, it looks like WorkerMethod does not create its own thread to execute under, but you want to call that method on another thread.
In that case, create a short worker method that calls WorkerMethod then calls SomeOtherMethod, and queue that method up on another thread. Then when WorkerMethod completes, SomeOtherMethod is called. For example:
public class AClass
{
public void SomeMethod()
{
DoSomething();
ThreadPool.QueueUserWorkItem(delegate(object state)
{
BClass.WorkerMethod();
SomeOtherMethod();
});
DoSomethingElse();
}
private void SomeOtherMethod()
{
// handle the fact that WorkerMethod has completed.
// Note that this is called on the Worker Thread, not
// the main thread.
}
}
|
Delegating a task in and getting notified when it completes (in C#)
|
Conceptually, I would like to accomplish the following but have had trouble understand how to code it properly in C#:
SomeMethod { // Member of AClass{}
DoSomething;
Start WorkerMethod() from BClass in another thread;
DoSomethingElse;
}
Then, when WorkerMethod() is complete, run this:
void SomeOtherMethod() // Also member of AClass{}
{ ... }
Can anyone please give an example of that?
|
[
"The BackgroundWorker class was added to .NET 2.0 for this exact purpose.\nIn a nutshell you do:\nBackgroundWorker worker = new BackgroundWorker();\nworker.DoWork += delegate { myBClass.DoHardWork(); }\nworker.RunWorkerCompleted += new RunWorkerCompletedEventHandler(SomeOtherMethod);\nworker.RunWorkerAsync();\n\nYou can also add fancy stuff like cancellation and progress reporting if you want :)\n",
"In .Net 2 the BackgroundWorker was introduced, this makes running async operations really easy:\nBackgroundWorker bw = new BackgroundWorker { WorkerReportsProgress = true };\n\nbw.DoWork += (sender, e) => \n {\n //what happens here must not touch the form\n //as it's in a different thread\n };\n\nbw.ProgressChanged += ( sender, e ) =>\n {\n //update progress bars here\n };\n\nbw.RunWorkerCompleted += (sender, e) => \n {\n //now you're back in the UI thread you can update the form\n //remember to dispose of bw now\n };\n\nworker.RunWorkerAsync();\n\nIn .Net 1 you have to use threads.\n",
"You have to use AsyncCallBacks. You can use AsyncCallBacks to specify a delegate to a method, and then specify CallBack Methods that get called once the execution of the target method completes.\nHere is a small Example, run and see it for yourself.\nclass Program\n {\n public delegate void AsyncMethodCaller();\n\n\n public static void WorkerMethod()\n {\n Console.WriteLine(\"I am the first method that is called.\");\n Thread.Sleep(5000);\n Console.WriteLine(\"Exiting from WorkerMethod.\");\n }\n\n public static void SomeOtherMethod(IAsyncResult result)\n {\n Console.WriteLine(\"I am called after the Worker Method completes.\");\n }\n\n\n\n static void Main(string[] args)\n {\n AsyncMethodCaller asyncCaller = new AsyncMethodCaller(WorkerMethod);\n AsyncCallback callBack = new AsyncCallback(SomeOtherMethod);\n IAsyncResult result = asyncCaller.BeginInvoke(callBack, null);\n Console.WriteLine(\"Worker method has been called.\");\n Console.WriteLine(\"Waiting for all invocations to complete.\");\n Console.Read();\n\n }\n}\n\n",
"Although there are several possibilities here, I would use a delegate, asynchronously called using BeginInvoke method.\nWarning : don't forget to always call EndInvoke on the IAsyncResult to avoid eventual memory leaks, as described in this article.\n",
"Check out BackgroundWorker.\n",
"Use Async Delegates:\n// Method that does the real work\npublic int SomeMethod(int someInput)\n{\nThread.Sleep(20);\nConsole.WriteLine(”Processed input : {0}”,someInput);\nreturn someInput+1;\n} \n\n\n// Method that will be called after work is complete\npublic void EndSomeOtherMethod(IAsyncResult result)\n{\nSomeMethodDelegate myDelegate = result.AsyncState as SomeMethodDelegate;\n// obtain the result\nint resultVal = myDelegate.EndInvoke(result);\nConsole.WriteLine(”Returned output : {0}”,resultVal);\n}\n\n// Define a delegate\ndelegate int SomeMethodDelegate(int someInput);\nSomeMethodDelegate someMethodDelegate = SomeMethod;\n\n// Call the method that does the real work\n// Give the method name that must be called once the work is completed.\nsomeMethodDelegate.BeginInvoke(10, // Input parameter to SomeMethod()\nEndSomeOtherMethod, // Callback Method\nsomeMethodDelegate); // AsyncState\n\n",
"Ok, I'm unsure of how you want to go about this. From your example, it looks like WorkerMethod does not create its own thread to execute under, but you want to call that method on another thread. \nIn that case, create a short worker method that calls WorkerMethod then calls SomeOtherMethod, and queue that method up on another thread. Then when WorkerMethod completes, SomeOtherMethod is called. For example:\npublic class AClass\n{\n public void SomeMethod()\n {\n DoSomething();\n\n ThreadPool.QueueUserWorkItem(delegate(object state)\n {\n BClass.WorkerMethod();\n SomeOtherMethod();\n });\n\n DoSomethingElse();\n }\n\n private void SomeOtherMethod()\n {\n // handle the fact that WorkerMethod has completed. \n // Note that this is called on the Worker Thread, not\n // the main thread.\n }\n}\n\n"
] |
[
13,
5,
2,
2,
1,
1,
0
] |
[] |
[] |
[
"c#",
"delegates",
"notifications"
] |
stackoverflow_0000074880_c#_delegates_notifications.txt
|
Q:
libxml2-p25 on OS X 10.5 needs sudo?
When trying to use libxml2 as myself I get an error saying the package cannot be found. If I run as as super user I am able to import fine.
I have installed python25 and all libxml2 and libxml2-py25 related libraries via fink and own the entire path including the library. Any ideas why I'd still need to sudo?
A:
Check your path by running:
'echo $PATH'
A:
I would suspect the permissions on the library. Can you do a strace or similar to find out the filenames it's looking for, and then check the permissions on them?
A:
The PATH environment variable was the mistake.
|
libxml2-p25 on OS X 10.5 needs sudo?
|
When trying to use libxml2 as myself I get an error saying the package cannot be found. If I run as super user I am able to import fine.
I have installed python25 and all libxml2 and libxml2-py25 related libraries via fink and own the entire path including the library. Any ideas why I'd still need to sudo?
|
[
"Check your path by running:\n'echo $PATH'\n\n",
"I would suspect the permissions on the library. Can you do a strace or similar to find out the filenames it's looking for, and then check the permissions on them?\n",
"The PATH environment variable was the mistake.\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"libxml2",
"macos",
"python"
] |
stackoverflow_0000068541_libxml2_macos_python.txt
|
Q:
SharePoint Content Query Web Part
I have a content query web part that queries by content type against a site collection. I have grouped it by content type so I have:
-- Agenda (Content Type)
----Agenda #1
----Agenda #2
-- Report (Content Type)
----Report #1
----Report #2
I would like to show a second grouping for site, so:
-- Agenda (Content Type)
----This Site
------Agenda #1
----That Site
------Agenda #2
-- Report (Content Type)
----This Site
------Report #1
------Report #2
Does anyone know the best way to achieve this?
All the best
Kieran
A:
You have full access to change the XSLT for the Content Query Web Part. I recommend exporting the web part, saving the inline XSLT into the Style Library, and changing the web part from using inline XSLT to using a link to the file in the Style Library. This allows you to edit the XSLT file using SharePoint Designer, which is much easier.
|
SharePoint Content Query Web Part
|
I have a content query web part that queries by content type against a site collection. I have grouped it by content type so I have:
-- Agenda (Content Type)
----Agenda #1
----Agenda #2
-- Report (Content Type)
----Report #1
----Report #2
I would like to show a second grouping for site, so:
-- Agenda (Content Type)
----This Site
------Agenda #1
----That Site
------Agenda #2
-- Report (Content Type)
----This Site
------Report #1
------Report #2
Does anyone know the best way to achieve this?
All the best
Kieran
|
[
"You have full access to change the xslt for the content query webpart. I reccomend exporting the webpart, saving the inline xslt into the style library and changing the webpart from using inline xslt to using a link to the file in the style library. This allows you to edit the xslt file using sharepoint designer which is much easier.\n"
] |
[
3
] |
[] |
[] |
[
"list",
"moss",
"sharepoint"
] |
stackoverflow_0000072624_list_moss_sharepoint.txt
|
Q:
Using Outlook API to get to a specific folder
I'm trying to write some C# code to get to a specific folder in an Outlook mailbox. I have the following code:
Outlook.Application oApp = new Outlook.Application();
Outlook.NameSpace oNS = oApp.GetNamespace("mapi");
Outlook.Recipient oRecip = oNS.CreateRecipient("AccountNameHere");
oRecip.Resolve();
if (oRecip.Resolved)
{
oInbox = oNS.GetSharedDefaultFolder(oRecip, Outlook.OlDefaultFolders.olFolderInbox);
oInboxMsgs = oInbox.Items;
ItemCount = oInboxMsgs.Count;
Console.Writeline("There are {0] items.", ItemCount.ToString())
}
This will get me to the "Inbox" folder. I'm trying to get to a folder at the same level as the Inbox folder. I believe I need to use GetFolderFromID instead of GetSharedDefaultFolder, but I don't understand how to use it. Is there a way to iterate through all the top level folders? How might I determine the EntryID and StoreID of the folder?
Thanks!
A:
You can use the Folders collection member of the Outlook.NameSpace object. That way you can iterate through the collection and find your folder by its name. In case you still want to use GetFolderFromID, you can use the OutlookSpy tool to get the EntryID and StoreID values.
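A minimal sketch of that approach, assuming the standard Outlook interop object model ("SomeFolder" is a purely hypothetical folder name): the Inbox's Parent is the mailbox root, and the root's Folders collection holds the folders at the same level as the Inbox, each exposing the EntryID and StoreID asked about above.
using Outlook = Microsoft.Office.Interop.Outlook;

Outlook.Application oApp = new Outlook.Application();
Outlook.NameSpace oNS = oApp.GetNamespace("mapi");
Outlook.Recipient oRecip = oNS.CreateRecipient("AccountNameHere");
oRecip.Resolve();

// The shared Inbox, exactly as in the question.
Outlook.MAPIFolder oInbox =
    oNS.GetSharedDefaultFolder(oRecip, Outlook.OlDefaultFolders.olFolderInbox);

// The Inbox's parent is the mailbox root; its Folders collection
// contains the folders that sit at the same level as the Inbox.
Outlook.MAPIFolder oRoot = (Outlook.MAPIFolder)oInbox.Parent;
foreach (Outlook.MAPIFolder oFolder in oRoot.Folders)
{
    Console.WriteLine("{0}  EntryID={1}  StoreID={2}",
        oFolder.Name, oFolder.EntryID, oFolder.StoreID);
    if (oFolder.Name == "SomeFolder")
    {
        Console.WriteLine("Found it - {0} items.", oFolder.Items.Count);
    }
}
The same folders can also be reached by walking oNS.Folders (each top-level entry is a mailbox/store whose own Folders collection holds its root-level folders), which is the route described above.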
|
Using Outlook API to get to a specific folder
|
I'm trying to write some C# code to get to a specific folder in an Outlook mailbox. I have the following code:
Outlook.Application oApp = new Outlook.Application();
Outlook.NameSpace oNS = oApp.GetNamespace("mapi");
Outlook.Recipient oRecip = oNS.CreateRecipient("AccountNameHere");
oRecip.Resolve();
if (oRecip.Resolved)
{
oInbox = oNS.GetSharedDefaultFolder(oRecip, Outlook.OlDefaultFolders.olFolderInbox);
oInboxMsgs = oInbox.Items;
ItemCount = oInboxMsgs.Count;
Console.Writeline("There are {0] items.", ItemCount.ToString())
}
This will get me to the "Inbox" folder. I'm trying to get to a folder at the same level as the Inbox folder. I believe I need to use GetFolderFromID instead of GetSharedDefaultFolder, but I don't understand how to use it. Is there a way to iterate through all the top level folders? How might I determine the EntryID and StoreID of the folder?
Thanks!
|
[
"You can use the Folders collection member of the Outlook.NameSpace object. That way you can iterate through the collection and find your folder by it's name. In case you still want to use GetFolderFromID, you can use OutlookSpy tool to get the EntryID and StoreID values.\n"
] |
[
5
] |
[] |
[] |
[
"api",
"c#",
"outlook"
] |
stackoverflow_0000076964_api_c#_outlook.txt
|
Q:
What is the best way to handle sessions for a PHP site on multiple hosts?
PHP stores its session information on the file system of the host of the server establishing that session. In a multiple-host PHP environment, where load is unintelligently distributed amongst each host, PHP session variables are not available to each request (unless by chance the request is assigned to the same host -- assume we have no control over the load balancer).
This site, dubbed "The Hitchhikers Guide to PHP Load Balancing", suggests overriding PHP's session handler and storing session information in the shared database.
What, in your humble opinion, is the best way to maintain session information in a multiple PHP host environment?
UPDATE: Thanks for the great feedback. For anyone looking for example code, we found a useful tutorial on writing a Session Manager class for MySQL which I recommend checking out.
A:
Database, or Database+Memcache. Generally speaking sessions should not be written to very often. Start with a database solution that only writes to the db when the session data has changed. Memcache should be added later as a performance enhancement. A db solution will be very fast because you are only ever looking up primary keys. Make sure the db has row locking, not table locking (which is all MyISAM offers). Memcache alone is a bad idea... If it overflows, crashes, or is restarted, the users will be logged out.
A:
Whatever you do, do not store it on the server itself (even if you're only using one server, or in a 1+1 failover scenario). It'll put you on a dead end.
I would say, use Database+Memcache for storage/retrieval, it'll keep you out of Zend's grasp (and believe me things do break down at some point with Zend). Since you will be able to easily partition by UserID or SessionID, even going with MySQL will leave things quite scalable.
(Edit: additionally, going with DB+Memcache does not bind you to a commercial party, and it does not bind you to PHP either -- something you might be happy about down the road)
A:
Storing the session data in a shared db works, but can be slow. If it's a really big site, memcache is probably a better option.
A:
Depending on your project's budget, you may also consider Zend Platform for your production machines, which in addition to a bunch of other great features, includes configurable session clustering, which works sort of like a CDN does.
|
What is the best way to handle sessions for a PHP site on multiple hosts?
|
PHP stores its session information on the file system of the host of the server establishing that session. In a multiple-host PHP environment, where load is unintelligently distributed amongst each host, PHP session variables are not available to each request (unless by chance the request is assigned to the same host -- assume we have no control over the load balancer).
This site, dubbed "The Hitchhikers Guide to PHP Load Balancing", suggests overriding PHP's session handler and storing session information in the shared database.
What, in your humble opinion, is the best way to maintain session information in a multiple PHP host environment?
UPDATE: Thanks for the great feedback. For anyone looking for example code, we found a useful tutorial on writing a Session Manager class for MySQL which I recommend checking out.
|
[
"Database, or Database+Memcache. Generally speaking sessions should not be written to very often. Start with a database solution that only writes to the db when the session data has changed. Memcache should be added later as a performance enhancement. A db solution will be very fast because you are only ever looking up primary keys. Make sure the db has row locking, not table locking (myISAM). MemCache only is a bad idea... If it overflows, crashes, or is restarted, the users will be logged out.\n",
"Whatever you do, do not store it on the server itself (even if you're only using one server, or in a 1+1 failover scenario). It'll put you on a dead end.\nI would say, use Database+Memcache for storage/retrieval, it'll keep you out of Zend's grasp (and believe me things do break down at some point with Zend). Since you will be able to easily partition by UserID or SessionID even going with MySQL will leave things quite scalable.\n(Edit: additionally, going with DB+Memcache does not bind you to a comercial party, it does not bind you to PHP either -- something you might be happy for down the road)\n",
"Storing the session data in a shared db works, but can be slow. If it's a really big site, memcache is probably a better option.\n",
"Depending on your project's budget, you may also consider Zend Platform for your production machines, which in addition to a bunch of other great features, includes configurable session clustering, which works sort of like a CDN does.\n"
] |
[
15,
2,
1,
1
] |
[] |
[] |
[
"load_balancing",
"memcached",
"mysql",
"php",
"session"
] |
stackoverflow_0000076712_load_balancing_memcached_mysql_php_session.txt
|
Q:
Which type of external drives are good for SQL backup files?
As a part of database maintenance we are thinking of taking daily backups onto an external/firewire drives. Are there any specific recommended drives for the frequent read/write operations from sql server 2000 to take backups?
A:
Whatever you do, just don't use USB 1.1.
A:
The simple fact is that hard drives will fail over a period of time. The best two solutions
I can recommend unfortunately do not involve hard drives.
Using a tape backup, granted, is slower, but you get the flexibility of offsite backups. It is easy to put a tape in the boot of a car. Rotating the tapes means that you can have pretty recent protection against any unforeseen situations.
Another option is an online backup solution where the backups are encrypted and copied offsite. My recommendation is definitely to have at least some sort of offsite backup external to the building where you keep the SQL servers. After all, it is "disaster" recovery.
A:
Pretty much any external drive can be used here, provided it has the space to hold your backups and enough performance to get the backups there. The specifics depend on your exact requirements.
In my experience, FireWire tends to outperform USB for disk activity, regardless of their theoretical maximum transfer rates. And FireWire 800 will perform even better yet. I have found poor performance from FireWire and USB drives when you have multiple concurrent reads/writes going on, but with backups, it's generally more large sequential reads and writes.
Another option that is a little bit more complex to setup and manage, but can provide you with greater flexibility and performance is external SATA (eSATA). You can even get Hot Swappable external SATA enclosures for even greater convenience, and ease of taking your backups offsite.
However, another related option that I've had excellent success with is to setup a separate server to act as your backup server. You can use whatever disk options you choose (FireWire, SATA, eSATA, SCSI, FiberChannel, iSCSI, etc), and share out that disk storage as a network share (I use NFS and Samba on a Linux box, but for a Windows oriented network, a Windows share will work fine). You can then access the shares across the network and backup multiple machines to it. Also, the separation of backup server from your production machines will give you greater flexibility if you need to take it offline for maintenance, adding/removing storage, etc.
A:
Drobo!
A USB hard drive RAID array that uses normal - off the shelf hard drives. 4 bays, when you need more space, buy another hard drive. Out of bays? Buy bigger hard drives and replace your smallest in the array.
http://www.drobo.com/
A:
Depending on the size of the databases speed of the drive can be a real factor. I would look into something like Drobo but with an eSata or SAS interface. There is nothing more entertaining than watching a terabyte go through USB 2.0. Also, you might consider something like hyperbac or RedGate SQL Backup to compress the backup and make it easier to fit on the drive as well.
A:
For the most part, external drives aren't a good option - unless your database is really small.
Other than some of the options others have listed, you can also use UNC/Network shares as a great 'off-box' option.
Check out the following video for some other options:
SQL Server Backup Options (Free Video)
And the videos on configuring backups on the site will show you how to specify a network path for backup purposes.
|
Which type of external drives are good for SQL backup files?
|
As a part of database maintenance we are thinking of taking daily backups onto an external/firewire drives. Are there any specific recommended drives for the frequent read/write operations from sql server 2000 to take backups?
|
[
"Whatever you do, just don't use USB 1.1.\n",
"The simple fact is that harddrives over a period of time will fail. The best two solutions\nI can recommend unfortunately do not avail of using harddrives.\nUsing a tape backup, granted is slower but you get the flexibility of having the option of offsite backups. It is easy to put a tape in the boot of a car. Rotating the tapes means that you can have pretty recent protection against any unforseen situations. \nAnother option is an online backup solution where the backups are encrypted and copied offsite. My reccommendation is definitly at least having some sort of offsite backup external to the building that you keep the SQL servers. After all it is \"disaster\" recovery.\n",
"Pretty much any external drive can be used here, provided it has the space to hold your backups and enough performance to get the backups there. The specifics depend on your exact requirements.\nIn my experience, FireWire tends to outperform USB for disk activity, regardless of their theoretical maximum transfer rates. And FireWire 800 will perform even better yet. I have found poor performance from FireWire and USB drives when you have multiple concurrent reads/writes going on, but with backups, it's generally more large sequential reads and writes.\nAnother option that is a little bit more complex to setup and manage, but can provide you with greater flexibility and performance is external SATA (eSATA). You can even get Hot Swappable external SATA enclosures for even greater convenience, and ease of taking your backups offsite.\nHowever, another related option that I've had excellent success with is to setup a separate server to act as your backup server. You can use whatever disk options you choose (FireWire, SATA, eSATA, SCSI, FiberChannel, iSCSI, etc), and share out that disk storage as a network share (I use NFS and Samba on a Linux box, but for a Windows oriented network, a Windows share will work fine). You can then access the shares across the network and backup multiple machines to it. Also, the separation of backup server from your production machines will give you greater flexibility if you need to take it offline for maintenance, adding/removing storage, etc.\n",
"Drobo!\nA USB hard drive RAID array that uses normal - off the shelf hard drives. 4 bays, when you need more space, buy another hard drive. Out of bays? Buy bigger hard drives and replace your smallest in the array.\nhttp://www.drobo.com/\n",
"Depending on the size of the databases speed of the drive can be a real factor. I would look into something like Drobo but with an eSata or SAS interface. There is nothing more entertaining than watching a terabyte go through USB 2.0. Also, you might consider something like hyperbac or RedGate SQL Backup to compress the backup and make it easier to fit on the drive as well.\n",
"For the most part, external drives aren't a good option - unless your database is really small. \nOther than some of the options others have listed, you can also use UNC/Network shares as a great 'off-box' option. \nCheck out the following video for some other options:\nSQL Server Backup Options (Free Video)\nAnd the videos on configuring backups on the site will show you how to specify a network path for backup purposes. \n"
] |
[
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"sql_server"
] |
stackoverflow_0000075919_sql_server.txt
|
Q:
How important is a database in managing information?
I have been hired to help write an application that manages certain information for the end user. It is intended to manage a few megabytes of information, but also manage scanned images in full resolution. Should this project use a database, and why or why not?
A:
Any question "Should I use a certain tool?" comes down to asking exactly what you want to do. You should ask yourself - "Do I want to write my own storage for this data?"
Most web based applications are written against a database because most databases support many "free" features - you can have multiple webservers. You can use standard tools to edit, verify and backup your data. You can have a robust storage solution with transactions.
A:
The database won't help you much in dealing with the image data itself, but anything that manages a bunch of images is going to have meta-data about the images that you'll be dealing with. Depending on the meta-data and what you want to do with it, a database can be quite helpful indeed with that.
And just because the database doesn't help you much with the image data, that doesn't mean you can't store the images in the database. You would store them in a BLOB column of a SQL database.
A:
If the amount of data is small, or installed on many client machines, you might not want the overhead of a database.
Is it intended to be installed on many users machines? Adding the overhead of ensuring you can run whatever database engine you choose on a client installed app is not optimal. Since the amount of data is small, I think XML would be adequate here. You could Base64 encode the images and store them as CDATA.
Will the application be run on a server? If you have concurrent users, then databases have concepts for handling these scenarios (transactions), and that can be helpful. And the scanned image data would be appropriate for a BLOB.
A:
You shouldn't store images in the database, as is the general consensus here.
The file system is just much better at storing images than your database is.
You should use a database to store meta information about those images, such as a title, description, etc, and just store a URL or path to the images.
A:
When it comes to storing images in a database I try to avoid it. In your case, from what I can gather from your question, there is a possibility of a substantial number of fairly large images, so I would probably strongly oppose it.
If this is a web application I would use a database for quick searching and indexing of images using keywords and other parameters. Then have a column pointing to the location of the image in a filesystem if possible with some kind of folder structure to help further decrease the image load time.
If you need greater security due to the directory being available (network share) and the application is local then you should probably bite the bullet and store the images in the database.
A:
My gut reaction is "why not?" A database is going to provide a framework for storing information, with all of the input/output/optimization functions provided in a documented format. You can go with a server-side solution, or a local database such as SQLite or the local version of SQL Server. Either way you have a robust, documented data management framework.
A:
This post should give you most of the opinions you need about storing images in the database. Do you also mean 'should I use a database for the other information?' or are you just asking about the images?
A:
Our CMS stores all of the check images we process. It uses a database for metadata and lets the file system handle the scanned images.
A simple database like SQLite sounds appropriate - it will let you store file metadata in a consistent, transactional way. Then store the path to each image in the database and let the file system do what it does best - manage files.
SQL Server 2008 has a new data type built for in-database files, but before that BLOB was the way to store files inside the database. On a small scale that would work too.
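A small sketch of that metadata-plus-path layout (assuming the Microsoft.Data.Sqlite package; the table and column names here are purely illustrative):
using System;
using Microsoft.Data.Sqlite;

using var conn = new SqliteConnection("Data Source=images.db");
conn.Open();

var create = conn.CreateCommand();
create.CommandText =
    @"CREATE TABLE IF NOT EXISTS ScannedImage (
          Id          INTEGER PRIMARY KEY AUTOINCREMENT,
          Title       TEXT NOT NULL,
          Description TEXT,
          FilePath    TEXT NOT NULL,   -- full-resolution file stays on disk
          ScannedOn   TEXT NOT NULL
      );";
create.ExecuteNonQuery();

var insert = conn.CreateCommand();
insert.CommandText =
    "INSERT INTO ScannedImage (Title, Description, FilePath, ScannedOn) " +
    "VALUES ($title, $desc, $path, $scannedOn);";
insert.Parameters.AddWithValue("$title", "Invoice 42");
insert.Parameters.AddWithValue("$desc", "Scanned invoice, full resolution");
insert.Parameters.AddWithValue("$path", @"images\2008\invoice-42.tif");
insert.Parameters.AddWithValue("$scannedOn", DateTime.UtcNow.ToString("o"));
insert.ExecuteNonQuery();
The application then reads FilePath back out of the row and loads the image from the file system, so the database only ever carries a few bytes per picture.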
A:
A database is meant to manage large volumes of data, and is supposed to give you fast access to read and write that data in spite of the size. Put simply, databases manage scale for data - scale that you don't want to deal with. If you have only a few users (hundreds?), you could just as easily manage the data on disk (say XML?) and keep the data in memory. The images should clearly not go into the database, so the question is how much data, or for how many users, are you maintaining this database instance?
A:
If you want to have a structured way to store and retrieve information, a database is most definitely the way to go. It makes your application flexible and more powerful, and lets you focus on the actual application rather than incidentals like trying to write your own storage system.
For individual applications, SQLite is great. It fits right in an app as a file; no need for a whole RDBMS juggernaut.
A:
There are a lot of factors to this. But, being a database weenie, I would err on the side of having a database. It just makes life easier when things change. And things will change.
Depending on the images, you might store them on the file system or actually blob them and put them in the database (not supported in all DBMSs). If the files are very small, then I would blob them. If they are big, then I would keep them on the file system and manage them yourself.
There are so many free or cheap DBMSs out there that there really is no excuse not to use one. I'm a SQL Server guy, but if your application is that simple, then the free version of MySQL should do the job. In fact, it has some pretty cool stuff in there.
|
How important is a database in managing information?
|
I have been hired to help write an application that manages certain information for the end user. It is intended to manage a few megabytes of information, but also manage scanned images in full resolution. Should this project use a database, and why or why not?
|
[
"Any question \"Should I use a certain tool?\" comes down to asking exactly what you want to do. You should ask yourself - \"Do I want to write my own storage for this data?\"\nMost web based applications are written against a database because most databases support many \"free\" features - you can have multiple webservers. You can use standard tools to edit, verify and backup your data. You can have a robust storage solution with transactions. \n",
"The database won't help you much in dealing with the image data itself, but anything that manages a bunch of images is going to have meta-data about the images that you'll be dealing with. Depending on the meta-data and what you want to do with it, a database can be quite helpful indeed with that.\nAnd just because the database doesn't help you much with the image data, that doesn't mean you can't store the images in the database. You would store them in a BLOB column of a SQL database.\n",
"If the amount of data is small, or installed on many client machines, you might not want the overhead of a database.\nIs it intended to be installed on many users machines? Adding the overhead of ensuring you can run whatever database engine you choose on a client installed app is not optimal. Since the amount of data is small, I think XML would be adequate here. You could Base64 encode the images and store them as CDATA.\nWill the application be run on a server? If you have concurrent users, then databases have concepts for handling these scenarios (transactions), and that can be helpful. And the scanned image data would be appropriate for a BLOB.\n",
"You shouldn't store images in the database, as is the general consensus here.\nThe file system is just much better at storing images than your database is.\nYou should use a database to store meta information about those images, such as a title, description, etc, and just store a URL or path to the images.\n",
"When it comes to storing images in a database I try to avoid it. In your case from what I can gather of your question there is a possibilty for a subsantial number of fairly large images, so I would probably strong oppose it.\nIf this is a web application I would use a database for quick searching and indexing of images using keywords and other parameters. Then have a column pointing to the location of the image in a filesystem if possible with some kind of folder structure to help further decrease the image load time.\nIf you need greater security due to the directory being available (network share) and the application is local then you should probably bite the bullet and store the images in the database.\n",
"My gut reaction is \"why not?\" A database is going to provide a framework for storing information, with all of the input/output/optimization functions provided in a documented format. You can go with a server-side solution, or a local database such as SQLite or the local version of SQL Server. Either way you have a robust, documented data management framework.\n",
"This post should give you most of the opinions you need about storing images in the database. Do you also mean 'should I use a database for the other information?' or are you just asking about the images?\n",
"Our CMS stores all of the check images we process. It uses a database for metadata and lets the file system handle the scanned images. \nA simple database like SQLite sounds appropriate - it will let you store file metadata in a consistent, transactional way. Then store the path to each image in the database and let the file system do what it does best - manage files.\nSQL Server 2008 has a new data type built for in-database files, but before that BLOB was the way to store files inside the database. On a small scale that would work too.\n",
"A database is meant to manage large volumes of data, and are supposed to give you fast access to read and write that data in spite of the size. Put simply, they manage scale for data - scale that you don't want to deal with. If you have only a few users (hundreds?), you could just as easily manage the data on disk (say XML?) and keep the data in memory. The images should clearly not go in to the database so the question is how much data, or for how many users are you maintaining this database instance?\n",
"If you want to have a structured way to store and retrieve information, a database is most definitely the way to go. It makes your application flexible and more powerful, and lets you focus on the actual application rather than incidentals like trying to write your own storage system. \nFor individual applications, SQLite is great. It fits right in an app as a file; no need for a whole DRBMS juggernaut. \n",
"There are a lot of factors to this. But, being a database weenie, I would err on the side of having a database. It just makes life easier when things changes. and things will change.\nDepending on the images, you might store them on the file system or actually blob them and put them in the database (Not supported in all DBMS's). If the files are very small, then I would blob them. If they are big, then I would keep them on he file system and manage them yourself.\nThere are so many free or cheap DBMS's out there that there really is no excuse not to use one. I'm a SQL Server guy, but f your application is that simple, then the free version of mysql should do the job. In fact, it has some pretty cool stuff in there.\n"
] |
[
1,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"database"
] |
stackoverflow_0000076934_database.txt
|
Q:
Long term source code archiving: Is it possible?
I'm curious about keeping source code around reliably and securely for several years. From my research/experience:
Optical media, such as burned DVD-Rs, lose bits of data over time. After a couple of years, I don't get all the files off that I put on them. Read errors, etc.
Hard drives are mechanical and subject to failure/obsolescence, with expensive data recovery fees that hardly keep your data private (you send it away to some company).
Magnetic tape storage: see #2.
Online storage is subject to the whim of some data storage center, the security or lack of security there, and the possibility that the company folds, etc. Plus it's expensive, and you can't guarantee that they aren't peeking in.
I've found over time that I've lost source code to old projects I've done due to these problems. Are there any other solutions?
Summary of answers:
1. Use multiple methods for redundancy.
2. Print out your source code either as text or barcode.
3. RAID arrays are better for local storage.
4. Open sourcing your project will make it last forever.
5. Encryption is the answer to security.
6. Magnetic tape storage is durable.
7. Distributed/guaranteed online storage is cheap and reliable.
8. Use source control to maintain history, and backup the repo.
A:
The best answer is "in multiple places". If I were concerned about keeping my source code for as long as possible I would do:
1) Backup to some optical media on a regular basis, say burn it to DVD once a month and archive it offsite.
2) Back it up to multiple hard drives on my local machines
3) Back it up to Amazon's S3 service. They have guarantees, it's a distributed system so no single points of failure and you can easily encrypt your data so they can't "peek" at it.
With those three steps your chances of losing data are effectively zero. There is no such thing as too many backups for VERY important data.
A:
Based on your level of paranoia, I'd recommend a printer and a safe.
More seriously, a RAID array isn't so expensive anymore, and so long as you continue to use and monitor it, a properly set-up array is virtually guaranteed never to lose data.
A:
Any data you want to keep should be stored in multiple places on multiple formats. While the odds of any one failing may be significant, the odds of all of them failing are pretty small.
A:
If you want to archive something for a long time, I would go with a tape drive. They may not hold a whole lot, but they are reliable and pretty much the storage medium of choice for data archiving. I've never personally experienced dataloss on a tape drive, however.
A:
The best way to back up your projects is to make them open source and famous. That way there will always be people with a copy of it and able to send it to you.
After that, just take care of the magnetic/optical media, with continued renewal and multiple copies (online as well; remember you can encrypt it) on multiple media (including, why not, RAID sets).
A:
I think you'd be surprised how reasonably priced online storage is these days. Amazon S3 (Simple Storage Service) is $0.10 per gigabyte per month, with upload costs of $0.10 per GB and download costing $0.17 per GB maximum.
Therefore, if you stored 20GB for a month, uploaded 20GB and downloaded 20GB it would cost you $8.40 (slightly more expensive in the European data center at $9).
That's cheap enough to store your data in both US and EU data centers AND on dvd - the chances of losing all three are slim, to say the least.
There are also front-ends available, such as JungleDisk.
http://aws.amazon.com
http://www.jungledisk.com/
http://www.google.co.uk/search?q=amazon%20s3%20clients
A:
Don't forget to use Subversion (http://subversion.tigris.org/). I keep my whole life in Subversion (it's awesome).
A:
The best home-usable solution I've seen was printing out the backups using a 2D barcode - the data density was fairly high, it could be re-scanned fairly easily (presuming a sheet-feeding scanner), and it moved the problem from the digital domain back into the physical one - which is fairly easily met by something like a safe deposit box, or a company like Iron Mountain.
The other answer is 'all of the above'. Redundancy always helps.
A:
For my projects, I use a combination of 1, 2, & 4. If it's really important data, you need to have multiple copies in multiple places. My important data is replicated to 3-4 locations every night.
If you want a simpler solution, I recommend you get an online storage account from a well known provider which has an insured reliability guarantee. If you are worried about security, only upload data inside TrueCrypt encrypted archives. As far as cost, it will probably be pricey... But if it's really that important the cost is nothing.
A:
For regulatory mandated archival of electronic data, we keep the data on a RAID and on backup tapes in two separate locations (one of which is Iron Mountain). We also replace the tapes and RAID every few years.
A:
If you need to keep it "forever" probably the safest way is to print out the code and stick that in a plastic envelope to keep it safe from the elements. I can't tell you how much code I've lost to a backup means which are no longer reachable.... I don't have a paper card reader to read my old cobol deck, no drive for my 5 1/4" floppies, or my 3 1/2" floppies. but yet the print out that I made of my first big project still sits readable...even after my once 3 year old decided that it would make a good coloring book.
A:
When you state "back up source code", I hope you include in your meaning the backing up of your version control system too.
Backing up your current source code (to multiple places) is definitely critical, but backing up your history of changes as preserved by your VCS is paramount in my opinion. It may seem trivial, especially when we are always "living in the present, looking towards the future". However, there have been way too many times when we have wanted to look backward to investigate an issue, review the chain of changes, see who did what, or check whether we can roll back to a previous build/version. All the more important if you practise heavy branching and merging. Archiving a single trunk will not do.
Your version control system may come with documentation and suggestions on backup strategies.
A:
One way would be to periodically recycle your storage media, i.e. read data off the decaying medium and write it to a fresh one. There exist programs to assist you with this, e.g. dvdisaster. In the end, nothing lasts forever. Just pick the least annoying solution.
As for #2: you can store data in encrypted form to prevent data recovery experts from making sense of it.
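As a purely illustrative sketch of that encryption point (the key handling is deliberately hand-waved here, and in practice it is the hard part: a key you lose takes the backup with it), an archive can be run through the built-in .NET AES APIs before it goes to the media:
using System.IO;
using System.Security.Cryptography;

static void EncryptBackup(string inputPath, string outputPath, byte[] key)
{
    using var aes = Aes.Create();
    aes.Key = key;                            // e.g. 32 bytes for AES-256

    using var output = File.Create(outputPath);
    output.Write(aes.IV, 0, aes.IV.Length);   // IV is random; store it with the data

    using var crypto = new CryptoStream(output, aes.CreateEncryptor(),
                                        CryptoStreamMode.Write);
    using var input = File.OpenRead(inputPath);
    input.CopyTo(crypto);                     // stream the archive through AES
}

// Example call; a real key would be derived and stored deliberately, not generated ad hoc.
EncryptBackup("projects.tar", "projects.tar.aes", RandomNumberGenerator.GetBytes(32));
Decryption is the mirror image: read the IV back from the front of the file and use CreateDecryptor instead.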
A:
I think Option 2 works well enough if you have the right backup mechanisms in place. They need not be expensive ones involving a third party, either (except for disaster recovery).
A RAID 5 configured server would do the trick. If a hard drive fails, replace it. It is HIGHLY unlikely that all the hard drives will fail at the same time. Even a mirrored RAID 1 drive would be good enough in some cases.
If option 2 still seems like a crappy solution, the only other thing I can think of is to print out hard-copies of the source code, which has many more problems than any of the above solutions.
A:
Online storage is subject to the whim of some data storage center, the security or lack of security there, and the possibility that the company folds, etc. Plus it's expensive,
Not necessarily expensive (see rsync.net for example), nor insecure. You can certainly encrypt your stuff too.
and you can't guarantee that they aren't peeking in.
True, but there's probably much more interesting stuff to peek at than your source-code. ;-)
More seriously, a RAID array isn't so expensive anymore
RAID is not backup.
A:
I was just talking with a guy who is an expert in microfilm. While it is an old technology, for long term storage it is one of the most enduring forms of data storage if properly maintained. It doesn't require sophisticated equipment (a magnifying lens and a light) to read, although storing it may take some work.
Then again, as was previously mentioned, if you are only talking about a span of a few years instead of decades, printing it off to paper and storing it in a controlled environment is probably the best way. If you want to get really creative you could laminate every sheet!
A:
Drobo for local backup
DVD for short-term local archiving
Amazon S3 for off-site, long-term archiving
|
Long term source code archiving: Is it possible?
|
I'm curious about keeping source code around reliably and securely for several years. From my research/experience:
Optical media, such as burned DVD-Rs, lose bits of data over time. After a couple of years, I don't get all the files off that I put on them. Read errors, etc.
Hard drives are mechanical and subject to failure/obsolescence, with expensive data recovery fees that hardly keep your data private (you send it away to some company).
Magnetic tape storage: see #2.
Online storage is subject to the whim of some data storage center, the security or lack of security there, and the possibility that the company folds, etc. Plus it's expensive, and you can't guarantee that they aren't peeking in.
I've found over time that I've lost source code to old projects I've done due to these problems. Are there any other solutions?
Summary of answers:
1. Use multiple methods for redundancy.
2. Print out your source code either as text or barcode.
3. RAID arrays are better for local storage.
4. Open sourcing your project will make it last forever.
5. Encryption is the answer to security.
6. Magnetic tape storage is durable.
7. Distributed/guaranteed online storage is cheap and reliable.
8. Use source control to maintain history, and backup the repo.
|
[
"The best answer is \"in multiple places\". If I were concerned about keeping my source code for as long as possible I would do: \n1) Backup to some optical media on a regular basis, say burn it to DVD once a month and archive it offsite. \n2) Back it up to multiple hard drives on my local machines \n3) Back it up to Amazon's S3 service. They have guarantees, it's a distributed system so no single points of failure and you can easily encrypt your data so they can't \"peek\" at it. \nWith those three steps your chances of losing data are effectively zero. There is no such thing as too many backups for VERY important data. \n",
"Based on your level of paranoia, I'd recommend a printer and a safe.\nMore seriously, a RAID array isn't so expensive anymore, and so long as you continue to use and monitor it, a properly set-up array is virtually guaranteed never to lose data.\n",
"Any data you want to keep should be stored in multiple places on multiple formats. While the odds of any one failing may be significant, the odds of all of them failing are pretty small.\n",
"If you want to archive something for a long time, I would go with a tape drive. They may not hold a whole lot, but they are reliable and pretty much the storage medium of choice for data archiving. I've never personally experienced dataloss on a tape drive, however.\n",
"The best way to back up your projects is to make them open source and famous. That way there will always be people with a copy of it and able to send it to you.\nAfter that, just care of the magnetic/optical media, continued renewal of it and multiple copies (online as well, remember you can encrypt it) on multiple media (including, why not, RAID sets)\n",
"I think you'd be surprised how reasonably priced online storage is these days. Amazon S3 (simple storage solution) is $0.10 per gigabyte per month, with upload costs of $0.10 per GB and download costing $0.17 per GB maximum.\nTherefore, if you stored 20GB for a month, uploaded 20GB and downloaded 20GB it would cost you $8.40 (slightly more expensive in the European data center at $9).\nThat's cheap enough to store your data in both US and EU data centers AND on dvd - the chances of losing all three are slim, to say the least.\nThere are also front-ends available, such as JungleDisk.\nhttp://aws.amazon.com\nhttp://www.jungledisk.com/\nhttp://www.google.co.uk/search?q=amazon%20s3%20clients\n",
"Don't forget to use Subversion (http://subversion.tigris.org/). I subversion my whole life (it's awesome).\n",
"The best home-usable solution I've seen was printing out the backups using a 2D barcode - the data density was fairly high, it could be re-scanned fairly easily (presuming a sheet-feeding scanner), and it moved the problem from the digital domain back into the physical one - which is fairly easily met by something like a safe deposit box, or a company like Iron Mountain.\nThe other answer is 'all of the above'. Redundancy always helps. \n",
"For my projects, I use a combination of 1, 2, & 4. If it's really important data, you need to have multiple copies in multiple places. My important data is replicated to 3-4 locations every night.\nIf you want a simpler solution, I recommend you get an online storage account from a well known provider which has an insured reliability guarantee. If you are worried about security, only upload data inside TrueCrypt encrypted archives. As far as cost, it will probably be pricey... But if it's really that important the cost is nothing.\n",
"For regulatory mandated archival of electronic data, we keep the data on a RAID and on backup tapes in two separate locations (one of which is Iron Mountain). We also replace the tapes and RAID every few years.\n",
"If you need to keep it \"forever\" probably the safest way is to print out the code and stick that in a plastic envelope to keep it safe from the elements. I can't tell you how much code I've lost to a backup means which are no longer reachable.... I don't have a paper card reader to read my old cobol deck, no drive for my 5 1/4\" floppies, or my 3 1/2\" floppies. but yet the print out that I made of my first big project still sits readable...even after my once 3 year old decided that it would make a good coloring book.\n",
"When you state \"back up source code\", I hope you include in your meaning the backing up of your version control system too.\nBacking your current source code (to multiple places) is definitely critical, but backing up your history of changes as preseved by your VCS is paramount in my opinion. It may seem trivial especially when we are always \"living in the present, looking towards the future\". However, there have been way too many times when we have wanted to look backward to investigate an issue, review the chain of changes, see who did what, whether we can rollback to a previous build/version. All the more important if you practise heavy branching and merging. Archiving a single trunk will not do.\nYour version control system may come with documentation and suggestions on backup strategies.\n",
"One way would be to periodically recycle your storage media, i.e. read data off the decaying medium and write it to a fresh one. There exist programs to assist you with this, e.g. dvdisaster. In the end, nothing lasts forever. Just pick the least annoying solution.\nAs for #2: you can store data in encrypted form to prevent data recovery experts from making sense of it.\n",
"I think Option 2 works well enough if you have the write backup mechanisms in place. They need not be expensive ones involving a third-party, either (except for disaster recovery).\nA RAID 5 configured server would do the trick. If a hard drive fails, replace it. It is HIGHLY unlikely that all the hard drives will fail at the same time. Even a mirrored RAID 1 drive would be good enough in some cases.\nIf option 2 still seems like a crappy solution, the only other thing I can think of is to print out hard-copies of the source code, which has many more problems than any of the above solutions. \n",
"\nOnline storage is subject to the whim of some data storage center, the security or lack of security there, and the possibility that the company folds, etc. Plus it's expensive,\n\nNot necessarily expensive (see rsync.net for example), nor insecure. You can certainly encrypt your stuff too.\n\nand you can't guarantee that they aren't peeking in.\n\nTrue, but there's probably much more interesting stuff to peek at than your source-code. ;-)\n\nMore seriously, a RAID array isn't so expensive anymore\n\nRAID is not backup.\n",
"I was just talking with a guy who is an expert in microfilm. While it is an old technology, for long term storage it is one of the most enduring forms of data storage if properly maintained. It doesn't require sophisticated equipment (magifying lens and a light) to read altough storing it may take some work.\nThen again, as was previously mentioned, if you are only talking in the spans of a few years instead of decades printing it off to paper and storing it in a controlled environment is probable the best way. If you want to get really creative you could laminate every sheet!\n",
"Drobo for local backup\nDVD for short-term local archiving \nAmazon S3 for off-site,long-term archiving\n"
] |
[
7,
6,
3,
3,
3,
3,
2,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"storage",
"version_control"
] |
stackoverflow_0000073745_storage_version_control.txt
|
Q:
How can I make my applications scale well?
In general, what kinds of design decisions help an application scale well?
(Note: Having just learned about Big O Notation, I'm looking to gather more principles of programming here. I've attempted to explain Big O Notation by answering my own question below, but I want the community to improve both this question and the answers.)
Responses so far
1) Define scaling. Do you need to scale for lots of users, traffic, objects in a virtual environment?
2) Look at your algorithms. Will the amount of work they do scale linearly with the actual amount of work - i.e. number of items to loop through, number of users, etc?
3) Look at your hardware. Is your application designed such that you can run it on multiple machines if one can't keep up?
Secondary thoughts
1) Don't optimize too much too soon - test first. Maybe bottlenecks will happen in unforeseen places.
2) Maybe the need to scale will not outpace Moore's Law, and maybe upgrading hardware will be cheaper than refactoring.
A:
The only thing I would say is write your application so that it can be deployed on a cluster from the very start. Anything above that is a premature optimisation. Your first job should be getting enough users to have a scaling problem.
Build the code as simple as you can first, then profile the system second and optimise only when there is an obvious performance problem.
Often the figures from profiling your code are counter-intuitive; the bottlenecks tend to reside in modules you didn't think would be slow. Data is king when it comes to optimisation. If you optimise the parts you think will be slow, you will often optimise the wrong things.
A:
Ok, so you've hit on a key point in using the "big O notation". That's one dimension that can certainly bite you in the rear if you're not paying attention. There are also other dimensions at play that some folks don't see through the "big O" glasses (but if you look closer they really are).
A simple example of that dimension is a database join. There are "best practices" in constructing, say, a left inner join which will help to make the sql execute more efficiently. If you break down the relational calculus or even look at an explain plan (Oracle) you can easily see which indexes are being used in which order and if any table scans or nested operations are occurring.
The concept of profiling is also key. You have to be instrumented thoroughly and at the right granularity across all the moving parts of the architecture in order to identify and fix any inefficiencies. Say for example you're building a 3-tier, multi-threaded, MVC2 web-based application with liberal use of AJAX and client side processing along with an OR Mapper between your app and the DB. A simplistic linear single request/response flow looks like:
browser -> web server -> app server -> DB -> app server -> XSLT -> web server -> browser JS engine execution & rendering
You should have some method for measuring performance (response times, throughput measured in "stuff per unit time", etc.) in each of those distinct areas, not only at the box and OS level (CPU, memory, disk i/o, etc.), but specific to each tier's service. So on the web server you'll need to know all the counters for the web server you're using. In the app tier, you'll need that plus visibility into whatever virtual machine you're using (jvm, clr, whatever). Most OR mappers manifest inside the virtual machine, so make sure you're paying attention to all the specifics if they're visible to you at that layer. Inside the DB, you'll need to know everything that's being executed and all the specific tuning parameters for your flavor of DB. If you have big bucks, BMC Patrol is a pretty good bet for most of it (with appropriate knowledge modules (KMs)). At the cheap end, you can certainly roll your own but your mileage will vary based on your depth of expertise.
Presuming everything is synchronous (no queue-based things going on that you need to wait for), there are tons of opportunities for performance and/or scalability issues. But since your post is about scalability, let's ignore the browser except for any remote XHR calls that will invoke another request/response from the web server.
So given this problem domain, what decisions could you make to help with scalability?
Connection handling. This is also bound to session management and authentication. That has to be as clean and lightweight as possible without compromising security. The metric is maximum connections per unit time.
Session failover at each tier. Necessary or not? We assume that each tier will be a cluster of boxes horizontally under some load balancing mechanism. Load balancing is typically very lightweight, but some implementations of session failover can be heavier than desired. Also whether you're running with sticky sessions can impact your options deeper in the architecture. You also have to decide whether to tie a web server to a specific app server or not. In the .NET remoting world, it's probably easier to tether them together. If you use the Microsoft stack, it may be more scalable to do 2-tier (skip the remoting), but you have to make a substantial security tradeoff. On the java side, I've always seen it at least 3-tier. No reason to do it otherwise.
Object hierarchy. Inside the app, you need the cleanest possible, lightest weight object structure possible. Only bring the data you need when you need it. Viciously excise any unnecessary or superfluous getting of data.
OR mapper inefficiencies. There is an impedance mismatch between object design and relational design. The many-to-many construct in an RDBMS is in direct conflict with object hierarchies (person.address vs. location.resident). The more complex your data structures, the less efficient your OR mapper will be. At some point you may have to cut bait in a one-off situation and do a more...uh...primitive data access approach (Stored Procedure + Data Access Layer) in order to squeeze more performance or scalability out of a particularly ugly module. Understand the cost involved and make it a conscious decision.
XSL transforms. XML is a wonderful, normalized mechanism for data transport, but man can it be a huge performance dog! Depending on how much data you're carrying around with you and which parser you choose and how complex your structure is, you could easily paint yourself into a very dark corner with XSLT. Yes, academically it's a brilliantly clean way of doing a presentation layer, but in the real world there can be catastrophic performance issues if you don't pay particular attention to this. I've seen a system consume over 30% of transaction time just in XSLT. Not pretty if you're trying to ramp up 4x the user base without buying additional boxes.
Can you buy your way out of a scalability jam? Absolutely. I've watched it happen more times than I'd like to admit. Moore's Law (as you already mentioned) is still valid today. Have some extra cash handy just in case.
Caching is a great tool to reduce the strain on the engine (increasing speed and throughput is a handy side-effect). It comes at a cost though in terms of memory footprint and complexity in invalidating the cache when it's stale. My decision would be to start completely clean and slowly add caching only where you decide it's useful to you. Too many times the complexities are underestimated and what started out as a way to fix performance problems turns out to cause functional problems. Also, back to the data usage comment. If you're creating gigabytes worth of objects every minute, it doesn't matter if you cache or not. You'll quickly max out your memory footprint and garbage collection will ruin your day. So I guess the takeaway is to make sure you understand exactly what's going on inside your virtual machine (object creation, destruction, GCs, etc.) so that you can make the best possible decisions.
Sorry for the verbosity. Just got rolling and forgot to look up. Hope some of this touches on the spirit of your inquiry and isn't too rudimentary a conversation.
A:
Well there's this blog called High Scalability that contains a lot of information on this topic. Some useful stuff.
A:
Often the most effective way to do this is with a well-thought-through design where scaling is a part of it.
Decide what scaling actually means for your project. Is it an infinite number of users, is it being able to handle a slashdotting on a website, or is it development cycles?
Use this to focus your development efforts
A:
Jeff and Joel discuss scaling in the Stack Overflow Podcast #19.
A:
One good idea is to determine how much work each additional task creates. This can depend on how the algorithm is structured.
For example, imagine you have some virtual cars in a city. At any moment, you want each car to have a map showing where all the cars are.
One way to approach this would be:
for each car {
determine my position;
for each car {
add my position to this car's map;
}
}
This seems straightforward: look at the first car's position, add it to the map of every other car. Then look at the second car's position, add it to the map of every other car. Etc.
But there is a scalability problem. When there are 2 cars, this strategy takes 4 "add my position" steps; when there are 3 cars, it takes 9 steps. For each "position update," you have to cycle through the whole list of cars - and every car needs its position updated.
Ignoring how many other things must be done to each car (for example, it may take a fixed number of steps to calculate the position of an individual car), for N cars, it takes N² "visits to cars" to run this algorithm. This is no problem when you've got 5 cars and 25 steps. But as you add cars, you will see the system bog down. 100 cars will take 10,000 steps, and 101 cars will take 10,201 steps!
A better approach would be to undo the nesting of the for loops.
for each car {
add my position to a list;
}
for each car {
give me an updated copy of the master list;
}
With this strategy, the number of steps is a multiple of N, not of N². So 100 cars will take 100 times the work of 1 car - NOT 10,000 times the work.
This concept is sometimes expressed in "big O notation" - the number of steps needed is "big O of N" or "big O of N²."
Note that this concept is only concerned with scalability - not optimizing the number of steps for each car. Here we don't care if it takes 5 steps or 50 steps per car - the main thing is that N cars take (X * N) steps, not (X * N²).
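A tiny illustration of those step counts in code (it only counts the "visits" for each strategy rather than simulating real cars, and the method name is just for this example):
static (long nested, long flat) CountVisits(int cars)
{
    long nested = 0;
    for (int i = 0; i < cars; i++)          // every car...
        for (int j = 0; j < cars; j++)      // ...updates every car's map: N * N visits
            nested++;

    long flat = 0;
    for (int i = 0; i < cars; i++) flat++;  // each car posts its position once
    for (int i = 0; i < cars; i++) flat++;  // each car copies the master list once
    return (nested, flat);
}

// CountVisits(100) returns (10000, 200); CountVisits(101) returns (10201, 202).
The second strategy grows in step with the number of cars, while the first grows with its square.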
A:
FWIW, most systems will scale most effectively by ignoring this until it's a problem - Moore's law is still holding, and unless your traffic is growing faster than Moore's law does, it's usually cheaper to just buy a bigger box (at $2 or $3K a pop) than to pay developers.
That said, the most important place to focus is your data tier; that is the hardest part of your application to scale out, as it usually needs to be authoritative, and clustered commercial databases are very expensive - the open source variations are usually very tricky to get right.
If you think there is a high likelihood that your application will need to scale, it may be intelligent to look into systems like memcached or map reduce relatively early in your development.
|
How can I make my applications scale well?
|
In general, what kinds of design decisions help an application scale well?
(Note: Having just learned about Big O Notation, I'm looking to gather more principles of programming here. I've attempted to explain Big O Notation by answering my own question below, but I want the community to improve both this question and the answers.)
Responses so far
1) Define scaling. Do you need to scale for lots of users, traffic, objects in a virtual environment?
2) Look at your algorithms. Will the amount of work they do scale linearly with the actual amount of work - i.e. number of items to loop through, number of users, etc?
3) Look at your hardware. Is your application designed such that you can run it on multiple machines if one can't keep up?
Secondary thoughts
1) Don't optimize too much too soon - test first. Maybe bottlenecks will happen in unforseen places.
2) Maybe the need to scale will not outpace Moore's Law, and maybe upgrading hardware will be cheaper than refactoring.
|
[
"The only thing I would say is write your application so that it can be deployed on a cluster from the very start. Anything above that is a premature optimisation. Your first job should be getting enough users to have a scaling problem. \nBuild the code as simple as you can first, then profile the system second and optimise only when there is an obvious performance problem.\nOften the figures from profiling your code are counter-intuitive; the bottle-necks tend to reside in modules you didn't think would be slow. Data is king when it comes to optimisation. If you optimise the parts you think will be slow, you will often optimise the wrong things.\n",
"Ok, so you've hit on a key point in using the \"big O notation\". That's one dimension that can certainly bite you in the rear if you're not paying attention. There are also other dimensions at play that some folks don't see through the \"big O\" glasses (but if you look closer they really are). \nA simple example of that dimension is a database join. There are \"best practices\" in constructing, say, a left inner join which will help to make the sql execute more efficiently. If you break down the relational calculus or even look at an explain plan (Oracle) you can easily see which indexes are being used in which order and if any table scans or nested operations are occurring.\nThe concept of profiling is also key. You have to be instrumented thoroughly and at the right granularity across all the moving parts of the architecture in order to identify and fix any inefficiencies. Say for example you're building a 3-tier, multi-threaded, MVC2 web-based application with liberal use of AJAX and client side processing along with an OR Mapper between your app and the DB. A simplistic linear single request/response flow looks like:\n\nbrowser -> web server -> app server -> DB -> app server -> XSLT -> web server -> browser JS engine execution & rendering\n\nYou should have some method for measuring performance (response times, throughput measured in \"stuff per unit time\", etc.) in each of those distinct areas, not only at the box and OS level (CPU, memory, disk i/o, etc.), but specific to each tier's service. So on the web server you'll need to know all the counters for the web server your're using. In the app tier, you'll need that plus visibility into whatever virtual machine you're using (jvm, clr, whatever). Most OR mappers manifest inside the virtual machine, so make sure you're paying attention to all the specifics if they're visible to you at that layer. Inside the DB, you'll need to know everything that's being executed and all the specific tuning parameters for your flavor of DB. If you have big bucks, BMC Patrol is a pretty good bet for most of it (with appropriate knowledge modules (KMs)). At the cheap end, you can certainly roll your own but your mileage will vary based on your depth of expertise.\nPresuming everything is synchronous (no queue-based things going on that you need to wait for), there are tons of opportunities for performance and/or scalability issues. But since your post is about scalability, let's ignore the browser except for any remote XHR calls that will invoke another request/response from the web server.\nSo given this problem domain, what decisions could you make to help with scalability?\n\nConnection handling. This is also bound to session management and authentication. That has to be as clean and lightweight as possible without compromising security. The metric is maximum connections per unit time.\nSession failover at each tier. Necessary or not? We assume that each tier will be a cluster of boxes horizontally under some load balancing mechanism. Load balancing is typically very lightweight, but some implementations of session failover can be heavier than desired. Also whether you're running with sticky sessions can impact your options deeper in the architecture. You also have to decide whether to tie a web server to a specific app server or not. In the .NET remoting world, it's probably easier to tether them together. 
If you use the Microsoft stack, it may be more scalable to do 2-tier (skip the remoting), but you have to make a substantial security tradeoff. On the java side, I've always seen it at least 3-tier. No reason to do it otherwise.\nObject hierarchy. Inside the app, you need the cleanest possible, lightest weight object structure possible. Only bring the data you need when you need it. Viciously excise any unnecessary or superfluous getting of data.\nOR mapper inefficiencies. There is an impedance mismatch between object design and relational design. The many-to-many construct in an RDBMS is in direct conflict with object hierarchies (person.address vs. location.resident). The more complex your data structures, the less efficient your OR mapper will be. At some point you may have to cut bait in a one-off situation and do a more...uh...primitive data access approach (Stored Procedure + Data Access Layer) in order to squeeze more performance or scalability out of a particularly ugly module. Understand the cost involved and make it a conscious decision.\nXSL transforms. XML is a wonderful, normalized mechanism for data transport, but man can it be a huge performance dog! Depending on how much data you're carrying around with you and which parser you choose and how complex your structure is, you could easily paint yourself into a very dark corner with XSLT. Yes, academically it's a brilliantly clean way of doing a presentation layer, but in the real world there can be catastrophic performance issues if you don't pay particular attention to this. I've seen a system consume over 30% of transaction time just in XSLT. Not pretty if you're trying to ramp up 4x the user base without buying additional boxes.\nCan you buy your way out of a scalability jam? Absolutely. I've watched it happen more times than I'd like to admit. Moore's Law (as you already mentioned) is still valid today. Have some extra cash handy just in case.\nCaching is a great tool to reduce the strain on the engine (increasing speed and throughput is a handy side-effect). It comes at a cost though in terms of memory footprint and complexity in invalidating the cache when it's stale. My decision would be to start completely clean and slowly add caching only where you decide it's useful to you. Too many times the complexities are underestimated and what started out as a way to fix performance problems turns out to cause functional problems. Also, back to the data usage comment. If you're creating gigabytes worth of objects every minute, it doesn't matter if you cache or not. You'll quickly max out your memory footprint and garbage collection will ruin your day. So I guess the takeaway is to make sure you understand exactly what's going on inside your virtual machine (object creation, destruction, GCs, etc.) so that you can make the best possible decisions.\n\nSorry for the verbosity. Just got rolling and forgot to look up. Hope some of this touches on the spirit of your inquiry and isn't too rudimentary a conversation.\n",
"Well there's this blog called High Scalibility that contains a lot of information on this topic. Some useful stuff.\n",
"Often the most effective way to do this is by a well thought through design where scaling is a part of it. \nDecide what scaling actually means for your project. Is infinite amount of users, is it being able to handle a slashdotting on a website is it development-cycles?\nUse this to focus your development efforts\n",
"Jeff and Joel discuss scaling in the Stack Overflow Podcast #19.\n",
"One good idea is to determine how much work each additional task creates. This can depend on how the algorithm is structured.\nFor example, imagine you have some virtual cars in a city. At any moment, you want each car to have a map showing where all the cars are.\nOne way to approach this would be:\n\n for each car {\n determine my position; \n for each car { \n add my position to this car's map; \n }\n }\n\nThis seems straightforward: look at the first car's position, add it to the map of every other car. Then look at the second car's position, add it to the map of every other car. Etc.\nBut there is a scalability problem. When there are 2 cars, this strategy takes 4 \"add my position\" steps; when there are 3 cars, it takes 9 steps. For each \"position update,\" you have to cycle through the whole list of cars - and every car needs its position updated. \nIgnoring how many other things must be done to each car (for example, it may take a fixed number of steps to calculate the position of an individual car), for N cars, it takes N2 \"visits to cars\" to run this algorithm. This is no problem when you've got 5 cars and 25 steps. But as you add cars, you will see the system bog down. 100 cars will take 10,000 steps, and 101 cars will take 10,201 steps!\nA better approach would be to undo the nesting of the for loops.\n\n for each car { \n add my position to a list; \n } \n for each car { \n give me an updated copy of the master list; \n }\n\nWith this strategy, the number of steps is a multiple of N, not of N2. So 100 cars will take 100 times the work of 1 car - NOT 10,000 times the work.\nThis concept is sometimes expressed in \"big O notation\" - the number of steps needed are \"big O of N\" or \"big O of N2.\"\nNote that this concept is only concerned with scalability - not optimizing the number of steps for each car. Here we don't care if it takes 5 steps or 50 steps per car - the main thing is that N cars take (X * N) steps, not (X * N2).\n",
"FWIW, most systems will scale most effectively by ignoring this until it's a problem- Moore's law is still holding, and unless your traffic is growing faster than Moore's law does, it's usually cheaper to just buy a bigger box (at $2 or $3K a pop) than to pay developers.\nThat said, the most important place to focus is your data tier; that is the hardest part of your application to scale out, as it usually needs to be authoritative, and clustered commercial databases are very expensive- the open source variations are usually very tricky to get right.\nIf you think there is a high likelihood that your application will need to scale, it may be intelligent to look into systems like memcached or map reduce relatively early in your development.\n"
] |
[
11,
6,
4,
3,
2,
1,
1
] |
[] |
[] |
[
"algorithm",
"language_agnostic",
"scalability"
] |
stackoverflow_0000041367_algorithm_language_agnostic_scalability.txt
|
Q:
.NET XML Seralization
I'm working on a set of classes that will be used to serialize to XML. The XML is not controlled by me and is organized rather well. Unfortunately, there are several sets of nested nodes, the purpose of some of them is just to hold a collection of their children. Based on my current knowledge of XML Serialization, those nodes require another class.
Is there a way to make a class serialize to a set of XML nodes instead of just one. Because I feel like I'm being as clear as mud, say we have the xml:
<root>
<users>
<user id="">
<firstname />
<lastname />
...
</user>
<user id="">
<firstname />
<lastname />
...
</user>
</users>
<groups>
<group id="" groupname="">
<userid />
<userid />
</group>
<group id="" groupname="">
<userid />
<userid />
</group>
</groups>
</root>
Ideally, 3 classes would be best. A class root with collections of user and group objects. However, best I can figure is that I need a class for root, users, user, groups and group, where users and groups contain only collections of user and group respectively, and root contains a users, and groups object.
Anyone out there who knows better than me? (don't lie, I know there are).
A:
Are you not using the XmlSerializer? It's pretty damn good and makes doing things like this real easy (I use it quite a lot!).
You can simply decorate your class properties with some attributes and the rest is all done for you..
Have you considered using XmlSerializer or is there a particular reason why not?
Here's a code snippet of all the work required to get the above to serialize (both ways):
[XmlArray("users"),
XmlArrayItem("user")]
public List<User> Users
{
get { return _users; }
}
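To make that concrete for the XML in the question, here is a minimal sketch of the three classes the poster wants. The class and property names are my own assumptions; only the element and attribute names come from the sample XML.
using System.Collections.Generic;
using System.Xml.Serialization;

[XmlRoot("root")]
public class Root
{
    private readonly List<User> _users = new List<User>();
    private readonly List<Group> _groups = new List<Group>();

    // maps to <users><user/>...</users> without a separate wrapper class
    [XmlArray("users"), XmlArrayItem("user")]
    public List<User> Users { get { return _users; } }

    // maps to <groups><group/>...</groups> without a separate wrapper class
    [XmlArray("groups"), XmlArrayItem("group")]
    public List<Group> Groups { get { return _groups; } }
}

public class User
{
    [XmlAttribute("id")]
    public string Id;

    [XmlElement("firstname")]
    public string FirstName;

    [XmlElement("lastname")]
    public string LastName;
}

public class Group
{
    [XmlAttribute("id")]
    public string Id;

    [XmlAttribute("groupname")]
    public string GroupName;

    // repeated <userid/> children directly under <group>, no wrapper element
    [XmlElement("userid")]
    public List<string> UserIds = new List<string>();
}
Deserializing is then just new XmlSerializer(typeof(Root)).Deserialize(stream) - no extra classes for the <users> and <groups> container nodes are needed.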
A:
You would only need to have Users defined as an array of User objects. The XmlSerializer will render it appropriately for you.
See this link for an example:
http://www.informit.com/articles/article.aspx?p=23105&seqNum=4
Additionally, I would recommend using Visual Studio to generate an XSD and using the commandline utility XSD.EXE to spit out the class hierarchy for you, as per http://quickstart.developerfusion.co.uk/quickstart/howto/doc/xmlserialization/XSDToCls.aspx
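For reference, the round trip with the command-line tool is roughly as follows (the file names are placeholders):
xsd.exe sample.xml
xsd.exe sample.xsd /classes
The first call infers sample.xsd from a sample document; the second generates sample.cs with the corresponding serializable classes.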
A:
I wrote this class up back in the day to do what I think is similar to what you are trying to do. You would use methods of this class on objects that you wish to serialize to XML. For instance, given an employee...
using Utilities;
using System.Xml.Serialization;
[XmlRoot("Employee")]
public class Employee
{
private String name = "Steve";
[XmlElement("Name")]
public string Name { get { return name; } set{ name = value; } }
public static void Main(String[] args)
{
Employee e = new Employee();
XmlObjectSerializer.Save(@"c:\steve.xml", e);
}
}
this code should output:
<Employee>
<Name>Steve</Name>
</Employee>
The object type (Employee) must be serializable. Try marking it with [Serializable].
I have a better version of this code someplace, I was just learning when I wrote it.
Anyway, check out the code below. I'm using it in some project, so it definitely works.
using System;
using System.IO;
using System.Xml.Serialization;
namespace Utilities
{
/// <summary>
/// Opens and Saves objects to Xml
/// </summary>
/// <projectIndependent>True</projectIndependent>
public static class XmlObjectSerializer
{
/// <summary>
/// Serializes and saves data contained in obj to an XML file located at filePath <para></para>
/// </summary>
/// <param name="filePath">The file path to save to</param>
/// <param name="obj">The object to save</param>
/// <exception cref="System.IO.IOException">Thrown if an error occurs while saving the object. See inner exception for details</exception>
public static void Save(String filePath, Object obj)
{
// allows access to the file
StreamWriter oWriter = null;
try
{
// Open a stream to the file path
oWriter = new StreamWriter(filePath);
// Create a serializer for the object's type
XmlSerializer oSerializer = new XmlSerializer(obj.GetType());
// Serialize the object and write to the file
oSerializer.Serialize(oWriter.BaseStream, obj);
}
catch (Exception ex)
{
// throw any errors as IO exceptions
throw new IOException("An error occurred while saving the object", ex);
}
finally
{
// if a stream is open
if (oWriter != null)
{
// close it
oWriter.Close();
}
}
}
/// <summary>
/// Deserializes saved object data of type T in an XML file
/// located at filePath
/// </summary>
/// <typeparam name="T">Type of object to deserialize</typeparam>
/// <param name="filePath">The path to open the object from</param>
/// <returns>An object representing the file or the default value for type T</returns>
/// <exception cref="System.IO.IOException">Thrown if the file could not be opened. See inner exception for details</exception>
public static T Open<T>(String filePath)
{
// gets access to the file
StreamReader oReader = null;
// the deserialized data
Object data;
try
{
// Open a stream to the file
oReader = new StreamReader(filePath);
// Create a deserializer for the object's type
XmlSerializer oDeserializer = new XmlSerializer(typeof(T));
// Deserialize the data and store it
data = oDeserializer.Deserialize(oReader.BaseStream);
//
// Return the deserialized object
// don't cast it if it's null
// will be null if open failed
//
if (data != null)
{
return (T)data;
}
else
{
return default(T);
}
}
catch (Exception ex)
{
// throw error
throw new IOException("An error occurred while opening the file", ex);
}
finally
{
    // Close the stream if it was actually opened
    if (oReader != null)
    {
        oReader.Close();
    }
}
}
}
}
|
.NET XML Seralization
|
I'm working on a set of classes that will be used to serialize to XML. The XML is not controlled by me and is organized rather well. Unfortunately, there are several sets of nested nodes, the purpose of some of them is just to hold a collection of their children. Based on my current knowledge of XML Serialization, those nodes require another class.
Is there a way to make a class serialize to a set of XML nodes instead of just one. Because I feel like I'm being as clear as mud, say we have the xml:
<root>
<users>
<user id="">
<firstname />
<lastname />
...
</user>
<user id="">
<firstname />
<lastname />
...
</user>
</users>
<groups>
<group id="" groupname="">
<userid />
<userid />
</group>
<group id="" groupname="">
<userid />
<userid />
</group>
</groups>
</root>
Ideally, 3 classes would be best. A class root with collections of user and group objects. However, best I can figure is that I need a class for root, users, user, groups and group, where users and groups contain only collections of user and group respectively, and root contains a users, and groups object.
Anyone out there who knows better than me? (don't lie, I know there are).
|
[
"Are you not using the XmlSerializer? It's pretty damn good and makes doing things like this real easy (I use it quite a lot!).\nYou can simply decorate your class properties with some attributes and the rest is all done for you..\nHave you considered using XmlSerializer or is there a particular reason why not?\nHeres a code snippet of all the work required to get the above to serialize (both ways):\n[XmlArray(\"users\"),\nXmlArrayItem(\"user\")]\npublic List<User> Users\n{\n get { return _users; }\n}\n\n",
"You would only need to have Users defined as an array of User objects. The XmlSerializer will render it appropriately for you.\nSee this link for an example:\nhttp://www.informit.com/articles/article.aspx?p=23105&seqNum=4\nAdditionally, I would recommend using Visual Studio to generate an XSD and using the commandline utility XSD.EXE to spit out the class hierarchy for you, as per http://quickstart.developerfusion.co.uk/quickstart/howto/doc/xmlserialization/XSDToCls.aspx\n",
"I wrote this class up back in the day to do what I think, is similar to what you are trying to do. You would use methods of this class on objects that you wish to serialize to XML. For instance, given an employee...\nusing Utilities;\nusing System.Xml.Serialization;\n[XmlRoot(\"Employee\")]\npublic class Employee\n{\n private String name = \"Steve\";\n [XmlElement(\"Name\")]\n public string Name { get { return name; } set{ name = value; } }\n\n public static void Main(String[] args)\n {\n Employee e = new Employee();\n XmlObjectSerializer.Save(\"c:\\steve.xml\", e);\n }\n\n}\nthis code should output:\n<Employee>\n <Name>Steve</Name>\n</Employee>\n\nThe object type (Employee) must be serializable. Try [Serializable(true)].\nI have a better version of this code someplace, I was just learning when I wrote it.\nAnyway, check out the code below. I'm using it in some project, so it definitly works.\nusing System;\nusing System.IO;\nusing System.Xml.Serialization;\n\nnamespace Utilities\n{\n /// <summary>\n /// Opens and Saves objects to Xml\n /// </summary>\n /// <projectIndependent>True</projectIndependent>\n public static class XmlObjectSerializer\n {\n /// <summary>\n /// Serializes and saves data contained in obj to an XML file located at filePath <para></para> \n /// </summary>\n /// <param name=\"filePath\">The file path to save to</param>\n /// <param name=\"obj\">The object to save</param>\n /// <exception cref=\"System.IO.IOException\">Thrown if an error occurs while saving the object. See inner exception for details</exception>\n public static void Save(String filePath, Object obj)\n {\n // allows access to the file\n StreamWriter oWriter = null;\n\n try\n {\n // Open a stream to the file path\n oWriter = new StreamWriter(filePath);\n\n // Create a serializer for the object's type\n XmlSerializer oSerializer = new XmlSerializer(obj.GetType());\n\n // Serialize the object and write to the file\n oSerializer.Serialize(oWriter.BaseStream, obj);\n }\n catch (Exception ex)\n {\n // throw any errors as IO exceptions\n throw new IOException(\"An error occurred while saving the object\", ex);\n }\n finally\n {\n // if a stream is open\n if (oWriter != null)\n {\n // close it\n oWriter.Close();\n }\n }\n }\n\n /// <summary>\n /// Deserializes saved object data of type T in an XML file\n /// located at filePath \n /// </summary>\n /// <typeparam name=\"T\">Type of object to deserialize</typeparam>\n /// <param name=\"filePath\">The path to open the object from</param>\n /// <returns>An object representing the file or the default value for type T</returns>\n /// <exception cref=\"System.IO.IOException\">Thrown if the file could not be opened. 
See inner exception for details</exception>\n public static T Open<T>(String filePath)\n {\n // gets access to the file\n StreamReader oReader = null;\n\n // the deserialized data\n Object data;\n\n try\n {\n // Open a stream to the file\n oReader = new StreamReader(filePath);\n\n // Create a deserializer for the object's type\n XmlSerializer oDeserializer = new XmlSerializer(typeof(T));\n\n // Deserialize the data and store it\n data = oDeserializer.Deserialize(oReader.BaseStream);\n\n //\n // Return the deserialized object\n // don't cast it if it's null\n // will be null if open failed\n //\n if (data != null)\n {\n return (T)data;\n }\n else\n {\n return default(T);\n }\n }\n catch (Exception ex)\n {\n // throw error\n throw new IOException(\"An error occurred while opening the file\", ex);\n }\n finally\n {\n // Close the stream\n oReader.Close();\n }\n }\n }\n}\n\n"
] |
[
6,
0,
0
] |
[] |
[] |
[
".net",
"serialization",
"xml",
"xml_serialization"
] |
stackoverflow_0000076793_.net_serialization_xml_xml_serialization.txt
|
Q:
SQL Server 2005 Temporary Tables
In a stored procedure, when is #Temptable created in SQL Server 2005? When creating the query execution plan or when executing the stored procedure?
if (@x = 1)
begin
select 1 as Text into #Temptable
end
else
begin
select 2 as Text into #Temptable
end
A:
It's created when it's executed and dropped when the session ends.
A:
You might also want to consider table variables, whose lifecycle is completely managed for you.
DECLARE @MyTable TABLE (MyPK INT IDENTITY, MyName VARCHAR(100))
INSERT INTO @MyTable ( MyName ) VALUES ( 'Icarus' )
INSERT INTO @MyTable ( MyName ) VALUES ( 'Daedalus' )
SELECT * FROM @MyTable
I almost always use this approach, but it does have disadvantages. Most notably, you can only use indexes that you can declare within the TABLE() construct, essentially meaning that you're limited to the primary key only -- no using ALTER TABLE.
A:
Interesting question.
For the type of temporary table you're creating, I think it's when the stored procedure is executed. Tables created with the # prefix are accessible to the SQL Server session they're created in. Once the session ends, they're dropped.
This url: http://www.sql-server-performance.com/tips/query_execution_plan_analysis_p1.aspx seems to indicate that temp tables aren't created when query execution plans are created.
A:
Whilst it may be automatically dropped at the end of a session, it is good practice to drop the table yourself when you're done with it.
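For example, a guarded drop at the end of the procedure (using the table name from the question) would be:
IF OBJECT_ID('tempdb..#Temptable') IS NOT NULL
    DROP TABLE #Temptable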
|
SQL Server 2005 Temporary Tables
|
In a stored procedure, when is #Temptable created in SQL Server 2005? When creating the query execution plan or when executing the stored procedure?
if (@x = 1)
begin
select 1 as Text into #Temptable
end
else
begin
select 2 as Text into #Temptable
end
|
[
"It's created when it's executed and dropped when the session ends.\n",
"You might also want to consider table variables, whose lifecycle is completely managed for you.\nDECLARE @MyTable TABLE (MyPK INT IDENTITY, MyName VARCHAR(100))\nINSERT INTO @MyTable ( MyName ) VALUES ( 'Icarus' )\nINSERT INTO @MyTable ( MyName ) VALUES ( 'Daedalus' )\nSELECT * FROM @MyTable\n\nI almost always use this approach, but it does have disadvantages. Most notably, you can only use indexes that you can declare within the TABLE() construct, essentially meaning that you're limited to the primary key only -- no using ALTER TABLE.\n",
"Interesting question.\nFor the type of temporary table you're creating, I think it's when the stored procedure is executed. Tables created with the # prefix are accessible to the SQL Server session they're created in. Once the session ends, they're dropped.\nThis url: http://www.sql-server-performance.com/tips/query_execution_plan_analysis_p1.aspx seems to indicate that temp tables aren't created when query execution plans are created.\n",
"Whilst it may be automatically dropped at the end of a session, it is good practice to drop the table yourself when you're done with it.\n"
] |
[
2,
2,
1,
1
] |
[] |
[] |
[
"sql_server",
"sql_server_2005",
"temp_tables"
] |
stackoverflow_0000043903_sql_server_sql_server_2005_temp_tables.txt
|
Q:
Which PHP opcode cacher should I use to improve performance?
I'm trying to improve performance under high load and would like to implement opcode caching. Which of the following should I use?
APC - Installation Guide
eAccelerator - Installation Guide
XCache - Installation Guide
I'm also open to any other alternatives that have slipped under my radar.
Currently running on a stock Debian Etch with Apache 2 and PHP 5.2
[Update 1]
HowtoForge installation links added
[Update 2]
Based on the answers and feedback given, I have tested all 3 implementations using the following Apache JMeter test plan on my application:
Login
Access Home Page
With 50 concurrent connections, the results are as follows:
No Opcode Caching
APC
eAccelerator
XCache
Performance Graph (smaller is better)
From the above results, eAccelerator has a slight edge in performance compared to APC and XCache. However, what matters most from the above data is that any sort of opcode caching gives a tremendous boost in performance.
I have decided to use APC due to the following 2 reasons:
Package is available in official Debian repository
More functional control panel
To summarize my experience:
Ease of Installation: APC > eAccelerator > XCache
Performance: eAccelerator > APC, XCache
Control Panel: APC > XCache > eAccelerator
A:
I think the answer might depend on the type of web applications you are running. I had to make this decision myself two years ago and couldn't decide between Zend Optimizer and eAccelerator.
In order to make my decision, I used ab (apache bench) to test the server, and tested the three combinations (zend, eaccelerator, both running) and proved that eAccelerator on its own gave the greatest performance.
If you have the luxury of time, I would recommend doing similar tests yourself, and making the decision based on your results.
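For reference, a typical ab invocation for that kind of comparison looks something like this (the URL, request count and concurrency are placeholders, not the figures I used):
ab -n 10000 -c 50 http://localhost/index.php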
A:
I have run several benchmarks with eAccelerator, APC, XCache, and Zend Optimizer (even though Zend is an optimizer, not a cache).
Benchmark Results http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png
Result: eAccelerator is fastest (in all tests), followed by XCache and APC. (The one in the diagram is the number of seconds to call a WordPress home page 10,000 times).
Zend Optimizer made everything slower (!).
A:
I use APC because it was easy to install on Windows and I'm developing on WAMP.
Integrating APC into PHP6 was discussed here:
http://www.php.net/~derick/meeting-notes.html#add-an-opcode-cache-to-the-distribution-apc
And there are directions on installing APC on Debian Etch here:
http://www.howtoforge.com/apc-php5-apache2-debian-etch
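Once the package is installed, enabling APC is usually just a few php.ini (or conf.d) lines - the values below are illustrative defaults rather than tuned recommendations:
extension=apc.so
apc.enabled=1
apc.shm_size=64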
A:
I can't tell you for sure, but the place where I am working now is looking at APC and eAccelerator. However, this might influence you - APC will be integrated into a future release of PHP (thanks to Ed Haber for the link).
A:
I've had good success with eAccelerator (the speed improvement with no load is noticeable), but XCache also seems pretty promising. You may want to run some trials with each, though; your application might scale differently on each.
A:
I've been using XCache for more than a year now with no problems at all.
I tried to switch to eAccelerator, but ended up with a bunch of segmentation faults (it's less forgiving of errors). The major benefit to eAccelerator is that it's not just an opcode cache, it's also an optimizer.
You should fully test out your application with each one of them to make sure there aren't any problems, and then I'd use apachebench to test it under load.
A:
These add-ons have historically introduced lots of weird bugs to track down. These bugs can cause inconsistent behaviour which can't be diagnosed easily because it depends on the state of the cache.
So I'd say:
Don't use any of the above. Buy more tin instead, it's a more reliable (i.e. error-free) way of increasing performance.
OR
Go with whichever of the above is the most robust, having tested the pants off your application.
But I'd say:
Make sure it's REALLY PHP code parsing that is causing your performance problems by profiling your application. I think it's extremely likely that it isn't - in which case you'd be wasting your time (actually, using your time negatively productively) by installing any of them.
|
Which PHP opcode cacher should I use to improve performance?
|
I'm trying to improve performance under high load and would like to implement opcode caching. Which of the following should I use?
APC - Installation Guide
eAccelerator - Installation Guide
XCache - Installation Guide
I'm also open to any other alternatives that have slipped under my radar.
Currently running on a stock Debian Etch with Apache 2 and PHP 5.2
[Update 1]
HowtoForge installation links added
[Update 2]
Based on the answers and feedback given, I have tested all 3 implementations using the following Apache JMeter test plan on my application:
Login
Access Home Page
With 50 concurrent connections, the results are as follows:
No Opcode Caching
APC
eAccelerator
XCache
Performance Graph (smaller is better)
From the above results, eAccelerator has a slight edge in performance compared to APC and XCache. However, what matters most from the above data is that any sort of opcode caching gives a tremendous boost in performance.
I have decided to use APC due to the following 2 reasons:
Package is available in official Debian repository
More functional control panel
To summarize my experience:
Ease of Installation: APC > eAccelerator > XCache
Performance: eAccelerator > APC, XCache
Control Panel: APC > XCache > eAccelerator
|
[
"I think the answer might depend on the type of web applications you are running. I had to make this decision myself two years ago and couldn't decide between Zend Optimizer and eAccelerator.\nIn order to make my decision, I used ab (apache bench) to test the server, and tested the three combinations (zend, eaccelerator, both running) and proved that eAccelerator on its own gave the greatest performance.\nIf you have the luxury of time, I would recommend doing similar tests yourself, and making the decision based on your results.\n",
"I have run several benchmarks with eAcclerator, APC, XCache, and Zend Optimizer (even though Zend is an optimizer, not a cache). \nBenchmark Results http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png\nResult: eAccelerator is fastest (in all tests), followed by XCache and APC. (The one in the diagram is the number of seconds to call a WordPress home page 10,000 times).\nZend Optimizer made everything slower (!).\n",
"I use APC because it was easy to install in windows and I'm developing on WAMP.\nIntegrating APC into PHP6 was discussed here: \nhttp://www.php.net/~derick/meeting-notes.html#add-an-opcode-cache-to-the-distribution-apc\nAnd there are directions on installing APC on Debian Etch here:\nhttp://www.howtoforge.com/apc-php5-apache2-debian-etch\n",
"I can't tell you for sure, but the place where I am working now is looking at APC and eAccelerator. However, this might influence you - APC will be integrated into a future release of PHP (thanks to Ed Haber for the link).\n",
"I've had good success with eAccelerator (speed improvement with no load is noticable) but XCache also seems pretty promising. You may want to run some trials with each though, your application might scale differently on each.\n",
"I've been using XCache for more than a year now with no problems at all.\nI tried to switch to eAccelerator, but ended up with a bunch of segmentation faults (it's less forgiving of errors). The major benefit to eAccelerator is that it's not just an opcode cache, it's also an optimizer.\nYou should fully test out your application with each one of them to make sure there aren't any problems, and then I'd use apachebench to test it under load.\n",
"These add-ons have historically introduced lots of weird bugs to track down. These bugs can cause inconsistent behaviour which can't be diagnosed easily because it depends on the state of the cache.\nSo I'd say:\n\nDon't use any of the above. Buy more tin instead, it's a more reliable (i.e. error-free) way of increasing performance.\nOR\nGo with whichever of the above is the most robust, having tested the pants off your application.\n\nBut I'd say:\n\nMake sure it's REALLY PHP code parsing that is causing your performance problems by profiling your application. I think it's extremely likely that it isn't - in which case you'd be wasting your time (actually, using your time negatively productively) by installing any of them.\n\n"
] |
[
18,
6,
5,
4,
3,
1,
1
] |
[] |
[] |
[
"caching",
"performance",
"php"
] |
stackoverflow_0000028716_caching_performance_php.txt
|
Q:
DoSomethingToThing(Thing n) vs Thing.DoSomething()
What factors determine which approach is more appropriate?
A:
I think both have their places.
You shouldn't simply use DoSomethingToThing(Thing n) just because you think "Functional programming is good". Likewise you shouldn't simply use Thing.DoSomething() because "Object Oriented programming is good".
I think it comes down to what you are trying to convey. Stop thinking about your code as a series of instructions, and start thinking about it like a paragraph or sentence of a story. Think about which parts are the most important from the point of view of the task at hand.
For example, if the part of the 'sentence' you would like to stress is the object, you should use the OO style.
Example:
fileHandle.close();
Most of the time when you're passing around file handles, the main thing you are thinking about is keeping track of the file it represents.
CounterExample:
string x = "Hello World";
submitHttpRequest( x );
In this case submitting the HTTP request is far more important than the string which is the body, so submitHttpRequst(x) is preferable to x.submitViaHttp()
Needless to say, these are not mutually exclusive. You'll probably actually have
networkConnection.submitHttpRequest(x)
in which you mix them both. The important thing is that you think about what parts are emphasized, and what you will be conveying to the future reader of the code.
A:
To be object-oriented, tell, don't ask : http://www.pragmaticprogrammer.com/articles/tell-dont-ask.
So, Thing.DoSomething() rather than DoSomethingToThing(Thing n).
A:
If you're dealing with internal state of a thing, Thing.DoSomething() makes more sense, because even if you change the internal representation of Thing, or how it works, the code talking to it doesn't have to change. If you're dealing with a collection of Things, or writing some utility methods, procedural-style DoSomethingToThing() might make more sense or be more straight-forward; but still, can usually be represented as a method on the object representing that collection: for instance
GetTotalPriceofThings();
vs
Cart.getTotal();
It really depends on how object oriented your code is.
A:
Thing.DoSomething is appropriate if Thing is the subject of your sentence.
DoSomethingToThing(Thing n) is appropriate if Thing is the object of your sentence.
ThingA.DoSomethingToThingB(ThingB m) is an unavoidable combination, since in all the languages I can think of, functions belong to one class and are not mutually owned. But this makes sense because you can have a subject and an object.
Active voice is more straightforward than passive voice, so make sure your sentence has a subject that isn't just "the computer". This means, use form 1 and form 3 frequently, and use form 2 rarely.
For clarity:
// Form 1: "File handle, close."
fileHandle.close();
// Form 2: "(Computer,) close the file handle."
close(fileHandle);
// Form 3: "File handle, write the contents of another file handle."
fileHandle.writeContentsOf(anotherFileHandle);
A:
I agree with Orion, but I'm going to rephrase the decision process.
You have a noun and a verb / an object and an action.
If many objects of this type will use this action, try to make the action part of the object.
Otherwise, try to group the action separately, but with related actions.
I like the File / string examples. There are many string operations, such as "SendAsHTTPReply", which won't happen for your average string, but do happen often in a certain setting. However, you basically will always close a File (hopefully), so it makes perfect sense to put the Close action in the class interface.
Another way to think of this is as buying part of an entertainment system. It makes sense to bundle a TV remote with a TV, because you always use them together. But it would be strange to bundle a power cable for a specific VCR with a TV, since many customers will never use this. The key idea is how often will this action be used on this object?
A:
Not nearly enough information here. It depends if your language even supports the construct "Thing.something" or equivalent (ie. it's an OO language). If so, it's far more appropriate because that's the OO paradigm (members should be associated with the object they act on). In a procedural style, of course, DoSomethingtoThing() is your only choice... or ThingDoSomething()
A:
DoSomethingToThing(Thing n) would be more of a functional approach whereas Thing.DoSomething() would be more of an object oriented approach.
A:
That is the Object Oriented versus Procedural Programming choice :)
I think the well documented OO advantages apply to the Thing.DoSomething()
A:
This has been asked Design question: does the Phone dial the PhoneNumber, or does the PhoneNumber dial itself on the Phone?
A:
Here are a couple of factors to consider:
Can you modify or extend the Thing class? If not, use the former.
Can Thing be instantiated? If not, use the latter as a static method.
If Thing actually gets modified (i.e. has properties that change), prefer the latter. If Thing is not modified, the latter is just as acceptable.
Otherwise, as objects are meant to map on to real world object, choose the method that seems more grounded in reality.
A:
Even if you aren't working in an OO language, where you would have Thing.DoSomething(), for the overall readability of your code, having a set of functions like:
ThingDoSomething()
ThingDoAnotherTask()
ThingWeDoSomethingElse()
then
AnotherThingDoSomething()
and so on is far better.
All the code that works on "Thing" is in the one location. Of course, the "DoSomething" and other tasks should be named consistently - so you have a ThingOneRead(), a ThingTwoRead()... by now you should get the point. When you go back to work on the code in twelve months' time, you will appreciate taking the time to make things logical.
A:
In general, if "something" is an action that "thing" naturally knows how to do, then you should use thing.doSomething(). That's good OO encapsulation, because otherwise DoSomethingToThing(thing) would have to access potential internal information of "thing".
For example invoice.getTotal()
If "something" is not naturally part of "thing's" domain model, then one option is to use a helper method.
For example: Logger.log(invoice)
A:
If DoingSomething to an object is likely to produce a different result in another scenario, then I'd suggest oneThing.DoSomethingToThing(anotherThing).
For example, you may have two ways of saving a thing in your program, so adopting DatabaseObject.Save(thing) and SessionObject.Save(thing) would be more advantageous than thing.Save() or thing.SaveToDatabase() or thing.SaveToSession().
I rarely pass no parameters to a class, unless I'm retrieving public properties.
A:
To add to Aeon's answer, it depends on the thing and what you want to do to it. So if you are writing Thing, and DoSomething alters the internal state of Thing, then the best approach is Thing.DoSomething. However, if the action does more than change the internal state, then DoSomething(Thing) makes more sense. For example:
Collection.Add(Thing)
is better than
Thing.AddSelfToCollection(Collection)
And if you didn't write Thing, and cannot create a derived class, then you have no choice but to do DoSomething(Thing)
A:
Even in object oriented programming it might be useful to use a function call instead of a method (or for that matter calling a method of an object other than the one we call it on). Imagine a simple database persistence framework where you'd like to just call save() on an object. Instead of including an SQL statement in every class you'd like to have saved, thus complicating code, spreading SQL all across the code and making changing the storage engine a PITA, you could create an Interface defining save(Class1), save(Class2) etc. and its implementation. Then you'd actually be calling databaseSaver.save(class1) and have everything in one place.
A:
I have to agree with Kevin Conner
Also keep in mind the caller of either of the 2 forms. The caller is probably a method of some other object that definitely does something to your Thing :)
|
DoSomethingToThing(Thing n) vs Thing.DoSomething()
|
What factors determine which approach is more appropriate?
|
[
"I think both have their places.\nYou shouldn't simply use DoSomethingToThing(Thing n) just because you think \"Functional programming is good\". Likewise you shouldn't simply use Thing.DoSomething() because \"Object Oriented programming is good\".\nI think it comes down to what you are trying to convey. Stop thinking about your code as a series of instructions, and start thinking about it like a paragraph or sentence of a story. Think about which parts are the most important from the point of view of the task at hand.\nFor example, if the part of the 'sentence' you would like to stress is the object, you should use the OO style.\nExample:\nfileHandle.close();\n\nMost of the time when you're passing around file handles, the main thing you are thinking about is keeping track of the file it represents.\nCounterExample:\nstring x = \"Hello World\";\nsubmitHttpRequest( x );\n\nIn this case submitting the HTTP request is far more important than the string which is the body, so submitHttpRequst(x) is preferable to x.submitViaHttp()\nNeedless to say, these are not mutually exclusive. You'll probably actually have\nnetworkConnection.submitHttpRequest(x)\n\nin which you mix them both. The important thing is that you think about what parts are emphasized, and what you will be conveying to the future reader of the code.\n",
"To be object-oriented, tell, don't ask : http://www.pragmaticprogrammer.com/articles/tell-dont-ask.\nSo, Thing.DoSomething() rather than DoSomethingToThing(Thing n).\n",
"If you're dealing with internal state of a thing, Thing.DoSomething() makes more sense, because even if you change the internal representation of Thing, or how it works, the code talking to it doesn't have to change. If you're dealing with a collection of Things, or writing some utility methods, procedural-style DoSomethingToThing() might make more sense or be more straight-forward; but still, can usually be represented as a method on the object representing that collection: for instance\nGetTotalPriceofThings();\n\nvs\nCart.getTotal();\n\nIt really depends on how object oriented your code is.\n",
"\nThing.DoSomething is appropriate if Thing is the subject of your sentence.\n\n\nDoSomethingToThing(Thing n) is appropriate if Thing is the object of your sentence.\nThingA.DoSomethingToThingB(ThingB m) is an unavoidable combination, since in all the languages I can think of, functions belong to one class and are not mutually owned. But this makes sense because you can have a subject and an object.\n\n\nActive voice is more straightforward than passive voice, so make sure your sentence has a subject that isn't just \"the computer\". This means, use form 1 and form 3 frequently, and use form 2 rarely.\nFor clarity:\n// Form 1: \"File handle, close.\"\nfileHandle.close(); \n\n// Form 2: \"(Computer,) close the file handle.\"\nclose(fileHandle);\n\n// Form 3: \"File handle, write the contents of another file handle.\"\nfileHandle.writeContentsOf(anotherFileHandle);\n\n",
"I agree with Orion, but I'm going to rephrase the decision process.\nYou have a noun and a verb / an object and an action.\n\nIf many objects of this type will use this action, try to make the action part of the object.\nOtherwise, try to group the action separately, but with related actions.\n\nI like the File / string examples. There are many string operations, such as \"SendAsHTTPReply\", which won't happen for your average string, but do happen often in a certain setting. However, you basically will always close a File (hopefully), so it makes perfect sense to put the Close action in the class interface.\nAnother way to think of this is as buying part of an entertainment system. It makes sense to bundle a TV remote with a TV, because you always use them together. But it would be strange to bundle a power cable for a specific VCR with a TV, since many customers will never use this. The key idea is how often will this action be used on this object?\n",
"Not nearly enough information here. It depends if your language even supports the construct \"Thing.something\" or equivalent (ie. it's an OO language). If so, it's far more appropriate because that's the OO paradigm (members should be associated with the object they act on). In a procedural style, of course, DoSomethingtoThing() is your only choice... or ThingDoSomething()\n",
"DoSomethingToThing(Thing n) would be more of a functional approach whereas Thing.DoSomething() would be more of an object oriented approach.\n",
"That is the Object Oriented versus Procedural Programming choice :)\nI think the well documented OO advantages apply to the Thing.DoSomething()\n",
"This has been asked Design question: does the Phone dial the PhoneNumber, or does the PhoneNumber dial itself on the Phone?\n",
"Here are a couple of factors to consider:\n\nCan you modify or extend the Thing class. If not, use the former\nCan Thing be instantiated. If not, use the later as a static method\nIf Thing actually get modified (i.e. has properties that change), prefer the latter. If Thing is not modified the latter is just as acceptable.\nOtherwise, as objects are meant to map on to real world object, choose the method that seems more grounded in reality.\n\n",
"Even if you aren't working in an OO language, where you would have Thing.DoSomething(), for the overall readability of your code, having a set of functions like:\nThingDoSomething()\nThingDoAnotherTask()\nThingWeDoSomethingElse()\nthen\nAnotherThingDoSomething()\nand so on is far better.\nAll the code that works on \"Thing\" is on the one location. Of course, the \"DoSomething\" and other tasks should be named consistently - so you have a ThingOneRead(), a ThingTwoRead()... by now you should get point. When you go back to work on the code in twelve months time, you will appreciate taking the time to make things logical.\n",
"In general, if \"something\" is an action that \"thing\" naturally knows how to do, then you should use thing.doSomething(). That's good OO encapsulation, because otherwise DoSomethingToThing(thing) would have to access potential internal information of \"thing\".\nFor example invoice.getTotal()\nIf \"something\" is not naturally part of \"thing's\" domain model, then one option is to use a helper method. \nFor example: Logger.log(invoice)\n",
"If DoingSomething to an object is likely to produce a different result in another scenario, then i'd suggest you oneThing.DoSomethingToThing(anotherThing). \nFor example you may have two was of saving thing in you program so you might adopt a DatabaseObject.Save(thing) SessionObject.Save(thing) would be more advantageous than thing.Save() or thing.SaveToDatabase or thing.SaveToSession().\nI rarely pass no parameters to a class, unless I'm retrieving public properties.\n",
"To add to Aeon's answer, it depends on the the thing and what you want to do to it. So if you are writing Thing, and DoSomething alters the internal state of Thing, then the best approach is Thing.DoSomething. However, if the action does more than change the internal state, then DoSomething(Thing) makes more sense. For example:\nCollection.Add(Thing)\n\nis better than\nThing.AddSelfToCollection(Collection)\n\nAnd if you didn't write Thing, and cannot create a derived class, then you have no chocie but to do DoSomething(Thing)\n",
"Even in object oriented programming it might be useful to use a function call instead of a method (or for that matter calling a method of an object other than the one we call it on). Imagine a simple database persistence framework where you'd like to just call save() on an object. Instead of including an SQL statement in every class you'd like to have saved, thus complicating code, spreading SQL all across the code and making changing the storage engine a PITA, you could create an Interface defining save(Class1), save(Class2) etc. and its implementation. Then you'd actually be calling databaseSaver.save(class1) and have everything in one place.\n",
"I have to agree with Kevin Conner\nAlso keep in mind the caller of either of the 2 forms. The caller is probably a method of some other object that definitely does something to your Thing :)\n"
] |
[
14,
8,
3,
3,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"coding_style",
"language_agnostic",
"oop"
] |
stackoverflow_0000076812_coding_style_language_agnostic_oop.txt
|
Q:
How is it possible to run a traceroute-like program without needing root privileges?
I have seen another program provide traceroute functionality within it but without needing root (superuser) privileges? I've always assumed that raw sockets need to be root, but is there some other way? (I think somebody mentioned "supertrace" or "tracepath"?) Thanks!
A:
Ping the target, gradually increasing the TTL and watching where the "TTL exceeded" responses originate.
A:
Rather than using raw sockets, some applications use a higher numbered tcp or udp port. By directing that tcp port at port 80 on a known webserver, you could traceroute to that server. The downside is that you need to know what ports are open on a destination device to tcpping it.
A:
ping and traceroute use the ICMP protocol. Like UDP and TCP this is accessible through the normal sockets API. Only UDP and TCP port numbers less than 1024 are protected from use, other than by root. ICMP is freely available to all users.
If you really want to see how ping and traceroute work you can download an example C code implementation for them from CodeProject.
In short, they simply open an ICMP socket, and traceroute increments the TTL using setsockopt until the target is reached.
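For the TTL part specifically, here is a rough C sketch of a probe that needs no root privileges because it uses an ordinary UDP socket rather than a raw one (error handling and the reply-reading side are omitted; this is an illustration, not code from any of the tools mentioned):
#include <netinet/in.h>
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>

/* Send one UDP probe with the given TTL. The hop that drops the packet
   answers with an ICMP "time exceeded"; collecting that reply (e.g. via
   IP_RECVERR or a separate listener) is left out of this sketch. */
int send_probe(const struct sockaddr_in *dest, int ttl)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* plain UDP socket, no root needed */
    if (fd < 0)
        return -1;

    /* limit how far this packet can travel */
    setsockopt(fd, IPPROTO_IP, IP_TTL, &ttl, sizeof ttl);

    char payload[32] = {0};
    int rc = sendto(fd, payload, sizeof payload, 0,
                    (const struct sockaddr *)dest, sizeof *dest);
    close(fd);
    return rc;
}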
A:
You don't need to use raw sockets to send and receive ICMP packets. At least not on Windows.
|
How is it possible to run a traceroute-like program without needing root privileges?
|
I have seen another program provide traceroute functionality within it but without needing root (superuser) privileges? I've always assumed that raw sockets need to be root, but is there some other way? (I think somebody mentioned "supertrace" or "tracepath"?) Thanks!
|
[
"Ping the target, gradually increasing the TTL and watching where the \"TTL exceeded\" responses originate.\n",
"Rather than using raw sockets, some applications use a higher numbered tcp or udp port. By directing that tcp port at port 80 on a known webserver, you could traceroute to that server. The downside is that you need to know what ports are open on a destination device to tcpping it.\n",
"ping and traceroute use the ICMP protocol. Like UDP and TCP this is accessible through the normal sockets API. Only UDP and TCP port numbers less than 1024 are protected from use, other than by root. ICMP is freely available to all users.\nIf you really want to see how ping and traceroute work you can download an example C code implementation for them from CodeProject.\nIn short, they simple open an ICMP socket, and traceroute alters the increments the TTL using setsockopt until the target is reached.\n",
"You don't need to use raw sockets to send and receive ICMP packets. At least not on Windows.\n"
] |
[
3,
1,
1,
0
] |
[
"If you have a modern Linux distro you can look at the source for traceroute (or tracepath, which came about before traceroute went no setuid) and tcptraceroute. None of those require RAW sockets -- checked on Fedora 9, they aren't setuid and work with default options for the normal user.\nUsing the code that tcptraceroute does might be esp. useful, as ICMP packets to an address will not necessarily end up at the same place as a TCP connection to port 80, for example.\nDoing an strace of traceroute (as a normal user) shows it doing something like:\n\nint opt_on = 1;\nint opt_off = 0;\n\nfd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP)\nsetsockopt(fd, SOL_IP, IP_MTU_DISCOVER, &opt_off, sizeof int)\nsetsockopt(fd, SOL_SOCKET, SO_TIMESTAMP, &opt_on, sizeof int)\nsetsockopt(fd, SOL_IP, IP_RECVTTL, &opt_on, sizeof int)\n\n...and then reading the data out of the CMSG results.\n"
] |
[
-2
] |
[
"sockets",
"traceroute"
] |
stackoverflow_0000076988_sockets_traceroute.txt
|
Q:
Visually customize autocomplete in Wicket
How can I visually customize autocomplete fields in Wicket (change colors, fonts, etc.)?
A:
You can use CSS to modify the look of this component. For the Ajax auto-complete component in 1.3 the element you want to override is div.wicket-aa, so for example you might do:
div.wicket-aa {
background-color:white;
border:1px solid #CCCCCC;
color:black;
}
div.wicket-aa ul {
list-style-image:none;
list-style-position:outside;
list-style-type:none;
margin:0pt;
padding:5px;
}
div.wicket-aa ul li.selected {
background-color:#CCCCCC;
}
A:
Perilandmishap has probably the most useful answer for your needs. Personally, I always found the default Ajax auto complete control in Wicket to be woefully insufficient for my needs. If you really want a professional "feel" to your auto complete, roll your own using Wicket's Ajax libraries.
|
Visually customize autocomplete in Wicket
|
How can I visually customize autocomplete fields in Wicket (change colors, fonts, etc.)?
|
[
"You can use CSS to modify the look of this component. For the Ajax auto-complete component in 1.3 the element you want to override is div.wicket-aa, so for example you might do:\ndiv.wicket-aa {\n background-color:white;\n border:1px solid #CCCCCC;\n color:black;\n}\ndiv.wicket-aa ul {\n list-style-image:none;\n list-style-position:outside;\n list-style-type:none;\n margin:0pt;\n padding:5px;\n}\ndiv.wicket-aa ul li.selected {\n background-color:#CCCCCC;\n}\n\n",
"Perilandmishap has probably the most usefull answer for your needs. Personally, I always found the default Ajax auto complete control in Wicket to be woefully insufficient for my needs. If you really want a professional \"feel\" to your auto complete, roll your an using Wicket's Ajax libraries.\n"
] |
[
8,
1
] |
[] |
[] |
[
"autocomplete",
"wicket"
] |
stackoverflow_0000070090_autocomplete_wicket.txt
|
Q:
MS Access ADP Autonumber
I am getting the following error in an MS Access ADP when trying to add a record on a form linked to a MS SQL Server 2000 table:
Run-time error '31004':
The value of an (AutoNumber) field
cannot be retrived prior to being
saved.
Please save the record that contains
the (AutoNumber) field prior to
performing this action.
note: retrieved is actually spelled wrong in the error.
Does anyone know what this means?
I've done a web search and was only able to find the answer at a certain site that only experts have access to.
A:
First of all, if you are going to look at experts-exchange - do it in FireFox, you'll see the unblocked answers at the bottom of the page.
Second, do you have a subform on that form that's using the autonumber/key field on the master form? Do you require the data that's on that subform to be saved (i.e., having its own key) before the main form is saved? You could be into a deadlock of A and B requiring each other to be saved first.
Other than that, you must somehow be accessing that autonumber field when you are saving it. The best I can suggest is to step through the code line by line.
A:
Are you trying to assign the value of an Identity field to a variable or something else before you have saved the record?
For whatever reason, your app is trying to read the value of the identity field before the record has been saved, which is what generates that identity field. In other words, no value exists for the Autonumber field until the row is saved.
I think we'd need to see more code or know more about the steps that lead up to this error to resolve it in more detail.
A:
You should add some lines of code to show us how you're managing your data and what you are doing exactly. But I suspect an issue related to a recordset update. Can you identify when the autonumber value is created? Is it available in a control on a form? Can you add a control to display this value to check how it is generated when adding a new record? Is the underlying recordset properly updated? Can you add something like me.recordset.update on some form events: I would try the OnCurrent one ...
|
MS Access ADP Autonumber
|
I am getting the following error in an MS Access ADP when trying to add a record on a form linked to a MS SQL Server 2000 table:
Run-time error '31004':
The value of an (AutoNumber) field
cannot be retrived prior to being
saved.
Please save the record that contains
the (AutoNumber) field prior to
performing this action.
note: retrieved is actually spelled wrong in the error.
Does anyone know what this means?
I've done a web search and was only able to find the answer at a certain site that only experts have access to.
|
[
"First of all, if you are going to look at experts-exchange - do it in FireFox, you'll see the unblocked answers at the bottom of the page.\nSecond, do you have a subform on that form that's using the autonumber/key field on the master form? Do you require the data that's on that subform to be saved (i.e., having its own key) before the main form is saved. You could be into a deadlock of A and B requiring each other to be saved first.\nOther than that, you must somehow be accessing that autonumber field whenyou are saving it. The best I can suggest is to step through the code line by line.\n",
"Are you trying to assign the value of an Identity field to a variable or something else before you have saved the record?\nFor whatever reason, your app is trying to read the value of the identity field before the record has been saved, which is what generates that identity field. In other words, no value exists for the Autonumber field until the row is saved.\nI think we'd need to see more code or know more about the steps that lead up to this error to resolve it in more detail.\n",
"You should have add some lines of code to show us how you're managing your data and what you are doing exactly. But I am suspecting an issue related to a recordset update. can you identify when the autonumber value is created? Is it available in a control on a form? Can you add a control to display this value to check how it is generated when adding a new record? Is the underlying recordset properly updated? Can you add something like me.recordset.update on some form events: I would try the OnCurrent one ... \n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"ms_access",
"sql_server"
] |
stackoverflow_0000059809_ms_access_sql_server.txt
|
Q:
Mixing C# Code and unmanaged C++ code on Windows with Visual Studio
I would like to call my unmanaged C++ libraries from my C# code. What are the potential pitfalls and precautions that need to be taken? Thank you for your time.
A:
There are a couple routes you can go with this - one, you can update your unmanaged C++ libraries to have a managed C++ extensions wrapper around them and have C# utilize those classes directly. This is a bit time-consuming, but it provides a nice bridge to legacy unmanaged code. But be aware that managed C++ extensions are sometimes a bit hard to navigate themselves as the syntax is similar to unmanaged C++, but close enough that a very trained eye will be able to see the differences.
The other route to go is have your unmanaged C++ implement COM classes and have C# utilize it via an autogenerated interop assembly. This way is easier if you know your way around COM well enough.
Hope this helps.
A:
You're describing P/Invoke. That means your C++ library will need to expose itself via a DLL interface, and the interface will need to be simple enough to describe to P/Invoke via the call attributes. When the managed code calls into the unmanaged world, the parameters have to be marshalled, so it seems there could be a slight performance hit, but you'd have to do some testing to see if the marshalling is significant or not.
A:
The easiest way to start is to make sure that all the C++ functionality is exposed as 'C' style functions. Make sure to declare the function as _stdcall.
extern "C" __declspec(dllexport) int _stdcall Foo(int a)
Make sure you get the marshalling right, especially things like pointers & wchar_t *. If you get it wrong, it can be difficult to debug.
Debug it from either side, but not both. When debugging mixed native & managed, the debugger can get very slow. Debugging 1 side at a time saves lots of time.
Getting more specific would require a more specific question.
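For illustration, a minimal sketch of the matching managed side might look like the following (the library name NativeLib.dll is an assumption for the example, not something from the question):
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Matches: extern "C" __declspec(dllexport) int _stdcall Foo(int a)
    // "NativeLib.dll" is a placeholder for whatever your unmanaged DLL is actually called.
    [DllImport("NativeLib.dll", CallingConvention = CallingConvention.StdCall)]
    public static extern int Foo(int a);
}

// Usage from managed code: int result = NativeMethods.Foo(42);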
A:
This question is too broad. The only reasonable answer is P/Invoke, but that's kind of like saying that if you want to program for Windows you need to know the Win32 API.
Pretty much entire books have been written about P/Invoke (http://www.amazon.com/NET-COM-Complete-Interoperability-Guide/dp/067232170X), and of course entire websites have been made: http://www.pinvoke.net/.
A:
You can also call into unmanaged code via P/Invoke. This may be easier if your code doesn't currently use COM. I guess you would probably need to write some specific export points in your code using "C" bindings if you went this route.
Probably the biggest thing you have to watch out for in my experience is that the lack of deterministic garbage collection means that your destructors will not run when you might have thought they would previously. You need to keep this in mind and use IDisposable or some other method to make sure your managed code is cleaned up when you want it to be.
A:
Of course there is always PInvoke out there too if you packaged your code as DLLs with external entrypoints. None of the options are pain free. They depend on either a) your skill at writing COM or Managed C wrappers b) chancing your arm at PInvoke.
A:
I would take a look at swig, we use this to good effect on our project to expose our C++ API to other language platforms.
It's a well maintained project that effectively builds a thin wrapper around your C++ library that can allow languages such as C# to communicate directly with your native code - saving you the trouble of having to implement (and debug) glue code.
A:
If you want good PInvoke examples you can look at PInvoke.net. It has examples of how to call most of the win api functions.
Also you can use the tool from this article, Clr Inside Out: PInvoke, that will translate your .h file to c# wrappers.
|
Mixing C# Code and unmanaged C++ code on Windows with Visual Studio
|
I would like to call my unmanaged C++ libraries from my C# code. What are the potential pitfalls and precautions that need to be taken? Thank you for your time.
|
[
"There are a couple routes you can go with this - one, you can update your unmanaged C++ libraries to have a managed C++ extensions wrapper around them and have C# utilize those classes directly. This is a bit time-consuming, but it provides a nice bridge to legacy unmanaged code. But be aware that managed C++ extensions are sometimes a bit hard to navigate themselves as the syntax is similar to unmanaged C++, but close enough that a very trained eye will be able to see the differences.\nThe other route to go is have your umnanaged C++ implement COM classes and have C# utilize it via an autogenerated interop assembly. This way is easier if you know your way around COM well enough.\nHope this helps.\n",
"You're describing P/Invoke. That means your C++ library will need to expose itself via a DLL interface, and the interface will need to be simple enough to describe to P/Invoke via the call attributes. When the managed code calls into the unmanaged world, the parameters have to be marshalled, so it seems there could be a slight performance hit, but you'd have to do some testing to see if the marshalling is significant or not.\n",
"The easiest way to start is to make sure that all the C++ functionality is exposed as 'C' style functions. Make sure to declare the function as _stdcall.\nextern \"C\" __declspec(dllexport) int _stdcall Foo(int a)\nMake sure you get the marshalling right, especially things like pointers & wchar_t *. If you get it wrong, it can be difficult to debug.\nDebug it from either side, but not both. When debugging mixed native & managed, the debugger can get very slow. Debugging 1 side at a time saves lots of time.\nGetting more specific would require a more specific question.\n",
"This question is too broad. The only reasonable answer is P/Invoke, but that's kind of like saying that if you want to program for Windows you need to know the Win32 API.\nPretty much entire books have been written about P/Invoke (http://www.amazon.com/NET-COM-Complete-Interoperability-Guide/dp/067232170X), and of course entire websites have been made: http://www.pinvoke.net/.\n",
"You can also call into unmanaged code via P/Invoke. This may be easier if your code doesn't currently use COM. I guess you would probably need to write some specific export points in your code using \"C\" bindings if you went this route.\nProbably the biggest thing you have to watch out for in my experience is that the lack of deterministic garbage collection means that your destructors will not run when you might have thought they would previously. You need to keep this in mind and use IDisposable or some other method to make sure your managed code is cleaned up when you want it to be.\n",
"Of course there is always PInvoke out there too if you packaged your code as DLLs with external entrypoints. None of the options are pain free. They depend on either a) your skill at writing COM or Managed C wrappers b) chancing your arm at PInvoke. \n",
"I would take a look at swig, we use this to good effect on our project to expose our C++ API to other language platforms. \nIt's a well maintained project that effectively builds a thin wrapper around your C++ library that can allow languages such as C# to communicate directly with your native code - saving you the trouble of having to implement (and debug) glue code.\n",
"If you want a good PInvoke examples you can look at PInvoke.net. It has examples of how to call most of win api functions.\nAlso you can use tool from this article Clr Inside Out: PInvoke that will translate your .h file to c# wrappers.\n"
] |
[
3,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"c#",
"unmanaged",
"visual_c++",
"windows"
] |
stackoverflow_0000076629_c#_unmanaged_visual_c++_windows.txt
|
Q:
How to gracefully deal with ViewState errors?
I'm running some c# .net pages with various gridviews. If I ever leave any of them alone in a web browser for an extended period of time (usually overnight), I get the following error when I click any element on the page.
I'm not really sure where to start dealing with the problem. I don't mind resetting the page if its viewstate has expired, but throwing an error is unacceptable!
Error: The state information is invalid for this page and might be corrupted.
Target: Void ThrowError(System.Exception, System.String, System.String, Boolean)
Data: System.Collections.ListDictionaryInternal
Inner: System.Web.UI.ViewStateException: Invalid viewstate. Client IP: 66.35.180.246 Port: 1799 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9) Gecko/2008052906 Firefox/3.0 ViewState: (**Very long Gibberish Omitted!**)
Offending URL: (**Omitted**)
Source: System.Web
Message: The state information is invalid for this page and might be corrupted.
Stack trace: at System.Web.UI.ViewStateException.ThrowError(Exception inner, String persistedState, String errorPageMessage, Boolean macValidationError) at System.Web.UI.ClientScriptManager.EnsureEventValidationFieldLoaded() at System.Web.UI.ClientScriptManager.ValidateEvent(String uniqueId, String argument) at System.Web.UI.Control.ValidateEvent(String uniqueID, String eventArgument) at System.Web.UI.WebControls.DropDownList.LoadPostData(String postDataKey, NameValueCollection postCollection) at System.Web.UI.WebControls.DropDownList.System.Web.UI.IPostBackDataHandler.LoadPostData(String postDataKey, NameValueCollection postCollection) at System.Web.UI.Page.ProcessPostData(NameValueCollection postData, Boolean fBeforeLoad) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
A:
That is odd as the ViewState is stored as a string in the webpage itself. So I do not see how an extended period of time would cause that error. Perhaps one or more objects on the page have been garbage collected or the application reset, so the viewstate is referencing old controls instead of the controls created when the application restarted.
Whatever the case, I feel your pain, these errors are never pleasant to debug, and I have no easy answer as to how to find the problem other than perhaps studying how ViewState works
A:
You can remove this error completely by saving your view state to a database and only cleaning within the duration you need to. This also significantly improves the performance of your pages even when using relatively small viewstates.
At the very least you can inherit from the Page class and add your own ViewStateLoad routine that checks to see if it has expired and reloads the default state.
Check ViewState Provider - an implementation using Provider Model Design Pattern for providing a custom Viewstate provider.
A:
Alternatively if you know the time-out length then you could add a bit of javascript to the page which redirects the user to an alternative page if there has been no activity on the page after a preset period of time. You can then extend this to warn the customer that their session / page is about to expire and provide them with a means to extend it (e.g. javascript server call back).
A:
The above posts give you some answers on solving the problem. If just handling the ugly error in the interim is what you're looking for, custom errors are the easiest way to gracefully handle all your "ugly yellow errors"
http://msdn.microsoft.com/en-us/library/aa479319.aspx
http://msdn.microsoft.com/en-us/library/h0hfz6fc.aspx
A:
Another option is to add in a global error handler, that would capture the exception at the application level and redirect the user to a "Session Elapsed" page.
If you want an idea of a general implementation of a global error handler, I have one available on my website, I can give you the code if needed - http://iowacomputergurus.com/free-products/asp.net-global-error-handler.aspx
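As a rough sketch of that idea (the redirect target SessionElapsed.aspx is just an illustrative name, not an existing page), an Application_Error handler in Global.asax.cs could look something like:
void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    // Invalid viewstate usually surfaces as an HttpException wrapping a ViewStateException
    if (ex is HttpException && ex.InnerException is System.Web.UI.ViewStateException)
    {
        Server.ClearError();
        Response.Redirect("~/SessionElapsed.aspx"); // hypothetical "session elapsed" page
    }
}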
|
How to gracefully deal with ViewState errors?
|
I'm running some c# .net pages with various gridviews. If I ever leave any of them alone in a web browser for an extended period of time (usually overnight), I get the following error when I click any element on the page.
I'm not really sure where to start dealing with the problem. I don't mind resetting the page if its viewstate has expired, but throwing an error is unacceptable!
Error: The state information is invalid for this page and might be corrupted.
Target: Void ThrowError(System.Exception, System.String, System.String, Boolean)
Data: System.Collections.ListDictionaryInternal
Inner: System.Web.UI.ViewStateException: Invalid viewstate. Client IP: 66.35.180.246 Port: 1799 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9) Gecko/2008052906 Firefox/3.0 ViewState: (**Very long Gibberish Omitted!**)
Offending URL: (**Omitted**)
Source: System.Web
Message: The state information is invalid for this page and might be corrupted.
Stack trace: at System.Web.UI.ViewStateException.ThrowError(Exception inner, String persistedState, String errorPageMessage, Boolean macValidationError) at System.Web.UI.ClientScriptManager.EnsureEventValidationFieldLoaded() at System.Web.UI.ClientScriptManager.ValidateEvent(String uniqueId, String argument) at System.Web.UI.Control.ValidateEvent(String uniqueID, String eventArgument) at System.Web.UI.WebControls.DropDownList.LoadPostData(String postDataKey, NameValueCollection postCollection) at System.Web.UI.WebControls.DropDownList.System.Web.UI.IPostBackDataHandler.LoadPostData(String postDataKey, NameValueCollection postCollection) at System.Web.UI.Page.ProcessPostData(NameValueCollection postData, Boolean fBeforeLoad) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
|
[
"That is odd as the ViewState is stored as a string in the webpage itself. So I do not see how an extended period of time would cause that error. Perhaps one or more objects on the page have been garbage collected or the application reset, so the viewstate is referencing old controls instead of the controls created when the application restarted. \nWhatever the case, I feel your pain, these errors are never pleasant to debug, and I have no easy answer as to how to find the problem other than perhaps studying how ViewState works\n",
"You can remove this error completely by saving your view state to a database and only cleaning within the duration you need to. This also sygnificantly improves the performance of your pages even shen using relatively small viewstates.\nAt the very least you can inherit from the Page class and add your own ViewStateLoad routen that check to see if it has expired and reloads the default state.\nCheck ViewState Provider - an implementation using Provider Model Design Pattern for providing a custom Viewstate provider.\n",
"Alternatively if you know the time-out length then you could add a bit of javascript to the page which redirects the user to an alternative page if there has been no activity on the page after a preset period of time. You can then extend this to warn the customer that their session / page is about to expire and provide them with a means to extend it (e.g. javascript server call back).\n",
"The above posts give you some answers on solving the problem. If just handling the ugly error in the interim is what you're looking for, custom errors are the easiest way to gracefully handle all your \"ugly yellow errors\"\nhttp://msdn.microsoft.com/en-us/library/aa479319.aspx\nhttp://msdn.microsoft.com/en-us/library/h0hfz6fc.aspx\n",
"Another option is to add in a global error handler, that would capture the exception at the application level and redirect the user to a \"Session Elapsed\" page. \nIf you want an idea of a general implementation of a global error handler, I have one available on my website, I can give you the code if needed - http://iowacomputergurus.com/free-products/asp.net-global-error-handler.aspx \n"
] |
[
2,
1,
0,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"exception",
"viewstate"
] |
stackoverflow_0000073380_.net_c#_exception_viewstate.txt
|
Q:
What is the best way to debug the __DoPostBack javascript method in Asp.net
I want to set a breakpoint on the __DoPostBack method, but it's a pain to find the correct file to set the breakpoint in.
The method __DoPostBack is contained in an auto-generated js file called something like:
ScriptResource.axd?d=P_lo2...
After a few post-backs visual studio gets littered with many of these files, and it's a bit of a bear to check which one the current page is referencing. Any thoughts?
A:
If you're using IE7 for testing you can use View -> Script Debugger -> Break on next statement and then just click the button that generates the event (__DoPostBack).
A:
TBH, I don't think there is much value in setting a breakpoint within the Javascript since it pretty much comes straight back to the server anyway.
It would be best to set breakpoints in your server code. Depending on what you are trying to debug this will be in different places: either in the page event cycle or a control's IPostBackEventHandler.RaisePostBackEvent handler.
|
What is the best way to debug the __DoPostBack javascript method in Asp.net
|
I want to set a breakpoint on the __DoPostBack method, but it's a pain to find the correct file to set the breakpoint in.
The method __DoPostBack is contained in an auto-generated js file called something like:
ScriptResource.axd?d=P_lo2...
After a few post-backs visual studio gets littered with many of these files, and it's a bit of a bear to check which one the current page is referencing. Any thoughts?
|
[
"If you using IE7 for testing you can use View -> Script Debugger -> Break on next statement and then just click the button that generates the event(__DoPostBack)\n",
"TBH, I dont think there is much value in setting a breakpoint within the Javascript since it pretty much comes straight back to the server anyways.\nIt would be best to set breakpoints in your server code.. Depending on what you are trying to debug this will be in different places.. Either in the page event cycle or a controls IPostBackEventHandler.RaisePostBackEvent handler.\n"
] |
[
0,
0
] |
[] |
[] |
[
"asp.net",
"asp.net_ajax"
] |
stackoverflow_0000077025_asp.net_asp.net_ajax.txt
|
Q:
Best design for entities with multiple values
Say you have an entity like a vehicle that you are capturing detailed information about. The car you want to capture is painted red, black and white. The front tires are Bridgestone 275/35-18 and the rear tires are 325/30-19. And sometimes you can have just two tires (yes this would be considered a motorcycle which is a type of vehicle) and sometimes 18 tires that could all be different. Then there are some fields that are always single valued like engine size (if we let our imaginations run wild we can think of multi-engined vehicles but I am trying to keep this simple).
Our current strategy for dealing with this is to have a table for each of the fields that can have multiple values. This will spawn a large number of tables (we have a bunch of different entities with this requirement) and smells a little bad. Is this the best strategy and if not, what would be better?
A:
If it's a possibility for your app, you might want to look into couchdb.
A:
If you're using a relational database, your suggestion is pretty much the only way to do it. The theory of normal forms will give you more information about it - the Wikipedia articles about it are quite good, though slightly heavy going simply because it is a tricky theoretical subject when you get into the higher normalisation levels. The examples are mostly common sense though.
Assuming you have a Vehicle table, a Colour table and a TyreType table (sorry for the British spelling), you are presumably defining a VehicleTyre and VehicleColour table which acts as a join between the relevant pairs of tables. This structure is actually quite healthy. It not only encapsulates the information you want directly, but also lets you capture in a natural way things like which tyre is which (e.g. front left is Bridgestone 275/35-18) or how much of the car is painted red (e.g. with a percentage field on the VehicleColour table).
You may want to model a vehicle type entity which could govern how many tyres there are. While this is not necessary in order to get working SELECT queries out of the system, it is probably going to be useful both in your user interface and figuring out how many tyres to insert into your tables.
My company has lots of schemas which operate on exactly this basis - indeed our object-relational framework creates them automatically to manage many-to-many relationships (and sometimes even for one-to-many relationships depending on how we model them). Several of our apps have over 150 entities and over 100 of these join tables. There are no performance problems and no meaningful impact on manageability of the data, except that a few of the table names are annoyingly long.
A:
You're describing a Star Schema. I think it's fairly standard practice in your kind of case.
Edit: Actually your schema is slightly modified from the Star Schema, you use the primary key of the fact table in each of the dimension tables to join on so you can have multiple paint colors etc. Either way I think it's a fine way to deal with your entity. You may go one step further and normalize the dimension tables and then you'd have a Snowflake Schema
A:
It seems like you may be looking at something called Hierarchical Model.
Or maybe a simple list of (attr, value) pairs will do?
A:
If you're using SQL Server, don't be afraid to store the XML Data Type. I have found that it makes things like this much, much easier.
A:
It really depends on whether the variables themselves only have one variable (example: you can have a variable number of tires that are all the same type, or a set number of tires that are of variable type).
Since you seem to need to have multiple variables (eg. specific type for each tire, with a variable number of tires), I am afraid the best solution is to have specific tables for each specific area of the car you wish to customize.
If you have some fields that simply have a set of values to choose between (say, 2, 4 or 6 windows), you can simply use an enum or define a new field-type using User-Defined Domains (depending on which DBMS you're using).
A:
Your current strategy is the correct one. You're tracking so many kinds of data, so you'll need lots of tables. That's just how it is. Is the DBMS complaining?
|
Best design for entities with multiple values
|
Say you have an entity like a vehicle that you are capturing detailed information about. The car you want to capture is painted red, black and white. The front tires are Bridgestone 275/35-18 and the rear tires are 325/30-19. And sometimes you can have just two tires (yes this would be considered a motorcycle which is a type of vehicle) and sometimes 18 tires that could all be different. Then there are some fields that are always single valued like engine size (if we let our imaginations run wild we can think of multi-engined vehicles but I am trying to keep this simple).
Our current strategy for dealing with this is to have a table for each of the fields that can have multiple values. This will spawn a large number of tables (we have a bunch of different entities with this requirement) and smells a little bad. Is this the best strategy and if not, what would be better?
|
[
"If it's a possibility for your app, you might want to look into couchdb.\n",
"If you're using a relational database, your suggestion is pretty much the only way to do it. The theory of normal forms will give you more information about it - the Wikipedia articles about it are quite good, though slightly heavy going simply because it is a tricky theoretical subject when you get into the higher normalisation levels. The examples are mostly common sense though.\nAssuming you have a Vehicle table, a Colour table and a TyreType table (sorry for the British spelling), you are presumably defining a VehicleTyre and VehicleColour table which acts as a join between the relevant pairs of tables. This structure is actually quite healthy. It not only encapsulates the information you want directly, but also lets you capture in a natural way things like which tyre is which (e.g. front left is Bridgestone 275/35-18) or how much of the car is painted red (e.g. with a percentage field on the VehicleColour table).\nYou may want to model a vehicle type entity which could govern how many tyres there are. While this is not necessary in order to get working SELECT queries out of the system, it is probably going to be useful both in your user interface and figuring out how many tyres to insert into your tables.\nMy company has lots of schemas which operate on exactly this basis - indeed our object-relational framework creates them automatically to manage many-to-many relationships (and sometimes even for one-to-many relationships depending on how we model them). Several of our apps have over 150 entities and over 100 of these join tables. There are no performance problems and no meaningful impact on manageability of the data, except that a few of the table names are annoyingly long.\n",
"You're describing a Star Schema. I think its fairly standard practice in your kind of case\nEdit: Actually your schema is slightly modified from the Star Schema, you use the primary key of the fact table in each of the dimension tables to join on so you can have multiple paint colors etc. Either way I think it's a fine way to deal with your entity. You may go one step further and normalize the dimension tables and then you'd have a Snowflake Schema\n",
"It seems like you may be looking at something called Hierarchical Model.\nOr maybe a simple list of (attr, value) pairs will do?\n",
"If you're using SQL Server, don't be afraid to store the XML Data Type. I have found that it makes things like this much, much easier.\n",
"It really depends on whether the variables themselves only have one variable (example: you can have a variable number of tires that are all the same type, or a set number of tires that are of variable type).\nSince you seem to need to have multiple variables (eg. specific type for each tire, with a variable number of tires), I am afraid the best solution is to have specific tables for each specific area of the car you wish to customize.\nIf you have some fields that simply have a set of values to chose between (say, 2, 4 or 6 windows), you can simply use an enum or define a new field-type using User-Defined Domains (depending on which DBMS you're using).\n",
"Your current strategy is the correct one. You're tracking so many kinds of data, so you'll need lots of tables. That's just how it is. Is the DBMS complaining?\n"
] |
[
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"database_design",
"design_patterns"
] |
stackoverflow_0000065805_database_design_design_patterns.txt
|
Q:
How to retrieve error when launching sqlcmd from C#?
I need to run a stored procedure from a C# application.
I use the following code to do so:
Process sqlcmdCall = new Process();
sqlcmdCall.StartInfo.FileName = "sqlcmd.exe";
sqlcmdCall.StartInfo.Arguments = "-S localhost\\SQLEXPRESS -d some_db -Q \":EXIT(sp_test)\""
sqlcmdCall.Start();
sqlcmdCall.WaitForExit();
From the sqlcmdCall object after the call completes, I currently get an ExitCode of -100 for success and of 1 for failure (i.e. missing parameter, stored proc does not exist, etc...).
How can I customize these return codes?
H.
A:
I have a small VB.Net app that executes system commands like that. To capture error or success conditions I define regular expressions to match the error text output from the command and I capture the output like this:
myprocess.Start()
procReader = myprocess.StandardOutput()
While (Not procReader.EndOfStream)
procLine = procReader.ReadLine()
If (MatchesRegEx(errRegEx, procLine)) Then
writeDebug("Error reg ex: [" + errorRegEx + "] has matched: [" + procLine + "] setting hasError to true.")
Me.hasError = True
End If
writeLog(procLine)
End While
procReader.Close()
myprocess.WaitForExit(CInt(waitTime))
That way I can capture specific errors and also log all the output from the command in case I run across an unexpected error.
A:
If you are trying to call a stored procedure from c# you would want to use ADO.Net instead of calling sqlcmd via the command line. Look at SqlConnection and SqlCommand in the System.Data.SqlClient namespace.
Once you are calling the stored procedure via SqlCommand you will be able to catch an exception raised by the stored procedure as well as reading the return value of the procedure if you need to.
A:
Even with windows authentication you can still use SqlCommand and SqlConnection to execute, and you don't have to re-invent the wheel for exception handling.
A simple connection configuration and a single SqlCommand can execute it without issue.
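A minimal sketch of that approach (the connection string, the @ReturnValue parameter name and the error handling below are illustrative assumptions, not part of the original answers):
using System;
using System.Data;
using System.Data.SqlClient;

// inside some method:
using (SqlConnection conn = new SqlConnection(@"Server=localhost\SQLEXPRESS;Database=some_db;Integrated Security=true"))
using (SqlCommand cmd = new SqlCommand("sp_test", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    // Capture the procedure's RETURN value instead of relying on a process exit code
    SqlParameter returnValue = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
    returnValue.Direction = ParameterDirection.ReturnValue;

    try
    {
        conn.Open();
        cmd.ExecuteNonQuery();
        Console.WriteLine("Return code: " + returnValue.Value);
    }
    catch (SqlException ex)
    {
        // Errors raised by the procedure (missing parameter, RAISERROR, ...) surface here
        Console.WriteLine("Failed: " + ex.Message);
    }
}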
|
How to retrieve error when launching sqlcmd from C#?
|
I need to run a stored procedure from a C# application.
I use the following code to do so:
Process sqlcmdCall = new Process();
sqlcmdCall.StartInfo.FileName = "sqlcmd.exe";
sqlcmdCall.StartInfo.Arguments = "-S localhost\\SQLEXPRESS -d some_db -Q \":EXIT(sp_test)\""
sqlcmdCall.Start();
sqlcmdCall.WaitForExit();
From the sqlcmdCall object after the call completes, I currently get an ExitCode of -100 for success and of 1 for failure (i.e. missing parameter, stored proc does not exist, etc...).
How can I customize these return codes?
H.
|
[
"I have a small VB.Net app that executes system commands like that. To capture error or success conditions I define regular expressions to match the error text output from the command and I capture the output like this:\n myprocess.Start()\n procReader = myprocess.StandardOutput()\n\n While (Not procReader.EndOfStream)\n procLine = procReader.ReadLine()\n\n If (MatchesRegEx(errRegEx, procLine)) Then\n writeDebug(\"Error reg ex: [\" + errorRegEx + \"] has matched: [\" + procLine + \"] setting hasError to true.\")\n\n Me.hasError = True\n End If\n\n writeLog(procLine)\n End While\n\n procReader.Close()\n\n myprocess.WaitForExit(CInt(waitTime))\n\nThat way I can capture specific errors and also log all the output from the command in case I run across an unexpected error.\n",
"If you are trying to call a stored procedure from c# you would want to use ADO.Net instead of the calling sqlcmd via the command line. Look at SqlConnection and SqlCommand in the System.Data.SqlClient namespace.\nOnce you are calling the stored procedure via SqlCommand you will be able to catch an exception raised by the stored procedure as well we reading the return value of the procedure if you need to.\n",
"Even with windows authentication you can still use SqlCommand and SqlConnection to execute, and you don't have to re-invent the wheel for exception handling.\nA simple connection configuration and a single SqlCommand can execute it without issue.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"c#",
"sql_server"
] |
stackoverflow_0000073051_c#_sql_server.txt
|
Q:
report generation on php?
one of the most frequent requests I get is to create XY report for YZ App. These apps are normally built on PHP, so far I have manually created most of these reports, and while I enjoy the freedom of building them as I want, it usually becomes pretty tedious to calculate subtotals, averages, exporting to different formats etc.
What solutions are out there (free/OSS preferred) that help me get this repetitive tasks cranking?
edits:
I'm talking about reports/summaries from SQL data. Many times from DBs not designed for reporting use.
while I'm aware of "business-intelligence" we're not ready to implement a full scaled "intelligence" structure, looking more for a helper of sorts...
A:
The problem you're facing is solved by so-called Business Intelligence software. This software tends to be bloated and expensive, but if you know your way around them you will be able to crank out such reports in no time at all.
I'm only familiar with one particular proprietary solution, which isn't too great either. But a quick search turns up the following page, which lists a number of free/open source alternatives:
http://en.wikipedia.org/wiki/Business_intelligence_tools
A:
It depends on what kind of reports you're talking about. For example... site stats... you could install google analytics and the client could export whatever format they wanted.
A:
A little Google search gave me the following OSS:
RLib
http://rlib.sicompos.com/
PM Report
http://www.hotscripts.com/Detailed/48187.html
I don't have any information on them, sorry.
|
report generation on php?
|
one of the most frequent requests I get is to create XY report for YZ App. These apps are normally built on PHP, so far I have manually created most of these reports, and while I enjoy the freedom of building them as I want, it usually becomes pretty tedious to calculate subtotals, averages, exporting to different formats etc.
What solutions are out there (free/OSS preferred) that help me get this repetitive tasks cranking?
edits:
I'm talking about reports/summaries from SQL data. Many times from DBs not designed for reporting use.
while I'm aware of "business-intelligence" we're not ready to implement a full scaled "intelligence" structure, looking more for a helper of sorts...
|
[
"The problem you're facing is solved by so-called Business Intelligence software. This software tends to be bloated and expensive, but if you know your way around them you will be able to crank out such reports in no time at all. \nI'm only familiar with one particular proprietary solution, which isn't too great either. But a quick search turns up the following page, which lists a number of free/open source alternatives: \nhttp://en.wikipedia.org/wiki/Business_intelligence_tools\n",
"It depends on what kind of reports you're talking about. For example... site stats... you could install google analytics and the client could export whatever format they wanted.\n",
"A little Google search gave me the following OSS:\n\nRLib\n\nhttp://rlib.sicompos.com/\n\nPM Report\n\nhttp://www.hotscripts.com/Detailed/48187.html\n\n\n\nI don't have any information on them, sorry.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"php",
"reporting"
] |
stackoverflow_0000065976_php_reporting.txt
|
Q:
Checklist for testing a new site
What are the most common things to test in a new site?
For instance to prevent exploits by bots, malicious users, massive load, etc.?
And just as importantly, what tools and approaches should you use?
(some stress test tools are really expensive/hard to use, do you write your own? etc)
Common exploits that should be checked for.
Edit: the reason for this question is partially from being in SO beta, however please refrain from SO beta discussion, SO beta got me thinking about my own site and good thing too. This is meant to be a checklist for things that I, you, or someone else hasn't thought of before.
A:
Try and break your own site before someone else does. Your web site is basically a publicly accessible API that allows access to a database and other backend systems. Test the URLs as if they were any other API. I like to start by cataloging all URLs that have some sort of permanent effect on the state of the system - this is easy if you are doing Ruby on Rails development or trying to follow a RESTful design pattern. For each of those URLs, try running the GET, POST, PUT or DELETE HTTP methods with different parameters so that you can ensure that you're only giving access to what you want to give access to.
This of course is in addition to obvious: Functional testing, Load Testing, SQL Injection, XSS etc.
A:
Turn off javascript and make sure your site can still be navigated.
Even if you want to ignore the small but significant number of people who have it disabled, this will impact search engines as well.
A:
What do friendly bots see (eg: Google); check using Google Webmaster Tools;
A:
YSlow can give you a quick analysis of different metrics.
A:
Regarding tools for running functional tests of web pages, I've found Selenium IDE to be useful.
The Firefox (version 2 only compatible at the moment) plug-in lets you capture almost all web events, and save them and replay them in the same browser.
In conjunction with another Firefox add-on, Firebug (https://addons.mozilla.org/en-US/firefox/addon/1843), you can create some very powerful tests.
If you want to set up Selenium Remote Control
you can then convert the Selenium IDE tests into nUnit tests, which you can run automatically.
I use cruise control and run these web tests as part of a daily build.
The nice thing about using Selenium remote control is that it can run the same functional tests on multiple browsers and operating systems, something that you can't do with the IDE.
Although the web tests will take ages to run, there is a version of Selenium called Selenium Grid that lets you use any old hardware you have spare to run the tests in parallel as part of a computing grid. Not tried this myself, but it sounds interesting.
All of the above is open source and free which helped me convince management to use it :-)
A:
For checking the cross browser and cross platform look of your site, browsershots.org is maybe the best free tool that can save a lot of time and costs.
A:
There are separate stages to this one.
Firstly there's the technical testing, where you check all technical functionality:
SQL injections
Cross-site Scripting (XSS)
load times
stress levels
Then there's the phase where you have someone completely computer-illiterate sit down and ask them to find something. Not only does it show you where there are flaws in your navigational logic (I find that developers look upon things way differently than 'other people') but they're also guaranteed to find some way to break your site.
|
Checklist for testing a new site
|
What are the most common things to test in a new site?
For instance to prevent exploits by bots, malicious users, massive load, etc.?
And just as importantly, what tools and approaches should you use?
(some stress test tools are really expensive/hard to use, do you write your own? etc)
Common exploits that should be checked for.
Edit: the reason for this question is partially from being in SO beta, however please refrain from SO beta discussion, SO beta got me thinking about my own site and good thing too. This is meant to be a checklist for things that I, you, or someone else hasn't thought of before.
|
[
"Try and break your own site before someone else does. Your web site is basically a publicly accessible API that allows access to a database and other backend systems. Test the URLs as if they were any other API. I like to start by cataloging all URLs that have some sort of permenant affect on the state of the system - this is easy if you are doing Ruby on Rails development or trying to follow a RESTful design pattern. For each of those URLs, try running a GET, POST, PUT or DELETE HTTP methods with different parameters so that you can ensure that you're only giving access to what you want to give access to. \nThis of course is in addition to obvious: Functional testing, Load Testing, SQL Injection, XSS etc.\n",
"Turn off javascript and make sure your site can still be navigated.\nEven if you want to ignore the small but significant number of people who have it disabled, this will impact search engines as well.\n",
"\nWhat do friendly bots see (eg: Google); check using Google Webmaster Tools;\n\n",
"YSlow can give you a quick analysis of different metrics. \n",
"Regarding tools for running functional tests of a web pages, I've found that Selenium IDE to be useful.\nThe Firefox (version 2 only compatible at the moment) plug in lets your capture almost all web events, and save them and replay them in the same browser.\nIn conjunction with another Firefox https://addons.mozilla.org/en-US/firefox/addon/1843\"> Firebug\nyou can create some very powerful tests.\nIf you want to set up Selenium Remote Control\nyou can then convert the Selenium IDE tests into nUnit tests, which you can run automatically.\nI use cruise control and run these web tests as part of a daily build.\nThe nice thing about using Selenium remote control is that it can run the same functional tests on multiple browsers and operating systems, something that you can't do with the IDE.\nAlthough the web tests will take ages to run, there is an version of Selenium called Selenium Grid that lets you use any old hardware you have spare to run the tests in parallel as part of a computing grid. Not tried this myself, but it sounds interesting.\nAll of the above is open source and free which helped me convince management to use if :-)\n",
"For checking the cross browser and cross platform look of your site, browershots.org is maybe the best free tool that can safe a lot of time and costs.\n",
"There's seperate stages for this one.\nFirstly there's the technical testing, where you check all technical functionality:\n\nSQL injections\nCross-site Scripting (XSS)\nload times\nstress levels\n\nThen there's the phase where you have someone completely computer-illiterate sit down and ask them to find something. Not only does it show you where there's flaws in your navigational logic (I find that developers look upon things way differently than 'other people') but they're also guaranteed to find some way to break your site.\n"
] |
[
4,
2,
1,
1,
1,
1,
0
] |
[] |
[] |
[
"testing"
] |
stackoverflow_0000023016_testing.txt
|
Q:
Can operator>> read an int hex AND decimal?
Can I persuade operator>> in C++ to read both a hex value AND a decimal value? The following program demonstrates how reading hex goes wrong. I'd like the same istringstream to be able to read both hex and decimal.
#include <iostream>
#include <sstream>
int main(int argc, char** argv)
{
int result = 0;
// std::istringstream is("5"); // this works
std::istringstream is("0x5"); // this fails
while ( is.good() ) {
if ( is.peek() != EOF )
is >> result;
else
break;
}
if ( is.fail() )
std::cout << "failed to read string" << std::endl;
else
std::cout << "successfully read string" << std::endl;
std::cout << "result: " << result << std::endl;
}
A:
You need to tell C++ what your base is going to be.
Want to parse a hex number? Change your "is >> result" line to:
is >> std::hex >> result;
Putting a std::dec indicates decimal numbers, std::oct indicates octal.
A:
Use std::setbase(0) which enables prefix dependent parsing. It will be able to parse 10 (dec) as 10 decimal, 0x10 (hex) as 16 decimal and 010 (octal) as 8 decimal.
#include <iomanip>
is >> std::setbase(0) >> result;
|
Can operator>> read an int hex AND decimal?
|
Can I persuade operator>> in C++ to read both a hex value AND a decimal value? The following program demonstrates how reading hex goes wrong. I'd like the same istringstream to be able to read both hex and decimal.
#include <iostream>
#include <sstream>
int main(int argc, char** argv)
{
int result = 0;
// std::istringstream is("5"); // this works
std::istringstream is("0x5"); // this fails
while ( is.good() ) {
if ( is.peek() != EOF )
is >> result;
else
break;
}
if ( is.fail() )
std::cout << "failed to read string" << std::endl;
else
std::cout << "successfully read string" << std::endl;
std::cout << "result: " << result << std::endl;
}
|
[
"You need to tell C++ what your base is going to be.\nWant to parse a hex number? Change your \"is >> result\" line to:\nis >> std::hex >> result;\n\nPutting a std::dec indicates decimal numbers, std::oct indicates octal.\n",
"Use std::setbase(0) which enables prefix dependent parsing. It will be able to parse 10 (dec) as 10 decimal, 0x10 (hex) as 16 decimal and 010 (octal) as 8 decimal.\n#include <iomanip>\nis >> std::setbase(0) >> result;\n\n"
] |
[
12,
12
] |
[
"0x is C/C++ specific prefix. A hex number is just digits like a decimal one.\nYou'll need to check for presence of those characters then parse appropriately.\n"
] |
[
-2
] |
[
"c++",
"hex",
"istringstream"
] |
stackoverflow_0000077266_c++_hex_istringstream.txt
|
Q:
Virtual functions in constructors, why do languages differ?
In C++ when a virtual function is called from within a constructor it doesn't behave like a virtual function.
I think everyone who encountered this behavior for the first time was surprised but on second thought it made sense:
As long as the derived constructor has not been executed the object is not yet a derived instance.
So how can a derived function be called? The preconditions haven't had the chance to be set up. Example:
class base {
public:
base()
{
std::cout << "foo is " << foo() << std::endl;
}
virtual int foo() { return 42; }
};
class derived : public base {
int* ptr_;
public:
derived(int i) : ptr_(new int(i*i)) { }
// The following cannot be called before derived::derived due to how C++ behaves,
// if it was possible... Kaboom!
virtual int foo() { return *ptr_; }
};
The situation is exactly the same for Java and .NET, yet they chose to go the other way. Is the principle of least surprise possibly the only reason?
Which do you think is the correct choice?
A:
There's a fundamental difference in how the languages define an object's life time. In Java and .Net the object members are zero/null initialized before any constructor is run, and it is at this point that the object life time begins. So when you enter the constructor you've already got an initialized object.
In C++ the object life time only begins when the constructor finishes (although member variables and base classes are fully constructed before it starts). This explains the behaviour when virtual functions are called and also why the destructor isn't run if there's an exception in the constructor's body.
The problem with the Java/.Net definition of object lifetime is that it's harder to make sure the object always meets its invariant without having to put in special cases for when the object is initialized but the constructor hasn't run. The problem with the C++ definition is that you have this odd period where the object is in limbo and not fully constructed.
A:
Both ways can lead to unexpected results. Your best bet is to not call a virtual function in your constructor at all.
The C++ way I think makes more sense, but leads to expectation problems when someone reviews your code. If you are aware of this situation, you should purposely not put your code in this situation for later debugging's sake.
A:
Virtual functions in constructors, why do languages differ?
Because there's no one good behaviour. I find the C++ behaviour makes more sense (since base class c-tors are called first, it stands to reason that they should call base class virtual functions--after all, the derived class c-tor hasn't run yet, so it may not have set up the right preconditions for the derived class virtual function).
But sometimes, where I want to use the virtual functions to initialize state (so it doesn't matter that they're being called with the state uninitialized) the C#/Java behaviour is nicer.
A:
I think C++ offers the best semantics in terms of having the 'most correct' behavior ... however it is more work for the compiler and the code is definitely non-intuitive to someone reading it later.
With the .NET approach the function must be very limited not to rely on any derived object state.
A:
Delphi makes good use of virtual constructors in the VCL GUI framework:
type
TComponent = class
public
constructor Create(AOwner: TComponent); virtual; // virtual constructor
end;
TMyEdit = class(TComponent)
public
constructor Create(AOwner: TComponent); override; // override virtual constructor
end;
TMyButton = class(TComponent)
public
constructor Create(AOwner: TComponent); override; // override virtual constructor
end;
TComponentClass = class of TComponent;
function CreateAComponent(ComponentClass: TComponentClass; AOwner: TComponent): TComponent;
begin
Result := ComponentClass.Create(AOwner);
end;
var
MyEdit: TMyEdit;
MyButton: TMyButton;
begin
MyEdit := CreateAComponent(TMyEdit, Form) as TMyEdit;
MyButton := CreateAComponent(TMyButton, Form) as TMyButton;
end;
A:
I have found the C++ behavior very annoying. You cannot write virtual functions to, for instance, return the desired size of the object, and have the default constructor initialize each item. For instance it would be nice to do:
BaseClass() {
for (int i=0; i<virtualSize(); i++)
initialize_stuff_for_index(i);
}
Then again the advantage of C++ behavior is that it discourages constructors like the above from being written.
I don't think the problem of calling methods that assume the constructor has been finished is a good excuse for C++. If this really was a problem then the constructor would not be allowed to call any methods, since the same problem can apply to methods for the base class.
Another point against C++ is that the behavior is much less efficient. Although the constructor knows directly what it calls, the vtab pointer has to be changed for every single class from base to final, because the constructor might call other methods that will call virtual functions. From my experience this wastes far more time than is saved by making virtual functions calls in the constructor more efficient.
Far more annoying is that this is also true of destructors. If you write a virtual cleanup() function, and the base class destructor does cleanup(), it certainly does not do what you expect.
This and the fact that C++ calls destructors on static objects on exit have really pissed me off for a long time.
|
Virtual functions in constructors, why do languages differ?
|
In C++ when a virtual function is called from within a constructor it doesn't behave like a virtual function.
I think everyone who encountered this behavior for the first time was surprised but on second thought it made sense:
As long as the derived constructor has not been executed the object is not yet a derived instance.
So how can a derived function be called? The preconditions haven't had the chance to be set up. Example:
class base {
public:
base()
{
std::cout << "foo is " << foo() << std::endl;
}
virtual int foo() { return 42; }
};
class derived : public base {
int* ptr_;
public:
derived(int i) : ptr_(new int(i*i)) { }
// The following cannot be called before derived::derived due to how C++ behaves,
// if it was possible... Kaboom!
virtual int foo() { return *ptr_; }
};
The situation is exactly the same for Java and .NET, yet they chose to go the other way. Is the principle of least surprise possibly the only reason?
Which do you think is the correct choice?
|
[
"There's a fundamental difference in how the languages define an object's life time. In Java and .Net the object members are zero/null initialized before any constructor is run and is at this point that the object life time begins. So when you enter the constructor you've already got an initialized object.\nIn C++ the object life time only begins when the constructor finishes (although member variables and base classes are fully constructed before it starts). This explains the behaviour when virtual functions are called and also why the destructor isn't run if there's an exception in the constructor's body.\nThe problem with the Java/.Net definition of object lifetime is that it's harder to make sure the object always meets its invariant without having to put in special cases for when the object is initialized but the constructor hasn't run. The problem with the C++ definition is that you have this odd period where the object is in limbo and not fully constructed.\n",
"Both ways can lead to unexpected results. Your best bet is to not call a virtual function in your constructor at all. \nThe C++ way I think makes more sense, but leads to expectation problems when someone reviews your code. If you are aware of this situation, you should purposely not put your code in this situation for later debugging's sake.\n",
"\nVirtual functions in constructors, why do languages differ?\n\nBecause there's no one good behaviour. I find the C++ behaviour makes more sense (since base class c-tors are called first, it stands to reason that they should call base class virtual functions--after all, the derived class c-tor hasn't run yet, so it may not have set up the right preconditions for the derived class virtual function).\nBut sometimes, where I want to use the virtual functions to initialize state (so it doesn't matter that they're being called with the state uninitialized) the C#/Java behaviour is nicer.\n",
"I think C++ offers the best semantics in terms of having the 'most correct' behavior ... however it is more work for the compiler and the code is definitiely non-intuitive to someone reading it later.\nWith the .NET approach the function must be very limited not to rely on any derived object state.\n",
"Delphi makes good use of virtual constructors in the VCL GUI framework: \ntype\n TComponent = class\n public\n constructor Create(AOwner: TComponent); virtual; // virtual constructor\n end;\n\n TMyEdit = class(TComponent)\n public\n constructor Create(AOwner: TComponent); override; // override virtual constructor\n end;\n\n TMyButton = class(TComponent)\n public\n constructor Create(AOwner: TComponent); override; // override virtual constructor\n end;\n\n TComponentClass = class of TComponent;\n\nfunction CreateAComponent(ComponentClass: TComponentClass; AOwner: TComponent): TComponent;\nbegin\n Result := ComponentClass.Create(AOwner);\nend;\n\nvar\n MyEdit: TMyEdit;\n MyButton: TMyButton;\nbegin\n MyEdit := CreateAComponent(TMyEdit, Form) as TMyEdit;\n MyButton := CreateAComponent(TMyButton, Form) as TMyButton;\nend;\n\n",
"I have found the C++ behavior very annoying. You cannot write virtual functions to, for instance, return the desired size of the object, and have the default constructor initialize each item. For instance it would be nice to do:\nBaseClass() {\n for (int i=0; i<virtualSize(); i++)\n initialize_stuff_for_index(i);\n}\nThen again the advantage of C++ behavior is that it discourages constuctors like the above from being written.\nI don't think the problem of calling methods that assume the constructor has been finished is a good excuse for C++. If this really was a problem then the constructor would not be allowed to call any methods, since the same problem can apply to methods for the base class.\nAnother point against C++ is that the behavior is much less efficient. Although the constructor knows directly what it calls, the vtab pointer has to be changed for every single class from base to final, because the constructor might call other methods that will call virtual functions. From my experience this wastes far more time than is saved by making virtual functions calls in the constructor more efficient.\nFar more annoying is that this is also true of destructors. If you write a virtual cleanup() function, and the base class destructor does cleanup(), it certainly does not do what you expect.\nThis and the fact that C++ calls destructors on static objects on exit have really pissed me off for a long time.\n"
] |
[
11,
7,
2,
1,
0,
0
] |
[] |
[] |
[
".net",
"c++",
"java",
"language_agnostic"
] |
stackoverflow_0000036832_.net_c++_java_language_agnostic.txt
|
Q:
PostgreSQL 8.3 privileges not updated - wrong usage?
I'm having trouble granting privileges to another user in PostgreSQL 8.3. While the GRANT command gives me no error, the privileges do not show up. Do I need to "flush" them?
sirprize=# CREATE DATABASE testdb;
CREATE DATABASE
sirprize=# GRANT ALL PRIVILEGES ON DATABASE testdb TO testuser;
GRANT
sirprize=# \c testdb
You are now connected to database "testdb".
testdb=# \z
Access privileges for database "testdb"
Schema | Name | Type | Access privileges
--------+------+------+-------------------
(0 rows)
testdb=#
A:
\z Shows your table, view, and sequence permissions, for the objects contained within the Database. It does not show permissions on the database itself. If you create a table or some other object within 'testdb', it will then show up in \z's output.
You can see which Databases exist on your system with \l (or \l+ for a bit more info).
See section 9.22. of the PostgreSQL 8.3 manual for information about how to programatically determine which permissions exist for a user on a given database.
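As a quick illustration of the difference (a minimal sketch; the table name t is just an example), create an object inside testdb and run \z again:
testdb=# CREATE TABLE t (id integer);
CREATE TABLE
testdb=# GRANT SELECT ON t TO testuser;
GRANT
testdb=# \z t
The new table now shows up in the access-privileges listing, while the database-level GRANT from the question can still only be inspected through the catalogs (for example the datacl column of pg_database) or with has_database_privilege().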
|
PostgreSQL 8.3 privileges not updated - wrong usage?
|
I'm having trouble granting privileges to another user in PostgreSQL 8.3. While the GRANT command gives me no error, the privileges do not show up. Do I need to "flush" them?
sirprize=# CREATE DATABASE testdb;
CREATE DATABASE
sirprize=# GRANT ALL PRIVILEGES ON DATABASE testdb TO testuser;
GRANT
sirprize=# \c testdb
You are now connected to database "testdb".
testdb=# \z
Access privileges for database "testdb"
Schema | Name | Type | Access privileges
--------+------+------+-------------------
(0 rows)
testdb=#
|
[
"\\z Shows your table, view, and sequence permissions, for the objects contained within the Database. It does not show permissions on the database itself. If you create a table or some other object within 'testdb', it will then show up in \\z's output.\nYou can see which Databases exist on your system with \\l (or \\l+ for a bit more info).\nSee section 9.22. of the PostgreSQL 8.3 manual for information about how to programatically determine which permissions exist for a user on a given database.\n"
] |
[
12
] |
[] |
[] |
[
"authentication",
"postgresql",
"privileges",
"roles"
] |
stackoverflow_0000075696_authentication_postgresql_privileges_roles.txt
|
Q:
How to instantiate a Java array given an array type at runtime?
In the Java collections framework, the Collection interface declares the following method:
<T> T[] toArray(T[] a)
Returns an array containing all of the elements in this collection; the runtime type of the returned array is that of the specified array. If the collection fits in the specified array, it is returned therein. Otherwise, a new array is allocated with the runtime type of the specified array and the size of this collection.
If you wanted to implement this method, how would you create an array of the type of a, known only at runtime?
A:
Use the static method
java.lang.reflect.Array.newInstance(Class<?> componentType, int length)
A tutorial on its use can be found here:
http://java.sun.com/docs/books/tutorial/reflect/special/arrayInstance.html
A:
By looking at how ArrayList does it:
public <T> T[] toArray(T[] a) {
if (a.length < size)
a = (T[])java.lang.reflect.Array.newInstance(a.getClass().getComponentType(), size);
System.arraycopy(elementData, 0, a, 0, size);
if (a.length > size)
a[size] = null;
return a;
}
A:
Array.newInstance(Class componentType, int length)
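For completeness, here is a small self-contained usage sketch of the reflective approach (class and variable names are illustrative only):
import java.lang.reflect.Array;
import java.util.Arrays;

public class NewArrayDemo {
    // Creates an array with the same component type as the example array.
    static <T> T[] newArrayLike(T[] example, int length) {
        @SuppressWarnings("unchecked")
        T[] result = (T[]) Array.newInstance(example.getClass().getComponentType(), length);
        return result;
    }

    public static void main(String[] args) {
        String[] fresh = newArrayLike(new String[0], 3);
        fresh[0] = "a";
        System.out.println(Arrays.toString(fresh)); // prints [a, null, null]
    }
}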
|
How to instantiate a Java array given an array type at runtime?
|
In the Java collections framework, the Collection interface declares the following method:
<T> T[] toArray(T[] a)
Returns an array containing all of the elements in this collection; the runtime type of the returned array is that of the specified array. If the collection fits in the specified array, it is returned therein. Otherwise, a new array is allocated with the runtime type of the specified array and the size of this collection.
If you wanted to implement this method, how would you create an array of the type of a, known only at runtime?
|
[
"Use the static method\njava.lang.reflect.Array.newInstance(Class<?> componentType, int length)\n\nA tutorial on its use can be found here:\nhttp://java.sun.com/docs/books/tutorial/reflect/special/arrayInstance.html\n",
"By looking at how ArrayList does it:\npublic <T> T[] toArray(T[] a) {\n if (a.length < size)\n a = (T[])java.lang.reflect.Array.newInstance(a.getClass().getComponentType(), size);\n System.arraycopy(elementData, 0, a, 0, size);\n if (a.length > size)\n a[size] = null;\n return a;\n}\n\n",
"Array.newInstance(Class componentType, int length)\n\n"
] |
[
34,
20,
3
] |
[
"To create a new array of a generic type (which is only known at runtime), you have to create an array of Objects and simply cast it to the generic type and then use it as such. This is a limitation of the generics implementation of Java (erasure).\nT[] newArray = (T[]) new Object[X]; // where X is the number of elements you want.\n\nThe function then takes the array given (a) and uses it (checking it's size beforehand) or creates a new one.\n"
] |
[
-1
] |
[
"arrays",
"collections",
"java"
] |
stackoverflow_0000077387_arrays_collections_java.txt
|
Q:
Is it possible to send WM_QUERYENDSESSION messages to a window in a different process?
I want to debug a windows C++ application I've written to see why it isn't responding to WM_QUERYENDSESSION how I expect it to. Clearly it's a little tricky to do this by just shutting the system down. Is there any utility or code which I can use to send a fake WM_QUERYENDSESSION to my application windows myself?
A:
I've used the Win32::GuiTest Perl module to do this kind of thing in the past.
A:
The Windows API SendMessage can be used to do this.
http://msdn.microsoft.com/en-us/library/ms644950(VS.85).aspx
Is it possible it's not responding because some other running process has responded with a zero (making the system wait on it)?
A:
Yes, of course it's possible. I faced a similar issue some months ago where some (unknown, but probably mine) app was preventing shutdown, so I wrote some quick code that used EnumWindows to enumerate all the top-level windows, sent each one a WM_QUERYENDSESSION message, noted what the return value from SendMessage was and stopped the enumeration if any returned FALSE. It took about ten minutes in C++/MFC. This was the guts of it:
void CQes_testDlg::OnBtnTest()
{
// enumerate all the top-level windows.
m_ctrl_ListMsgs.ResetContent();
EnumWindows (EnumProc, 0);
}
BOOL CALLBACK EnumProc (HWND hTarget, LPARAM lParam)
{
CString csTitle;
CString csMsg;
CWnd * pWnd = CWnd::FromHandle (hTarget);
BOOL bRetVal = TRUE;
DWORD dwPID;
if (pWnd)
{
pWnd->GetWindowText (csTitle);
if (csTitle.GetLength() == 0)
{
GetWindowThreadProcessId (hTarget, &dwPID);
csTitle.Format ("<PID=%d>", dwPID);
}
if (pWnd->SendMessage (WM_QUERYENDSESSION, 0, ENDSESSION_LOGOFF))
{
csMsg.Format ("window 0x%X (%s) returned TRUE", hTarget, csTitle);
}
else
{
csMsg.Format ("window 0x%X (%s) returned FALSE", hTarget, csTitle);
bRetVal = FALSE;
}
mg_pThis->m_ctrl_ListMsgs.AddString (csMsg);
}
else
{
csMsg.Format ("Unable to resolve HWND 0x%X to a CWnd", hTarget);
mg_pThis->m_ctrl_ListMsgs.AddString (csMsg);
}
return bRetVal;
}
mg_pThis was just a local copy of the dialog's this pointer, so the helper callback could access it. I told you it was quick and dirty :-)
A:
Yes. If you can get the window handle (maybe using FindWindow()), you can send/post any message to it as long as the WPARAM & LPARAM aren't pointers.
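A minimal sketch of that approach (the window title is a placeholder; substitute your application's actual title):
#include <windows.h>
#include <cstdio>

int main()
{
    // Find the target top-level window by its title (placeholder title).
    HWND hwnd = FindWindow(NULL, TEXT("My Application Window"));
    if (hwnd == NULL) {
        printf("Window not found\n");
        return 1;
    }
    // Ask the window whether it would allow the session to end.
    LRESULT ok = SendMessage(hwnd, WM_QUERYENDSESSION, 0, ENDSESSION_LOGOFF);
    printf("WM_QUERYENDSESSION returned %ld\n", (long)ok);
    return 0;
}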
|
Is it possible to send WM_QUERYENDSESSION messages to a window in a different process?
|
I want to debug a windows C++ application I've written to see why it isn't responding to WM_QUERYENDSESSION how I expect it to. Clearly it's a little tricky to do this by just shutting the system down. Is there any utility or code which I can use to send a fake WM_QUERYENDSESSION to my application windows myself?
|
[
"I've used the Win32::GuiTest Perl module to do this kind of thing in the past.\n",
"The Windows API SendMessage can be used to do this.\nhttp://msdn.microsoft.com/en-us/library/ms644950(VS.85).aspx\nIS ti possible it's not responding because some other running process has responded with a zero (making the system wait on it.)\n",
"Yes of course, it possible. I faced a similar issue some months ago where some (unknown, but probably mine) app was preventing shutdown, so I wrote some quick code that used EnumWindows to enumerate all the top level windows, sent each one a WM_QUERYENDSESSION message, noted what the return value from SendMessage was and stopped the enumeration if anyone returned FALSE. Took about ten minutes in C++/MFC. This was the guts of it:\nvoid CQes_testDlg::OnBtnTest() \n{ \n // enumerate all the top-level windows. \n m_ctrl_ListMsgs.ResetContent(); \n EnumWindows (EnumProc, 0); \n} \n\n\nBOOL CALLBACK EnumProc (HWND hTarget, LPARAM lParam) \n{ \n CString csTitle; \n CString csMsg; \n CWnd * pWnd = CWnd::FromHandle (hTarget); \n BOOL bRetVal = TRUE; \n DWORD dwPID; \n\n if (pWnd) \n { \n pWnd->GetWindowText (csTitle); \n if (csTitle.GetLength() == 0) \n { \n GetWindowThreadProcessId (hTarget, &dwPID); \n csTitle.Format (\"<PID=%d>\", dwPID); \n } \n\n if (pWnd->SendMessage (WM_QUERYENDSESSION, 0, ENDSESSION_LOGOFF)) \n { \n csMsg.Format (\"window 0x%X (%s) returned TRUE\", hTarget, csTitle); \n } \n else \n { \n csMsg.Format (\"window 0x%X (%s) returned FALSE\", hTarget, csTitle); \n bRetVal = FALSE; \n } \n\n mg_pThis->m_ctrl_ListMsgs.AddString (csMsg);\n }\n else \n { \n csMsg.Format (\"Unable to resolve HWND 0x%X to a CWnd\", hTarget); \n mg_pThis->m_ctrl_ListMsgs.AddString (csMsg); \n } \n return bRetVal; \n}\n\nmg_pThis was just a local copy of the dialog's this pointer, so the helper callback could access it. I told you it was quick and dirty :-)\n",
"Yes. If you can get the window handle (maybe using FindWindow()), you can send/post any message to it as long as the WPARAM & LPARAM aren't pointers.\n"
] |
[
1,
1,
1,
0
] |
[] |
[] |
[
"message",
"winapi"
] |
stackoverflow_0000077133_message_winapi.txt
|
Q:
How to take screenshot in Mac OS X using Cocoa or C++
How can I take a screenshot of the desktop area programmatically in Mac OS X?
A:
Two interesting options I have seen, but yet to use professionally, are the screencapture utility and a MacFuse demo.
The screencapture utility has been around since 10.2, according to the man page, and could be linked to a Cocoa application by use of NSTask.
The MacFuse demo worked by creating a new screenshot each time a folder was opened, or something like that. The idea being you could write a quick script to access the image when you needed it, without having to have the script actually run on that machine.
But seriously, Apple has some other sample code called "Son of Grab" which uses the new CGWindow API which is pretty awesome.
http://developer.apple.com/samplecode/SonOfGrab/
A:
One way of going about doing this would be to use NSTask in conjuction with the 'screencapture' command line command.
For example:
NSTask *theProcess;
theProcess = [[NSTask alloc] init];
[theProcess setLaunchPath:@"/usr/sbin/screencapture"];
// use arguments to set save location
[theProcess setArguments:[NSArray arrayWithObject:@"blahblah"]];
[theProcess launch];
Then you could open up the file wherever you told it to be saved, process it, and then delete it as needed. Obviously a stopgap, but it would work.
A:
If you're fine with Leopard compatibility, there's a very powerful new CGWindow API that will let you grab screen shots, window shots, or composites of any range of window layers.
http://developer.apple.com/samplecode/SonOfGrab/
A:
Qt includes an example screenshot app in examples\desktop\screenshot. Qt works on a range of platforms, including MacOSX.
http://trolltech.com/products/qt/
A:
The following might be helpful if you are attempting to accomplish this with C++ or python. Also, this would be even more helpful in the case that you want your programmatic method to be cross-platform portable. (Windows, Linux, Mac osx, and even beyond)
An earlier response mentions QT.
In the same way that QT will allow you to capture and save a screenshot, so does another "competing" framework, namely wxWidgets. wxWidgets is a C++ framework, but it also provides python bindings via wxPython.
To read more, use the following link, search the book for wxScreenDC and choose "Page 139" from the list of pages that match the search:
http://books.google.com/books?id=CyMsvtgnq0QC&vq="accessing+the+screen+with+wxScreendc"
|
How to take screenshot in Mac OS X using Cocoa or C++
|
How can I take a screenshot of the desktop area programmatically in Mac OS X?
|
[
"Two interesting options I have seen, but yet to use professionally, are the screencapture utility and a MacFuse demo.\nThe screencapture utility has been around since 10.2, according to the man page, and could be linked to a Cocoa application by use of NSTask.\nThe MacFuse demo worked by creating a new screenshot each time a folder was opened, or something like that. The idea being you could write a quick script to access the image when you needed it, without having to have the script actually run on that machine.\nBut seriously, Apple has some other sample code called \"Son of Grab\" which uses the new CGWindow API which is pretty awesome. \nhttp://developer.apple.com/samplecode/SonOfGrab/\n",
"One way of going about doing this would be to use NSTask in conjuction with the 'screencapture' command line command.\nFor example:\nNSTask *theProcess;\ntheProcess = [[NSTask alloc] init];\n\n[theProcess setLaunchPath:@\"/usr/sbin/screencapture\"];\n// use arguments to set save location\n[theProcess setArguments:@\"blahblah\"];\n[theProcess launch];\n\nThe you could open up the file wherever you told it to be saved, process it, and then delete it as needed. Obviously stopgap, but it would work.\n",
"If you're fine with Leopard compatibility, there's a very powerful new CGWindow API that will let you grab screen shots, window shots, or composites of any range of window layers. \nhttp://developer.apple.com/samplecode/SonOfGrab/\n",
"Qt includes an example screenshot app in examples\\desktop\\screenshot. Qt works on a range of platforms, including MacOSX.\nhttp://trolltech.com/products/qt/\n",
"The following might be helpful if you are attempting to accomplish this with C++ or python. Also, this would be even more helpful in the case that you want your programmatic method to be cross-platform portable. (Windows, Linux, Mac osx, and even beyond)\nAn earlier response mentions QT.\nIn the same way that QT will allow you to capture and save a screenshot, so does another \"competing\" framework, namely wxWidgets. wxWidgets is a C++ framework, but it also provides python bindings via wxPython.\nTo read more, use the following link, search the book for wxScreenDC and choose \"Page 139\" from the list of pages that match the search:\nhttp://books.google.com/books?id=CyMsvtgnq0QC&vq=\"accessing+the+screen+with+wxScreendc\"\n"
] |
[
14,
11,
10,
4,
1
] |
[
"If you consider REALbasic, this is extremely easy to do with RB and the MBS Plugins. I've just written an application that does timed screenshots using RB and the MBS Plugins. You can read about it here: http://tektalkin.blogspot.com/2008/08/screenaudit-for-mac-osx.html\n"
] |
[
-2
] |
[
"c++",
"cocoa",
"macos"
] |
stackoverflow_0000038934_c++_cocoa_macos.txt
|
Q:
if statement condition optimisation
I have an if statement with two conditions (separated by an OR operator), one of the conditions covers +70% of situations and takes far less time to process/execute than the second condition, so in the interests of speed I only want the second condition to be processed if the first condition evaluates to false.
If I order the conditions so that the first condition (the quicker one) appears in the if statement first, then on the occasions where this condition is met and evaluates true, is the second condition even processed?
if ( (condition1) | (condition2) ){
// do this
}
or would I need to nest two if statements to only check the second condition if the first evaluates to false?
if (condition1){
// do this
}else if (condition2){
// do this
}
I am working in PHP, however, I assume that this may be language-agnostic.
A:
For C, C++, C#, Java and other .NET languages boolean expressions are optimised so that as soon as enough is known nothing else is evaluated.
An old trick for doing obfuscated code was to use this to create if statements, such as:
a || b();
if "a" is true, "b()" would never be evaluated, so we can rewrite it into:
if(!a)
b();
and similarly:
a && b();
would become
if(a)
b();
Please note that this is only valid for the || and && operators. The two operators | and & are bitwise or and bitwise and, respectively, and are therefore not "optimised".
EDIT:
As mentioned by others, trying to optimise code using short circuit logic is very rarely well spent time.
First go for clarity, both because it is easier to read and understand. Also, if you try to be too clever a simple reordering of the terms could lead to wildly different behaviour without any apparent reason.
Second, go for optimisation, but only after timing and profiling. Way too many developer do premature optimisation without profiling. Most of the time it's completely useless.
A:
Pretty much every language does short-circuit evaluation, meaning the second condition is only evaluated if it's absolutely necessary. For this to work, most languages use the double pipe, ||, not the single one, |.
See http://en.wikipedia.org/wiki/Short-circuit_evaluation
A:
In C, C++ and Java, the statement:
if (condition1 | condition2) {
...
}
will evaluate both conditions every time and only be true if the entire expression is true.
The statement:
if (condition1 || condition2) {
...
}
will evaluate condition2 only if condition1 is false. The difference is significant if condition2 is a function or another expression with a side-effect.
There is, however, no difference between the || case and the if/else case.
A:
I've seen a lot of these types of questions lately--optimization to the nth degree.
I think it makes sense in certain circumstances:
Computing condition 2 is not a constant time operation
You are asking strictly for educational purposes--you want to know how the language works, not to save 3us.
In other cases, worrying about the "fastest" way to iterate or check a conditional is silly. Instead of writing tests which require millions of trials to see any recordable (but insignificant) difference, focus on clarity.
When someone else (could be you!) picks up this code in a month or a year, what's going to be most important is clarity.
In this case, your first example is shorter, clearer and doesn't require you to repeat yourself.
A:
According to this article PHP does short circuit evaluation, which means that if the first condition is met the second is not even evaluated.
It's quite easy to test also (from the article):
<?php
/* ch06ex07 – shows no output because of short circuit evaluation */
if (true || $intVal = 5) // short circuits after true
{
echo $intVal; // will be empty because the assignment never took place
}
?>
A:
The short-circuiting is not for optimization. Its main purpose is to avoid calling code that will not work, while still resulting in a readable test. Example:
if (i < array.size() && array[i]==foo) ...
Note that array[i] may very well get an access violation if i is out of range and crash the program. Thus this program is certainly depending on short-circuiting the evaluation!
I believe this is the reason for writing expressions this way far more often than optimization concerns.
A:
While using short-circuiting for the purposes of optimization is often overkill, there are certainly other compelling reasons to use it. One such example (in C++) is the following:
if( pObj != NULL && *pObj == "username" ) {
// Do something...
}
Here, short-circuiting is being relied upon to ensure that pObj has been allocated prior to dereferencing it. This is far more concise than having nested if statements.
A:
Since this is tagged language agnostic I'll chime in. For Perl at least, the first option is sufficient, I'm not familiar with PHP. It evaluates left to right and drops out as soon as the condition is met.
A:
In most languages with decent optimization the former will work just fine.
A:
The | is a bitwise operator in PHP. It does not mean $a OR $b, exactly. You'll want to use the double-pipe. And yes, as mentioned, PHP does short-circuit evaluation. In similar fashion, if the first condition of an && clause evaluates to false, PHP does not evaluate the rest of the clause, either.
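To make the difference concrete in PHP, here is a minimal sketch (the function names are made up for illustration):
<?php
function cheap()     { return true; }             // fast check, true most of the time
function expensive() { sleep(2); return true; }   // slow check

if (cheap() || expensive()) {   // expensive() is skipped when cheap() is true
    echo "short-circuited\n";
}

if (cheap() | expensive()) {    // bitwise |: both sides always run, so this sleeps 2 seconds
    echo "no short-circuit\n";
}
?>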
A:
VB.net has two wonderful expression called "OrElse" and "AndAlso"
OrElse will short circuit itself the first time it reaches a True evaluation and execute the code you desire.
If FirstName = "Luke" OrElse FirstName = "Darth" Then
Console.Writeline "Greetings Exalted One!"
End If
AndAlso will short circuit itself the first time it reaches a False evaluation and will not evaluate the code within the block.
If FirstName = "Luke" AndAlso LastName = "Skywalker" Then
Console.Writeline "You are the one and only."
End If
I find both of these helpful.
|
if statement condition optimisation
|
I have an if statement with two conditions (separated by an OR operator), one of the conditions covers +70% of situations and takes far less time to process/execute than the second condition, so in the interests of speed I only want the second condition to be processed if the first condition evaluates to false.
If I order the conditions so that the first condition (the quicker one) appears in the if statement first, then on the occasions where this condition is met and evaluates true, is the second condition even processed?
if ( (condition1) | (condition2) ){
// do this
}
or would I need to nest two if statements to only check the second condition if the first evaluates to false?
if (condition1){
// do this
}else if (condition2){
// do this
}
I am working in PHP, however, I assume that this may be language-agnostic.
|
[
"For C, C++, C#, Java and other .NET languages boolean expressions are optimised so that as soon as enough is known nothing else is evaluated.\nAn old trick for doing obfuscated code was to use this to create if statements, such as:\na || b();\n\nif \"a\" is true, \"b()\" would never be evaluated, so we can rewrite it into:\nif(!a)\n b();\n\nand similarly:\na && b();\n\nwould become\nif(a)\n b();\n\nPlease note that this is only valid for the || and && operator. The two operators | and & is bitwise or, and and, respectively, and are therefore not \"optimised\".\nEDIT:\nAs mentioned by others, trying to optimise code using short circuit logic is very rarely well spent time.\nFirst go for clarity, both because it is easier to read and understand. Also, if you try to be too clever a simple reordering of the terms could lead to wildly different behaviour without any apparent reason.\nSecond, go for optimisation, but only after timing and profiling. Way too many developer do premature optimisation without profiling. Most of the time it's completely useless.\n",
"Pretty much every language does a short circuit evaluation. Meaning the second condition is only evaluated if it's aboslutely necessary to. For this to work, most languages use the double pipe, ||, not the single one, |.\nSee http://en.wikipedia.org/wiki/Short-circuit_evaluation\n",
"In C, C++ and Java, the statement:\n\nif (condition1 | condition2) {\n ...\n}\n\nwill evaluate both conditions every time and only be true if the entire expression is true. \nThe statement:\n\nif (condition1 || condition2) {\n ...\n}\n\nwill evaluate condition2 only if condition1 is false. The difference is significant if condition2 is a function or another expression with a side-effect. \nThere is, however, no difference between the || case and the if/else case.\n",
"I've seen a lot of these types of questions lately--optimization to the nth degree.\nI think it makes sense in certain circumstances:\n\nComputing condition 2 is not a constant time operation\nYou are asking strictly for educational purposes--you want to know how the language works, not to save 3us.\n\nIn other cases, worrying about the \"fastest\" way to iterate or check a conditional is silly. Instead of writing tests which require millions of trials to see any recordable (but insignificant) difference, focus on clarity.\nWhen someone else (could be you!) picks up this code in a month or a year, what's going to be most important is clarity.\nIn this case, your first example is shorter, clearer and doesn't require you to repeat yourself.\n",
"According to this article PHP does short circuit evaluation, which means that if the first condition is met the second is not even evaluated.\nIt's quite easy to test also (from the article):\n<?php\n/* ch06ex07 – shows no output because of short circuit evaluation */\n\nif (true || $intVal = 5) // short circuits after true\n{\n\necho $intVal; // will be empty because the assignment never took place\n}\n\n?>\n\n",
"The short-circuiting is not for optimization. It's main purpose is to avoid calling code that will not work, yet result in a readable test. Example:\nif (i < array.size() && array[i]==foo) ...\n\nNote that array[i] may very well get an access violation if i is out of range and crash the program. Thus this program is certainly depending on short-circuiting the evaluation!\nI believe this is the reason for writing expressions this way far more often than optimization concerns.\n",
"While using short-circuiting for the purposes of optimization is often overkill, there are certainly other compelling reasons to use it. One such example (in C++) is the following:\nif( pObj != NULL && *pObj == \"username\" ) {\n // Do something...\n}\n\nHere, short-circuiting is being relied upon to ensure that pObj has been allocated prior to dereferencing it. This is far more concise than having nested if statements.\n",
"Since this is tagged language agnostic I'll chime in. For Perl at least, the first option is sufficient, I'm not familiar with PHP. It evaluates left to right and drops out as soon as the condition is met.\n",
"In most languages with decent optimization the former will work just fine.\n",
"The | is a bitwise operator in PHP. It does not mean $a OR $b, exactly. You'll want to use the double-pipe. And yes, as mentioned, PHP does short-circuit evaluation. In similar fashion, if the first condition of an && clause evaluates to false, PHP does not evaluate the rest of the clause, either.\n",
"VB.net has two wonderful expression called \"OrElse\" and \"AndAlso\"\nOrElse will short circuit itself the first time it reaches a True evaluation and execute the code you desire.\nIf FirstName = \"Luke\" OrElse FirstName = \"Darth\" Then\n Console.Writeline \"Greetings Exalted One!\"\nEnd If\n\nAndAlso will short circuit itself the first time it a False evaluation and not evaluate the code within the block.\nIf FirstName = \"Luke\" AndAlso LastName = \"Skywalker\" Then\n Console.Writeline \"You are the one and only.\"\nEnd If\n\nI find both of these helpful.\n"
] |
[
9,
3,
3,
2,
2,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"conditional",
"language_agnostic",
"php"
] |
stackoverflow_0000034938_conditional_language_agnostic_php.txt
|
Q:
What XML parser do you use for PHP?
I like the XMLReader class for its simplicity and speed. But I like the xml_parse associated functions as they better allow for error recovery. It would be nice if the XMLReader class would throw exceptions for things like invalid entity refs instead of just issuing a warning.
A:
I'd avoid SimpleXML if you can. Though it looks very tempting by getting to avoid a lot of "ugly" code, it's just what the name suggests: simple. For example, it can't handle this:
<p>
Here is <strong>a very simple</strong> XML document.
</p>
Bite the bullet and go to the DOM Functions. The power of it far outweighs the little bit extra complexity. If you're familiar at all with DOM manipulation in Javascript, you'll feel right at home with this library.
A:
SimpleXML seems to do a good job for me.
A:
SimpleXML and DOM work seamlessly together, so you can use the same XML interacting with it as SimpleXML or DOM.
For example:
$simplexml = simplexml_load_string("<xml></xml>");
$simplexml->simple = "it is simple.";
$domxml = dom_import_simplexml($simplexml);
$node = $domxml->ownerDocument->createElement("dom", "yes, with DOM too.");
$domxml->ownerDocument->firstChild->appendChild($node);
echo (string)$simplexml->dom;
You will get the result:
"yes, with DOM too."
Because when you import the object (either into simplexml or dom) it uses the same underlining PHP object by reference.
I figured this out when I was trying to correct some of the errors in SimpleXML by extending/wrapping the object.
See http://code.google.com/p/blibrary/source/browse/trunk/classes/bXml.class.inc for examples.
This is really good for small chunks of XML (-2MB), as DOM/SimpleXML pull the full document into memory with some additional overhead (think x2 or x3). For large XML chunks (+2MB) you'll want to use XMLReader/XMLWriter to parse SAX style, with low memory overhead. I've used 14MB+ documents successfully with XMLReader/XMLWriter.
A:
There are at least four options when using PHP5 to parse XML files. The best option depends on the complexity and size of the XML file.
There’s a very good 3-part article series titled ‘XML for PHP developers’ at IBM developerWorks.
“Parsing with the DOM, now fully compliant with the W3C standard, is a familiar option, and is your choice for complex but relatively small documents. SimpleXML is the way to go for basic and not-too-large XML documents, and XMLReader, easier and faster than SAX, is the stream parser of choice for large documents.”
A:
I mostly stick to SimpleXML, at least whenever PHP5 is available for me.
http://www.php.net/simplexml
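Since the question itself mentions XMLReader, here is a minimal pull-parsing sketch for comparison (the file name and element name are placeholders):
<?php
$reader = new XMLReader();
$reader->open('large-file.xml');   // placeholder file name

while ($reader->read()) {
    // react only to opening <item> elements (placeholder element name)
    if ($reader->nodeType === XMLReader::ELEMENT && $reader->name === 'item') {
        echo 'found <item> at depth ', $reader->depth, "\n";
    }
}
$reader->close();
?>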
|
What XML parser do you use for PHP?
|
I like the XMLReader class for its simplicity and speed. But I like the xml_parse associated functions as they better allow for error recovery. It would be nice if the XMLReader class would throw exceptions for things like invalid entity refs instead of just issuing a warning.
|
[
"I'd avoid SimpleXML if you can. Though it looks very tempting by getting to avoid a lot of \"ugly\" code, it's just what the name suggests: simple. For example, it can't handle this:\n<p>\n Here is <strong>a very simple</strong> XML document.\n</p>\n\nBite the bullet and go to the DOM Functions. The power of it far outweighs the little bit extra complexity. If you're familiar at all with DOM manipulation in Javascript, you'll feel right at home with this library.\n",
"SimpleXML seems to do a good job for me.\n",
"SimpleXML and DOM work seamlessly together, so you can use the same XML interacting with it as SimpleXML or DOM.\nFor example:\n$simplexml = simplexml_load_string(\"<xml></xml>\");\n$simplexml->simple = \"it is simple.\";\n\n$domxml = dom_import_simplexml($simplexml);\n$node = $domxml->ownerDocument->createElement(\"dom\", \"yes, with DOM too.\");\n$domxml->ownerDocument->firstChild->appendChild($node);\n\necho (string)$simplexml->dom;\n\nYou will get the result: \n\"yes, with DOM too.\"\n\nBecause when you import the object (either into simplexml or dom) it uses the same underlining PHP object by reference. \nI figured this out when I was trying to correct some of the errors in SimpleXML by extending/wrapping the object.\nSee http://code.google.com/p/blibrary/source/browse/trunk/classes/bXml.class.inc for examples.\nThis is really good for small chunks of XML (-2MB), as DOM/SimpleXML pull the full document into memory with some additional overhead (think x2 or x3). For large XML chunks (+2MB) you'll want to use XMLReader/XMLWriter to parse SAX style, with low memory overhead. I've used 14MB+ documents successfully with XMLReader/XMLWriter.\n",
"There are at least four options when using PHP5 to parse XML files. The best option depends on the complexity and size of the XML file.\nThere’s a very good 3-part article series titled ‘XML for PHP developers’ at IBM developerWorks.\n“Parsing with the DOM, now fully compliant with the W3C standard, is a familiar option, and is your choice for complex but relatively small documents. SimpleXML is the way to go for basic and not-too-large XML documents, and XMLReader, easier and faster than SAX, is the stream parser of choice for large documents.”\n",
"I mostly stick to SimpleXML, at least whenever PHP5 is available for me.\nhttp://www.php.net/simplexml\n"
] |
[
4,
3,
2,
1,
0
] |
[] |
[] |
[
"php",
"xml"
] |
stackoverflow_0000068565_php_xml.txt
|
Q:
Is there an integrated Eclipse plugin to debug Jython?
JyDT is a good Jython Eclipse plugin.
However, it doesn't allow Jython debugging in the Debug perspective.
Jython provides a command-line debugger (Pdb) but it operates outside Eclipse.
A:
Pydev has worked well for me.
|
Is there an integrated Eclipse plugin to debug Jython?
|
JyDT is a good Jython Eclipse plugin.
However, it doesn't allow Jython debugging in the Debug perspective.
Jython provides a command-line debugger (Pdb) but it operates outside Eclipse.
|
[
"Pydev has worked well for me.\n"
] |
[
1
] |
[] |
[] |
[
"debugging",
"eclipse_plugin",
"jython"
] |
stackoverflow_0000077587_debugging_eclipse_plugin_jython.txt
|
Q:
How do I secure a folder used to let users upload files?
I have a folder in my web server used for the users to upload photos using an ASP page.
Is it safe enough to give IUSR write permissions to the folder? Must I secure something else?
I am afraid of hackers bypassing the ASP page and uploading content directly to the folder.
I'm using ASP classic and IIS6 on Windows 2003 Server. The upload is through HTTP, not FTP.
Edit: Changing the question for clarity and changing my answers as comments.
A:
also, I would recommend not to let the users upload into a folder that's accessible from the web. Even the best MIME type detection may fail and you absolutely don't want users to upload, say, an executable disguised as a jpeg in a case where your MIME sniffing fails, but the one in IIS works correctly.
In the PHP world it's even worse, because an attacker could upload a malicious PHP script and later access it via the webserver.
Always, always store the uploaded files in a directory somewhere outside the document root and access them via some accessing script which does additional sanitizing (and at least explicitly sets an image/whatever MIME type).
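A rough illustration of such an accessing script in classic ASP (a sketch only; the folder path, query-string parameter and file-name checks are placeholders you would harden for your own setup):
<%
' getimage.asp - streams an uploaded photo from a folder outside the web root
Dim safeName, stream
safeName = Request.QueryString("name")

' very crude whitelist: reject anything that looks like a path
If safeName = "" Or InStr(safeName, "\") > 0 Or InStr(safeName, "/") > 0 Or InStr(safeName, "..") > 0 Then
    Response.Status = "404 Not Found"
    Response.End
End If

Response.ContentType = "image/jpeg"
Set stream = Server.CreateObject("ADODB.Stream")
stream.Type = 1                                     ' adTypeBinary
stream.Open
stream.LoadFromFile "D:\PhotoUploads\" & safeName   ' folder outside the web root
Response.BinaryWrite stream.Read
stream.Close
Set stream = Nothing
%>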
A:
How will the user upload the photos? If you are writing an ASP page to accept the uploaded files then only the user that IIS runs as will need write permission to the folder, since IIS will be doing the file I/O. Your ASP page should check the file size and have some form of authentication to prevent hackers from filling your hard drive.
If you are setting up an FTP server or some other file transfer method, then the answer will be specific to the method you choose.
A:
You'll have to grant write permissions, but you can check the file's mime type to ensure an image. You can use FSO as so:
set fs=Server.CreateObject("Scripting.FileSystemObject")
set f=fs.GetFile("upload.jpg")
'image mime types are image/jpeg or image/gif, so just use InStr to check whether the type contains "image"
if instr(f.type, "image") = 0 then
f.delete
end if
set f=nothing
set fs=nothing
Also, most upload COM objects have a type property that you could check against before writing the file.
A:
Your best bang for the buck would probably be to use an upload component (I've used ASPUpload) that allows you to upload/download files from a folder that isn't accessible from the website.
You'll get some authentication hooks and won't have to worry about someone casually browsing the folder and downloading the files (or uploading in your case), since the files are only available through the component.
|
How do I secure a folder used to let users upload files?
|
I have a folder in my web server used for the users to upload photos using an ASP page.
Is it safe enough to give IUSR write permissions to the folder? Must I secure something else?
I am afraid of hackers bypassing the ASP page and uploading content directly to the folder.
I'm using ASP classic and IIS6 on Windows 2003 Server. The upload is through HTTP, not FTP.
Edit: Changing the question for clarity and changing my answers as comments.
|
[
"also, I would recommend not to let the users upload into a folder that's accessible from the web. Even the best MIME type detection may fail and you absolutely don't want users to upload, say, an executable disguised as a jpeg in a case where your MIME sniffing fails, but the one in IIS works correctly.\nIn the PHP world it's even worse, because an attacker could upload a malicious PHP script and later access it via the webserver.\nAlways, always store the uploaded files in a directory somewhere outside the document root and access them via some accessing-script which does additional sanitizing (and at least explicitly sets a image/whatever MIME type.\n",
"How will the user upload the photos? If you are writing an ASP page to accept the uploaded files then only the user that IIS runs as will need write permission to the folder, since IIS will be doing the file I/O. Your ASP page should check the file size and have some form of authentication to prevent hackers from filling your hard drive.\nIf you are setting up an FTP server or some other file transfer method, then the answer will be specific to the method you choose.\n",
"You'll have to grant write permissions, but you can check the file's mime type to ensure an image. You can use FSO as so:\nset fs=Server.CreateObject(\"Scripting.FileSystemObject\")\nset f=fs.GetFile(\"upload.jpg\")\n'image mime types or image/jpeg or image/gif, so just check to see if \"image\" is instr\nif instr(f.type, \"image\") = 0 then\n f.delete\nend if\nset f=nothing\nset fs=nothing\n\nAlso, most upload COM objects have a type property that you could check against before writing the file.\n",
"Your best bang for the buck would probably be to use an upload component (I've used ASPUpload) that allows you to upload/download files from a folder that isn't accessible from the website. \nYou'll get some authentication hooks and won't have to worry about someone casually browsing the folder and downloading the files (or uploading in your case), since the files are only available through the component.\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"asp_classic",
"iis",
"iis_6",
"security",
"windows_server_2003"
] |
stackoverflow_0000022519_asp_classic_iis_iis_6_security_windows_server_2003.txt
|
Q:
Are there any good automated frameworks for applying coding standards in Perl?
One I am aware of is Perl::Critic
And my googling has resulted in no results on multiple attempts so far. :-(
Does anyone have any recommendations here?
Any resources to configure Perl::Critic as per our coding standards and run it on code base would be appreciated.
A:
In terms of setting up a profile, have you tried perlcritic --profile-proto? This will emit to stdout all of your installed policies with all their options with descriptions of both, including their default values, in perlcriticrc format. Save and edit to match what you want. Whenever you upgrade Perl::Critic, you may want to run this command again and do a diff with your current perlcriticrc so you can see any changes to existing policies and pick up any new ones.
In terms of running perlcritic regularly, set up a Test::Perl::Critic test along with the rest of your tests. This is good for new code.
For your existing code, use Test::Perl::Critic::Progressive instead. T::P::C::Progressive will succeed the first time you run it, but will save counts on the number of violations; thereafter, T::P::C::Progressive will complain if any of the counts go up. One thing to look out for is when you revert changes in your source control system. (You are using one, aren't you?) Say I check in a change and run tests and my changes reduce the number of P::C violations. Later, it turns out my change was bad, so I revert to the old code. The T::P::C::Progressive test will fail due to the reduced counts. The easiest thing to do at this point is to just delete the history file (default location t/.perlcritic-history) and run again. It should reproduce your old counts and you can write new stuff to bring them down again.
Perl::Critic has a lot of policies that ship with it, but there are a bunch of add-on distributions of policies. Have a look at Task::Perl::Critic and
Task::Perl::Critic::IncludingOptionalDependencies.
You don't need to have a single perlcriticrc handle all your code. Create separate perlcriticrc files for each set of files you want to test and then a separate test that points to each one. For an example, have a look at the author tests for P::C itself at http://perlcritic.tigris.org/source/browse/perlcritic/trunk/Perl-Critic/xt/author/. When author tests are run, there's a test that runs over all the code of P::C, a second test that applies additional rules just on the policies, and a third one that criticizes P::C's tests.
I personally think that everyone should run at the "brutal" severity level, but knock out the policies that they don't agree with. Perl::Critic isn't entirely self compliant; even the P::C developers don't agree with everything Conway says. Look at the perlcriticrc files used on Perl::Critic itself and search the Perl::Critic code for instances of "## no critic"; I count 143 at present.
(Yes, I'm one of the Perl::Critic developers.)
A:
There is perltidy for most stylistic standards. perlcritic can be easily configured using a .perlcritic file. I personally use the it at level one, but I've disabled a few policies.
A:
In addition to 'automated frameworks', I highly recommend Damian Conway's Perl Best Practices. I don't agree with 100% of what he suggests, but most of the time he's bang on.
A:
The post above mentioning Devel::Prof probably really means Devel::Cover (to get the code coverage of a test suite).
A:
Like:
http://metacpan.org/pod/Perl::Critic
http://www.slideshare.net/joshua.mcadams/an-introduction-to-perl-critic/
Looks like a nice tool!
A:
A nice combination is perlcritic with EPIC for Eclipse - hit CTRL-SHIFT-C (or your preferred configured shortcut) and your code is marked up with warning indicators wherever perlcritic has found something to complain about. Much nicer than remembering to run it before checkin. And as normal with perlcritic, it will pick up your .perlcriticrc so you can customise the rules. We keep our .perlcriticrc in version control so everyone gets the same standards.
A:
In addition to the cosmetic best practices, I always find it useful to run Devel::Prof on my unit test suite to check test coverage.
|
Are there any good automated frameworks for applying coding standards in Perl?
|
One I am aware of is Perl::Critic
And my googling has resulted in no results on multiple attempts so far. :-(
Does anyone have any recommendations here?
Any resources to configure Perl::Critic as per our coding standards and run it on code base would be appreciated.
|
[
"In terms of setting up a profile, have you tried perlcritic --profile-proto? This will emit to stdout all of your installed policies with all their options with descriptions of both, including their default values, in perlcriticrc format. Save and edit to match what you want. Whenever you upgrade Perl::Critic, you may want to run this command again and do a diff with your current perlcriticrc so you can see any changes to existing policies and pick up any new ones.\nIn terms of running perlcritic regularly, set up a Test::Perl::Critic test along with the rest of your tests. This is good for new code.\nFor your existing code, use Test::Perl::Critic::Progressive instead. T::P::C::Progressive will succeed the first time you run it, but will save counts on the number of violations; thereafter, T::P::C::Progressive will complain if any of the counts go up. One thing to look out for is when you revert changes in your source control system. (You are using one, aren't you?) Say I check in a change and run tests and my changes reduce the number of P::C violations. Later, it turns out my change was bad, so I revert to the old code. The T::P::C::Progressive test will fail due to the reduced counts. The easiest thing to do at this point is to just delete the history file (default location t/.perlcritic-history) and run again. It should reproduce your old counts and you can write new stuff to bring them down again.\nPerl::Critic has a lot of policies that ship with it, but there are a bunch of add-on distributions of policies. Have a look at Task::Perl::Critic and \nTask::Perl::Critic::IncludingOptionalDependencies.\nYou don't need to have a single perlcriticrc handle all your code. Create separate perlcriticrc files for each set of files you want to test and then a separate test that points to each one. For an example, have a look at the author tests for P::C itself at http://perlcritic.tigris.org/source/browse/perlcritic/trunk/Perl-Critic/xt/author/. When author tests are run, there's a test that runs over all the code of P::C, a second test that applies additional rules just on the policies, and a third one that criticizes P::C's tests.\nI personally think that everyone should run at the \"brutal\" severity level, but knock out the policies that they don't agree with. Perl::Critic isn't entirely self compliant; even the P::C developers don't agree with everything Conway says. Look at the perlcriticrc files used on Perl::Critic itself and search the Perl::Critic code for instances of \"## no critic\"; I count 143 at present.\n(Yes, I'm one of the Perl::Critic developers.)\n",
"There is perltidy for most stylistic standards. perlcritic can be easily configured using a .perlcritic file. I personally use the it at level one, but I've disabled a few policies.\n",
"In addition to 'automated frameworks', I highly recommend Damian Conway's Perl Best Practices. I don't agree with 100% of what he suggests, but most of the time he's bang on.\n",
"The post above mentioning Devel::Prof probably really means Devel::Cover (to get the code coverage of a test suite).\n",
"Like: \n\nhttp://metacpan.org/pod/Perl::Critic\nhttp://www.slideshare.net/joshua.mcadams/an-introduction-to-perl-critic/\n\nLooks like a nice tool!\n",
"A nice combination is perlcritic with EPIC for Eclipse - hit CTRL-SHIFT-C (or your preferred configured shortcut) and your code is marked up with warning indicators wherever perlcritic has found something to complain about. Much nicer than remembering to run it before checkin. And as normal with perlcritic, it will pick up your .perlcriticrc so you can customise the rules. We keep our .perlcriticrc in version control so everyone gets the same standards.\n",
"In addition to the cosmetic best practices, I always find it useful to run Devel::Prof on my unit test suite to check test coverage.\n"
] |
[
12,
5,
4,
3,
2,
1,
0
] |
[] |
[] |
[
"coding_style",
"frameworks",
"perl",
"perl_critic"
] |
stackoverflow_0000051499_coding_style_frameworks_perl_perl_critic.txt
|
Q:
BugzScout in hosted Fogbugz
Is it possible to use BugzScout in the fogcreek-hosted version of Fogbugz?
A:
Yes, you can!
The documentation is on the FogBugz Knowledge Exchange. The sample code that ships for the for-your-server version of FogBugz is available for download here.
|
BugzScout in hosted Fogbugz
|
Is it possible to use BugzScout in the fogcreek-hosted version of Fogbugz?
|
[
"Yes, you can!\nThe documentation is on the FogBugz Knowledge Exchange. The sample code that ships for the for-your-server version of FogBugz is available for download here.\n"
] |
[
6
] |
[] |
[] |
[
"bugzscout",
"fogbugz",
"fogbugz_on_demand"
] |
stackoverflow_0000077697_bugzscout_fogbugz_fogbugz_on_demand.txt
|
Q:
PLS-00306 (wrong number or types of arguments) on call to cursor
I think I might be missing something here. Here is the relevant part of the trigger:
CURSOR columnNames (inTableName IN VARCHAR2) IS
SELECT COLUMN_NAME FROM USER_TAB_COLUMNS WHERE TABLE_NAME = inTableName;
/* Removed for brevity */
OPEN columnNames('TEMP');
And here is the error message that I'm getting back,
27/20 PLS-00306: wrong number or types of arguments in call to 'COLUMNNAMES'
27/2 PL/SQL: Statement ignored
If I am understanding the documentation correctly, that should work, but since it is not I must be doing something wrong. Any ideas?
@Matthew - I appreciate the help, but the reason I am confused is that this bit of code isn't working for me and is raising the errors referenced. We have other triggers in the database with code almost exactly the same as that, so I'm not sure if it is something that I did wrong, or something with how I am trying to store the trigger, etc.
@Matthew - Well, now I get to feel embarrassed. I did a copy/paste of the code that you provided into a new trigger and it worked fine. So I went back into the original trigger and tried it and received the error message again, except this time I started to delete stuff out of the trigger and after getting rid of this line,
FOR columnName IN columnNames LOOP
Things saved fine. So it turns out that where I thought the error was wasn't actually where the error was.
A:
To clarify the cause of the issue. As you state
OPEN columnNames('TEMP');
worked while
FOR columnName IN columnNames LOOP
did not. The FOR statement would work fine if it also included the parameter like so:
FOR columnName IN columnNames('TEMP') LOOP
You don't show the code where you fetch the rows so I can't tell your purpose, but where I work OPEN is commonly used to fetch the first row (in this case, the first column name of the given table) while the FOR is used to iterate through all returned rows.
@Rob's comment. I'm not allowed to comment so updating here instead. The missing parameter is what I describe above. You added a response stating you simply deleted the FOR loop. It did not look like you, at the time, understood why deleting it made a difference. Which is why I attempted to explain since, depending on your need, the FOR loop might be a better solution.
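Putting the two pieces together, a self-contained sketch of the parameterised cursor FOR loop looks like this (assuming a table called TEMP exists; set serveroutput on to see the output in SQL*Plus):
DECLARE
  CURSOR columnNames (inTableName IN VARCHAR2) IS
    SELECT column_name FROM user_tab_columns WHERE table_name = inTableName;
BEGIN
  -- the FOR loop opens, fetches from and closes the cursor for you
  FOR columnName IN columnNames('TEMP') LOOP
    dbms_output.put_line(columnName.column_name);
  END LOOP;
END;
/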
A:
Works fine for me.
create or replace procedure so_test_procedure as
CURSOR columnNames (inTableName IN VARCHAR2) IS
SELECT COLUMN_NAME FROM USER_TAB_COLUMNS WHERE TABLE_NAME = inTableName;
BEGIN
OPEN columnNames('TEMP');
CLOSE columnNames;
END;
procedure so_test_procedure Compiled.
execute so_test_procedure();
anonymous block completed
|
PLS-00306 (wrong number or types of arguments) on call to cursor
|
I think I might be missing something here. Here is the relevant part of the trigger:
CURSOR columnNames (inTableName IN VARCHAR2) IS
SELECT COLUMN_NAME FROM USER_TAB_COLUMNS WHERE TABLE_NAME = inTableName;
/* Removed for brevity */
OPEN columnNames('TEMP');
And here is the error message that I'm getting back,
27/20 PLS-00306: wrong number or types of arguments in call to 'COLUMNNAMES'
27/2 PL/SQL: Statement ignored
If I am understanding the documentation correctly, that should work, but since it is not I must be doing something wrong. Any ideas?
@Matthew - I appreciate the help, but the reason I am confused is that this bit of code isn't working for me and is raising the errors referenced. We have other triggers in the database with code almost exactly the same as that, so I'm not sure if it is something that I did wrong, or something with how I am trying to store the trigger, etc.
@Matthew - Well, now I get to feel embarrassed. I did a copy/paste of the code that you provided into a new trigger and it worked fine. So I went back into the original trigger and tried it and received the error message again, except this time I started to delete stuff out of the trigger and after getting rid of this line,
FOR columnName IN columnNames LOOP
Things saved fine. So it turns out that where I thought the error was wasn't actually where the error was.
|
[
"To clarify the cause of the issue. As you state \nOPEN columnNames('TEMP'); \nworked while \nFOR columnName IN columnNames LOOP\ndid not. The FOR statement would work fine if it also included the parameter like so:\nFOR columnName IN columnNames('TEMP') LOOP\nYou don't show the code where you fetch the rows so I can't tell your purpose, but where I work OPEN is commonly used to fetch the first row (in this case, the first column name of the given table) while the FOR is used to iterate through all returned rows.\n@Rob's comment. I'm not allowed to comment so updating here instead. The missing parameter is what I describe above. You added a response stating you simply deleted the FOR loop. It did not look like you, at the time, understood why deleting it made a difference. Which is why I attempted to explain since, depending on your need, the FOR loop might be a better solution.\n",
"Works fine for me.\ncreate or replace procedure so_test_procedure as \n CURSOR columnNames (inTableName IN VARCHAR2) IS \n SELECT COLUMN_NAME FROM USER_TAB_COLUMNS WHERE TABLE_NAME = inTableName; \nBEGIN \n OPEN columnNames('TEMP');\n CLOSE columnNames;\nEND;\n\nprocedure so_test_procedure Compiled.\nexecute so_test_procedure();\n\nanonymous block completed\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"database_cursor",
"oracle",
"plsql",
"triggers"
] |
stackoverflow_0000043832_database_cursor_oracle_plsql_triggers.txt
|
Q:
Performance Testing
We are developing automated regression tests using VMware and NUnit. We have divided the tests into steps, and now I would like to see each step examined for performance regression. Simply timing the tests, as NUnit does, does not seem reliable. I have figured in an acceptance factor of about 15%, but our steps can sometimes differ by over 35%. In such a resource-dependent test environment, is there any consistent way of testing performance? Is a "smart" timing system my only option?
A:
For this sort of performance testing, there's no such thing as a system that will give you a simple pass/fail result. In real life, changing your system is likely to make some things faster and some other things slower, so it's usually not a choice between "better" and "not better", it's a choice between different kinds of better. (Of course, you want to avoid cases where it's strictly worse.)
What I've done for this in the past is to just keep statistics over time. Every time you run your tests, drop the results in a SQL database with the revision number and the test timings. Then you can graph them whenever and however you want (ideally in a little web applet so everyone on the team can review them) and see if your performance is trending up or down, or if performance has been sucking ever since a particular revision.
The key thing here, though, is that it needs to be a graph. That way human eyes can look at it and find the trends. You could spend all week trying to come up with an AI algorithm to analyse the data numerically, but it would never beat a human's pattern-recognition ability.
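For illustration, here is a minimal C# sketch of that "drop the timings in a SQL database" idea; the table name, columns, and connection string are assumptions for the example, not something from the original setup:
using System;
using System.Data.SqlClient;
using System.Diagnostics;

public static class StepTimingLogger
{
    // Hypothetical connection string - point this at whatever database holds the stats.
    private const string ConnectionString =
        "Server=buildserver;Database=PerfStats;Integrated Security=SSPI";

    // Runs one test step, times it, and records the result against the revision number.
    public static void TimeStep(string revision, string stepName, Action step)
    {
        Stopwatch watch = Stopwatch.StartNew();
        step();                                   // the test step being measured
        watch.Stop();

        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand(
            "INSERT INTO StepTimings (Revision, StepName, ElapsedMs, RunDate) " +
            "VALUES (@revision, @stepName, @elapsedMs, @runDate)", connection))
        {
            command.Parameters.AddWithValue("@revision", revision);
            command.Parameters.AddWithValue("@stepName", stepName);
            command.Parameters.AddWithValue("@elapsedMs", watch.ElapsedMilliseconds);
            command.Parameters.AddWithValue("@runDate", DateTime.UtcNow);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
A test could then call StepTimingLogger.TimeStep("r1234", "LoginStep", RunLoginStep); and the graphing page simply selects from StepTimings grouped by revision.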
A:
You might look into the features available in a tool such as Ants Profiler, as it does give method execution/run times, but I'm not sure what it offers in terms of repeated testing.
A:
With respect to performance testing I've been very skeptical of using vmware or other virtualization processes. The way we have handled this in the past is to have part of the build install the latest version on a static machine and run the tests. You should see more consistent results outside of the virtualization.
|
Performance Testing
|
We are developing automated regression tests using VMWare and NUnit. We have divided the tests into steps, and now I would like to see each step examined for performance regression. Simply timing the tests, as NUnit does, does not seem reliable. I have figured in an acceptance factor of about 15%, but our steps can sometimes differ by over 35%. In such a resource-dependent test environment, is there any consistent way of testing performance? Is a "smart" timing system my only option?
|
[
"For this sort of performance testing, there's no such thing as a system that will give you a simple pass/fail result. In real life, changing your system is likely to make some things faster and some other things slower, so it's usually not a choice between \"better\" and \"not better\", it's a choice between different kinds of better. (Of course, you want to avoid cases where it's strictly worse.)\nWhat I've done for this in the past is to just keep statistics over time. Every time you run your tests, drop the results in a SQL database with the revision number and the test timings. Then you can graph them whenever and however you want (ideally in a little web applet so everyone on the team can review them) and see if your performance is trending up or down, or if performance has been sucking ever since a particular revision.\nThe key thing here, though, is that it needs to be a graph. That way human eyes can look at it and find the trends. You could spend all week trying to come up with an AI algorithm to analyse the data numerically, but it would never beat a human's pattern-recognition ability.\n",
"You might look into the features available with a tool such as Ants Profiler as it does give method executing/run times, but I'm not sure what it offers in terms of repeated testing.\n",
"With respect to performance testing I've been very skeptical of using vmware or other virtualization processes. The way we have handled this in the past is to have part of the build install the latest version on a static machine and run the tests. You should see more consistent results outside of the virtualization. \n"
] |
[
5,
0,
0
] |
[] |
[] |
[
".net",
"nunit",
"regression_testing",
"vmware"
] |
stackoverflow_0000077603_.net_nunit_regression_testing_vmware.txt
|
Q:
Best practice: How to handle concurrency of browser and website navigation
It is a well-known problem for every web developer. As far as I've tried to find a good solution to this problem, there is none (or at least I could not find one).
Let's assume the following:
The user does not behave as he is expected to. The actual project I'm working on uses navigation within the web portal. But if the user uses the browser's back button, the whole thing is jeopardized and the result is not always predictable.
We used the struts framework and stored the back-url into forms - at some places where we needed a back-url, it has been rendered out of this form's back-url. Since there was only a single field for this information, it was not possible to go back multiple steps.
When you change the "struts-flow" - which may result in using a different form - this information will be lost.
If the user dares to put a bookmark somewhere within your webapp, this information may never have been set, and again the result will be either unpredictable or not flexible enough!
My "solution":
I was storing every navigation-relevant page the user visited in a stack-like structure in the session. This means a navigation path is collected and stored for later navigations.
At any page within the webapp where back-navigations are involved, I used a self-made tag which renders the stack content into the URL.
And that's it.
When this back-url is clicked, the stack is refilled from the content of the back-url the user clicked (which holds all the information the stack contained when the back-link was rendered).
This is quite clear, because a click on a link is a well-defined state, where the web developer knows exactly where the user "is" at this very moment - absolutely independent of whatever the user did before (e.g. hitting the browser back button multiple times). The navigation stack is then built upon this new state.
Resumé:
It is clear that this won't be the best solution. But it allows storing additional information on the stack, like page parameters and other useful stuff (further developments possible).
So, what were your solutions to this problem?
cheers,
mana
A:
The stack solution sounds interesting, but it will probably break if the user chooses to navigate "in parallel" on different tabs or using bookmarks.
I'm afraid I don't really understand why you have to keep all this state for each user: ideally the web should follow the REST principle and be completely stateless. Therefore a single URL should identify a single resource, without having to keep the navigation history of each user.
If your web app relies heavily on AJAX, you could try to implement something like GMail (admittedly, not so easy...), where each change in the interface is reflected in a change in the page URL. Therefore each page is identified by the current URL and the user can navigate concurrently or use the back button as usual.
|
Best practice: How to handle concurrency of browser and website navigation
|
It is a well-known problem for every web developer. As far as I've tried to find a good solution to this problem, there is none (or at least I could not find one).
Let's assume the following:
The user does not behave as he is expected to. The actual project I'm working on uses navigation within the web portal. But if the user uses the browser's back button, the whole thing is jeopardized and the result is not always predictable.
We used the struts framework and stored the back-url into forms - at some places where we needed a back-url, it has been rendered out of this form's back-url. Since there was only a single field for this information, it was not possible to go back multiple steps.
When you change the "struts-flow" - which may result in using a different form - this information will be lost.
If the user dares to put a bookmark somewhere within your webapp, this information may never have been set, and again the result will be either unpredictable or not flexible enough!
My "solution":
I was storing every navigation-relevant page the user visited in a stack-like structure in the session. This means a navigation path is collected and stored for later navigations.
At any page within the webapp where back-navigations are involved, I used a self-made tag which renders the stack content into the URL.
And that's it.
When this back-url is clicked, the stack is refilled from the content of the back-url the user clicked (which holds all the information the stack contained when the back-link was rendered).
This is quite clear, because a click on a link is a well-defined state, where the web developer knows exactly where the user "is" at this very moment - absolutely independent of whatever the user did before (e.g. hitting the browser back button multiple times). The navigation stack is then built upon this new state.
Resumé:
It is clear that this won't be the best solution. But it allows storing additional information on the stack, like page parameters and other useful stuff (further developments possible).
So, what were your solutions to this problem?
cheers,
mana
|
[
"The stack solution sounds interesting, but it will probably break if the user chooses to navigate \"in parallel\" on different tabs or using bookmarks.\nI'm afraid I don't really understand why you have to keep all this state for each user: ideally the web should follow the REST principle and be completely stateless. Therefore a single URL should identify a single resource, without having to keep the navigation history of each user.\nIf your web app relies heavily on AJAX, you could try to implement something like GMail (admittedly, not so easy...), where each change in the interface is reflected in a change in the page URL. Therefore each page is identified by the current URL and the user can navigate concurrently or use the back button as usual.\n"
] |
[
1
] |
[] |
[] |
[
"browser",
"navigation",
"struts"
] |
stackoverflow_0000077645_browser_navigation_struts.txt
|
Q:
Looking for a simple C# numeric edit control
I am an MFC programmer who is new to C# and am looking for a simple control that will allow number entry and range validation.
A:
Look at the "NumericUpDown" control. It has range validation, the input will always be numeric, and it has those nifty increment/decrement buttons.
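For illustration, a minimal WinForms sketch of wiring up the range (the control name, range, and starting value here are made up for the example):
// Inside a Form's constructor or Load handler; requires System.Windows.Forms.
NumericUpDown quantityInput = new NumericUpDown();
quantityInput.Minimum = 0;          // lower bound of the allowed range
quantityInput.Maximum = 100;        // upper bound of the allowed range
quantityInput.DecimalPlaces = 0;    // whole numbers only; raise this to allow decimals
quantityInput.Value = 10;           // starting value, must fall inside the range
this.Controls.Add(quantityInput);
Values typed outside the range are snapped back to the nearest bound when the control validates, so no extra validation code is needed for the simple case.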
A:
I had to implement a Control which only accepted numbers, integers or reals.
I built the control as a specialization of (read: derived from) the TextBox control, using input filtering and a regular expression for the validation.
Adding range validation is terribly easy.
This is the code for building the regex. _numericSeparator is a string with the characters accepted as decimal separators
(for example, a '.' or a ',': $10.50 or 10,50€).
private string ComputeRegexPattern()
{
StringBuilder builder = new StringBuilder();
if (this._forcePositives)
{
builder.Append("([+]|[-])?");
}
builder.Append(@"[\d]*((");
if (!this._useIntegers)
{
for (int i = 0; i < this._numericSeparator.Length; i++)
{
builder.Append("[").Append(this._numericSeparator[i]).Append("]");
if ((this._numericSeparator.Length > 0) && (i != (this._numericSeparator.Length - 1)))
{
builder.Append("|");
}
}
}
builder.Append(@")[\d]*)?");
return builder.ToString();
}
The regular expression matches any number (i.e. any string of numeric characters) with at most one character as a numeric separator, and an optional '+' or '-' character at the beginning of the string.
Once you create the regex (when instantiating the Control), you check whether the value is correct by overriding the OnValidating method.
CheckValidNumber() just applies the regex to the entered text. If the regex match fails, it activates an error provider with a specified error (set with the public ValidationError property) and raises a ValidationFail event.
Here you could do the verification to check whether the number is in the required range.
private bool CheckValidNumber()
{
if (Regex.Match(this.Text, this.RegexPattern).Value != this.Text)
{
this._errorProvider.SetError(this, this.ValidationError);
return false;
}
this._errorProvider.Clear();
return true;
}
protected override void OnValidating(CancelEventArgs e)
{
bool flag = this.CheckValidNumber();
if (!flag)
{
e.Cancel = true;
this.Text = "0";
}
base.OnValidating(e);
if (!flag)
{
this.ValidationFail(this, EventArgs.Empty);
}
}
As I said, I also prevent the user from entering anything other than numeric characters into the text box by overriding the OnKeyPress method:
protected override void OnKeyPress(KeyPressEventArgs e)
{
if ((!char.IsDigit(e.KeyChar) && !char.IsControl(e.KeyChar)) && (!this._numberSymbols.Contains(e.KeyChar.ToString()) && !this._numericSeparator.Contains(e.KeyChar.ToString())))
{
e.Handled = true;
}
if (this._numberSymbols.Contains(e.KeyChar.ToString()) && !this._forcePositives)
{
e.Handled = true;
}
if (this._numericSeparator.Contains(e.KeyChar.ToString()) && this._useIntegers)
{
e.Handled = true;
}
base.OnKeyPress(e);
}
The elegant touch: I check whether the number is valid every time the user releases a key, so the user gets feedback as he/she types. (But remember that you must be careful with the ValidationFail event ;))
protected override void OnKeyUp(KeyEventArgs e)
{
this.CheckValidNumber();
base.OnKeyUp(e);
}
A:
You can use a regular textbox and a Validator control to control input.
A:
Try using an error provider control to validate the textbox. You can use int.TryParse() or double.TryParse() to check if it's numeric and then validate the range.
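A minimal sketch of that approach, assuming a WinForms form with a TextBox named quantityTextBox, an ErrorProvider named errorProvider1, and an illustrative 0-100 range (the names and range are made up for the example):
// Wire this up to the TextBox's Validating event; CancelEventArgs is in System.ComponentModel.
private void quantityTextBox_Validating(object sender, CancelEventArgs e)
{
    int value;
    if (!int.TryParse(quantityTextBox.Text, out value) || value < 0 || value > 100)
    {
        // Not numeric, or outside the allowed range: show the error and keep focus on the box.
        errorProvider1.SetError(quantityTextBox, "Please enter a whole number between 0 and 100.");
        e.Cancel = true;
    }
    else
    {
        errorProvider1.SetError(quantityTextBox, string.Empty);   // clear any previous error
    }
}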
A:
You can use a combination of the RequiredFieldValidator and CompareValidator (Set to DataTypeCheck for the operator and Type set to Integer)
That will get it with a normal textbox if you would like, otherwise the recommendation above is good.
|
Looking for a simple C# numeric edit control
|
I am an MFC programmer who is new to C# and am looking for a simple control that will allow number entry and range validation.
|
[
"Look at the \"NumericUpDown\" control. It has range validation, the input will always be numeric, and it has those nifty increment/decrement buttons.\n",
"I had to implement a Control which only accepted numbers, integers or reals.\nI build the control as a specialization of (read: derived from) TextBox control, and using input control and a regular expresión for the validation. \nAdding range validation is terribly easy.\nThis is the code for building the regex. _numericSeparation is a string with characters accepted as decimal comma values\n(for example, a '.' or a ',': $10.50 10,50€\nprivate string ComputeRegexPattern()\n{\n StringBuilder builder = new StringBuilder();\n if (this._forcePositives)\n {\n builder.Append(\"([+]|[-])?\");\n }\n builder.Append(@\"[\\d]*((\");\n if (!this._useIntegers)\n {\n for (int i = 0; i < this._numericSeparator.Length; i++)\n {\n builder.Append(\"[\").Append(this._numericSeparator[i]).Append(\"]\");\n if ((this._numericSeparator.Length > 0) && (i != (this._numericSeparator.Length - 1)))\n {\n builder.Append(\"|\");\n }\n }\n }\n builder.Append(@\")[\\d]*)?\");\n return builder.ToString();\n}\n\nThe regular expression matches any number (i.e. any string with numeric characters) with only one character as a numeric separation, and a '+' or a '-' optional character at the beginning of the string.\nOnce you create the regex (when instanciating the Control), you check if the value is correct overriding the OnValidating method. \nCheckValidNumber() just applies the Regex to the introduced text. If the regex match fails, activates an error provider with an specified error (set with ValidationError public property) and raises a ValidationError event.\nHere you could do the verification to know if the number is in the requiered range.\nprivate bool CheckValidNumber()\n{\n if (Regex.Match(this.Text, this.RegexPattern).Value != this.Text)\n {\n this._errorProvider.SetError(this, this.ValidationError);\n return false;\n }\n this._errorProvider.Clear();\n return true;\n}\n\nprotected override void OnValidating(CancelEventArgs e)\n{\n bool flag = this.CheckValidNumber();\n if (!flag)\n {\n e.Cancel = true;\n this.Text = \"0\";\n }\n base.OnValidating(e);\n if (!flag)\n {\n this.ValidationFail(this, EventArgs.Empty);\n }\n}\n\nAs I said, i also prevent the user from input data in the text box other than numeric characteres overriding the OnKeyPress methdod:\nprotected override void OnKeyPress(KeyPressEventArgs e)\n{\n if ((!char.IsDigit(e.KeyChar) && !char.IsControl(e.KeyChar)) && (!this._numberSymbols.Contains(e.KeyChar.ToString()) && !this._numericSeparator.Contains(e.KeyChar.ToString())))\n {\n e.Handled = true;\n }\n if (this._numberSymbols.Contains(e.KeyChar.ToString()) && !this._forcePositives)\n {\n e.Handled = true;\n }\n if (this._numericSeparator.Contains(e.KeyChar.ToString()) && this._useIntegers)\n {\n e.Handled = true;\n }\n base.OnKeyPress(e);\n}\n\nThe elegant touch: I check if the number valid every time the user releases a key, so the user can get feedback as he/she types. (But remember that you must be carefull with the ValidationFail event ;))\nprotected override void OnKeyUp(KeyEventArgs e)\n{\n this.CheckValidNumber();\n base.OnKeyUp(e);\n}\n\n",
"You can use a regular textbox and a Validator control to control input.\n",
"Try using an error provider control to validate the textbox. You can use int.TryParse() or double.TryParse() to check if it's numeric and then validate the range.\n",
"You can use a combination of the RequiredFieldValidator and CompareValidator (Set to DataTypeCheck for the operator and Type set to Integer)\nThat will get it with a normal textbox if you would like, otherwise the recommendation above is good.\n"
] |
[
8,
1,
0,
0,
0
] |
[] |
[] |
[
"c#",
"edit",
"numeric"
] |
stackoverflow_0000076963_c#_edit_numeric.txt
|
Q:
What are some good compilers to use when learning C++?
What are some suggestions for easy to use C++ compilers for a beginner? Free or open-source ones would be preferred.
A:
GCC is a good choice for simple things.
Visual Studio Express edition is the free version of the major windows C++ compiler.
If you are on Windows I would use VS. If you are on linux you should use GCC.
*I say GCC for simple things because for a more complicated project the build process isn't so easy
A:
G++ is the GNU C++ compiler. Most *nix distros should have the package available.
A:
I'd recommend using Dev C++. It's a small and lightweight IDE that uses the mingw ports as the backend, meaning you'll be compiling with the de facto C/C++ compiler, gcc.
A:
For a beginner: g++ --pedantic-errors -Wall
It'll help enforce good programming from the start.
A:
gcc with -Wall (enable all warnings) -Werror (change warnings into errors), -pedantic (get warnings for non-standard code) and -ansi (make the standard c++98).
If a warning is something you're aware of and need to turn off, you can always turn them back into warnings.
A:
I recommend gcc because it's designed to be used on the command line, and you can compile simple programs and see exactly what's happening:
g++ -o myprogram myprogram.cc
ls -l myprogram
One file in, two files out. With Visual C++, most people use it with the GUI, where you have to set up a project and the IDE generates a bunch of files which can get in the way if you're just starting out.
If you're using Windows, you'll choose between MingW or Cygwin. Cygwin is a little work to set up because you have to choose which packages to install, but I don't have experience with MingW.
A:
You can always use the C++ compiler from the Gnu Compiler Collection (GCC). It is available for almost any Unix system on earth, BSDs, Mac OS, Linux, and Windows (via Cygwin or mingw).
A number of IDEs are supporting the GCC C++ compiler, e.g. KDevelop under Linux/KDE, or Dev-CPP as mentioned in other posts.
A:
Microsoft Visual Studio Express Edition of their C++ compiler is good
A:
CodeBlocks is a very good IDE that can use besides many other compilers CL.EXE (from visual studio) and gcc. It comes also in a version with gcc included.
Visual Studio Express edition is also a very good choice (with the Platform SDK if you will develop applications that call winapi functions).
A:
Eclipse is a good one for mac, or Apple's own free Xcode which can be d/l'd off their development site.
A:
One reason to use g++ or MingW/Cygwin that hasn't been mentioned yet is that starting with an IDE will hide some of what is going on. It will be incredibly useful down the road to understand the differences between compiling and linking, for instance. Learn it and understand it from the start, and you won't even know you should be thanking yourself later.
-Max
A:
I say GCC for simple things because for a more complicated project the build process isn't so easy
True, but I don't think understanding the build process of a large project is orthogonal to understanding the project itself. At my last job, they had a huge project that needed to build for the target platform (LynxOS) as well as an emulation environment (WinXP). They chose to throw everything into one .VCP file on Windows and build it as one big executable. On target it was about 50 individual processes, so they wrote a makefile that listed all 3000 source files, compiled them all into one big library, and then linked the individual main.cpp's for each executable with the all-in-one library, to make 50 executables (which shared maybe 10% of their code with the other executables). As a result, no developer had a clue about what code depended on any other code, and they never bothered trying to define clean interfaces between anything, because everything was easily accessible from everywhere. A hierarchical build system could have helped enforce some sort of order in an otherwise disorganized source code repository.
Even if you don't learn how .cpp files produce object code, what a static library is, what a shared library is, etc. while you are learning C/C++, you will still need to learn it at some point to be a competent C/C++ developer.
A:
Visual Studio in command line behaves just like GCC. Just open the Visual Studio command line window and:
c:\temp> cl /nologo /EHsc /W4 foo.cpp
c:\temp> dir /b foo.*
foo.cpp <-- your source file
foo.obj <-- result of compiling the cpp file
foo.pdb <-- debugging symbols (friendly names for debugging)
foo.exe <-- result of linking the obj with libraries
A:
I agree with Iulian Șerbănoiu: Code::Blocks is a very good solution, usable both from Linux (it will use g++/gcc) and from Windows (it will use either the MS compiler or gcc)
Note that you should at least once or twice try to compile using a good old makefile, if only to understand the logic behind headers, sources, inclusion, etc.
As a beginner, don't forget to read books about C++ (Scott Meyers' and Herb Sutter's books come to mind when trying to learn the quirks of the language), and to study high-profile open source projects to learn from their code style (they have already encountered the problems you will encounter, and probably found viable solutions...).
|
What are some good compilers to use when learning C++?
|
What are some suggestions for easy to use C++ compilers for a beginner? Free or open-source ones would be preferred.
|
[
"GCC is a good choice for simple things.\nVisual Studio Express edition is the free version of the major windows C++ compiler.\nIf you are on Windows I would use VS. If you are on linux you should use GCC.\n*I say GCC for simple things because for a more complicated project the build process isn't so easy\n",
"G++ is the GNU C++ compiler. Most *nix distros should have the package available.\n",
"I'd recommend using Dev C++. It's a small and lightweight IDE that uses the mingw ports as the backend, meaning you'll be compiling the the defacto C/C++ compiler, gcc \n",
"For a beginner: g++ --pedantic-errors -Wall\nIt'll help enforce good programming from the start.\n",
"gcc with -Wall (enable all warnings) -Werror (change warnings into errors), -pedantic (get warnings for non-standard code) and -ansi (make the standard c++98).\nIf a warning is something you're aware of and need to turn off, you can always turn them back into warnings.\n",
"I recommend gcc because it's designed to be used on the command line, and you can compile simple programs and see exactly what's happening:\ng++ -o myprogram myprogram.cc\nls -l myprogram\n\nOne file in, two files out. With Visual C++, most people use it with the GUI, where you have to set up a project and the IDE generates a bunch of files which can get in the way if you're just starting out.\nIf you're using Windows, you'll choose between MingW or Cygwin. Cygwin is a little work to set up because you have to choose which packages to install, but I don't have experience with MingW.\n",
"You can always use the C++ compiler from the Gnu Compiler Collection (GCC). It is available for almost any Unix system on earth, BSDs, Mac OS, Linux, and Windows (via Cygwin or mingw).\nA number of IDEs are supporting the GCC C++ compiler, e.g. KDevelop under Linux/KDE, or Dev-CPP as mentioned in other posts.\n",
"Microsoft Visual Studio Express Edition of their C++ compiler is good\n",
"CodeBlocks is a very good IDE that can use besides many other compilers CL.EXE (from visual studio) and gcc. It comes also in a version with gcc included.\nVisual Studio Express edition is avery good choice also (with Platform SDK if you will develop application that call winapi functions). \n",
"Eclipse is a good one for mac, or Apple's own free Xcode which can be d/l'd off their development site.\n",
"One reason to use g++ or MingW/Cygwin that hasn't been mentioned yet is that starting and IDE will hide some of what is going on. It will be incredibly useful down the road to understand the differences between compiling and linking for instance. Learn it and understand it from the start, and you won't even know you should be thanking yourself later.\n-Max\n",
"I say GCC for simple things because for a more complicated project the build process isn't so easy\nTrue, but I don't think understanding the build process of a large project is orthogonal to understanding the project itself. My last job I worked at, they had a huge project that needed to build for the target platform (LynxOS) as well as an emulation environment (WinXP). They chose to throw everything into one .VCP file for on windows, and build it as one big executable. On target it was about 50 individual processes, so they wrote a makefile that listed all 3000 source files, compiled them all into one big library, and then linked the individual main.cpp's for each executable with the all-in-one library, to make 50 executables (which shared maybe 10% of their code with the other executables). As a result, no developer had a clue about what code depended on any other code. As a result, they never bothered trying to define clean interfaces between anything, because everything was easily accessible from everywhere. A hierarchical build system could have helped enforce some sort of order in an otherwise disorganized source code repository.\nIf you don't learn how .cpp files produce object code, what a static library is, what a shared library is, etc., when you are learning C/C++, you do still need to learn it at some point to be a competent C/C++ developer.\n",
"Visual Studio in command line behaves just like GCC. Just open the Visual Studio command line window and:\n\nc:\\temp> cl /nologo /EHsc /W4 foo.cpp\nc:\\temp> dir /b foo.*\nfoo.cpp <-- your source file\nfoo.obj <-- result of compiling the cpp file\nfoo.pdb <-- debugging symbols (friendly names for debugging)\nfoo.exe <-- result of linking the obj with libraries\n\n",
"I agree with Iulian Șerbănoiu: Code::Blocks is a very good solution, usable both from Linux (it will use g++/gcc) and from Windows (it will use either the MS compiler or gcc)\nNote that you should at least once or twice try to compile using a good old makefile, if only to understand the logic behind headers, sources, inclusion, etc. etc..\nAs a beginner, don't forget to read books about C++ (Scott Meyers and Herb Sutter books come to the mind, when trying to learns the quirks of the language), and to study open source high profile projects to learn from their code style (they already encountered the problems you will encounter, and probably found viable solutions...).\n"
] |
[
17,
4,
3,
3,
3,
3,
2,
1,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"c++",
"compiler_construction"
] |
stackoverflow_0000077126_c++_compiler_construction.txt
|
Q:
Match conditionally upon current node value
Given the following XML:
<current>
<login_name>jd</login_name>
</current>
<people>
<person>
<first>John</first>
<last>Doe</last>
<login_name>jd</login_name>
</preson>
<person>
<first>Pierre</first>
<last>Spring</last>
<login_name>ps</login_name>
</preson>
</people>
How can I get "John Doe" from within the current/login matcher?
I tried the following:
<xsl:template match="current/login_name">
<xsl:value-of select="../people/first[login_name = .]"/>
<xsl:text> </xsl:text>
<xsl:value-of select="../people/last[login_name = .]"/>
</xsl:template>
A:
I'd define a key to index the people:
<xsl:key name="people" match="person" use="login_name" />
Using a key here simply keeps the code clean, but you might also find it helpful for efficiency if you're often having to retrieve the <person> elements based on their <login_name> child.
I'd have a template that returned the formatted name of a given <person>:
<xsl:template match="person" mode="name">
<xsl:value-of select="concat(first, ' ', last)" />
</xsl:template>
And then I'd do:
<xsl:template match="current/login_name">
<xsl:apply-templates select="key('people', .)" mode="name" />
</xsl:template>
A:
You want current() function
<xsl:template match="current/login_name">
<xsl:value-of select="../../people/person[login_name = current()]/first"/>
<xsl:text> </xsl:text>
<xsl:value-of select="../../people/person[login_name = current()]/last"/>
</xsl:template>
or a bit more cleaner:
<xsl:template match="current/login_name">
<xsl:for-each select="../../people/person[login_name = current()]">
<xsl:value-of select="first"/>
<xsl:text> </xsl:text>
<xsl:value-of select="last"/>
</xsl:for-each>
</xsl:template>
A:
If you need to access multiple users, then JeniT's <xsl:key /> approach is ideal.
Here is my alternative take on it:
<xsl:template match="current/login_name">
<xsl:variable name="person" select="//people/person[login_name = .]" />
<xsl:value-of select="concat($person/first, ' ', $person/last)" />
</xsl:template>
We assign the selected <person> node to a variable, then we use the concat() function to output the first/last names.
There is also an error in your example XML. The <person> node incorrectly ends with </preson> (typo)
A better solution could be given if we knew the overall structure of the XML document (with root nodes, etc.)
A:
I think what he actually wanted was the replacement in the match for the "current" node, not a match in the person node:
<xsl:variable name="login" select="//current/login_name/text()"/>
<xsl:template match="current/login_name">
<xsl:value-of select='concat(../../people/person[login_name=$login]/first," ", ../../people/person[login_name=$login]/last)'/>
</xsl:template>
A:
Just to add my thoughts to the stack
<xsl:template match="login_name[parent::current]">
<xsl:variable name="login" select="text()"/>
<xsl:value-of select='concat(ancestor::people/child::person[login_name=$login]/child::first/text()," ",ancestor::people/child::person[login_name=$login]/child::last/text())'/>
</xsl:template>
I always prefer to use the axes explicitly in my XPath, more verbose but clearer IMHO.
Depending on how the rest of the XML document looks (assuming this is just a fragment), you might need to constrain the reference to "ancestor::people", for example using "ancestor::people[1]" to constrain it to the first people ancestor.
|
Match conditionally upon current node value
|
Given the following XML:
<current>
<login_name>jd</login_name>
</current>
<people>
<person>
<first>John</first>
<last>Doe</last>
<login_name>jd</login_name>
</preson>
<person>
<first>Pierre</first>
<last>Spring</last>
<login_name>ps</login_name>
</preson>
</people>
How can I get "John Doe" from within the current/login matcher?
I tried the following:
<xsl:template match="current/login_name">
<xsl:value-of select="../people/first[login_name = .]"/>
<xsl:text> </xsl:text>
<xsl:value-of select="../people/last[login_name = .]"/>
</xsl:template>
|
[
"I'd define a key to index the people:\n<xsl:key name=\"people\" match=\"person\" use=\"login_name\" />\n\nUsing a key here simply keeps the code clean, but you might also find it helpful for efficiency if you're often having to retrieve the <person> elements based on their <login_name> child.\nI'd have a template that returned the formatted name of a given <person>:\n<xsl:template match=\"person\" mode=\"name\">\n <xsl:value-of select=\"concat(first, ' ', last)\" />\n</xsl:template>\n\nAnd then I'd do:\n<xsl:template match=\"current/login_name\">\n <xsl:apply-templates select=\"key('people', .)\" mode=\"name\" />\n</xsl:template>\n\n",
"You want current() function\n<xsl:template match=\"current/login_name\">\n <xsl:value-of select=\"../../people/person[login_name = current()]/first\"/>\n <xsl:text> </xsl:text>\n <xsl:value-of select=\"../../people/person[login_name = current()]/last\"/>\n</xsl:template>\n\nor a bit more cleaner:\n<xsl:template match=\"current/login_name\">\n <xsl:for-each select=\"../../people/person[login_name = current()]\">\n <xsl:value-of select=\"first\"/>\n <xsl:text> </xsl:text>\n <xsl:value-of select=\"last\"/>\n </xsl:for-each>\n</xsl:template>\n\n",
"If you need to access multiple users, then JeniT's <xsl:key /> approach is ideal.\nHere is my alternative take on it:\n<xsl:template match=\"current/login_name\">\n <xsl:variable name=\"person\" select=\"//people/person[login_name = .]\" />\n <xsl:value-of select=\"concat($person/first, ' ', $person/last)\" />\n</xsl:template>\n\nWe assign the selected <person> node to a variable, then we use the concat() function to output the first/last names.\nThere is also an error in your example XML. The <person> node incorrectly ends with </preson> (typo)\nA better solution could be given if we knew the overall structure of the XML document (with root nodes, etc.)\n",
"I think what he actually wanted was the replacement in the match for the \"current\" node, not a match in the person node:\n<xsl:variable name=\"login\" select=\"//current/login_name/text()\"/>\n\n<xsl:template match=\"current/login_name\">\n<xsl:value-of select='concat(../../people/person[login_name=$login]/first,\" \", ../../people/person[login_name=$login]/last)'/>\n\n</xsl:template>\n\n",
"Just to add my thoughts to the stack\n<xsl:template match=\"login_name[parent::current]\">\n <xsl:variable name=\"login\" select=\"text()\"/>\n <xsl:value-of select='concat(ancestor::people/child::person[login_name=$login]/child::first/text(),\" \",ancestor::people/child::person[login_name=$login]/child::last/text())'/>\n</xsl:template>\n\nI always prefer to use the axes explicitly in my XPath, more verbose but clearer IMHO.\nDepending on how the rest of the XML documents looks (assuming this is just a fragment) you might need to constrain the reference to \"ancestor::people\" for example using \"ancestor::people[1]\" to constrain to the first people ancestor.\n"
] |
[
10,
4,
1,
0,
0
] |
[] |
[] |
[
"xmlnode",
"xpath",
"xslt"
] |
stackoverflow_0000061995_xmlnode_xpath_xslt.txt
|
Q:
Performance difference between dot notation versus method call in Objective-C
You can use a standard dot notation or a method call in Objective-C to access a property of an object in Objective-C.
myObject.property = YES;
or
[myObject setProperty:YES];
Is there a difference in performance (in terms of accessing the property)? Is it just a matter of preference in terms of coding style?
A:
Dot notation for property access in Objective-C is a message send, just as bracket notation. That is, given this:
@interface Foo : NSObject
@property BOOL bar;
@end
Foo *foo = [[Foo alloc] init];
foo.bar = YES;
[foo setBar:YES];
The last two lines will compile exactly the same. The only thing that changes this is if a property has a getter and/or setter attribute specified; however, all it does is change what message gets sent, not whether a message is sent:
@interface MyView : NSView
@property(getter=isEmpty) BOOL empty;
@end
if ([someView isEmpty]) { /* ... */ }
if (someView.empty) { /* ... */ }
Both of the last two lines will compile identically.
A:
Check out this article from Cocoa is My Girlfriend. The gist of it is that there is no performance penalty for using one over the other.
However, the notation does make it more difficult to see what is happening with your variables and what your variables are.
A:
The only time you'll see a performance difference is if you do not mark a property as "nonatomic". Then @synthesize will automatically add synchronization code around the setting of your property, keeping it thread safe - but slower to set and access.
Thus mostly you probably want to define a property like:
@property (nonatomic, retain) NSString *myProp;
Personally I find the dot notation generally useful from the standpoint of you not having to think about writing correct setter methods, which is not completely trivial even for nonatomic setters because you must also remember to release the old value properly. Using template code helps but you can always make mistakes and it's generally repetitious code that clutters up classes.
A pattern to be aware of: if you define the setter yourself (instead of letting @synthesize create it) and start having other side effects of setting a value you should probably make the setter a normal method instead of calling using the property notation.
Semantically using properties appears to be direct access to the actual value to the caller and anything that varies from that should thus be done via sending a message, not accessing a property (even though they are really both sending messages).
A:
As far as I've seen, there isn't a significant performance difference between the two. I'm reasonably certain that in most cases it will be 'compiled' down to the same code.
If you're not sure, try writing a test application that does each method a million times or so, all the while timing how long it takes. That's the only way to be certain (although it may vary on different architecture.)
A:
Also read this blog post on Cocoa with Love:
http://cocoawithlove.com/2008/06/speed-test-nsmanagedobject-objc-20.html
There the author compares the speed of custom accessor and dot notations for NSManagedObject, and finds no difference. However, KVC access (setValue:forKey:) appears to be about twice as slow.
|
Performance difference between dot notation versus method call in Objective-C
|
You can use a standard dot notation or a method call in Objective-C to access a property of an object in Objective-C.
myObject.property = YES;
or
[myObject setProperty:YES];
Is there a difference in performance (in terms of accessing the property)? Is it just a matter of preference in terms of coding style?
|
[
"Dot notation for property access in Objective-C is a message send, just as bracket notation. That is, given this:\n@interface Foo : NSObject\n@property BOOL bar;\n@end\n\nFoo *foo = [[Foo alloc] init];\nfoo.bar = YES;\n[foo setBar:YES];\n\nThe last two lines will compile exactly the same. The only thing that changes this is if a property has a getter and/or setter attribute specified; however, all it does is change what message gets sent, not whether a message is sent:\n@interface MyView : NSView\n@property(getter=isEmpty) BOOL empty;\n@end\n\nif ([someView isEmpty]) { /* ... */ }\nif (someView.empty) { /* ... */ }\n\nBoth of the last two lines will compile identically.\n",
"Check out article from Cocoa is My Girlfriend. The gist of it, is that there is no performance penalty of using one over the other.\nHowever, the notation does make it more difficult to see what is happening with your variables and what your variables are.\n",
"The only time you'll see a performance difference is if you do not mark a property as \"nonatomic\". Then @synthesize will automatically add synchronization code around the setting of your property, keeping it thread safe - but slower to set and access.\nThus mostly you probably want to define a property like:\n@property (nonatomic, retain) NSString *myProp;\n\nPersonally I find the dot notation generally useful from the standpoint of you not having to think about writing correct setter methods, which is not completely trivial even for nonatomic setters because you must also remember to release the old value properly. Using template code helps but you can always make mistakes and it's generally repetitious code that clutters up classes.\nA pattern to be aware of: if you define the setter yourself (instead of letting @synthesize create it) and start having other side effects of setting a value you should probably make the setter a normal method instead of calling using the property notation.\nSemantically using properties appears to be direct access to the actual value to the caller and anything that varies from that should thus be done via sending a message, not accessing a property (even though they are really both sending messages).\n",
"As far as I've seen, there isn't a significant performance difference between the two. I'm reasonably certain that in most cases it will be 'compiled' down to the same code.\nIf you're not sure, try writing a test application that does each method a million times or so, all the while timing how long it takes. That's the only way to be certain (although it may vary on different architecture.)\n",
"Also read this blog post on Cocoa with Love:\nhttp://cocoawithlove.com/2008/06/speed-test-nsmanagedobject-objc-20.html\nThere the author compares the speed of custom accessor and dot notations for NSManagedObject, and finds no difference. However, KVC access (setValue:forKey:) appears to be about twice as slow.\n"
] |
[
21,
5,
5,
0,
0
] |
[] |
[] |
[
"objective_c",
"performance"
] |
stackoverflow_0000034674_objective_c_performance.txt
|
Q:
Graphics card memory usage in linux
What tools are available to monitor graphics card memory usage in linux?
A:
NVIDIA PerfKit has a linux version which allows real-time monitoring of various graphics card properties, including graphics card memory usage. Obviously, this only works for NVIDIA graphics cards, and it also requires the use of a special instrumented driver.
A:
If you just need to know it for 3D graphics development purposes, you may want to look into something like gDEBugger or, if you only care about NVIDIA cards, you can try NVIDIA PerfHUD. I have not used them myself, but I would expect them to track such information.
|
Graphics card memory usage in linux
|
What tools are available to monitor graphics card memory usage in linux?
|
[
"NVIDIA PerfKit has a linux version which allows real-time monitoring of various graphics card properties, including graphics card memory usage. Obviously, this only works for NVIDIA graphics cards, and it also requires the use of a special instrumented driver.\n",
"If you just need to know it for 3D graphics development purposes, you may want to look into something like gDEBugger or, if you only care about NVIDIA cards, you can try NVIDIA PerfHUD. I have not used them myself, but I would expect them to track such information.\n"
] |
[
2,
0
] |
[] |
[] |
[
"graphics",
"linux"
] |
stackoverflow_0000050042_graphics_linux.txt
|
Q:
C++ runtime knowledge of classes
I have multiple classes that all derive from a base class, now some of the derived classes will not be compiled depending on the platform. I have a class that allows me to return an object of the base class, however now all the names of the derived classes have been hard coded.
Is there a way to determine what classes have been compiled, at run-time preferably, so that I can remove the linking and instead provide dynamically loadable libraries instead.
A:
Are you looking for C++ runtime class registration? I found this link (backup).
That would probably accomplish what you want, I am not sure about the dynamically loaded modules and whether or not you can register them using the same method.
A:
I don't know what you're really trying to accomplish, but you could put a singleton constructor in each derived class's implementation file that adds the name to a list, along with a pointer to a factory. Then the list is always up to date and can create all the compiled in classes.
A:
Generally, relying on the run-time type information is a bad idea in C++. What you have described seems like the factory pattern. You may want to consider creating a special factory subclass for each platform, which would only know about classes that exist on that platform.
A:
If every class has its own dynamic library, just check if the library exists.
A:
This sounds like a place to use "compile time polymorphism" or template policy parameters.
See Modern C++ Design by Andrei Alexandrescu and his Loki implementation based on the book. See also the Loki page at wikipedia.
A:
There are nasty, compiler-specific tricks for getting at class information at runtime. Trust me, you don't want to open that can of worms.
It seems to me that the only serious way of doing this would be to use conditional compilation on each of the derived classes. Within the #ifdef block, define a new constant which contains the class name which is being compiled. Then, the names are still hard coded, but all in a central location.
A:
The names of the derived classes have to be hard-coded in C++. There's no other way to use them. Therefore, not only is there no way to automatically detect what classes have been compiled, there would be no way to use that information if it existed.
If you could specify classes at run-time based on their name, something like:
std::string foo = "Derived1";
Base * object = new "foo"; // or whatever notation you like - doesn't work in C++
then the ability to tell if "Derived1" was compiled or not would be useful. Since you have to specify the class directly, like:
Base * object = new Derived1; // does work in C++
all checking is done at compile time.
|
C++ runtime knowledge of classes
|
I have multiple classes that all derive from a base class, now some of the derived classes will not be compiled depending on the platform. I have a class that allows me to return an object of the base class, however now all the names of the derived classes have been hard coded.
Is there a way to determine what classes have been compiled, at run-time preferably, so that I can remove the linking and instead provide dynamically loadable libraries instead.
|
[
"Are you looking for C++ runtime class registration? I found this link (backup).\nThat would probably accomplish what you want, I am not sure about the dynamically loaded modules and whether or not you can register them using the same method.\n",
"I don't know what you're really trying to accomplish, but you could put a singleton constructor in each derived class's implementation file that adds the name to a list, along with a pointer to a factory. Then the list is always up to date and can create all the compiled in classes.\n",
"Generally, relying on the run-time type information is a bad idea in C++. What you have described seems like the factory pattern. You may want to consider creating a special factory subclass for each platform, which would only know about classes that exist on that platform. \n",
"If every class has its own dynamic library, just check if the library exists.\n",
"This sounds like a place to use \"compile time polymorphism\" or template policy parameters.\nSee Modern C++ Design by Andrei Alexandrescu and his Loki implementation based on the book. See also the Loki page at wikipedia.\n",
"There are nasty, compiler-specific tricks for getting at class information at runtime. Trust me, you don't want to open that can of worms.\nIt seems to me that the only serious way of doing this would be to use conditional compilation on each of the derived classes. Within the #ifdef block, define a new constant which contains the class name which is being compiled. Then, the names are still hard coded, but all in a central location.\n",
"The names of the derived classes have to be hard-coded in C++. There's no other way to use them. Therefore, not only is there no way to automatically detect what classes have been compiled, there would be no way to use that information if it existed.\nIf you could specify classes at run-time based on their name, something like:\nstd::string foo = \"Derived1\";\nBase * object = new \"foo\"; // or whatever notation you like - doesn't work in C++\nthen the ability to tell if \"Derived1\" was compiled or not would be useful. Since you have to specify the class directly, like:\nBase * object = new Derived1; // does work in C++\nall checking is done at compile time.\n"
] |
[
3,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"c++",
"class",
"runtime"
] |
stackoverflow_0000077817_c++_class_runtime.txt
|
Q:
SharePoint stream file for preview
I am looking to stream a file housed in a SharePoint 2003 document library down to the browser. Basically the idea is to open the file as a stream and then to "write" the file stream to the response, specifying the content type and content disposition headers. Content disposition is used to preserve the file name; content type of course clues the browser in about what app to open to view the file.
This works fine in the development and UAT environments. However, in the production environment things do not always work as expected, though only with IE6/IE7. FF works great in all environments.
Note that in the production environment SSL is enabled and generally used. (When SSL is not used in the production environment, the file stream is named as expected and displays properly.)
Here is a code snippet:
System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "test.doc", System.IO.FileMode.Open);
long byteNum = fs.Length;
byte[] pdfBytes = new byte[byteNum];
fs.Read(pdfBytes, 0, (int)byteNum);
Response.AppendHeader("Content-disposition", "filename=Testme.doc");
Response.CacheControl = "no-cache";
Response.ContentType = "application/msword; charset=utf-8";
Response.Expires = -1;
Response.OutputStream.Write(pdfBytes, 0, pdfBytes.Length);
Response.Flush();
Response.Close();
fs.Close();
Like I said, this code snippet works fine on the dev machine and in the UAT environment. A dialog box opens and asks to save, view or cancel Testme.doc. But in production, only when using SSL, IE6 & IE7 don't use the name of the attachment. Instead they use the name of the page that is sending the stream, testheader.apx, and then an error is thrown.
IE does provide an advanced setting "Do not save encrypted pages to disk".
I suspect this is part of the problem: the server tells the browser not to cache the file, while IE has "Do not save encrypted pages to disk" enabled.
Yes, I am aware that for larger files the code snippet above will be a major drag on memory and this implementation will be problematic. So the real final solution will not open the entire file into a single byte array, but rather will open the file as a stream and then send the file down to the client in bite-size chunks (e.g. perhaps roughly 10K in size).
Anyone else have similar experience "streaming" binary files over ssl? Any suggestions or recommendations?
A:
It might be something really simple. Believe it or not, I coded exactly the same thing today. I think the issue might be that the content disposition doesn't tell the browser it's an attachment and therefore able to be saved.
Response.AddHeader("Content-Disposition", "attachment;filename=myfile.doc");
Failing that, I've included my code below, as I know it works over https://
private void ReadFile(string URL)
{
try
{
string uristring = URL;
WebRequest myReq = WebRequest.Create(uristring);
NetworkCredential netCredential = new NetworkCredential(ConfigurationManager.AppSettings["Username"].ToString(),
ConfigurationManager.AppSettings["Password"].ToString(),
ConfigurationManager.AppSettings["Domain"].ToString());
myReq.Credentials = netCredential;
StringBuilder strSource = new StringBuilder("");
//get the stream of data
string contentType = "";
MemoryStream ms;
// Send a request to download the pdf document and then get the response
using (HttpWebResponse response = (HttpWebResponse)myReq.GetResponse())
{
contentType = response.ContentType;
// Get the stream from the server
using (Stream stream = response.GetResponseStream())
{
// Use the ReadFully method from the link above:
byte[] data = ReadFully(stream, response.ContentLength);
// Return the memory stream.
ms = new MemoryStream(data);
}
}
Response.Clear();
Response.ContentType = contentType;
Response.AddHeader("Content-Disposition", "attachment;");
// Write the memory stream containing the pdf file directly to the Response object that gets sent to the client
ms.WriteTo(Response.OutputStream);
}
catch (Exception ex)
{
throw new Exception("Error in ReadFile", ex);
}
}
A:
OK, I resolved the problem; there were several factors at play here.
Firstly this support Microsoft article was beneficial:
Internet Explorer is unable to open Office documents from an SSL Web site.
In order for Internet Explorer to open documents in Office (or any out-of-process, ActiveX document server), Internet Explorer must save the file to the local cache directory and ask the associated application to load the file by using IPersistFile::Load. If the file is not stored to disk, this operation fails.
When Internet Explorer communicates with a secure Web site through SSL, Internet Explorer enforces any no-cache request. If the header or headers are present, Internet Explorer does not cache the file. Consequently, Office cannot open the file.
Secondly, something earlier in the page processing was causing the "no-cache" header to get written. So Response.ClearHeaders needed to be added; this cleared out the no-cache header, and the output of the page needs to allow caching.
Thirdly, for good measure, I also added Response.End, so that no other processing further on in the request lifetime attempts to clear the headers I've set and re-add the no-cache header.
Fourthly, I discovered that content expiration had been enabled in IIS. I've left it enabled at the web site level, but since this one aspx page will serve as a gateway for downloading the files, I disabled it at the download page level.
So here is the code snippet that works (there are a couple of other minor changes which I believe are inconsequential):
System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "TestMe.doc", System.IO.FileMode.Open);
long byteNum = fs.Length;
byte[] fileBytes = new byte[byteNum];
fs.Read(fileBytes, 0, (int)byteNum);
Response.ClearContent();
Response.ClearHeaders();
Response.AppendHeader("Content-disposition", "attachment; filename=Testme.doc");
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.ContentType = "application/octet-stream";
Response.OutputStream.Write(fileBytes, 0, fileBytes.Length);
Response.Flush();
Response.Close();
fs.Close();
Response.End();
Keep in mind too, this is just for illustration. The real production code will include exception handling and likely read the file a chunk at a time (perhaps 10K).
Mauro, thanks for catching a detail that was missing from the code as well.
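For completeness, here is a sketch of what the chunked read mentioned above might look like; it keeps the same headers as the snippet above, while the chunk size is illustrative and exception handling is still omitted:
const int chunkSize = 10 * 1024;   // roughly 10K per write, as discussed above
byte[] buffer = new byte[chunkSize];

Response.ClearContent();
Response.ClearHeaders();
Response.AppendHeader("Content-disposition", "attachment; filename=Testme.doc");
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.ContentType = "application/octet-stream";

using (System.IO.FileStream fs = new System.IO.FileStream(
    Server.MapPath(".") + "\\TestMe.doc", System.IO.FileMode.Open, System.IO.FileAccess.Read))
{
    int bytesRead;
    while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
        Response.Flush();   // push each chunk down to the client as it is read
    }
}
Response.End();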
|
SharePoint stream file for preview
|
I am looking to stream a file housed in a SharePoint 2003 document library down to the browser. Basically the idea is to open the file as a stream and then to "write" the file stream to the response, specifying the content type and content disposition headers. Content disposition is used to preserve the file name; content type of course clues the browser in about what app to open to view the file.
This works fine in the development and UAT environments. However, in the production environment things do not always work as expected, though only with IE6/IE7. FF works great in all environments.
Note that in the production environment SSL is enabled and generally used. (When SSL is not used in the production environment, the file stream is named as expected and displays properly.)
Here is a code snippet:
System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "test.doc", System.IO.FileMode.Open);
long byteNum = fs.Length;
byte[] pdfBytes = new byte[byteNum];
fs.Read(pdfBytes, 0, (int)byteNum);
Response.AppendHeader("Content-disposition", "filename=Testme.doc");
Response.CacheControl = "no-cache";
Response.ContentType = "application/msword; charset=utf-8";
Response.Expires = -1;
Response.OutputStream.Write(pdfBytes, 0, pdfBytes.Length);
Response.Flush();
Response.Close();
fs.Close();
Like I said, this code snippet works fine on the dev machine and in the UAT environment. A dialog box opens and asks to save, view or cancel Testme.doc. But in production, only when using SSL, IE6 & IE7 don't use the name of the attachment. Instead they use the name of the page that is sending the stream, testheader.apx, and then an error is thrown.
IE does provide an advanced setting "Do not save encrypted pages to disk".
I suspect this is part of the problem: the server tells the browser not to cache the file, while IE has "Do not save encrypted pages to disk" enabled.
Yes, I am aware that for larger files the code snippet above will be a major drag on memory and this implementation will be problematic. So the real final solution will not open the entire file into a single byte array, but rather will open the file as a stream and then send the file down to the client in bite-size chunks (e.g. perhaps roughly 10K in size).
Anyone else have similar experience "streaming" binary files over ssl? Any suggestions or recommendations?
|
[
"It might be something really simple, believe it or not I coded exactly the same thing today, i think the issue might be that the content disposition doesnt tell the browser its an attachment and therefore able to be saved.\n\nResponse.AddHeader(\"Content-Disposition\", \"attachment;filename=myfile.doc\");\n\nfailing that i've included my code below as I know that works over https://\n\n\nprivate void ReadFile(string URL)\n{\n try\n {\n string uristring = URL;\n WebRequest myReq = WebRequest.Create(uristring);\n NetworkCredential netCredential = new NetworkCredential(ConfigurationManager.AppSettings[\"Username\"].ToString(), \n ConfigurationManager.AppSettings[\"Password\"].ToString(), \n ConfigurationManager.AppSettings[\"Domain\"].ToString());\n myReq.Credentials = netCredential;\n StringBuilder strSource = new StringBuilder(\"\");\n\n //get the stream of data \n string contentType = \"\";\n MemoryStream ms;\n // Send a request to download the pdf document and then get the response\n using (HttpWebResponse response = (HttpWebResponse)myReq.GetResponse())\n {\n contentType = response.ContentType;\n // Get the stream from the server\n using (Stream stream = response.GetResponseStream())\n {\n // Use the ReadFully method from the link above:\n byte[] data = ReadFully(stream, response.ContentLength);\n // Return the memory stream.\n ms = new MemoryStream(data);\n }\n }\n\n Response.Clear();\n Response.ContentType = contentType;\n Response.AddHeader(\"Content-Disposition\", \"attachment;\");\n\n // Write the memory stream containing the pdf file directly to the Response object that gets sent to the client\n ms.WriteTo(Response.OutputStream);\n }\n catch (Exception ex)\n {\n throw new Exception(\"Error in ReadFile\", ex);\n }\n}\n\n\n",
"Ok, I resolved the problem, several factors at play here.\nFirstly this support Microsoft article was beneficial:\nInternet Explorer is unable to open Office documents from an SSL Web site.\n\nIn order for Internet Explorer to open documents in Office (or any out-of-process, ActiveX document server), Internet Explorer must save the file to the local cache directory and ask the associated application to load the file by using IPersistFile::Load. If the file is not stored to disk, this operation fails.\nWhen Internet Explorer communicates with a secure Web site through SSL, Internet Explorer enforces any no-cache request. If the header or headers are present, Internet Explorer does not cache the file. Consequently, Office cannot open the file. \n\nSecondly, something earlier in the page processing was causing the \"no-cache\" header to get written. So Response.ClearHeaders needed to be added, this cleared out the no-cache header, and the output of the page needs to allow caching.\nThirdly for good measure, also added on Response.End, so that no other processing futher on in the request lifetime attempts to clear the headers I've set and re-add the no-cache header. \nFourthly, discovered that content expiration had been enabled in IIS. I've left it enabled at the web site level, but since this one aspx page will serve as a gateway for downloading the files, I disabled it at the download page level.\nSo here is the code snippet that works (there are a couple other minor changes which I believe are inconsequential):\nSystem.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(\".\") + \"\\\\\" + \"TestMe.doc\", System.IO.FileMode.Open);\nlong byteNum = fs.Length;\nbyte[] fileBytes = new byte[byteNum];\nfs.Read(fileBytes, 0, (int)byteNum);\n\nResponse.ClearContent();\nResponse.ClearHeaders();\nResponse.AppendHeader(\"Content-disposition\", \"attachment; filename=Testme.doc\");\nResponse.Cache.SetCacheability(HttpCacheability.Public);\nResponse.ContentType = \"application/octet-stream\";\nResponse.OutputStream.Write(fileBytes, 0, fileBytes.Length);\nResponse.Flush();\nResponse.Close();\nfs.Close();\nResponse.End();\n\nKeep in mind too, this is just for illustration. The real production code will include exception handling and likely read the file a chunk at a time (perhaps 10K).\nMauro, thanks for catching a detail that was missing from the code as well.\n"
] |
[
4,
3
] |
[] |
[] |
[
"asp.net",
"binary",
"sharepoint",
"ssl",
"streaming"
] |
stackoverflow_0000052702_asp.net_binary_sharepoint_ssl_streaming.txt
|
Q:
How should Rails models containing database and non-database datasources be broken up?
So I'm working on a Rails app to get the feeling for the whole thing. I've got a Product model that's a standard ActiveRecord model. However, I also want to get some additional product info from Amazon ECS. So my complete model gets some of its info from the database and some from the web service. My question is, should I:
Make two models a Product and a ProductAWS, and then tie them together at the controller level.
Have the Product ActiveRecord model contain a ProductAWS object that does all the AWS stuff?
Just add all the AWS functionality to my Product model.
???
A:
As with most things: it depends. Each of your ideas have merit. If it were me, I'd start out this way:
class Product < ActiveRecord::Base
has_one :aws_item
end
class AWSItem
belongs_to :product
end
The key questions you want to ask yourself are:
Are you only going to be offering AWS ECS items, or will you have other products? If you'll have products that have nothing to do with Amazon, don't care about ASIN, etc, then a has_one could be the way to go. Or, even better, a polymorphic relationship to a :vendable interface so you can later plug in different extension types.
Is it just behavior that is different, or is the data going to be largely different too? Because you might want to consider:
class Product < ActiveRecord::Base
end
class AWSItem < Product
def do_amazon_stuff
...
end
end
How do you want the system to perform when Amazon ECS isn't available? Should it throw exceptions? Or should you rely on a local cached version of the catalog?
class Product < ActiveRecord::Base
end
class ItemFetcher < BackgrounDRb::Rails
def do_work
# .... Make a cached copy of your ECS catalog here.
# Copy the Amazon stuff into your local model
end
end
Walk through these questions slowly and the answer will become clearer. If it doesn't, start prototyping it out. Good luck!
A:
You can use the composed_of relationship in ActiveRecord. You make a regular class with all the attributes that you manage through AWS and specify that your Product-class is composed_of this class. ActiveRecord will handle the delegation of the mapped attributes to and from this class.
See the documentation of composed_of
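For illustration only (the attribute and class names here are invented), a composed_of mapping might look roughly like this:
class Product < ActiveRecord::Base
  # map two database columns onto a value object holding the AWS data
  composed_of :aws_info,
              :class_name => "AWSInfo",
              :mapping    => [ %w(asin asin), %w(aws_title title) ]
end

class AWSInfo
  attr_reader :asin, :title
  def initialize(asin, title)
    @asin, @title = asin, title
  end
end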
A:
@Menno
What about using ActiveResource for the AWS-attributes class?
A:
If you are retrieving data from two completely different sources (ActiveRecord on one hand and the Internet on the other), there are many benefits to keeping these as separate models. As the above poster wrote, Product has_one (or has_many) :aws_item.
|
How should Rails models containing database and non-database datasources be broken up?
|
So I'm working on a Rails app to get the feeling for the whole thing. I've got a Product model that's a standard ActiveRecord model. However, I also want to get some additional product info from Amazon ECS. So my complete model gets some of its info from the database and some from the web service. My question is, should I:
Make two models a Product and a ProductAWS, and then tie them together at the controller level.
Have the Product ActiveRecord model contain a ProductAWS object that does all the AWS stuff?
Just add all the AWS functionality to my Product model.
???
|
[
"As with most things: it depends. Each of your ideas have merit. If it were me, I'd start out this way: \n class Product < ActiveRecord::Base\n has_one :aws_item\n end \n class AWSItem \n belongs_to :product\n end\n\nThe key questions you want to ask yourself are: \nAre you only going to be offering AWS ECS items, or will you have other products? If you'll have products that have nothing to do with Amazon, don't care about ASIN, etc, then a has_one could be the way to go. Or, even better, a polymorphic relationship to a :vendable interface so you can later plug in different extension types. \nIs it just behavior that is different, or is the data going to be largely different too? Because you might want to consider: \n\nclass Product < ActiveRecord::Base\nend \nclass AWSItem < Product\n def do_amazon_stuff\n ... \n end\nend\n\n\nHow do you want the system to perform when Amazon ECS isn't available? Should it throw exceptions? Or should you rely on a local cached version of the catalog?\n\nclass Product < ActiveRecord::Base\n\nend\n\nclass ItemFetcher < BackgrounDRb::Rails\n def do_work\n # .... Make a cached copy of your ECS catalog here. \n # Copy the Amazon stuff into your local model\n end\nend\n\n\nWalk through these questions slowly and the answer will become clearer. If it doesn't, start prototyping it out. Good luck!\n",
"You can use the composed_of relationship in ActiveRecord. You make a regular class with all the attributes that you manage through AWS and specify that your Product-class is composed_of this class. ActiveRecord will handle the delegation of the mapped attributes to and from this class.\nSee the documentation of composed_of\n",
"@Menno\nWhat about using ActiveResource for the AWS-attributes class?\n",
"If you are retrieving data from two completely different sources (ActiveRecord on one hand and the Internet on the other), there are many benefits to keeping these as separate models. As the above poster wrote, Product has_one (or has_many) :aws_item.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"ruby",
"ruby_on_rails"
] |
stackoverflow_0000070397_ruby_ruby_on_rails.txt
|
Q:
How do you quickly find the URL for a .NET framework method on MSDN?
How do you find the URL that represents the documentation of a .NET framework method on the MSDN website?
For example, I want to embed the URL for the .NET framework method into some comments in some code. The normal "mangled" URL that one gets searching MSDN isn't very friendly looking: http://msdn.microsoft.com/library/xd12z8ts.aspx. Using a Google search URL isn't all that pretty looking either.
What I really want a URL that can be embedded in comments that is plain and easy to read. For example,
// blah blah blah. See http://<....>/System.Byte.ToString for more information
A:
A lot of the time you can merely append the lowercase namespace reference to the domain:
http://msdn.microsoft.com/en-us/library/system.windows.application_events.aspx
Moreover, for say the .Net 2.0 version (or any specific version) you can add "(VS.80)":
http://msdn.microsoft.com/en-us/library/system.windows.forms.button(VS.80).aspx
Other versions:
.Net 1.1 -> (VS.71)
.Net 2.0 -> (VS.80)
.Net 3.0 -> (VS.85)
.Net 3.5 -> (VS.90)
Try it for a method (Control.IsInputChar) like so:
http://msdn.microsoft.com/en-us/library/system.windows.forms.control.isinputchar.aspx
A:
It's probably quickest to just type it into Google in my experience.
EDIT:
Now that you've edited your post to clarify what you actually meant I would say that embedding URLs in your comments is nice but you really have no guarantees that either the mangled URL or the pretty one will exist in future.
A:
It's simple -- just add the name of the method to the end of http://msdn.microsoft.com//library/.
For example, to find the URL for the System.Byte.ToString method go to http://msdn.microsoft.com//library/System.Byte.ToString
A:
In my experience, googling it works faster than the MSDN search.
A:
I find that googling "msdn <member name>" does it.
i.e. - msdn System.Web.UI.WebControls.Repeater.ItemDataBound
A:
Create a Search engine for MSDN Library in Firefox
Like this perhaps?
|
How do you quickly find the URL for a .NET framework method on MSDN?
|
How do you find the URL that represents the documentation of a .NET framework method on the MSDN website?
For example, I want to embed the URL for the .NET framework method into some comments in some code. The normal "mangled" URL that one gets searching MSDN isn't very friendly looking: http://msdn.microsoft.com/library/xd12z8ts.aspx. Using a Google search URL isn't all that pretty looking either.
What I really want a URL that can be embedded in comments that is plain and easy to read. For example,
// blah blah blah. See http://<....>/System.Byte.ToString for more information
|
[
"A lot of the time you can merely append the lowercase namespace reference to the domain:\nhttp://msdn.microsoft.com/en-us/library/system.windows.application_events.aspx\n\nMoreover, for say the .Net 2.0 version (or any specific version) you can add \"(VS.80)\":\nhttp://msdn.microsoft.com/en-us/library/system.windows.forms.button(VS.80).aspx\n\nOther versions:\n\n.Net 1.1 -> (VS.71)\n.Net 2.0 -> (VS.80)\n.Net 3.0 -> (VS.85)\n.Net 3.5 -> (VS.90)\n\nTry it for a method (Control.IsInputChar) like so:\nhttp://msdn.microsoft.com/en-us/library/system.windows.forms.control.isinputchar.aspx\n",
"It's probably quickest to just type it into Google in my experience. \nEDIT:\nNow that you've edited your post to clarify what you actually meant I would say that embedding URLs in your comments is nice but you really have no guarantees that either the mangled URL or the pretty one will exist in future.\n",
"It's simple -- just add the name of the method to the end of http://msdn.microsoft.com//library/.\nFor example, to find the URL for the System.Byte.ToString method go to http://msdn.microsoft.com//library/System.Byte.ToString\n",
"In my experiecne, googling it works faster than msdn search.\n",
"I find that googling \"msdn \" does it.\nie - msdn System.Web.UI.WebControls.Repeater.ItemDataBound\n",
"Create a Search engine for MSDN Library in Firefox\nLike this perhaps?\n"
] |
[
5,
4,
2,
1,
1,
0
] |
[] |
[] |
[
".net",
"msdn"
] |
stackoverflow_0000075145_.net_msdn.txt
|
Q:
How do I create a regex in Emacs for exactly 3 digits?
I want to create a regexp in Emacs that matches exactly 3 digits. For example, I want to match the following:
123
345
789
But not
1234
12
12 23
If I use [0-9]+ I match any single string of digits. I thought [0-9]{3} would work, but when tested in re-builder it doesn't match anything.
A:
If you're entering the regex interactively, and want to use {3}, you need to use backslashes to escape the curly braces. If you don't want to match any part of the longer strings of numbers, use \b to match word boundaries around the numbers. This leaves:
\b[0-9]\{3\}\b
For those wanting more information about \b, see the docs:
matches the empty string, but only at the beginning or end of a word. Thus, \bfoo\b
matches any occurrence of foo as a separate word.
\bballs?\b matches ball or balls as a separate word.
\b matches at the beginning or end of the buffer regardless of what text appears next to it.
If you do want to use this regex from elisp code, as always, you must escape the backslashes one more time. For example:
(highlight-regexp "\\b[0-9]\\{3\\}\\b")
A:
[0-9][0-9][0-9], [0-9]{3} or \d{3} don't work because they also match "1234".
So it depends on what the delimiter is.
If it's in a variable, then you can do ^/[0-9]{3}/$. If it's delimited by whitespace you could do \w+[0-9]{3}\w+
A:
You should use this:
"^\d{3}$"
A:
As others point out, you need to match more than just the three digits. Before the digits you have to have either a line-start or something that is not a digit. If emacs supports \D, use it. Otherwise use the set [^0-9].
In a nutshell:
(^|\D)\d{3}(\D|$)
A:
When experimenting with regular expressions in Emacs, I find regex-tool quite useful:
ftp://ftp.newartisans.com/pub/emacs/regex-tool.el
Not an answer (the question is answered already), just a general tip.
A:
[0-9][0-9][0-9] will match a minimum of 3 numbers, so as Joe mentioned, you have to (at a minimum) include \b or anything else that will delimit the numbers. Probably the most sure-fire method is:
[^0-9][0-9][0-9][0-9][^0-9]
|
How do I create a regex in Emacs for exactly 3 digits?
|
I want to create a regexp in Emacs that matches exactly 3 digits. For example, I want to match the following:
123
345
789
But not
1234
12
12 23
If I use [0-9]+ I match any single string of digits. I thought [0-9]{3} would work, but when tested in re-builder it doesn't match anything.
|
[
"If you're entering the regex interactively, and want to use {3}, you need to use backslashes to escape the curly braces. If you don't want to match any part of the longer strings of numbers, use \\b to match word boundaries around the numbers. This leaves:\n\\b[0-9]\\{3\\}\\b\n\nFor those wanting more information about \\b, see the docs:\n\nmatches the empty string, but only at the beginning or end of a word. Thus, \\bfoo\\b\n matches any occurrence of foo as a separate word. \n \\bballs?\\b matches ball or balls as a separate word.\n \\b matches at the beginning or end of the buffer regardless of what text appears next to it. \n\nIf you do want to use this regex from elisp code, as always, you must escape the backslashes one more time. For example:\n(highlight-regexp \"\\\\b[0-9]\\\\{3\\\\}\\\\b\")\n\n",
"[0-9][0-9][0-9], [0-9]{3} or \\d{3} don't work because they also match \"1234\".\nSo it depends on what the delimiter is.\nIf it's in a variable, then you can do ^/[0-9]{3}/$. If it's delimited by whitespace you could do \\w+[0-9]{3}\\w+\n",
"You should use this: \n\"^\\d{3}$\"\n\n",
"As others point out, you need to match more than just the three digits. Before the digits you have to have either a line-start or something that is not a digit. If emacs supports \\D, use it. Otherwise use the set [^0-9].\nIn a nutshell:\n(^|\\D)\\d{3}(\\D|$)\n\n",
"When experimenting with regular expressions in Emacs, I find regex-tool quite useful: \nftp://ftp.newartisans.com/pub/emacs/regex-tool.el\nNot an answer (the question is answered already), just a general tip. \n",
"[0-9][0-9][0-9] will match a minimum of 3 numbers, so as Joe mentioned, you have to (at a minimum) include \\b or anything else that will delimit the numbers. Probably the most sure-fire method is:\n[^0-9][0-9][0-9][0-9][^0-9]\n"
] |
[
50,
10,
4,
3,
2,
0
] |
[
"It's pretty simple:\n[0-9][0-9][0-9]\n\n"
] |
[
-3
] |
[
"emacs",
"regex"
] |
stackoverflow_0000069591_emacs_regex.txt
|
Q:
How can you disable the Windows' "X" close button in the upper right-hand corner for a web-based program that is displayed in IE7?
We are using a software program at our school to enter IEPs (Individualized Education Programs). When entering goals and objectives for a student, users are provided with a Save and a Close button. Close is meant for users not wishing to save the goal they just chose. However, our users are sometimes wanting to back out of the screen and close the Window by clicking on the X in the upper right hand corner. Unfortunately, this somehow corrupts data and the user has difficulty later entering goals. The software company tells us to educate our staff not to click on the X and that there is no way to disable it. The software is web-based and our school has standardized on IE7.
A:
If it's web based, then you're probably just running a webpage in Internet Explorer. If that's the case, I'd recommend IE's kiosk mode.
If you need something a bit more heavyweight, Public Web Browser is a good and cheap choice that I've had good experiences with.
A:
There is no way to disable the close button on the window (can you imagine!? ad popups that never go away! eek!).
However, you can catch it and do something useful (like click the "close" button on the form). See:
http://blogs.x2line.com/al/archive/2004/09/15/561.aspx
A:
A lot of browsers have a full-screen mode (F11 in Firefox), where they take up the entire screen real estate, hiding any other UI elements, including the top bar (at least for Windows, dunno about *nix). This is a very simple solution, but afaik there's no way to disable the [x] for windows in general; you'd have to find a browser that does not use the default Windows look and doesn't implement its own [x] in the corner.
|
How can you disable the Windows' "X" close button in the upper right-hand corner for a web-based program that is displayed in IE7?
|
We are using a software program at our school to enter IEPs (Individualized Education Programs). When entering goals and objectives for a student, users are provided with a Save and a Close button. Close is meant for users not wishing to save the goal they just chose. However, our users are sometimes wanting to back out of the screen and close the Window by clicking on the X in the upper right hand corner. Unfortunately, this somehow corrupts data and the user has difficulty later entering goals. The software company tells us to educate our staff not to click on the X and that there is no way to disable it. The software is web-based and our school has standardized on IE7.
|
[
"If it's web based, then you're probably just running a webpage in Internet Explorer. If that's the case, I'd recommend IE's kiosk mode.\nIf you need something a bit more heavyweight, Public Web Browser is a good and cheap choice that I've had good experiences with.\n",
"There is no way to disable the close button on the window (can you imagine!? ad popups that never go away! eek!). \nHowever, you can catch it and do something useful (like click the \"close\" button on the form). See: \nhttp://blogs.x2line.com/al/archive/2004/09/15/561.aspx\n",
"A lot of browsers have a full-screen mode (F11 in Firefox), where they take up the entire screen real estate, hiding any other UI elements, including the top bar (at least for Windows, dunno about *nix). This is a very simple solution, but afaik there's no way to disable the [x] for windows in general, you'd have to find a browser that does not use the default Windows look and doesn't implement it's own [x] in the corner.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"internet_explorer",
"windows"
] |
stackoverflow_0000077993_internet_explorer_windows.txt
|
Q:
Extending/Merging VB Arrays
I have a class with a public array of bytes. Let's say it's
Public myBuff as byte()
Events within the class get chunks of data in a byte array. How do I tell the event code to stick the chunk it gets on the end? Let's say
Private Sub GetChunk
Dim chunk as byte
'... get stuff in chunk
Me.myBuff += chunk '(stick chunk on end of public array)
End sub
Or am I totally missing the point?
A:
if i remember right, in vb you want to redim with preserve to grow an array.
A:
If the array is small, and new data is infrequently added, an easy way would be to:
public BufferSize as long 'or you can just use Ubound(mybuff), I prefer a tracker var tho
public MyBuff
private sub GetChunk()
dim chunk as byte
'get stuff
BufferSize=BufferSize+1
redim preserve MyBuff(buffersize)
mybuff(buffersize) = chunk
end sub
if chunk is an array of bytes, it would look more like:
buffersize=buffersize+ubound(chunk) 'or if it's a fixed-size chunk, just use that number
redim preserve mybuff(buffersize)
for k%=0 to ubound(chunk) 'copy new information to buffersize
mybuff(k%+buffersize-ubound(chunk))=chunk(k%)
next
if you will be doing this frequently (say, many times per second) you'd want to do something like how the StringBuilder class works:
public BufSize&,BufAlloc& 'initialize bufalloc to 1 or a number >= bufsize
public MyBuff() as byte
sub getdata()
bufsize=bufsize+ubound(chunk)
if bufsize>bufalloc then
bufalloc=bufalloc*2
redim preserve mybuff(bufalloc)
end if
for k%=0 to ubound(chunk) 'copy new information to buffersize
mybuff(k%+bufsize-ubound(chunk))=chunk(k%)
next
end sub
that basically doubles the memory allocated to mybuf each time the pointer passes the end of the buffer. this means much less shuffling around of memory.
A:
You'll be constantly using the ReDim keyword, which is extremely inefficient.
Are you using .Net? If so, consider using a System.Collections.Generic.List(Of Byte) instead. You can use its .AddRange() method to append your bytes, and its .ToArray() method to get an array back out if you really need one.
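A minimal sketch of that approach (member names are just illustrative):
Public myBuff As New System.Collections.Generic.List(Of Byte)

Private Sub GetChunk(ByVal chunk As Byte())
    ' ... get stuff in chunk, then append it to the end of the buffer
    myBuff.AddRange(chunk)
End Sub

Public Function BufferAsArray() As Byte()
    Return myBuff.ToArray()   ' only materialize an array when one is actually needed
End Function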
A:
Your question doesn't seem to be very clear. You should probably not have the array of bytes as public. It should probably be private and you should provide a set of public functions that allow users of the class to perform operations against the array.
A:
I think you might be looking for something other than an array. If you are trying to gradually extend the amount of data frequently, you should use a dynamic data structure such as ArrayList. This has an Add method which adds the specific object or value to the array without concerns for space. It also has a nifty ToArray() method that you can use.
If you are trying to use an array for specific reasons (performance, I guess), use ReDim Preserve array(newSize).
|
Extending/Merging VB Arrays
|
I have a class with a public array of bytes. Let's say it's
Public myBuff as byte()
Events within the class get chunks of data in a byte array. How do I tell the event code to stick the chunk it gets on the end? Let's say
Private Sub GetChunk
Dim chunk as byte
'... get stuff in chunk
Me.myBuff += chunk '(stick chunk on end of public array)
End sub
Or am I totally missing the point?
|
[
"if i remember right, in vb you want to redim with preserve to grow an array.\n",
"If the array is small, and new data is infrequently added, an easy way would be to:\npublic BufferSize as long 'or you can just use Ubound(mybuff), I prefer a tracker var tho\npublic MyBuff\n\nprivate sub GetChunk()\ndim chunk as byte\n'get stuff\nBufferSize=BufferSize+1\n\nredim preserve MyBuff(buffersize)\nmybuff(buffersize) = chunk\nend sub\n\nif chunk is an array of bytes, it would look more like:\nbuffersize=buffersize+ubound(chunk) 'or if it's a fixed-size chunk, just use that number\nredim preserve mybuff(buffersize)\nfor k%=0 to ubound(chunk) 'copy new information to buffersize\n mybuff(k%+buffersize-ubound(chunk))=chunk(k%)\nnext\n\nif you will be doing this frequently (say, many times per second) you'd want to do something like how the StringBuilder class works:\npublic BufSize&,BufAlloc& 'initialize bufalloc to 1 or a number >= bufsize\npublic MyBuff() as byte\n\nsub getdata()\nbufsize=bufsize+ubound(chunk)\nif bufsize>bufalloc then\n bufalloc=bufalloc*2\n redim preserve mybuff(bufalloc)\nend if\nfor k%=0 to ubound(chunk) 'copy new information to buffersize\n mybuff(k%+bufsize-ubound(chunk))=chunk(k%)\nnext\nend sub\n\nthat basically doubles the memory allocated to mybuf each time the pointer passes the end of the buffer. this means much less shuffling around of memory.\n",
"You'll be constantly using the ReDim keyword, which is extremely inefficient.\nAre you using .Net? If so, consider using a System.Collections.Generic.List(Of Byte) instead. You can use it's .AddRange() method to append your bytes, and it's .ToArray() method to get an array back out if you really need one.\n",
"Your question doesn't seem to be very clear. You should probably not have the array of bytes as public. It should probably be private and you should provide a set of public functions that allow users of the class to perform operations against the array.\n",
"I think you might be looking for something other then an array. If you are trying to gradually extend the amount of data frequently, you should use a dynamic data structure such asArrayList. This has an Add method which adds the specific object or value to the array without concerns for space. It also has a nifty ToArray() method that you can use.\nIf you are trying to use an array for specific reasons (performance, I guess), use ReDim Preserve array(newSize).\n"
] |
[
1,
1,
0,
0,
0
] |
[] |
[] |
[
"arrays",
"concatenation",
"vba"
] |
stackoverflow_0000077382_arrays_concatenation_vba.txt
|
Q:
Python and "re"
A tutorial I have on regex in Python explains how to use the re module. I wanted to grab the URL out of an A tag, so knowing regex I wrote the correct expression, tested it in my regex testing app of choice, and ensured it worked. When placed into Python it failed:
result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness")
# result is None`
After much head scratching I found out the issue: it automatically expects your pattern to be at the start of the string. I have found a fix, but I would like to know how to change:
regex = ".*(a_regex_of_pure_awesomeness)"
into
regex = "a_regex_of_pure_awesomeness"
Okay, it's a standard URL regex but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny.
A:
In Python, there's a distinction between "match" and "search"; match only looks for the pattern at the start of the string, and search looks for the pattern starting at any location within the string.
Python regex docs
Matching vs searching
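A quick illustration of the difference, using a made-up snippet of HTML and a simplified href pattern:
import re

html = '<a href="http://example.com/page">link</a>'
pattern = r'href="([^"]+)"'

print(re.match(pattern, html))              # None - the pattern is not at the start of the string
print(re.search(pattern, html).group(1))    # http://example.com/page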
A:
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(your_html)
for a in soup.findAll('a', href=True):
# do something with `a` w/ href attribute
print a['href']
A:
>>> import re
>>> pattern = re.compile("url")
>>> string = " url"
>>> pattern.match(string)
>>> pattern.search(string)
<_sre.SRE_Match object at 0xb7f7a6e8>
A:
Are you using the re.match() or re.search() method? My understanding is that re.match() assumes a "^" at the beginning of your expression and will only search at the beginning of the text, while re.search() acts more like the Perl regular expressions and will only match the beginning of the text if you include a "^" at the beginning of your expression. Hope that helps.
|
Python and "re"
|
A tutorial I have on regex in Python explains how to use the re module. I wanted to grab the URL out of an A tag, so knowing regex I wrote the correct expression, tested it in my regex testing app of choice, and ensured it worked. When placed into Python it failed:
result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness")
# result is None`
After much head scratching I found out the issue: it automatically expects your pattern to be at the start of the string. I have found a fix, but I would like to know how to change:
regex = ".*(a_regex_of_pure_awesomeness)"
into
regex = "a_regex_of_pure_awesomeness"
Okay, it's a standard URL regex but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny.
|
[
"In Python, there's a distinction between \"match\" and \"search\"; match only looks for the pattern at the start of the string, and search looks for the pattern starting at any location within the string.\nPython regex docs\nMatching vs searching\n",
"from BeautifulSoup import BeautifulSoup \n\nsoup = BeautifulSoup(your_html)\nfor a in soup.findAll('a', href=True):\n # do something with `a` w/ href attribute\n print a['href']\n\n",
">>> import re\n>>> pattern = re.compile(\"url\")\n>>> string = \" url\"\n>>> pattern.match(string)\n>>> pattern.search(string)\n<_sre.SRE_Match object at 0xb7f7a6e8>\n\n",
"Are you using the re.match() or re.search() method? My understanding is that re.match() assumes a \"^\" at the beginning of your expression and will only search at the beginning of the text, while re.search() acts more like the Perl regular expressions and will only match the beginning of the text if you include a \"^\" at the beginning of your expression. Hope that helps.\n"
] |
[
20,
4,
3,
1
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0000072393_python_regex.txt
|
Q:
Are there any projects for replacing HTML and the current javascript?
Google created protocol buffers as a replacement for the bulky XML method of data transmission. Faster XML processing was just not good enough. Most of the web has grown up as a hodge podge of different technologies that have been integrated to work within the browser or to generate HTML. JavaScript is separate from HTML. Flash and Silverlight are plugged in the mix as well. We can get the job done with the tools we have, but can we do better?
Before you mention standards, (which are a good thing to have), think about evolutionary change versus revolutionary change. If Henry Ford asked people about a better way to get around they would have said they wanted a faster horse. (Webkit is a faster horse.)
I am hoping there is a project and I just haven’t read of it.
A:
There are all sorts of "replacements", and have been since before the web existed. The problem with talking about a "replacement" for HTML+JS is that the conversation generally starts out of frustration with one or more specific aspects of the current implementations:
"i hate the lack of presentation-specific tags, can we replace it?"
"i hate the lack of semantic tags, can we replace it?"
"i hate the CSS box model, can we replace it?"
"i hate the sub-par printing support, can we replace it?"
"i hate the hacks required to get glitzy animation, can we replace it?"
...
Someone wants a faster horse, someone wants a tireless horse, someone wants a stronger horse, someone wants a horse that smells like burning petroleum instead of, uh, horse... Put all the ideas together and you might get a Model-T... or you might get something out of a Jules Verne / steampunk nightmare.
For every revolution that results in something better, there are scores that produce bloodshed followed by more of the same. Be careful what you wish for...
A:
HTML+CSS+JS will be replaced by HTML+CSS+SVG+JS, which will be replaced by incrementally more modern versions of the former, sometimes with something new added in the mix. The web technologies of today are very different from the web technologies of 10 years ago. You can expect the landscape will still be different in ten years.
Look where the alpha geeks look. Well, they are all looking at REST designs with lots of Javascript and CSS.
The various "web replacement" technologies promoted by Microsoft, Adobe, Sun, etc. are only here because those companies hope to get people back into lock-in. Pray that they do not succeed.
The web technologies are not by themselves a "hodge-podge". The hodge-podge aspect comes from multiple implementations with their own bugs and quirks. In other words, it comes from open formats implemented in a competitive market.
A:
You mentioned two alternatives already: Silverlight and Flash. It's safe to assume that ~95% people have Flash Player installed; Silverlight has also seen quite good adoption in this short amount of time.
But jumping on the eye-candy bandwagon isn't necessarily going to make your site better. There will be issues with accessibility, search engines not being able to properly index your content, users not being able to bookmark pages they want to get back to. Rich graphics pages, although vector, take more to load and can often turn out just annoying (where the goal was visual appeal, the opposite happened). All these things can be worked around or even fixed, but it takes much more resources compared to using standards.
All these things would apply even if there was some new technology that we "haven't read of".
HTTP is as slow as the network connection is, not by poor design. It's actually very efficient. HTML processing is also blazing fast, considering browsers performed well enough for people using them even on sites with terrible, fat table-based markup. The JavaScript scene is looking very bright; there is increased attention on the new version of the specs, multiple implementations, and incredible speed advantages in modern browsers over the course of the last year. And don't think only WebKit is fast -- Opera and Mozilla have never fallen behind.
If you observe what was happening on the Internet in the last 20 years, you would have noticed that proprietary, vendor-dictated technologies eventually got pushed out by open standards. The only reason Flash Player survived was that JavaScript and open video codecs needed some time to get developed. Now that they are here, I think the same thing is going to happen all over again.
A:
You might be interested in Sun's Lively.
There will also probably be more tools that compile to HTML+JavaScript, so you won't have to deal with them directly (like GWT.) There are also projects that try compile other languages to work in the browser (like HotRuby).
A:
so what you're looking for is a paradigm shift in web technology. it's always tough to imagine how that will look - maybe new tech will become a more immersive experience, incorporating more senses than just sight and sound (touch is a good candidate), as well as something that allows for full-range-motion interaction rather than the 2D 'point and click' mouse interface.
|
Are there any projects for replacing HTML and the current javascript?
|
Google created protocol buffers as a replacement for the bulky XML method of data transmission. Faster XML processing was just not good enough. Most of the web has grown up as a hodge podge of different technologies that have been integrated to work within the browser or to generate HTML. JavaScript is separate from HTML. Flash and Silverlight are plugged in the mix as well. We can get the job done with the tools we have, but can we do better?
Before you mention standards, (which are a good thing to have), think about evolutionary change versus revolutionary change. If Henry Ford asked people about a better way to get around they would have said they wanted a faster horse. (Webkit is a faster horse.)
I am hoping there is a project and I just haven’t read of it.
|
[
"There are all sorts of \"replacements\", and have been since before the web existed. The problem with talking about a \"replacement\" for HTML+JS is that the conversation generally starts out of frustration with one or more specific aspects of the current implementations: \n\n\"i hate the lack of presentation-specific tags, can we replace it?\"\n\"i hate the lack of semantic tags, can we replace it?\"\n\"i hate the CSS box model, can we replace it?\"\n\"i hate the sub-par printing support, can we replace it?\"\n\"i hate the hacks required to get glitzy animation, can we replace it?\"\n...\n\nSomeone wants a faster horse, someone wants a tireless horse, someone wants a stronger horse, someone wants a horse that smells like burning petroleum instead of, uh, horse... Put all the ideas together and you might get a Model-T... or you might get something out of a Jules Verne / steampunk nightmare. \nFor every revolution that results in something better, there are scores that produce bloodshed followed by more of the same. Be careful what you wish for...\n",
"HTML+CSS+JS will be replaced by HTML+CSS+SVG+JS, which will be replaced by incrementally more modern versions of the former, sometimes with something new added in the mix. The web technologies of today are very different of the web technologies of 10 years ago. You can expect the landscape will still be different in ten years.\nLook where the alpha geeks look. Well, they are all looking at REST designs with lots of Javascript and CSS.\nThe various \"web replacement\" technologies promoted by Microsoft, Adobe, Sun, etc. are only here because those companies hope to get people back into lock-in. Pray that they do not succeed.\nThe web technologies are not be themselve a \"hodge-podge\". The hodge-podge aspect comes from multiple implementations with their own bugs and quirks. In other words, it comes from open formats implemented in a competitive market.\n",
"You mentioned two alternatives already: Silverlight and Flash. It's safe to assume that ~95% people have Flash Player installed; Silverlight has also seen quite good adoption in this short amount of time. \nBut jumping on the eye-candy bandwagon isn't necessarily going to make your site better. There will be issues with accessibility, search engines not being to properly index your content, users not being to bookmark pages they want to get back to. Rich graphics pages, although vector, take more to load and can often turn out just annoying (where the goal was visual appeal, the opposite happened). All these things can be worked around or even fixed, but it takes much more resources compared to using standards.\nAll these things would apply even if there was some new technology that we \"haven't read of\".\nHTTP is as slow as network connection is, not by poor design. It's actually very efficient. HTML processing is also blazing fast, considering browsers performed well enough for people using them even on sites with terrible, fat table-based markup. JavaScript scene is looking very bright; there is increased attention on the new version of the specs, multiple implementations, incredible speed advantages in modern browsers over the course of last year. And don't think only WebKit is fast -- Opera and Mozilla have never fallen behind.\nIf you observe what was happening on the Internet in the last 20 years, you would have noticed that proprietary, vendor-dictated technologies eventually got pushed out by open standards. The only reason Flash Player survived was that JavaScript and open video codecs needed some time to get developed. Now that they are here, I think the same thing is going to happen all over again.\n",
"You might be interested in Sun's Lively.\nThere will also probably be more tools that compile to HTML+JavaScript, so you won't have to deal with them directly (like GWT.) There are also projects that try compile other languages to work in the browser (like HotRuby).\n",
"so what you're looking for is a paradigm shift in web technology. it's always tough to imagine how that will look - maybe new tech will become a more immersive experience, incorporating more senses then just sight and sound (touch is a good candidate), as well as something that allows for full-range-motion interaction rather then the 2D 'point and click' mouse interface.\n"
] |
[
3,
2,
0,
0,
0
] |
[] |
[] |
[
"ajax",
"flash",
"html",
"javascript",
"silverlight"
] |
stackoverflow_0000074786_ajax_flash_html_javascript_silverlight.txt
|
Q:
Interpreting Stacks in Windows Minidumps
As someone who is just starting to learn the intricacies of computer debugging, for the life of me, I can't understand how to read the Stack Text of a dump in Windbg. I've no idea of where to start on how to interpret them or how to go about it. Can anyone offer direction to this poor soul?
ie (the only dump I have on hand with me actually)
>b69dd8f0 bfa1e255 016d2fc0 89efc000 00000040 nv4_disp+0x48b94
b69dd8f4 016d2fc0 89efc000 00000040 00000006 nv4_disp+0x49255
b69dd8f8 89efc000 00000040 00000006 bfa1dcc0 0x16d2fc0
b69dd8fc 00000000 00000006 bfa1dcc0 e1e71018 0x89efc000
I know the problem is to do with the Nvidia display driver, but what I want to know is how to actually read the stack (eg, what is b69dd8f4?) :-[
A:
First, you need to have the proper symbols configured. The symbols will allow you to match memory addresses to function names. In order to do this you have to create a local folder in your machine in which you will store a local cache of symbols (for example: C:\symbols). Then you need to specify the symbols server path. To do this just go to: File > Symbol File Path and type:
SRV*c:\symbols*http://msdl.microsoft.com/download/symbols
You can find more information on how to correctly configure the symbols here.
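If you prefer the command line over the menu, the equivalent WinDbg commands would be roughly (the local cache folder is an assumption):
.sympath SRV*c:\symbols*http://msdl.microsoft.com/download/symbols
.reload /f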
Once you have properly configured the Symbols server you can open the minidump from: File > Open Crash Dump.
Once the minidump is opened it will show you on the left side of the command line the thread that was executing when the dump was generated. If you want to see what this thread was executing type:
kpn 200
This might take some time the first time you execute it, since it has to download the necessary public Microsoft-related symbols. Once all the symbols are downloaded you'll get something like:
01 MODULE!CLASS.FUNCTIONNAME1(...)
02 MODULE!CLASS.FUNCTIONNAME2(...)
03 MODULE!CLASS.FUNCTIONNAME3(...)
04 MODULE!CLASS.FUNCTIONNAME4(...)
Where:
THE FIRST NUMBER: Indicates the frame number
MODULE: The DLL that contains the code
CLASS: (Only on C++ code) will show you the class that contains the code
FUNCTIONNAME: The method that was called. If you have the correct symbols you will also see the parameters.
You might also see something like
01 MODULE!+989823
This indicates that you don't have the proper Symbol for this DLL and therefore you are only able to see the method offset.
So, what is a callstack?
Imagine you have this code:
void main()
{
method1();
}
void method1()
{
method2();
}
int method2()
{
return 20/0;
}
In this code method2 basically will throw an Exception since we are trying to divide by 0 and this will cause the process to crash. If we got a minidump when this occurred we would see the following callstack:
01 MYDLL!method2()
02 MYDLL!method1()
03 MYDLL!main()
You can follow from this callstack that "main" called "method1" that then called "method2" and it failed.
In your case you've got this callstack (which I guess is the result of running the "kb" command)
b69dd8f0 bfa1e255 016d2fc0 89efc000 00000040 nv4_disp+0x48b94
b69dd8f4 016d2fc0 89efc000 00000040 00000006 nv4_disp+0x49255
b69dd8f8 89efc000 00000040 00000006 bfa1dcc0 0x16d2fc0
b69dd8fc 00000000 00000006 bfa1dcc0 e1e71018 0x89efc000
The first column indicates the Child Frame Pointer, the second column indicates the Return address of the method that is executing, the next three columns show the first 3 parameters that were passed to the method, and the last part is the DLL name (nv4_disp) and the offset of the method that is being executed (+0x48b94). Since you don't have the symbols you are not able to see the method name. I doubt that NVIDIA offers public access to their symbols, so I guess you can't get much information from here.
I recommend you run "kpn 200". This will show you the full callstack and you might be able to see the origin of the method that caused this crash (if it was a Microsoft DLL you should have the proper symbols in the steps that I provided you).
At least you know it's related to a NVIDIA bug ;-) Try upgrading the DLLs of this driver to the latest version.
In case you want to learn more about WinDBG debugging I recommend the following links:
If broken it is, fix it you should
TechNet Webcast: Windows Hang and Crash Dump Analysis
Delicious.com popular links on WinDBG
A:
A really good tutorial on interpreting a stack trace is available here:
http://www.codeproject.com/KB/debug/cdbntsd2.aspx
However, even with a tutorial like that it can be very difficult (or near impossible) to interpret a stack dump without the proper symbols available/loaded.
A:
It might help to include an example of the stack you are trying to read. A good tip is to ensure you have correct debug symbols for all modules shown in the stack. This includes symbols for modules in the OS, Microsoft has made their symbol server publicly available.
|
Interpreting Stacks in Windows Minidumps
|
As someone who is just starting to learn the intricacies of computer debugging, for the life of me, I can't understand how to read the Stack Text of a dump in Windbg. I've no idea of where to start on how to interpret them or how to go about it. Can anyone offer direction to this poor soul?
ie (the only dump I have on hand with me actually)
>b69dd8f0 bfa1e255 016d2fc0 89efc000 00000040 nv4_disp+0x48b94
b69dd8f4 016d2fc0 89efc000 00000040 00000006 nv4_disp+0x49255
b69dd8f8 89efc000 00000040 00000006 bfa1dcc0 0x16d2fc0
b69dd8fc 00000000 00000006 bfa1dcc0 e1e71018 0x89efc000
I know the problem is to do with the Nvidia display driver, but what I want to know is how to actually read the stack (eg, what is b69dd8f4?) :-[
|
[
"First, you need to have the proper symbols configured. The symbols will allow you to match memory addresses to function names. In order to do this you have to create a local folder in your machine in which you will store a local cache of symbols (for example: C:\\symbols). Then you need to specify the symbols server path. To do this just go to: File > Symbol File Path and type:\nSRV*c:\\symbols*http://msdl.microsoft.com/download/symbols\n\nYou can find more information on how to correctly configure the symbols here.\nOnce you have properly configured the Symbols server you can open the minidump from: File > Open Crash Dump. \nOnce the minidump is opened it will show you on the left side of the command line the thread that was executing when the dump was generated. If you want to see what this thread was executing type:\nkpn 200\n\nThis might take some time the first you execute it since it has to download the necessary public Microsoft related symbols the first time. Once all the symbols are downloaded you'll get something like:\n01 MODULE!CLASS.FUNCTIONNAME1(...)\n02 MODULE!CLASS.FUNCTIONNAME2(...)\n03 MODULE!CLASS.FUNCTIONNAME3(...)\n04 MODULE!CLASS.FUNCTIONNAME4(...)\n\nWhere:\n\nTHE FIRST NUMBER: Indicates the frame number\nMODULE: The DLL that contains the code\nCLASS: (Only on C++ code) will show you the class that contains the code\nFUNCTIONAME: The method that was called. If you have the correct symbols you will also see the parameters.\n\nYou might also see something like\n01 MODULE!+989823\n\nThis indicates that you don't have the proper Symbol for this DLL and therefore you are only able to see the method offset.\nSo, what is a callstack? \nImagine you have this code:\nvoid main()\n{\n method1();\n}\n\nvoid method1()\n{\n method2();\n}\n\nint method2()\n{\n return 20/0;\n}\n\nIn this code method2 basically will throw an Exception since we are trying to divide by 0 and this will cause the process to crash. If we got a minidump when this occurred we would see the following callstack:\n01 MYDLL!method2()\n02 MYDLL!method1()\n03 MYDLL!main()\n\nYou can follow from this callstack that \"main\" called \"method1\" that then called \"method2\" and it failed.\nIn your case you've got this callstack (which I guess is the result of running \"kb\" command)\nb69dd8f0 bfa1e255 016d2fc0 89efc000 00000040 nv4_disp+0x48b94\nb69dd8f4 016d2fc0 89efc000 00000040 00000006 nv4_disp+0x49255\nb69dd8f8 89efc000 00000040 00000006 bfa1dcc0 0x16d2fc0\nb69dd8fc 00000000 00000006 bfa1dcc0 e1e71018 0x89efc000\n\nThe first column indicates the Child Frame Pointer, the second column indicates the Return address of the method that is executing, the next three columns show the first 3 parameters that were passed to the method, and the last part is the DLL name (nv4_disp) and the offset of the method that is being executed (+0x48b94). Since you don't have the symbols you are not able to see the method name. I doubt tha NVIDIA offers public access to their symbols so I gues you can't get much information from here.\nI recommend you run \"kpn 200\". 
This will show you the full callstack and you might be able to see the origin of the method that caused this crash (if it was a Microsoft DLL you should have the proper symbols in the steps that I provided you).\nAt least you know it's related to a NVIDIA bug ;-) Try upgrading the DLLs of this driver to the latest version.\nIn case you want to learn more about WinDBG debugging I recommend the following links:\n\nIf broken it is, fix it you should\nTechNet Webcast: Windows Hang and Crash Dump Analysis \nDelicious.com popular links on WinDBG\n\n",
"A really good tutorial on interpreting a stack trace is available here:\nhttp://www.codeproject.com/KB/debug/cdbntsd2.aspx\nHowever, even with a tutorial like that it can be very difficult (or near impossible) to interpret a stack dump without the proper symbols available/loaded.\n",
"It might help to include an example of the stack you are trying to read. A good tip is to ensure you have correct debug symbols for all modules shown in the stack. This includes symbols for modules in the OS, Microsoft has made their symbol server publicly available.\n"
] |
[
18,
3,
0
] |
[] |
[] |
[
"multithreading",
"stack_trace",
"windbg"
] |
stackoverflow_0000077887_multithreading_stack_trace_windbg.txt
|
Q:
Is it possible to offline a disk in a raidz zfs pool?
When I try to offline a disk in a zfs raidz pool (the raidz pool is not mirrored), zfs says that the disk cannot be taken offline because it has no valid mirror.
Isn't one of the properties of raidz that it has a redundant disk (or even 2 disks in raidz2)...?
A:
Could you give a bit more about your configuration please? What are the commands you are using? If I'm understanding your question that should work.
Note that:
You cannot take a pool offline to the point where it becomes faulted. For example, you cannot take offline two devices out of a RAID-Z configuration, nor can you take offline a top-level virtual device.
Managing Devices in ZFS Storage Pools.
ZFS Best Practices Guide.
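If it helps, the relevant commands usually look something like this (pool and device names here are made up):
zpool status tank            # see which devices make up the pool and their state
zpool offline tank c1t3d0    # take a single device offline
zpool online tank c1t3d0     # bring it back online later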
|
Is it possible to offline a disk in a raidz zfs pool?
|
When I try to offline a disk in a zfs raidz pool (the raidz pool is not mirrored), zfs says that the disk cannot be taken offline because it has no valid mirror.
Isn't one of the properties of raidz that it has a redundant disk (or even 2 disks in raidz2)...?
|
[
"Could you give a bit more about your configuration please? What are the commands you are using? If I'm understanding your question that should work.\nNote that:\n\nYou cannot take a pool offline to the point where it becomes faulted. For example, you cannot take offline two devices out of a RAID-Z configuration, nor can you take offline a top-level virtual device.\n\nManaging Devices in ZFS Storage Pools.\nZFS Best Practices Guide.\n"
] |
[
1
] |
[] |
[] |
[
"filesystems",
"linux",
"solaris",
"zfs"
] |
stackoverflow_0000075693_filesystems_linux_solaris_zfs.txt
|
Q:
Alphanumeric Sorting
What is the best/fastest way to sort Alphanumeric fields?
A:
You don't specify your target language, but whatever it is, it should have reliable, built-in sorting methods, so use one of them! For PHP...
Load into an array and sort($array);
php sort...
$fruits = array("lemon", "orange", "banana", "apple");
sort($fruits);
foreach ($fruits as $key => $val)
{
echo "fruits[" . $key . "] = " . $val . "\n";
}
Output:
fruits[0] = apple
fruits[1] = banana
fruits[2] = lemon
fruits[3] = orange
A:
Bubble sort! Just kidding :)
Probably your best bet would be quicksort or mergesort.
Both are O(nlogn) as opposed to bubble sort's O(n^2)
A:
The answer to your question is intimately related to some details you haven't provided. The "best/fastest" way depends on how long the fields are, how many you have to sort, how much memory you have available, the relative speeds of disk and memory, the details of what's in the strings, ..., ad nauseam.
Knuth Vol 3 has the details on a wide variety of approaches. I don't recall if he discusses Radix Sorting, but he probably does. If he doesn't, you should look up some references on Radix Sorting. It's only useful in a narrow set of circumstances, but positively flies there. If you've got a small set of short strings, Bubble Sort will perform better than complex sorts on some architectures, due to lower overhead. The C Run Time Library includes a version of Quick Sort because that can be a very efficient algorithm for larger data sets in some circumstances.
Net-net, the answer is "It depends".
A:
The "best" way depends on a lot of factors:
Do you need to support more than one language?
Do you need to support more than one language simultaneously?
Do you need to support languages other than the current Operating System or user language? (ex, web applications)
Do you need to support more than one encoding? (unicode, utf-16le/utf-8, ansi code pages, etc)
Do you need to support long or highly redundant inputs? (where precomputation or compression may speed up sorting operations)
Do you need to support a large number of inputs, ex: million, or billion inputs?
A:
You will find that most development libraries ship with an implementation of the quicksort algorithm, which is often the fastest sorting algorithm. Check out the Wikipedia link here.
A:
In C#, List has .Sort().
In general QuickSort is very fast in many situations, but it always depends on the size of the array.
Here is the link
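A small, made-up illustration of the built-in sort on alphanumeric strings (assumes usings for System and System.Collections.Generic):
List<string> items = new List<string> { "item10", "item2", "apple", "Banana" };
items.Sort(StringComparer.OrdinalIgnoreCase);   // case-insensitive lexicographic sort
// Result: apple, Banana, item10, item2 - note "item10" sorts before "item2";
// a custom IComparer<string> would be needed for a "natural" numeric ordering.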
|
Alphanumeric Sorting
|
What is the best/fastest way to sort Alphanumeric fields?
|
[
"You don't specify your target language, but whatever it is, it should have reliable, built-in sorting methods, so use one of them! For PHP...\nLoad into an array and sort($array);\nphp sort...\n$fruits = array(\"lemon\", \"orange\", \"banana\", \"apple\");\nsort($fruits);\n\nforeach ($fruits as $key => $val)\n{\n echo \"fruits[\" . $key . \"] = \" . $val . \"\\n\";\n}\n\nOutput:\nfruits[0] = apple\nfruits[1] = banana\nfruits[2] = lemon\nfruits[3] = orange\n\n",
"Bubble sort! Just kidding :)\nProbably your best bet would be quicksort or mergesort. \nBoth are O(nlogn) as opposed to bubble sort's O(n^2)\n",
"The answer to your question is intimately related to some details you haven't provided. The \"best/fastest\" way depends on how long the fields are, how many you have to sort, how much memory you have available, the relative speeds of disk and memory, the details of what's in the strings, ..., ad nauseam.\nKnuth Vol 3 has the details on a wide variety of approaches. I don't recall if he discusses Radix Sorting, but he probably does. If he doesn't, you should look up some references on Radix Sorting. It's only useful in a narrow set of circumstances, but positively flies there. If you've got a small set of short strings, Bubble Sort will perform better than complex sorts on some architectures, due to lower overhead. The C Run Time Library includes a version of Quick Sort because that can be a very efficient algorithm for larger data sets in some circumstances.\nNet-net, the answer is \"It depends\".\n",
"The \"best\" way depends on a lot of factors:\n\nDo you need to support more than language?\nDo you need to support more than one language simultaniously?\nDo you need to support languages other than the current Operating System or user language? (ex, web applications)\nDo you need to support more than one encoding? (unicode, utf-16le/utf-8, ansi code pages, etc)\nDo you need to support long or highly redundant inputs? (where precomputation or compression may speed up sorting operations)\nDo you need to support a large number of inputs, ex: million, or billion inputs? \n\n",
"You will find that most development libraries ship with an implementation of the quicksort algorithm, which is often the fastest sorting algorithm. Check out the Wikipedia link here.\n",
"In C#, List has .Sort().\nIn general QuickSort is very fast on many situations but it always depend of the size of the array,\nHere is the link\n"
] |
[
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"alphanumeric",
"sorting"
] |
stackoverflow_0000078077_alphanumeric_sorting.txt
|
Q:
How do I programmatically sanitize ColdFusion cfquery parameters?
I have inherited a large legacy ColdFusion app. There are hundreds of <cfquery>some sql here #variable#</cfquery> statements that need to be parameterized along the lines of: <cfquery> some sql here <cfqueryparam value="#variable#"/> </cfquery>
How can I go about adding parameterization programmatically?
I have thought about writing some regular expression or sed/awk'y sort of solution, but it seems like somebody somewhere has tackled such a problem. Bonus points awarded for inferring the sql type automatically.
A:
There's a queryparam scanner that will find them for you on RIAForge: http://qpscanner.riaforge.org/
A:
There is a script referenced here: http://www.webapper.net/index.cfm/2008/7/22/ColdFusion-SQL-Injection that will do the majority of the heavy lifting for you. All you have to do is check the queries and make sure the syntax will parse properly.
There is no excuse for not using CFQueryParam, apart from it being much more secure, it is a performance boost and the best way to handle quoted values in character based column types.
A:
Keep in mind that you may not be able to solve everything with <cfqueryparam>.
I've seen a number of examples where the order by field name is being passed in the query string, which is a slightly trickier problem to solve as you need to validate that in a more "manual" way.
A:
<cf_inputFilter
scopes = "FORM,COOKIE,URL"
chars = "<,>,!,&,|,%,=,(,),',{,}"
tags="script,embed,applet,object,HTML">
We used this to counteract a recent SQL injection attack. We added it to the Application.cfm file for our site.
A:
I doubt that there is a solution that will fit your needs exactly. The only option I see is to write your own recursive search that builds a report for you or use one of the apps/scripts that people have listed above. Basically, you are going to have to edit each page or approve all of the automated changes.
|
How do I programmatically sanitize ColdFusion cfquery parameters?
|
I have inherited a large legacy ColdFusion app. There are hundreds of <cfquery>some sql here #variable#</cfquery> statements that need to be parameterized along the lines of: <cfquery> some sql here <cfqueryparam value="#variable#"/> </cfquery>
How can I go about adding parameterization programmatically?
I have thought about writing some regular expression or sed/awk'y sort of solution, but it seems like somebody somewhere has tackled such a problem. Bonus points awarded for inferring the sql type automatically.
|
[
"There's a queryparam scanner that will find them for you on RIAForge: http://qpscanner.riaforge.org/\n",
"There is a script referenced here: http://www.webapper.net/index.cfm/2008/7/22/ColdFusion-SQL-Injection that will do the majority of the heavy lifting for you. All you have to do is check the queries and make sure the syntax will parse properly.\nThere is no excuse for not using CFQueryParam, apart from it being much more secure, it is a performance boost and the best way to handle quoted values in character based column types.\n",
"Keep in mind that you may not be able to solve everything with <cfqueryparam>. \nI've seen a number of examples where the order by field name is being passed in the query string, which is a slightly trickier problem to solve as you need to validate that in a more \"manual\" way.\n",
"<cf_inputFilter\n scopes = \"FORM,COOKIE,URL\"\n chars = \"<,>,!,&,|,%,=,(,),',{,}\"\n tags=\"script,embed,applet,object,HTML\">\n\nWe used this to counteract a recent SQL injection attack. We added it to the Application.cfm file for our site.\n",
"I doubt that there is a solution that will fit your needs exactly. The only option I see is to write your own recursive search that builds a report for you or use one of the apps/scripts that people have listed above. Basically, you are going to have to edit each page or approve all of the automated changes.\n"
] |
[
10,
6,
3,
1,
0
] |
[] |
[] |
[
"cfquery",
"cfqueryparam",
"coldfusion",
"sql_injection"
] |
stackoverflow_0000064432_cfquery_cfqueryparam_coldfusion_sql_injection.txt
|
Q:
iBATIS for Python?
At my current gig, we use iBATIS through Java to CRUD our databases. I like the abstract qualities of the tool, especially when working with legacy databases, as it doesn't impose its own syntax on you.
I'm looking for a Python analogue to this library, since the website only has Java/.NET/Ruby versions available. I don't want to have to switch to Jython if I don't need to.
Are there any other projects similar to iBATIS functionality out there for Python?
A:
iBatis sequesters the SQL DML (or the definitions of the SQL) in an XML file. It specifically focuses on the mapping between the SQL and some object model defined elsewhere.
SQL Alchemy can do this -- but it isn't really a very complete solution. Like iBatis, you can merely have SQL table definitions and a mapping between the tables and Python class definitions.
What's more complete is to have a class definition that is also the SQL database definition. If the class definition generates the SQL Table DDL as well as the query and processing DML, that's much more complete.
I flip-flop between SQLAlchemy and the Django ORM. SQLAlchemy can be used in an iBatis like manner. But I prefer to make the object design central and leave the SQL implementation be derived from the objects by the toolset.
I use SQLAlchemy for large, batch, stand-alone projects. DB Loads, schema conversions, DW reporting and the like work out well. In these projects, the focus is on the relational view of the data, not the object model. The SQL that's generated may be moved into PL/SQL stored procedures, for example.
I use Django for web applications, exploiting its built-in ORM capabilities. You can, with a little work, segregate the Django ORM from the rest of the Django environment. You can provide global settings to bind your app to a specific database without using a separate settings module.
Django includes a number of common relationships (Foreign Key, Many-to-Many, One-to-One) for which it can manage the SQL implementation. It generates key and index definitions for the attached database.
If your problem is largely object-oriented, with the database being used for persistence, then the nearly transparent ORM layer of Django has advantages.
If your problem is largely relational, with the SQL processing central, then the capability of seeing the generated SQL in SQLAlchemy has advantages.
A:
Perhaps SQLAlchemy SQL Expression support is suitable. See the documentation.
|
iBATIS for Python?
|
At my current gig, we use iBATIS through Java to CRUD our databases. I like the abstract qualities of the tool, especially when working with legacy databases, as it doesn't impose its own syntax on you.
I'm looking for a Python analogue to this library, since the website only has Java/.NET/Ruby versions available. I don't want to have to switch to Jython if I don't need to.
Are there any other projects similar to iBATIS functionality out there for Python?
|
[
"iBatis sequesters the SQL DML (or the definitions of the SQL) in an XML file. It specifically focuses on the mapping between the SQL and some object model defined elsewhere.\nSQL Alchemy can do this -- but it isn't really a very complete solution. Like iBatis, you can merely have SQL table definitions and a mapping between the tables and Python class definitions. \nWhat's more complete is to have a class definition that is also the SQL database definition. If the class definition generates the SQL Table DDL as well as the query and processing DML, that's much more complete. \nI flip-flop between SQLAlchemy and the Django ORM. SQLAlchemy can be used in an iBatis like manner. But I prefer to make the object design central and leave the SQL implementation be derived from the objects by the toolset.\nI use SQLAlchemy for large, batch, stand-alone projects. DB Loads, schema conversions, DW reporting and the like work out well. In these projects, the focus is on the relational view of the data, not the object model. The SQL that's generated may be moved into PL/SQL stored procedures, for example.\nI use Django for web applications, exploiting its built-in ORM capabilities. You can, with a little work, segregate the Django ORM from the rest of the Django environment. You can provide global settings to bind your app to a specific database without using a separate settings module.\nDjango includes a number of common relationships (Foreign Key, Many-to-Many, One-to-One) for which it can manage the SQL implementation. It generates key and index definitions for the attached database.\nIf your problem is largely object-oriented, with the database being used for persistence, then the nearly transparent ORM layer of Django has advantages.\nIf your problem is largely relational, with the SQL processing central, then the capability of seeing the generated SQL in SQLAlchemy has advantages.\n",
"Perhaps SQLAlchemy SQL Expression support is suitable. See the documentation. \n"
] |
[
10,
1
] |
[] |
[] |
[
"ibatis",
"orm",
"python"
] |
stackoverflow_0000077731_ibatis_orm_python.txt
|
Q:
What does it mean when a PostgreSQL process is "idle in transaction"?
What does it mean when a PostgreSQL process is "idle in transaction"?
On a server that I'm looking at, the output of "ps ax | grep postgres" I see 9 PostgreSQL processes that look like the following:
postgres: user db 127.0.0.1(55658) idle in transaction
Does this mean that some of the processes are hung, waiting for a transaction to be committed? Any pointers to relevant documentation are appreciated.
A:
The PostgreSQL manual indicates that this means the transaction is open (inside BEGIN) and idle. It's most likely a user connected using the monitor who is thinking or typing. I have plenty of those on my system, too.
If you're using Slony for replication, however, the Slony-I FAQ suggests idle in transaction may mean that the network connection was terminated abruptly. Check out the discussion in that FAQ for more details.
A:
As mentioned here: Re: BUG #4243: Idle in transaction it is probably best to check your pg_locks table to see what is being locked and that might give you a better clue where the problem lies.
|
What does it mean when a PostgreSQL process is "idle in transaction"?
|
What does it mean when a PostgreSQL process is "idle in transaction"?
On a server that I'm looking at, the output of "ps ax | grep postgres" I see 9 PostgreSQL processes that look like the following:
postgres: user db 127.0.0.1(55658) idle in transaction
Does this mean that some of the processes are hung, waiting for a transaction to be committed? Any pointers to relevant documentation are appreciated.
|
[
"The PostgreSQL manual indicates that this means the transaction is open (inside BEGIN) and idle. It's most likely a user connected using the monitor who is thinking or typing. I have plenty of those on my system, too.\nIf you're using Slony for replication, however, the Slony-I FAQ suggests idle in transaction may mean that the network connection was terminated abruptly. Check out the discussion in that FAQ for more details.\n",
"As mentioned here: Re: BUG #4243: Idle in transaction it is probably best to check your pg_locks table to see what is being locked and that might give you a better clue where the problem lies.\n"
] |
[
71,
21
] |
[] |
[] |
[
"postgresql"
] |
stackoverflow_0000051019_postgresql.txt
|
Q:
Export ChartFX7 to SVG in Java
Can anybody give an example of exporting a ChartFX7 chart to SVG?
I've tried:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
m_chart.setOutputWriter(new SvgWriter());
m_chart.exportChart(FileFormat.EXTERNAL, baos);
and :
ByteArrayOutputStream baos = new ByteArrayOutputStream();
m_chart.setRenderFormat("SVG");
m_chart.renderToStream();
But both result in a null pointer exception.
The following successfully outputs to XML:
FileOutputStream fos = new FileOutputStream(Debug.getInstance().createExternalFile("chart.xml"));
m_chart.exportChart(FileFormat.XML, fos);
A:
Batik is a library that you can import into your Java project to convert or create SVG images. I don't know ChartFX7, but that is the standard way to create SVG in Java.
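A minimal Batik sketch, assuming the chart is (or wraps) an AWT/Swing component that can paint itself onto a java.awt.Graphics2D — the chartComponent.paint call is only an assumption about the ChartFX7 API, not something the library documents here:
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

import org.apache.batik.dom.GenericDOMImplementation;
import org.apache.batik.svggen.SVGGraphics2D;
import org.w3c.dom.DOMImplementation;
import org.w3c.dom.Document;

public class SvgExportSketch {
    public static void exportToSvg(java.awt.Component chartComponent, String fileName) throws Exception {
        // Build an empty SVG DOM document for the generator to draw into
        DOMImplementation domImpl = GenericDOMImplementation.getDOMImplementation();
        Document document = domImpl.createDocument("http://www.w3.org/2000/svg", "svg", null);

        // SVGGraphics2D records Java2D drawing calls as SVG elements
        SVGGraphics2D svgGenerator = new SVGGraphics2D(document);

        // Assumption: the chart paints itself like any other component
        chartComponent.paint(svgGenerator);

        // Stream the generated SVG to disk, using CSS style attributes
        try (Writer out = new OutputStreamWriter(new FileOutputStream(fileName), "UTF-8")) {
            svgGenerator.stream(out, true);
        }
    }
}
If ChartFX7 only exposes a paint-to-Graphics entry point under another name, the idea is the same: render into the SVGGraphics2D wherever you would normally pass a Graphics2D.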
|
Export ChartFX7 to SVG in Java
|
Can anybody give an example of exporting a ChartFX7 chart to SVG?
I've tried:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
m_chart.setOutputWriter(new SvgWriter());
m_chart.exportChart(FileFormat.EXTERNAL, baos);
and :
ByteArrayOutputStream baos = new ByteArrayOutputStream();
m_chart.setRenderFormat("SVG");
m_chart.renderToStream();
But both result in a null pointer exception.
The following successfully outputs to XML:
FileOutputStream fos = new FileOutputStream(Debug.getInstance().createExternalFile("chart.xml"));
m_chart.exportChart(FileFormat.XML, fos);
|
[
"batik is a libary that you can import into your java libary to convert or create svg images. I dont know chartfx7 but that is the standard way to create svg in java.\n"
] |
[
1
] |
[] |
[] |
[
"chartfx",
"export",
"java",
"xml"
] |
stackoverflow_0000071755_chartfx_export_java_xml.txt
|
Q:
Fastest way to find if a 3D coordinate is already used
Using C++ (and Qt), I need to process a big amount of 3D coordinates.
Specifically, when I receive a 3D coordinate (made of 3 doubles), I need to check in a list if this coordinate has already been processed.
If not, then I process it and add it to the list (or container).
The amount of coordinates can become very big, so I need to store the processed coordinates in a container which will ensure that checking if a 3D coordinate is already contained in the container is fast.
I was thinking of using a map of a map of a map, storing the x coordinate, then the y coordinate then the z coordinate, but this makes it quite tedious to use, so I'm actually hoping there is a much better way to do it that I cannot think of.
A:
Probably the simplest way to speed up such processing is to store the already-processed points in Octree. Checking for duplication will become close to logarithmic.
Also, make sure you tolerate round-off errors by checking the distance between the points, not the equality of the coordinates.
A:
You can easily use a set as follows:
#include <set>
#include <cassert>
const double epsilon(1e-8);
class Coordinate {
public:
Coordinate(double x, double y, double z) :
x_(x), y_(y), z_(z) {}
private:
double x_;
double y_;
double z_;
friend bool operator<(const Coordinate& cl, const Coordinate& cr);
};
bool operator<(const Coordinate& cl, const Coordinate& cr) {
if (cl.x_ < cr.x_ - epsilon) return true;
if (cl.x_ > cr.x_ + epsilon) return false;
if (cl.y_ < cr.y_ - epsilon) return true;
if (cl.y_ > cr.y_ + epsilon) return false;
if (cl.z_ < cr.z_ - epsilon) return true;
return false;
}
typedef std::set<Coordinate> Coordinates;
// Not thread safe!
// Return true if real processing is done
bool Process(const Coordinate& coordinate) {
static Coordinates usedCoordinates;
// Already processed?
if (usedCoordinates.find(coordinate) != usedCoordinates.end()) {
return false;
}
usedCoordinates.insert(coordinate);
// Here goes your processing code
return true;
}
// Test it
int main() {
assert(Process(Coordinate(1, 2, 3)));
assert(Process(Coordinate(1, 3, 3)));
assert(!Process(Coordinate(1, 3, 3)));
assert(!Process(Coordinate(1+epsilon/2, 2, 3)));
}
A:
Divide your space into discrete bins. Could be infinitely deep squares, or could be cubes. Store your processed coordinates in a simple linked list, sorted if you like in each bin. When you get a new coordinate, jump to the enclosing bin, and walk the list looking for the new point.
Be wary of floating point comparisons. You need to either turn values into integers (say multiply by 1000 and truncate), or decide how close 2 values are to be considered equal.
A:
Assuming you already have a Coordinate class, add a hash function and maintain a hash_set of the coordinates.
Would look something like:
struct coord_eq
{
bool operator()(const Coordinate &s1, const Coordinate &s2) const
{
return s1 == s2;
// or: return s1.x() == s2.x() && s1.y() == s2.y() && s1.z() == s2.z();
}
};
struct coord_hash
{
size_t operator()(const Coordinate &s) const
{
union {double d; unsigned long ul;} c[3];
c[0].d = s.x();
c[1].d = s.y();
c[2].d = s.z();
return static_cast<size_t> ((3 * c[0].ul) ^ (5 * c[1].ul) ^ (7 * c[2].ul));
}
};
std::hash_set<Coordinate, coord_hash, coord_eq> existing_coords;
A:
Well, it depends on what's most important... if a tripple map is too tedious to use, then is implementing other data structures not worth the effort?
If you want to get around the uglyness of the tripple map solution, just wrap it up in another container class with an access function with three parameter, and hide all the messing around with maps internally in that.
If you're more worried about the runtime performance of this thing, storing the coordinates in an Octree might be a good idea.
Also worth mentioning is that when doing these sorts of things with floats or doubles you should be very careful about precision -- is (0, 0, 0.01) the same coordinate as (0, 0, 0.01000001)? If it is, you'll need to look at the comparison functions you use, regardless of the data structure. That also depends on the source of your coordinates, I guess.
A:
Are you expecting/requiring exact matches? These might be hard to enforce with doubles. For example, if you have processed (1.0, 1.0, 1.0) and you then receive (0.9999999999999, 1.0, 1.0) would you consider it the same? If so, you will need to either apply some kind of approximation or else define error bounds.
However, to answer the question itself: the first method that comes to mind is to create a single index (either a string or a bitstring, depending how readable you want things to be). For example, create the string "(1.0,1.0,1.0)" and use that as the key to your map. This will make it easy to look up the map, keeps the code readable (and also lets you easily dump the contents of the map for debugging purposes) and gives you reasonable performance. If you need much faster performance you could use a hashing algorithm to combine the three coordinates numerically without going via a string.
A:
How about using a boost::tuple for the coordinates, and storing the tuple as the index for the map?
(You may also need to do the divide-by-epsilon idea from this answer.)
A:
Use any unique transformation of your 3D coordinates and store only the list of the results.
Example:
md5('X, Y, Z') is unique and you can store only the resulting string.
The hash is not a performant idea, but you get the concept. Find any mathematically unique transformation and you have it.
/Vey
A:
Use an std::set. Define a type for the 3d coordinate (or use a boost::tuple) that has operator< defined. When adding elements, you can add it to the set, and if it was added, do your processing. If it was not added (because it already exists in there), do not do your processing.
However, if you are using doubles, be aware that your algorithm can potentially lead to unpredictable behavior. IE, is (1.0, 1.0, 1.0) the same as (1.0, 1.0, 1.000000001)?
A:
Pick a constant to scale the coordinates by so that 1 unit describes an acceptably small box and yet the integer part of the largest component by magnitude will fit into a 32-bit integer; convert the X, Y and Z components of the result to integers and hash them together. Use that as a hash function for a map or hashtable (NOT as an array index, you need to deal with collisions).
You may also want to consider using a fudge factor when comparing the coordinates, since you may get floating point values which are only slightly different, and it is usually preferable to weld those together to avoid cracks when rendering.
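The question is C++, but the quantize-and-hash idea is easiest to show with a small Java sketch (the C++ equivalent would be a hash_set with a similar hash function); the 1e-6 cell size is an assumed tolerance, not something given in the question:
import java.util.HashSet;
import java.util.Set;

public class SeenPointsSketch {
    // A point snapped onto an integer grid so nearly-equal doubles compare equal
    static final class GridPoint {
        final long x, y, z;
        GridPoint(double x, double y, double z, double cell) {
            this.x = Math.round(x / cell);
            this.y = Math.round(y / cell);
            this.z = Math.round(z / cell);
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof GridPoint)) return false;
            GridPoint p = (GridPoint) o;
            return x == p.x && y == p.y && z == p.z;
        }
        @Override public int hashCode() {
            // Asymmetric combination so (1,2,3) and (3,2,1) hash differently
            long h = 31L * (31L * x + y) + z;
            return (int) (h ^ (h >>> 32));
        }
    }

    public static void main(String[] args) {
        final double cell = 1e-6;              // assumed tolerance / grid size
        Set<GridPoint> seen = new HashSet<>();
        System.out.println(seen.add(new GridPoint(1.0, 2.0, 3.0, cell)));          // true: new point
        System.out.println(seen.add(new GridPoint(1.0000000001, 2.0, 3.0, cell))); // false: same cell
    }
}
Note that two points straddling a cell boundary can still land in different cells; the fudge-factor caveat above still applies.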
A:
If you write a helper class with a simple public interface, that greatly reduces the practical tedium of implementation details like use of a map<map<map<>>>. The beauty of encapsulation!
That said, you might be able to rig a hashmap to do the trick nicely. Just hash the three doubles together to get the key for the point as a whole. If you're concerned about too many collisions between points with symmetric coordinates (e.g., (1, 2, 3) and (3, 2, 1) and so on), just make the hash key asymmetric with respect to the x, y, and z coordinates, using bit shift or some such.
A:
You could use a hash_set of any hashable type - for example, turn each tuple into a string "(x, y, z)". hash_set does fast lookups but handles collisions well.
A:
Whatever your storage method, I would suggest you decide on an epsilon (minimum floating point distance that differentiates two coordinates), then divide all coordinates by the epsilon, round and store them as integers.
A:
Something in this direction maybe:
struct Coor {
Coor(double x, double y, double z)
: X(x), Y(y), Z(z) {}
double X, Y, Z;
};
struct coords_thesame
{
bool operator()(const Coor& c1, const Coor& c2) const {
return c1.X == c2.X && c1.Y == c2.Y && c1.Z == c2.Z;
}
};
std::hash_map<Coor, bool, hash<Coor>, coords_thesame> m_SeenCoordinates;
Untested, use at your own peril :)
A:
You can easily define a comparator for a one-level std::map, so that lookup becomes way less cumbersome. There is no reason to be afraid of that. The comparator defines an ordering of the _Key template argument of the map. It can then also be used for the multimap and set collections.
An example:
#include <map>
#include <cassert>
struct Point {
double x, y, z;
};
struct PointResult {
};
PointResult point_function( const Point& p ) { return PointResult(); }
// helper: binary function for comparison of two points
struct point_compare {
bool operator()( const Point& p1, const Point& p2 ) const {
return p1.x < p2.x
|| ( p1.x == p2.x && ( p1.y < p2.y
|| ( p1.y == p2.y && p1.z < p2.z )
)
);
}
};
typedef std::map<Point, PointResult, point_compare> pointmap;
int _tmain(int argc, _TCHAR* argv[])
{
pointmap pm;
Point p1 = { 0.0, 0.0, 0.0 };
Point p2 = { 0.1, 1.0, 1.0 };
pm[ p1 ] = point_function( p1 );
pm[ p2 ] = point_function( p2 );
assert( pm.find( p2 ) != pm.end() );
return 0;
}
A:
There are more than a few ways to do it, but you have to ask yourself first what are your assumptions and conditions.
So, assuming that your space is limited in size and you know what is the maximum accuracy, then you can form a function that given (x,y,z) will convert them to a unique number or string -this can be done only if you know that your accuracy is limited (for example - no two entities can occupy the same cubic centimeter).
Encoding the coordinate allows you to use a single map/hash with O(1).
If this is not the case, you can always use 3 embedded maps as you suggested, or go towards space division algorithms (such as the OcTree mentioned) which, although giving O(logN) on an average search, also give you additional information you might want (neighbors, population, etc.), but of course are harder to implement.
A:
You can either use a std::set of 3D coordinates, or a sorted std::vector. Both will give you logarithmic time lookup. In either case, you'll need to implement the less than comparison operator for your 3D coordinate class.
A:
Why bother? What "processing" are you doing? Unless it's very complex, it's probably faster to just do the calculation again, rather then waste time looking things up in a huge map or hashtable.
This is one of the more counter-intuitive things about modern cpu's. Computation is fast, memory is slow.
I realize this isn't really an answer to your question, it's questioning your question.
A:
Good question... it's one that has many solutions, because this type of problem comes
up many times in Graphical and Scientific applications.
Depending on the solution you require it may be rather complex under the hood, in this
case less code doesn't necessarily mean faster.
"but this makes it quite tedious to use" --- generally, you can get around this by
typedefs or wrapper classes (wrappers in this case would be highly recommended).
If you don't need to use the 3D co-ordinates in any kind of spatially significant way (
things like "give me all the points within X distance of point P") then I suggest you
just find a way to hash each point, and use a single hash map... O(n) creation, O(1)
access (checking to see if it's been processed), you can't do much better than that.
If you do need more spatial information you'll need a container that explicitly takes
it into account.
The type of container you choose will be dependent on your data set. If you have good
knowledge of the range of values that you receive, this will help.
If you are receiving well-distributed data over a known range... go with an octree.
If you have a distribution that tends to cluster, then go with k-d trees. You'll need
to rebuild a k-d tree after inputting new co-ordinates (not necessarily every time,
just when it becomes overly imbalanced). Put simply, Kd-trees are like Octrees, but with non uniform division.
|
Fastest way to find if a 3D coordinate is already used
|
Using C++ (and Qt), I need to process a big amount of 3D coordinates.
Specifically, when I receive a 3D coordinate (made of 3 doubles), I need to check in a list if this coordinate has already been processed.
If not, then I process it and add it to the list (or container).
The amount of coordinates can become very big, so I need to store the processed coordinates in a container which will ensure that checking if a 3D coordinate is already contained in the container is fast.
I was thinking of using a map of a map of a map, storing the x coordinate, then the y coordinate then the z coordinate, but this makes it quite tedious to use, so I'm actually hoping there is a much better way to do it that I cannot think of.
|
[
"Probably the simplest way to speed up such processing is to store the already-processed points in Octree. Checking for duplication will become close to logarithmic.\nAlso, make sure you tolerate round-off errors by checking the distance between the points, not the equality of the coordinates.\n",
"You can easily use a set as follows:\n#include <set>\n#include <cassert>\n\nconst double epsilon(1e-8);\n\nclass Coordinate {\npublic:\nCoordinate(double x, double y, double z) :\n x_(x), y_(y), z_(z) {}\n\nprivate:\ndouble x_;\ndouble y_;\ndouble z_;\n\nfriend bool operator<(const Coordinate& cl, const Coordinate& cr);\n};\n\nbool operator<(const Coordinate& cl, const Coordinate& cr) {\n if (cl.x_ < cr.x_ - epsilon) return true;\n if (cl.x_ > cr.x_ + epsilon) return false;\n\n if (cl.y_ < cr.y_ - epsilon) return true;\n if (cl.y_ > cr.y_ + epsilon) return false;\n\n if (cl.z_ < cr.z_ - epsilon) return true;\n\n return false;\n\n}\n\ntypedef std::set<Coordinate> Coordinates;\n\n// Not thread safe!\n// Return true if real processing is done\nbool Process(const Coordinate& coordinate) {\n static Coordinates usedCoordinates;\n\n // Already processed?\n if (usedCoordinates.find(coordinate) != usedCoordinates.end()) {\n return false;\n }\n\n usedCoordinates.insert(coordinate);\n\n // Here goes your processing code\n\n\n\n return true;\n\n}\n\n// Test it\nint main() {\n assert(Process(Coordinate(1, 2, 3)));\n assert(Process(Coordinate(1, 3, 3)));\n assert(!Process(Coordinate(1, 3, 3)));\n assert(!Process(Coordinate(1+epsilon/2, 2, 3)));\n}\n\n",
"Divide your space into discrete bins. Could be infinitely deep squares, or could be cubes. Store your processed coordinates in a simple linked list, sorted if you like in each bin. When you get a new coordinate, jump to the enclosing bin, and walk the list looking for the new point. \nBe wary of floating point comparisons. You need to either turn values into integers (say multiply by 1000 and truncate), or decide how close 2 values are to be considered equal.\n",
"Assuming you already have a Coordinate class, add a hash function and maintain a hash_set of the coordinates.\nWould look something like:\nstruct coord_eq\n{\n bool operator()(const Coordinate &s1, const Coordinate &s2) const\n {\n return s1 == s2;\n // or: return s1.x() == s2.x() && s1.y() == s2.y() && s1.z() == s2.z();\n }\n};\n\nstruct coord_hash\n{\n size_t operator()(const Coordinate &s) const\n {\n union {double d, unsigned long ul} c[3];\n c[0].d = s.x();\n c[1].d = s.y();\n c[2].d = s.z();\n return static_cast<size_t> ((3 * c[0].ul) ^ (5 * c[1].ul) ^ (7 * c[2].ul));\n }\n};\n\nstd::hash_map<Coordinate, coord_hash, coord_eq> existing_coords;\n\n",
"Well, it depends on what's most important... if a tripple map is too tedious to use, then is implementing other data structures not worth the effort?\nIf you want to get around the uglyness of the tripple map solution, just wrap it up in another container class with an access function with three parameter, and hide all the messing around with maps internally in that.\nIf you're more worried about the runtime performance of this thing, storing the coordinates in an Octree might be a good idea.\nAlso worth mentioning is that doing these sorts of things with floats or doubles you should be very careful about precision -- if (0, 0, 0.01) the same coordinate as (0, 0, 0.01000001)? If it is, you'll need to look at the comparison functions you use, regardless of the data structure. That also depends on the source of your coordinates I guess.\n",
"Are you expecting/requiring exact matches? These might be hard to enforce with doubles. For example, if you have processed (1.0, 1.0, 1.0) and you then receive (0.9999999999999, 1.0, 1.0) would you consider it the same? If so, you will need to either apply some kind of approximation or else define error bounds.\nHowever, to answer the question itself: the first method that comes to mind is to create a single index (either a string or a bitstring, depending how readable you want things to be). For example, create the string \"(1.0,1.0,1.0)\" and use that as the key to your map. This will make it easy to look up the map, keeps the code readable (and also lets you easily dump the contents of the map for debugging purposes) and gives you reasonable performance. If you need much faster performance you could use a hashing algorithm to combine the three coordinates numerically without going via a string.\n",
"How about using a boost::tuple for the coordinates, and storing the tuple as the index for the map?\n(You may also need to do the divide-by-epsilon idea from this answer.)\n",
"Use any unique transformation of your 3D coordinates and store only the list of the results.\nExample:\nmd5('X, Y, Z') is unique and you can store only the resulting string.\nThe hash is not a performant idea but you get the concept. Find any methematic unique transformation and you have it.\n/Vey\n",
"Use an std::set. Define a type for the 3d coordinate (or use a boost::tuple) that has operator< defined. When adding elements, you can add it to the set, and if it was added, do your processing. If it was not added (because it already exists in there), do not do your processing.\nHowever, if you are using doubles, be aware that your algorithm can potentially lead to unpredictable behavior. IE, is (1.0, 1.0, 1.0) the same as (1.0, 1.0, 1.000000001)?\n",
"Pick a constant to scale the coordinates by so that 1 unit describes an acceptably small box and yet the integer part of the largest component by magnitude will fit into a 32-bit integer; convert the X, Y and Z components of the result to integers and hash them together. Use that as a hash function for a map or hashtable (NOT as an array index, you need to deal with collisions).\nYou may also want to consider using a fudge factor when comparing the coordinates, since you may get floating point values which are only slightly different, and it is usually preferable to weld those together to avoid cracks when rendering.\n",
"If you write a helper class with a simple public interface, that greatly reduces the practical tedium of implementation details like use of a map<map<map<>>>. The beauty of encapsulation!\nThat said, you might be able to rig a hashmap to do the trick nicely. Just hash the three doubles together to get the key for the point as a whole. If you're concerned about to many collisions between points with symmetric coordinates (e.g., (1, 2, 3) and (3, 2, 1) and so on), just make the hash key asymmetric with respect to the x, y, and z coordinates, using bit shift or some such.\n",
"You could use a hash_set of any hashable type - for example, turn each tuple into a string \"(x, y, z)\". hash_set does fast lookups but handles collisions well.\n",
"Whatever your storage method, I would suggest you decide on an epsilon (minimum floating point distance that differentiates two coordinates), then divide all coordinates by the epsilon, round and store them as integers.\n",
"Something in this direction maybe:\nstruct Coor {\n Coor(double x, double y, double z)\n : X(x), Y(y), Z(z) {}\n double X, Y, Z;\n}\n\nstruct coords_thesame\n{\n bool operator()(const Coor& c1, const Coor& c2) const {\n return c1.X == c2.X && c1.Y == c2.Y && c1.Z == c2.Z;\n }\n};\n\nstd::hash_map<Coor, bool, hash<Coor>, coords_thesame> m_SeenCoordinates;\n\nUntested, use at your own peril :)\n",
"You can easily define a comparator for a one-level std::map, so that lookup becomes way less cumbersome. There is no reason of being afraid of that. The comparator defines an ordering of the _Key template argument of the map. It can then also be used for the multimap and set collections.\nAn example:\n#include <map>\n#include <cassert>\n\n\nstruct Point {\n double x, y, z;\n};\n\nstruct PointResult {\n};\n\nPointResult point_function( const Point& p ) { return PointResult(); }\n\n// helper: binary function for comparison of two points\nstruct point_compare {\n bool operator()( const Point& p1, const Point& p2 ) const {\n return p1.x < p2.x\n || ( p1.x == p2.x && ( p1.y < p2.y \n || ( p1.y == p2.y && p1.z < p2.z ) \n )\n );\n }\n};\n\ntypedef std::map<Point, PointResult, point_compare> pointmap;\n\nint _tmain(int argc, _TCHAR* argv[])\n{\n\npointmap pm;\n\nPoint p1 = { 0.0, 0.0, 0.0 };\nPoint p2 = { 0.1, 1.0, 1.0 };\n\npm[ p1 ] = point_function( p1 );\npm[ p2 ] = point_function( p2 );\nassert( pm.find( p2 ) != pm.end() );\n return 0;\n}\n\n",
"There are more than a few ways to do it, but you have to ask yourself first what are your assumptions and conditions.\nSo, assuming that your space is limited in size and you know what is the maximum accuracy, then you can form a function that given (x,y,z) will convert them to a unique number or string -this can be done only if you know that your accuracy is limited (for example - no two entities can occupy the same cubic centimeter).\nEncoding the coordinate allows you to use a single map/hash with O(1).\nIf this is not tha case, you can always use 3 embedded maps as you suggested, or go towards space division algorithms (such as OcTree as mentioned) which although given O(logN) on a average search, they also give you additional information you might want (neighbors, population, etc..), but of course it is harder to implement.\n",
"You can either use a std::set of 3D coordinates, or a sorted std::vector. Both will give you logarithmic time lookup. In either case, you'll need to implement the less than comparison operator for your 3D coordinate class.\n",
"Why bother? What \"processing\" are you doing? Unless it's very complex, it's probably faster to just do the calculation again, rather then waste time looking things up in a huge map or hashtable.\nThis is one of the more counter-intuitive things about modern cpu's. Computation is fast, memory is slow.\nI realize this isn't really an answer to your question, it's questioning your question.\n",
"Good question... it's one that has many solutions, because this type of problem comes \nup many times in Graphical and Scientific applications.\nDepending on the solution you require it may be rather complex under the hood, in this \ncase less code doesn't necessarily mean faster.\n\"but this makes it quite tedious to use\" --- generally, you can get around this by \ntypedefs or wrapper classes (wrappers in this case would be highly recommended).\nIf you don't need to use the 3D co-ordinates in any kind of spacially significant way ( \nthings like \"give me all the points within X distance of point P\") then I suggest you \njust find a way to hash each point, and use a single hash map... O(n) creation, O(1) \naccess (checking to see if it's been processed), you can't do much better than that.\nIf you do need more spacial information you'll need a container that explicitly takes \nit into account. \nThe type of container you choose will be dependant on your data set. If you have good \nknowledge of the range of values that you recieve this will help.\nIf you are recieving well distributed data over a known range... go with octree. \nIf you have a distribution that tends to cluster, then go with k-d trees. You'll need \nto rebuild a k-d tree after inputting new co-ordinates (not necessarily every time, \njust when it becomes overly imbalanced). Put simply, Kd-trees are like Octrees, but with non uniform division.\n"
] |
[
7,
4,
3,
2,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c++",
"performance"
] |
stackoverflow_0000072128_c++_performance.txt
|
Q:
How should I start when developing a system based on modules or plugins?
I intend to develop a system that is entirely based on modules. The system base should have support for finding out about plugins, starting them up and being able to provide ways for those modules to communicate. Ideally, one should be able to put in new modules and yank out unused modules at will, and modules should be able to use each other's functionality if it is available.
This system should be used as a basis for simulation systems where a lot of stuff happens in different modules, and other modules might want to do something based on that.
The system I intend to develop is going to be in Java. The way I see it, I intend to have a folder with a subfolder for each module that includes a XML that describes the module with information such as name, maybe which events it might raise, stuff like that. I suppose I might need to write a custom ClassLoader to work this stuff out.
The thing is, I don't know if my idea actually holds any water and, of course, I intend on building a working prototype. However, I never worked on a truly modular system before, and I'm not really sure what is the best way to take on this problem.
Where should I start? Are there common problems and pitfalls that are found while developing this kind of system? How do I make the modules talk with each other while maintaining isolation (i.e, you remove a module and another module that was using it stays sane)? Are there any guides, specifications or articles I can read that could give me some ideas on where to start? It would be better if they were based on Java, but this is not a requirement, as what I'm looking for right now are ideas, not code.
Any feedback is appreciated.
A:
You should definitely look at OSGi. It aims at being the component/plugin mechanism for Java. It allows you to modularize your code (in so-called bundles) and update bundles at runtime. You can also completely hide implementation packages from unwanted access by other bundles, eg. only provide the API.
Eclipse was the first major open-source project to implement and use OSGi, but they didn't fully leverage it (no plugin installations/updates without restarts). If you start from scratch though, it will give you a very good framework for a plugin system.
Apache Felix is a complete open-source implementation (and there are others, such as Eclipse Equinox).
A:
Without getting into great detail, you should be looking at Spring and a familiarization with OSGI or the Eclipse RCP frameworks will also give you some fundamental concepts you will need to keep in mind.
A:
Another option is the ServiceLoader added in Java 1.6.
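A minimal ServiceLoader sketch (the Module interface and the host class name are made up for illustration): each plugin jar lists its implementation class in a text file under META-INF/services/ named after the fully qualified interface name, and the loader discovers them at runtime.
import java.util.ServiceLoader;

// The contract every plugin implements (shipped in the host application's API jar)
interface Module {
    String name();
    void start();
}

public class PluginHostSketch {
    public static void main(String[] args) {
        // Each plugin jar contains META-INF/services/<fully qualified Module interface name>,
        // whose content is the fully qualified name of its implementation class.
        ServiceLoader<Module> modules = ServiceLoader.load(Module.class);
        for (Module m : modules) {
            System.out.println("Starting module: " + m.name());
            m.start();
        }
    }
}
Swapping plugins is then a matter of adding or removing jars from the classpath (or from a child ClassLoader passed to ServiceLoader.load); for hot add/remove at runtime, OSGi as suggested above is the more complete answer.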
A:
There are many ways to do it, but something simple can be done by using Reflection. You write the file name in your XML file (that file would really be a class). You can then check what type it is and create it back with reflection. The class could have a common Interface that will let you find out if the external file/class is really one of your modules. Here is some information about Reflection.
You can also use a precoded framework like this SourceForge one that will give you a good first step to create modules/plugins.
|
How should I start when developing a system based on modules or plugins?
|
I intend to develop a system that is entirely based on modules. The system base should have support for finding out about plugins, starting them up and being able to provide ways for those modules to communicate. Ideally, one should be able to put in new modules and yank out unused modules at will, and modules should be able to use each other's functionality if it is available.
This system should be used as a basis for simulation systems where a lot of stuff happens in different modules, and other modules might want to do something based on that.
The system I intend to develop is going to be in Java. The way I see it, I intend to have a folder with a subfolder for each module that includes a XML that describes the module with information such as name, maybe which events it might raise, stuff like that. I suppose I might need to write a custom ClassLoader to work this stuff out.
The thing is, I don't know if my idea actually holds any water and, of course, I intend on building a working prototype. However, I never worked on a truly modular system before, and I'm not really sure what is the best way to take on this problem.
Where should I start? Are there common problems and pitfalls that are found while developing this kind of system? How do I make the modules talk with each other while maintaining isolation (i.e, you remove a module and another module that was using it stays sane)? Are there any guides, specifications or articles I can read that could give me some ideas on where to start? It would be better if they were based on Java, but this is not a requirement, as what I'm looking for right now are ideas, not code.
Any feedback is appreciated.
|
[
"You should definitely look at OSGi. It aims at being the component/plugin mechanism for Java. It allows you to modularize your code (in so-called bundles) and update bundles at runtime. You can also completely hide implementation packages from unwanted access by other bundles, eg. only provide the API.\nEclipse was the first major open-source project to implement and use OSGi, but they didn't fully leverage it (no plugin installations/updates without restarts). If you start from scratch though, it will give you a very good framework for a plugin system.\nApache Felix is a complete open-source implementation (and there are others, such as Eclipse Equinox).\n",
"Without getting into great detail, you should be looking at Spring and a familiarization with OSGI or the Eclipse RCP frameworks will also give you some fundamental concepts you will need to keep in mind.\n",
"Another option is the ServiceLoader added in Java 1.6. \n",
"They are many way to do it but something simple can be by using Reflection. You write in your XML file name of file (that would be a class in reallity). You can than check what type is it and create it back with reflection. The class could have a common Interface that will let you find if the external file/class is really one of your module. Here is some information about Reflexion.\nYou can also use a precoded framework like this SourceForge onelink text that will give you a first good step to create module/plugin.\n"
] |
[
10,
2,
2,
1
] |
[] |
[] |
[
"events",
"java",
"module",
"plugins"
] |
stackoverflow_0000078004_events_java_module_plugins.txt
|
Q:
What attributes help runtime .Net performance?
I am looking for attributes I can use to ensure the best runtime performance for my .Net application by giving hints to the loader, JIT compiler or ngen.
For example we have DebuggableAttribute which should be set to not debug and not disable optimization for optimal performance.
[Debuggable(false, false)]
Are there any others I should know about?
A:
Ecma-335 specifies some more CompilationRelaxations for relaxed exception handling (so-called e-relaxed calls) in Annex F "Imprecise faults", but they have not been exposed by Microsoft.
Specifically CompilationRelaxations.RelaxedArrayExceptions and CompilationRelaxations.RelaxedNullReferenceException are mentioned there.
It'd be interesting to see what happens when you just try some integers in the CompilationRelaxationsAttribute's ctor ;)
A:
And another: Literal strings (strings declared in source code) are by default interned into a pool to save memory.
string s1 = "MyTest";
string s2 = new StringBuilder().Append("My").Append("Test").ToString();
string s3 = String.Intern(s2);
Console.WriteLine((Object)s2==(Object)s1); // Different references.
Console.WriteLine((Object)s3==(Object)s1); // The same reference.
Although it saves memory when the same literal string is used multiple times, it costs some CPU to maintain the pool, and once a string is put into the pool it stays there until the process is stopped.
Using CompilationRelaxationsAttribute you can tell the JIT compiler that you really don't want it to intern all the literal strings.
[assembly: CompilationRelaxations(CompilationRelaxations.NoStringInterning)]
A:
I found another: NeutralResourcesLanguageAttribute. According to this blog post it helps the loader in finding the right satellite assemblies faster by specifying the culture of the current (neutral) assembly.
[NeutralResourcesLanguageAttribute("nl", UltimateResourceFallbackLocation.MainAssembly)]
|
What attributes help runtime .Net performance?
|
I am looking for attributes I can use to ensure the best runtime performance for my .Net application by giving hints to the loader, JIT compiler or ngen.
For example we have DebuggableAttribute which should be set to not debug and not disable optimization for optimal performance.
[Debuggable(false, false)]
Are there any others I should know about?
|
[
"Ecma-335 specifies some more CompilationRelaxations for relaxed exception handling (so-called e-relaxed calls) in Annex F \"Imprecise faults\", but they have not been exposed by Microsoft.\nSpecifically CompilationRelaxations.RelaxedArrayExceptions and CompilationRelaxations.RelaxedNullReferenceException are mentioned there.\nIt'd be intersting what happens when you just try some integers in the CompilationRelaxationsAttribute's ctor ;)\n",
"And another: Literal strings (strings declared in source code) are by default interned into a pool to save memory. \nstring s1 = \"MyTest\"; \nstring s2 = new StringBuilder().Append(\"My\").Append(\"Test\").ToString(); \nstring s3 = String.Intern(s2); \nConsole.WriteLine((Object)s2==(Object)s1); // Different references.\nConsole.WriteLine((Object)s3==(Object)s1); // The same reference.\n\nAlthough it saves memory when the same literal string is used multiple times, it costs some cpu to maintaining the pool and once a string is put into the pool it stays there until the process is stopped.\nUsing CompilationRelaxationsAttribute you can tell the JIT compiler that you really don't want it to intern all the literal strings.\n[assembly: CompilationRelaxations(CompilationRelaxations.NoStringInterning)]\n\n",
"I found another: NeutralResourcesLanguageAttribute. According to this blog post it helps the loader in finding the right satellite assemblies faster by specifying the culture if the current (neutral) assembly.\n[NeutralResourcesLanguageAttribute(\"nl\", UltimateResourceFallbackLocation.MainAssembly)]\n\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
".net",
"performance",
"runtime"
] |
stackoverflow_0000070150_.net_performance_runtime.txt
|
Q:
Programmatically list WMI classes and their properties
Is there any known way of listing the WMI classes and their properties available for a particular system? I'm interested in a VBScript approach, but please suggest anything really :)
P.S. Great site.
A:
I believe this is what you want.
WMI Code Creator
A part of this nifty utility allows you to browse namespaces/classes/properties on the local and remote PCs, not to mention generating WMI code in VBScript/C#/VB on the fly. Very useful.
Also, the source code used to create the utility is included in the download, which could provide a reference if you wanted to create your own browser like interface.
A:
This MSDN page walks through enumerating the available classes: How to: List the Classes in a WMI Namespace
for retrieving properties from a class:
ManagementPath l_Path = new ManagementPath(l_className);
ManagementClass l_Class = new ManagementClass(myScope, l_Path, null);
foreach (PropertyData l_PropertyData in l_Class.Properties)
{
    string l_type = l_PropertyData.Type.ToString();
    int l_length = Convert.ToInt32(l_PropertyData.Qualifiers["maxlen"].Value);
}
|
Programmatically list WMI classes and their properties
|
Is there any known way of listing the WMI classes and their properties available for a particular system? I'm interested in a VBScript approach, but please suggest anything really :)
P.S. Great site.
|
[
"I believe this is what you want.\nWMI Code Creator\nA part of this nifty utility allows you to browse namespaces/classes/properties on the local and remote PCs, not to mention generating WMI code in VBScript/C#/VB on the fly. Very useful.\nAlso, the source code used to create the utility is included in the download, which could provide a reference if you wanted to create your own browser like interface.\n",
"This MSDN page walks through enumerating the available classes: How to: List the Classes in a WMI Namespace\nfor retrieving properties from a class:\nManagementPath l_Path = new ManagementPath(l_className);\nManagementClass l_Class = new ManagementClass(myScope, l_ManagementPath, null);\nforeach (PropertyData l_PropertyData in l_Class.Properties)\n{\n string l_type = l_PropertyData.Type.ToString());\n int l_length = Convert.ToInt32(l_PropertyData.Qualifiers[\"maxlen\"].Value); \n}\n\n"
] |
[
5,
2
] |
[] |
[] |
[
"vbscript",
"wmi"
] |
stackoverflow_0000012330_vbscript_wmi.txt
|
Q:
ASP.NET MVC "Components"
Is there someway to have a part of the page that renders like a little sub-page, like components?
For example, if I have a shopping cart on all my pages?
A:
Using Preview 5, Html.RenderPartial is your man: you can render sub-controls, and pass them your view data, or an arbitrary model and new view data combo.
A:
If you want it to render another controller's action as a component, to get encapsulation, you use
Html.RenderAction
It uses route data to get you there, and has its own view data and a kind of mini life cycle.
A:
You can create an ActionFilter that modifies the view data. That way, you can decorate every action that returns the partial with the action filter. Take a look at my post:
http://stephenwalther.com/blog/archive/2008/08/12/asp-net-mvc-tip-31-passing-data-to-master-pages-and-user-controls.aspx
A:
You are looking for subcontrollers. This implementation is the best way to do what you are talking about.
Edit: I just posted about this here: http://mhinze.com/subcontrollers-in-aspnet-mvc/
|
ASP.NET MVC "Components"
|
Is there someway to have a part of the page that renders like a little sub-page, like components?
For example, if I have a shopping cart on all my pages?
|
[
"Using preview 5, Html.RenderPartial is your man, you can render sub-controls, and pass them your viewdata, or an arbitrary model, and new viewdata combo.\n",
"If you want it to render another controllers action, as a component, to get encapsulation, you use.\nHtml.RenderAction\nuses routedata to get you there, has its own viewdata and kind of mini life cycle\n",
"You can create an ActionFilter that modifies the view data. That way, you can decorate every action that returns the partial with the action filter. Take a look at my post:\nhttp://stephenwalther.com/blog/archive/2008/08/12/asp-net-mvc-tip-31-passing-data-to-master-pages-and-user-controls.aspx\n",
"You are looking for subcontrollers. This implementation is the best way to do what you are talking about.\nEdit: I just posted about this here: http://mhinze.com/subcontrollers-in-aspnet-mvc/\n"
] |
[
10,
7,
5,
1
] |
[] |
[] |
[
"asp.net",
"asp.net_mvc",
"partial_views"
] |
stackoverflow_0000073902_asp.net_asp.net_mvc_partial_views.txt
|