Dataset schema: content (string, 86 to 88.9k chars), title (string, 0 to 150 chars), question (string, 1 to 35.8k chars), answers (list), answers_scores (list), non_answers (list), non_answers_scores (list), tags (list), name (string, 30 to 130 chars).
Q: Sharepoint Item Level Access & performance I have created a workflow activity that gives the item creator of a specific list full control on the item and sets everyone else to read-only access (permission). Someone told me that doing it this way (if I have a lot of users) the performance will go down dramatically. Is that correct? If yes, what is the best solution to create a list where anyone can create new items, but after the item is created only the creator can edit it and the rest of the users can only read it? A: The accepted answer is not actually answering the question correctly... You should not use a workflow to do this. If you want people to be able to edit items they create and only read ones they did not, use "List->Settings->Advanced Settings->Item-level Permissions", and this is available for document libraries (since they inherit from SPList); it just does not show up in their "Advanced Settings" in the UI. You can set the ReadSecurity property to 1 and the WriteSecurity property to 2 on the Document Library. http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.splist.writesecurity.aspx A: Performance degradation will happen when you use large ACLs for each list item. Just make sure that item-level permissions basically have the minimum entries. For example: the user that has permissions to edit that item, and a single security group that contains all the users with only Reader permissions. So, can Sharepoint offer these default permissions OOB? Not that I'm aware of. The only option that I can think of is using workflows that set these permissions dynamically when the document is uploaded. If you want to avoid performance degradation just make sure that you never display (or iterate using the object model) more than 2000 of those items in a Fine Grained Permissions list. THAT would definitely cause major performance issues. A: Yes, you might solve this with workflows but that might be a bit clumsy and it might slow your server. The better option is to use List Settings > Advanced Settings > Item-level Permissions. This feature is not available for Document and Form Libraries. A: It is true that a list that contains a large number of items with custom permissions applied will slow down your server. This is documented in the official Microsoft paper Plan for software boundaries. The recommended/magic number is 2000. Going further won't break anything, but it could be that you will run into performance issues.
Sharepoint Item Level Access & performance
I have created a workflow activity that gives the item creator of a specific list full control on the item and sets everyone else to read-only access (permission). Someone told me that doing it this way (if I have a lot of users) the performance will go down dramatically. Is that correct? If yes, what is the best solution to create a list where anyone can create new items, but after the item is created only the creator can edit it and the rest of the users can only read it?
[ "The accepted answer is not actually answering the question correctly...\nYou should not use a workflow to do this, if you want people to be able to edit items they create and only read ones they did not, use \"List->Settings->Advanced Settings->Item-level Permissions\", and this is available for document libraries (since they inherit from SPLIST) it just does not show up in their \"Advanced Settings\" in the UI. You can set the ReadSecurity property to 1 and the WriteSecurity property to 2 on the Document Library.\nhttp://msdn.microsoft.com/en-us/library/microsoft.sharepoint.splist.writesecurity.aspx\n", "Performance degradation will happen when you use large ACLs for each list item. Just make sure that item-level permissions basically have the minimum entries. For example:\n\nThe user that has permissions to edit that item\nA single security group that contains all the users with only Reader permissions.\n\nSo, can Sharepoint offer these default permissions OOB? Not that I'm aware of. The only option that I can think of is using workflows that set these permissions dinamycally when the document is uploaded.\nIf you want to avoid performance degradation just make sure that you never display (or iterate using the object model) more than 2000 of those items in a Fine Grained Permissions list. THAT would definitely cause major performance issues.\n", "Yes, you might solve this with workflows but that might be a bit clumsy and it might slow your server.\nThe better option is to use List Settings > Advanced Settings > Item-level Permissions. \nThis feature is not available for Document and Form Libraries.\n", "It is true that a list that contains a large number of items with custom permissions applied, will slown down your server. This is document in the official Microsoft paper Plan for software boundaries.\nThe recommended/magic number is 2000. Going further won't break anything, but it could be that you will run into performance issues.\n" ]
[ 3, 2, 1, 1 ]
[]
[]
[ "data_access", "list", "permissions", "sharepoint" ]
stackoverflow_0000085444_data_access_list_permissions_sharepoint.txt
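A minimal object-model sketch of the ReadSecurity/WriteSecurity approach mentioned in the first answer above. The site URL and list title are placeholders, and the snippet assumes a project that references the SharePoint server object model (Microsoft.SharePoint.dll); it is an illustration, not code from the thread.

using Microsoft.SharePoint;

class ItemLevelPermissionSetup
{
    static void Main()
    {
        // Placeholders: adjust the URL and list title for the target site.
        using (SPSite site = new SPSite("http://server/sites/demo"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Shared Documents"];
            list.ReadSecurity = 1;   // 1 = all users can read every item
            list.WriteSecurity = 2;  // 2 = users may edit only items they created
            list.Update();
        }
    }
}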
Q: Injecting a user account into mySQL Tackling a strange scenario here. We use a proprietary workstation management application which uses mySQL to store its data. Within the application they provide number of reports, such as which user logged into which machine at what time, all the software products installed on the monitored machines, so on and so forth. We are looking to do a different set of reports, however, they do not support custom reports. Since their data is being stored in mySQL, I gather I can do the reporting manually. I don't have valid credentials to connect to the mySQL server though. Is there anyway for me to create a user account in the mySQL server? I do not want to reset the root password or anything account that might be in there, as it might break the application. I have full access to the Windows 2003 server. I can stop and restart services, including the mySQL server. To the actual mySQL server, I only have basic access through the GUI provided by the software. I can't connect to it directly through CLI or through another tool (due to the lack of credentials). I apologize if it came off as if I'm trying to get unauthorized access to the mySQL server. I have contacted the software company, and as of today it's been two weeks without a response from them. I need to get to the data. I have full access to the physical box, I have admin privileges on it. A: You'll want to use the MySQL password recovery process. Follow these instructions, except replace the password reset query with a query to add a new user. The new user query would be something like: GRANT ALL ON *.* TO 'myuser'@'localhost' IDENTIFIED BY 'mypassword' WITH GRANT OPTION; That will create a new user "myuser" with the password "mypassword", who may log in to MySQL through the local system's CLI. You can then use the MySQL Administrator GUI (download here) and update user permissions so you can log in from other systems on the network. Or use the GRANT statement from the CLI, if that's more your style. A: Do you have access to the MySQL server in question? As in, what access do you have beyond what a regular user would? You should try to go through those routes before you "hack" your way in there, since that may or may not be feasible with that software. A: I assume I really should not answer this one, but it's just too much fun. Look at This page about SQL injections. That should cover your needs. This page shows how to add user accounts to mySQL I would try entering the following in random user input fields: p'; INSERT INTO user VALUES ('localhost','myNewAdmin',PASSWORD('some_pass'), 'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y'); and then p'; FLUSH PRIVILEGES; p'; is intended to close the regular question. e.g - Normal question is: "Select Adress from cusomers where custName = ' + $INPUT + '; becomes Select Adress from cusomers where custName = 'p'; INSERT INTO user VALUES('localhost','myNewAdmin',PASSWORD('some_pass'), 'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y'); A: One thing that comes in mind is sniffing the database communication and hope it's not encrypted. If it is encrypted try changing the configuration not to use SSL and restart mysql. A good sniffer that I use is Wireshark From mysql 5.0 documentation: MySQL supports secure (encrypted) connections between MySQL clients and the server using the Secure Sockets Layer (SSL) protocol. This section discusses how to use SSL connections. It also describes a way to set up SSH on Windows. 
For information on how to require users to use SSL connections, see the discussion of the REQUIRE clause of the GRANT statement in Section 12.5.1.3, “GRANT Syntax”. The standard configuration of MySQL is intended to be as fast as possible, so encrypted connections are not used by default. Doing so would make the client/server protocol much slower. Encrypting data is a CPU-intensive operation that requires the computer to do additional work and can delay other MySQL tasks. For applications that require the security provided by encrypted connections, the extra computation is warranted. MySQL allows encryption to be enabled on a per-connection basis. You can choose a normal unencrypted connection or a secure encrypted SSL connection according the requirements of individual applications. Secure connections are based on the OpenSSL API and are available through the MySQL C API. Replication uses the C API, so secure connections can be used between master and slave servers. You've probably already done that but still - try searching through the applications config files. If there's nothing - try searching through the executables/source code - maybe it's in plaintext if you're lucky. A: odds are there are triggers on the database side keeping a log so when you hack yourself into the database they will know when and how you did it. Not a good idea.
Injecting a user account into mySQL
Tackling a strange scenario here. We use a proprietary workstation management application which uses mySQL to store its data. Within the application they provide a number of reports, such as which user logged into which machine at what time, all the software products installed on the monitored machines, so on and so forth. We are looking to do a different set of reports, however, they do not support custom reports. Since their data is being stored in mySQL, I gather I can do the reporting manually. I don't have valid credentials to connect to the mySQL server though. Is there any way for me to create a user account in the mySQL server? I do not want to reset the root password or any other account that might be in there, as it might break the application. I have full access to the Windows 2003 server. I can stop and restart services, including the mySQL server. To the actual mySQL server, I only have basic access through the GUI provided by the software. I can't connect to it directly through CLI or through another tool (due to the lack of credentials). I apologize if it came off as if I'm trying to get unauthorized access to the mySQL server. I have contacted the software company, and as of today it's been two weeks without a response from them. I need to get to the data. I have full access to the physical box, I have admin privileges on it.
[ "You'll want to use the MySQL password recovery process. Follow these instructions, except replace the password reset query with a query to add a new user. The new user query would be something like:\nGRANT ALL ON *.* TO 'myuser'@'localhost' IDENTIFIED BY 'mypassword' WITH GRANT OPTION;\n\nThat will create a new user \"myuser\" with the password \"mypassword\", who may log in to MySQL through the local system's CLI. You can then use the MySQL Administrator GUI (download here) and update user permissions so you can log in from other systems on the network. Or use the GRANT statement from the CLI, if that's more your style.\n", "Do you have access to the MySQL server in question?\nAs in, what access do you have beyond what a regular user would? You should try to go through those routes before you \"hack\" your way in there, since that may or may not be feasible with that software.\n", "I assume I really should not answer this one, but it's just too much fun.\nLook at This page about SQL injections. That should cover your needs. \nThis page shows how to add user accounts to mySQL\nI would try entering the following in random user input fields:\np'; INSERT INTO user VALUES\n\n('localhost','myNewAdmin',PASSWORD('some_pass'), \n'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y'); \nand then\np'; FLUSH PRIVILEGES;\n\np'; is intended to close the regular question. e.g -\nNormal question is: \n\"Select Adress from cusomers where custName = ' + $INPUT + ';\n\nbecomes \n Select Adress from cusomers where custName = 'p'; INSERT INTO user \nVALUES('localhost','myNewAdmin',PASSWORD('some_pass'), \n'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y'); \n\n", "One thing that comes in mind is sniffing the database communication and hope it's not encrypted. If it is encrypted try changing the configuration not to use SSL and restart mysql. A good sniffer that I use is Wireshark\nFrom mysql 5.0 documentation:\n\nMySQL supports secure (encrypted)\n connections between MySQL clients and\n the server using the Secure Sockets\n Layer (SSL) protocol. This section\n discusses how to use SSL connections.\n It also describes a way to set up SSH\n on Windows. For information on how to\n require users to use SSL connections,\n see the discussion of the REQUIRE\n clause of the GRANT statement in\n Section 12.5.1.3, “GRANT Syntax”.\nThe standard configuration of MySQL is\n intended to be as fast as possible, so\n encrypted connections are not used by\n default. Doing so would make the\n client/server protocol much slower.\n Encrypting data is a CPU-intensive\n operation that requires the computer\n to do additional work and can delay\n other MySQL tasks. For applications\n that require the security provided by\n encrypted connections, the extra\n computation is warranted.\nMySQL allows encryption to be enabled\n on a per-connection basis. You can\n choose a normal unencrypted connection\n or a secure encrypted SSL connection\n according the requirements of\n individual applications.\nSecure connections are based on the\n OpenSSL API and are available through\n the MySQL C API. Replication uses the\n C API, so secure connections can be\n used between master and slave servers.\n\nYou've probably already done that but still - try searching through the applications config files. If there's nothing - try searching through the executables/source code - maybe it's in plaintext if you're lucky. 
\n", "odds are there are triggers on the database side keeping a log so when you hack yourself into the database they will know when and how you did it. Not a good idea.\n" ]
[ 9, 0, 0, 0, 0 ]
[]
[]
[ "account", "mysql" ]
stackoverflow_0000105645_account_mysql.txt
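A console sketch of the recovery route from the accepted answer, written for the Windows box described in the question. The service name, the binary name (mysqld-nt vs. mysqld depends on the MySQL version) and the reporting account details are placeholders, and only SELECT is granted since the stated goal is read-only reporting.

:: Stop the normal service and start a temporary instance that skips the grant tables.
net stop MySQL
mysqld-nt --skip-grant-tables --console

:: In a second console, connect without a password and add a read-only account.
mysql -u root
    FLUSH PRIVILEGES;  -- reload the grant tables so GRANT statements are accepted
    GRANT SELECT ON *.* TO 'reporting'@'localhost' IDENTIFIED BY 'mypassword';
    FLUSH PRIVILEGES;
    EXIT;

:: Shut the temporary instance down, then restart the regular service.
mysqladmin -u root shutdown
net start MySQL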
Q: TCP send queue depth How do I discover how many bytes have been sent to a TCP socket but have not yet been put on the wire? Looking at the diagram here: I would like to know the total of Categories 2, 3, and 4 or the total of 3 and 4. This is in C(++) and on both Windows and Linux. Ideally there would be an ioctl that I could use, but there doesn't seem to be any. A: Under Linux, see the man page for tcp(7). It appears that you can get the number of untransmitted bytes by ioctl(sock, SIOCOUTQ, ...); SIOCINQ reports the unread bytes in the receive queue, SIOCOUTQ the unsent bytes in the send queue. Other stats might be available from members of the structure given back by the TCP_INFO getsockopt() call. A: Some Unix flavors may have an API way to do this, but there is no way to do it that is portable across different variants. A: If you want to determine whether to add data or not: don't worry, send will block until the data is in the queue. If you don't want it to block, you can tell send(2) not to block: send(socket, buf, buflen, MSG_DONTWAIT); But this only works on Linux. You can also set the socket to non-blocking: fcntl(socket, F_SETFL, O_NONBLOCK); This way write will return an error (EAGAIN) if the data cannot be written to the stream.
TCP send queue depth
How do I discover how many bytes have been sent to a TCP socket but have not yet been put on the wire? Looking at the diagram here: I would like to know the total of Categories 2, 3, and 4 or the total of 3 and 4. This is in C(++) and on both Windows and Linux. Ideally there would be an ioctl that I could use, but there doesn't seem to be any.
[ "Under Linux, see the man page for tcp(7).\nIt appears that you can get the number of untransmitted bytes by ioctl(sock,SIOCINQ ...\nOther stats might be available from members of the structure given back by the TCP_INFO getsockopt() call.\n", "Some Unix flavors may have an API way to do this, but there is no way to do it that is portable across different variants.\n", "If you want to determine wheter to add data or not: don't worry, send will block until the data is in the queue. If you don't want it to block, you can tell it to send(2):\nsend(socket, buf, buflen, MSG_DONTWAIT);\n\nBut this only works on Linux.\nYou can also set the socket to non-blocking:\nfcntl(socket, F_SETFD, O_NONBLOCK);\n\nThis way write will return an error (EAGAIN) if the data cannot be written to the stream.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "network_programming", "tcp" ]
stackoverflow_0000105681_network_programming_tcp.txt
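A Linux-only sketch of the SIOCOUTQ call discussed above, per tcp(7); the helper name is made up for illustration, and no Windows equivalent is shown here.

#include <sys/ioctl.h>
#include <linux/sockios.h>   /* SIOCOUTQ */

/* Returns the number of bytes written to the socket but not yet sent by the
 * kernel (the send-queue depth), or -1 on error. */
int unsent_bytes(int sock)
{
    int pending = 0;
    if (ioctl(sock, SIOCOUTQ, &pending) == -1)
        return -1;
    return pending;
}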
Q: Why won't an Ajax Script Work Locally? I've an issue with the same piece of code running fine on my live website but not on my local development server. I've an Ajax function that updates a div. The following code works on the live site: self.xmlHttpReq.open("POST", PageURL, true); self.xmlHttpReq.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); self.xmlHttpReq.setRequestHeader("Content-length", QueryString.length); //..update div stuff... self.xmlHttpReq.send(QueryString); When I try to run this on my local machine, nothing is passed to the QueryString. However, to confuse matters, the following code does work locally: self.xmlHttpReq.open("POST", PageURL+"?"+QueryString, true); self.xmlHttpReq.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded; charset=UTF-8'); //..div update stuff.. self.xmlHttpReq.send(QueryString); But, I can't use the code that works on my local machine as it doesn't work on the live server (they've changed their policy on querystrings for security reasons)! I can alert the Querystring out so I know it's passed into the function on my local machine. The only thing I can think of is that it's a hardware/update issue. Live Site is running IIS 6 (on a WIN 2003 box I think) Local Site is running IIS 5.1 (On XP Pro) Are there some updates or something I'm missing or something? A: Is there a reason you're explicitly setting the Content-Length header in the first example? You... shouldn't need to do this, and I wouldn't be surprised to find it causing problems. Oh, and check your encoding routine. The rules are not quite the same for querystrings and POSTed form data. A: I would guess that Shog9 is right, and that IIS 6 is smart enough to ignore your request and send the correct headers, while 5.1 throws an error.
Why won't an Ajax Script Work Locally?
I've an issue with the same piece of code running fine on my live website but not on my local development server. I've an Ajax function that updates a div. The following code works on the live site: self.xmlHttpReq.open("POST", PageURL, true); self.xmlHttpReq.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); self.xmlHttpReq.setRequestHeader("Content-length", QueryString.length); //..update div stuff... self.xmlHttpReq.send(QueryString); When I try to run this on my local machine, nothing is passed to the QueryString. However, to confuse matters, the following code does work locally: self.xmlHttpReq.open("POST", PageURL+"?"+QueryString, true); self.xmlHttpReq.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded; charset=UTF-8'); //..div update stuff.. self.xmlHttpReq.send(QueryString); But, I can't use the code that works on my local machine as it doesn't work on the live server (they've changed their policy on querystrings for security reasons)! I can alert the Querystring out so I know it's passed into the function on my local machine. The only thing I can think of is that it's a hardware/update issue. Live Site is running IIS 6 (on a WIN 2003 box I think) Local Site is running IIS 5.1 (On XP Pro) Are there some updates or something I'm missing or something?
[ "Is there a reason you're explicitly setting the Content-Length header in the first example? You... shouldn't need to do this, and i wouldn't be surprised to find it causing problems. \nOh, and check your encoding routine. The rules are not quite the same for querystrings and POSTed form data.\n", "I would guess that Shog9 is right, and that IIS 6 i smart enough to ignore your request and send the correct headers, while 5.2 throws an error.\n" ]
[ 1, 0 ]
[]
[]
[ "ajax", "javascript", "webserver" ]
stackoverflow_0000105777_ajax_javascript_webserver.txt
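A sketch of the pattern the first answer points toward: drop the hand-set Content-Length header and encode each value for the POST body. PageURL mirrors the question, while nameValue and the "results" div id are placeholders invented for illustration.

// Build the body with encodeURIComponent so the form data is encoded correctly,
// and let the browser compute Content-Length itself.
var xhr = new XMLHttpRequest();   // assumes a browser exposing XMLHttpRequest natively
xhr.open("POST", PageURL, true);
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded; charset=UTF-8");
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById("results").innerHTML = xhr.responseText;
    }
};
var body = "name=" + encodeURIComponent(nameValue);
xhr.send(body);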
Q: T-SQL triggers firing a "Column name or number of supplied values does not match table definition" error Here's something I haven't been able to fix, and I've looked everywhere. Perhaps someone here will know! I have a table called dandb_raw, with three columns in particular: dunsId (PK), name, and searchName. I also have a trigger that acts on this table: SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER TRIGGER [dandb_raw_searchNames] ON [dandb_raw] FOR INSERT, UPDATE AS SET NOCOUNT ON select dunsId, name into #magic from inserted UPDATE dandb SET dandb.searchName = company_generateSearchName(dandb.name) FROM (select dunsId, name from #magic) i INNER JOIN dandb_raw dandb on i.dunsId = dandb.dunsId --Add new search matches SELECT c.companyId, dandb.dunsId INTO #newMatches FROM dandb_raw dandb INNER JOIN (select dunsId, name from #magic) a on a.dunsId = dandb.dunsId INNER JOIN companies c ON dandb.searchName = c.searchBrand --avoid url matches that are potentially wrong AND (lower(dandb.url) = lower(c.url) OR dandb.url = '' OR c.url = '' OR c.url is null) INSERT INTO #newMatches (companyId, dunsId) SELECT c.companyId, max(dandb.dunsId) dunsId FROM dandb_raw dandb INNER JOIN ( select case when charindex('/',url) <> 0 then left(url, charindex('/',url)-1) else url end urlMatch, * from companies ) c ON dandb.url = c.urlMatch where subsidiaryOf = 1 and isReported = 1 and dandb.url <> '' and c.companyId not in (select companyId from #newMatches) group by companyId having count(dandb.dunsId) = 1 UPDATE cd SET cd.dunsId = nm.dunsId FROM companies_dandb cd INNER JOIN #newMatches nm ON cd.companyId = nm.companyId GO The trigger causes inserts to fail: insert into [dandb_raw](dunsId, name) select 3442355, 'harper' union all select 34425355, 'har 466per' update [dandb_raw] set name ='grap6767e' With this error: Msg 213, Level 16, State 1, Procedure companies_contactInfo_updateTerritories, Line 20 Insert Error: Column name or number of supplied values does not match table definition. The most curious thing about this is that each of the individual statements in the trigger works on its own. It's almost as though inserted is a one-off table that infects temporary tables if you try to move inserted into one of them. So what causes the trigger to fail? How can it be stopped? A: I think David and Cervo combined have hit on the problem here. I'm pretty sure part of what was happening was that we were using #newMatches in multiple triggers. When one trigger changed some rows, it would fire another trigger, which would attempt to use the connection scoped #newMatches. As a result, it would try to, find the table already existed with a different schema, die, and produce the message above. One piece of evidence that would be in favor: Does inserted use a stack style scope (nested triggers have their own inserteds?) Still speculating though - at least things seem to be working now! A: What is companies_contactInfo_updateTerritories? The actual reference mentions procedure "companies_contactInfo_updateTerritories" but I do not see it in the code given. Also I do not see where it is being called. Unless it is from your application that is calling the SQL and hence irrelevant.... If you tested everything and it worked but now it doesn't work, then something must be different. One thing to consider is security. I noticed that you just call the table [dandb_raw] and not [dbo].[dandb_raw]. 
So if the user had a table of the same name [user].[dandb_raw], that table would be used to check the definitions instead of your table. Also, the trigger creates temp tables. But if some of the temp tables already existed for whatever reason but with different definitions, this may also be a problem. A: I don't see any obvious problem in the code. "SELECT .. INTO" is weak kung-fu. Try explicitly creating the temp table definition: CREATE TABLE #newMatches ( CompanyID int PRIMARY KEY, DunsID int ) When you're done with #newMatches, you should get rid of it so you can create it again later (temp tables are connection scoped!!) DROP TABLE #newMatches A: Trigger code (because it must run every time the data is updated) must be efficient and must account for multiple record inserts. You've succeeded at the second but not the first. You have made this overly complicated and have used things such as Not in statements that are usually less efficient than using a left join. Temp tables are unnecessary here (I would never consider using one in a trigger) as they add to the inefficiency of the trigger. There is no reason not to write From inserted i instead of FROM (select dunsId, name from #magic) i The first is likely to be faster and is simpler to read and maintain. Here: JOIN ( select case when charindex('/',url) <> 0 then left(url, charindex('/',url)-1) else url end urlMatch, * from companies ) c ON dandb.url = c.urlMatch You are selecting all the fields in the table even though you only appear to be using one. Why? You are also running that case statement on all the records in company even though after you join you may not need all of them. Also in general I would avoid using select * but especially in a trigger. Suppose you are inserting into another table and you used select * from some table joined to inserted or deleted. Adding a column to that table would cause the trigger to fail and stop all data changes until it was fixed. You've also used a function in the trigger. This could be painfully slow if you have a large insert. I suggest you test this by updating a large group of records and see what happens. All data changes do not happen just from the user interface, one record at a time. There will be times when one field is updated from an ad-hoc query in management studio (when all prices need to be adjusted by 10% as the simplest example that comes to mind.) Your trigger needs to be able to handle those types of updates as well as the ones you are expecting. I would run a test case updating 100000 rows and see how much this trigger slows things down. Maybe this isn't really answering your problem, but the trigger is just so far from optimal, I had to say it.
T-SQL triggers firing a "Column name or number of supplied values does not match table definition" error
Here's something I haven't been able to fix, and I've looked everywhere. Perhaps someone here will know! I have a table called dandb_raw, with three columns in particular: dunsId (PK), name, and searchName. I also have a trigger that acts on this table: SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER TRIGGER [dandb_raw_searchNames] ON [dandb_raw] FOR INSERT, UPDATE AS SET NOCOUNT ON select dunsId, name into #magic from inserted UPDATE dandb SET dandb.searchName = company_generateSearchName(dandb.name) FROM (select dunsId, name from #magic) i INNER JOIN dandb_raw dandb on i.dunsId = dandb.dunsId --Add new search matches SELECT c.companyId, dandb.dunsId INTO #newMatches FROM dandb_raw dandb INNER JOIN (select dunsId, name from #magic) a on a.dunsId = dandb.dunsId INNER JOIN companies c ON dandb.searchName = c.searchBrand --avoid url matches that are potentially wrong AND (lower(dandb.url) = lower(c.url) OR dandb.url = '' OR c.url = '' OR c.url is null) INSERT INTO #newMatches (companyId, dunsId) SELECT c.companyId, max(dandb.dunsId) dunsId FROM dandb_raw dandb INNER JOIN ( select case when charindex('/',url) <> 0 then left(url, charindex('/',url)-1) else url end urlMatch, * from companies ) c ON dandb.url = c.urlMatch where subsidiaryOf = 1 and isReported = 1 and dandb.url <> '' and c.companyId not in (select companyId from #newMatches) group by companyId having count(dandb.dunsId) = 1 UPDATE cd SET cd.dunsId = nm.dunsId FROM companies_dandb cd INNER JOIN #newMatches nm ON cd.companyId = nm.companyId GO The trigger causes inserts to fail: insert into [dandb_raw](dunsId, name) select 3442355, 'harper' union all select 34425355, 'har 466per' update [dandb_raw] set name ='grap6767e' With this error: Msg 213, Level 16, State 1, Procedure companies_contactInfo_updateTerritories, Line 20 Insert Error: Column name or number of supplied values does not match table definition. The most curious thing about this is that each of the individual statements in the trigger works on its own. It's almost as though inserted is a one-off table that infects temporary tables if you try to move inserted into one of them. So what causes the trigger to fail? How can it be stopped?
[ "I think David and Cervo combined have hit on the problem here.\nI'm pretty sure part of what was happening was that we were using #newMatches in multiple triggers. When one trigger changed some rows, it would fire another trigger, which would attempt to use the connection scoped #newMatches.\nAs a result, it would try to, find the table already existed with a different schema, die, and produce the message above. One piece of evidence that would be in favor: Does inserted use a stack style scope (nested triggers have their own inserteds?)\nStill speculating though - at least things seem to be working now!\n", "What is companies_contactInfo_updateTerritories? The actual reference mentions procedure \"companies_contactInfo_updateTerritories\" but I do not see it in the code given. Also I do not see where it is being called. Unless it is from your application that is calling the SQL and hence irrelevant....\nIf you tested everything and it worked but now it doesn't work, then something must be different. One thing to consider is security. I noticed that you just call the table [dandb_raw] and not [dbo].[dandb_raw]. So if the user had a table of the same name [user].[dandb_raw], that table would be used to check the definitions instead of your table. Also, the trigger creates temp tables. But if some of the temp tables already existed for whatever reason but with different definitions, this may also be a problem.\n", "I don't see any obvious problem in the code.\n\"SELECT .. INTO\" is weak kung-fu. Try explicitly creating the temp table definition:\nCREATE TABLE #newMatches\n(\n CompanyID int PRIMARY KEY,\n DunsID int\n)\n\nWhen you're done with #newMatches, you should get rid of it so you can create it again later (temp tables are connection scoped!!)\nDROP TABLE #newMatches\n\n", "Trigger code (because it must run everytime the data is updated) must be efficient and must account for multiple record inserts. You've succeeded at the second but not the first. You have made this overly complicated and have used things such as Not in statements that are usually less efficeint than using a left join. Temp tables are unnecessary here (I would never consider using one in a trigger) as they add to the inefficiency of the trigger. There is not reason not to write \nFrom inserted i \ninstead of \n FROM (select dunsId, name from #magic) i\nThe first is likely to be faster and is simpler to read and maintain.\nHere:\nJOIN ( select case when charindex('/',url) <> 0 then left(url, charindex('/',url)-1) else url end urlMatch, * from companies ) c ON dandb.url = c.urlMatch\nYou are selecting all the fields in the table even though you only appear to be using one. Why? You are also running that case stament on all the records in company even though after you join you may not need all of them.\nAlso in general I would avoid using select * but especially in a trigger. Suppose you are inserting into another table and you used select * from some table joined to inserted or deleted. Adding a column to that table would cause the trigger to fail and stop all data changes until it was fixed.\nYou've also used a function in the trigger. This coudl be painfully slow if you havea large insert. I suggest you test this by updating a large group of records and see what happens. All data changes do not happen just from the user interface, one record at a time. 
There will be times when one field is updated from an ad-hoc query in management studio (when all prices need to be adjusted by 10% as the simplest example that comes to mind.) Your trigger needs to be able to handle those types if updates as well as the ones you are expecting. I would run a test case updating 100000 rows and see how much this trigger slows things down.\nMaybe this isn't really answering your problem, but the trigger just is so far from optimal, I had to say it.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "sql", "triggers", "tsql" ]
stackoverflow_0000095218_sql_triggers_tsql.txt
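Making the last answer's "use inserted directly" point concrete, here is a sketch of the trigger's first UPDATE without the #magic staging table. It belongs inside the trigger body (inserted only exists there), and the dbo schema prefix on the function is an assumption.

-- Inside the trigger: join straight to inserted instead of staging it in #magic.
UPDATE dandb
SET    dandb.searchName = dbo.company_generateSearchName(dandb.name)
FROM   dandb_raw dandb
       INNER JOIN inserted i ON i.dunsId = dandb.dunsId;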
Q: Can I revoke some database privileges from MediaWiki after installation? I've just installed MediaWiki on a web server. Obviously it needs lots of privileges during installation to set up its database correctly. Now that it's installed can I safely revoke some privileges (e.g. create table, drop table?) Or might it need to create more tables later (when they are first needed?) If not then I would prefer to grant it as few privileges as possible. A: After the installation, MediaWiki doesn't need to create any more tables. I'd suggest giving the user insert, select, and lock permission. grant select,lock tables,insert on media_wiki_db.* to 'wiki'@'localhost' identified by 'password'; A: Change the user that mediawiki connects as in LocalSettings.php and then using phpMyAdmin, you can edit the privileges of that user (that is, if you aren't comfortable granting and revoking privileges from the mysql console). http://www.phpmyadmin.net/home_page/index.php
Can I revoke some database privileges from MediaWiki after installation?
I've just installed MediaWiki on a web server. Obviously it needs lots of privileges during installation to set up its database correctly. Now that it's installed can I safely revoke some privileges (e.g. create table, drop table?) Or might it need to create more tables later (when they are first needed?) If not then I would prefer to grant it as few privileges as possible.
[ "After the installation, MediaWiki doesn't need to create any more tables. I'd suggest giving the user insert, select, and lock permission.\ngrant select,lock tables,insert on media_wiki_db.* to 'wiki'@'localhost' identified by 'password';\n\n", "Change the user that mediawiki connects as in LocalSettings.php and then using phpMyAdmin, you can edit the privileges of that user (that is, if you aren't comfortable granting and revoking privileges from the mysql console).\nhttp://www.phpmyadmin.net/home_page/index.php\n" ]
[ 2, 0 ]
[]
[]
[ "mediawiki", "mysql" ]
stackoverflow_0000105604_mediawiki_mysql.txt
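Building on the accepted answer, a sketch of trimming the wiki account to day-to-day privileges once installation is done. The database, user and host names are placeholders, and UPDATE/DELETE are included here because normal wiki editing writes to existing rows, which the answer's shorter list would block.

REVOKE ALL PRIVILEGES ON mediawiki_db.* FROM 'wiki'@'localhost';
GRANT SELECT, INSERT, UPDATE, DELETE, LOCK TABLES
    ON mediawiki_db.* TO 'wiki'@'localhost';
FLUSH PRIVILEGES;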
Q: Implementing IntelliSense-like behavior in custom editors for domain-specific languages I'm creating a DSL with a template-like editor, much like the rule systems in Alice. Users will be able to select relationships from a list as well as the objects to apply the relation to. These two lists should be filtered based on the acceptable types -- for instance, if the relationship is "greater than" then the available objects must be of a type that "greater than" is implemented for. Similarly, if an object is selected that is not comparable with greater than, then that relation should not be in the list of potential relations. I think the heart of this problem is a type checker, but I'm not certain of the best way to incorporate that type of logic in my application. Is anyone aware of existing type checking libraries for DSLs? I am specifically interested in open-source and cross-platform technologies. Java is probably the language we will end up using, but that is not fixed. A: You might look into Scintilla. It's the editing component used by Notepad++, among other tools. It has some support for doing autocompletion, although I haven't tried using it myself, so I'm not sure how well it works. It's open source, so if it doesn't meet your needs, you can extend it without too much hassle, I think. A: This might help on the intellisense side - CodeTextBox
Implementing IntelliSense-like behavior in custom editors for domain-specific languages
I'm creating a DSL with a template-like editor, much like the rule systems in Alice. Users will be able to select relationships from a list as well as the objects to apply the relation to. These two lists should be filtered based on the acceptable types -- for instance, if the relationship is "greater than" then the available objects must be of a type that "greater than" is implemented for. Similarly, if an object is selected that is not comparable with greater than, then that relation should not be in the list of potential relations. I think the heart of this problem is a type checker, but I'm not certain of the best way to incorporate that type of logic in my application. Is anyone aware of existing type checking libraries for DSLs? I am specifically interested in open-source and cross-platform technologies. Java is probably the language we will end up using, but that is not fixed.
[ "You might look into Scintilla. It's the editing component used by Notepad++, among other tools. It has some support for doing autocompletion, although I haven't tried using it myself, so I'm not sure how well it works. It's open source, so if it doesn't meet your needs, you can extend it without too much hassle, I think.\n", "This might help on the intellisense side - CodeTextBox\n" ]
[ 2, 1 ]
[]
[]
[ "dsl", "intellisense", "types" ]
stackoverflow_0000105901_dsl_intellisense_types.txt
Q: I need a helper method to compare a char Enum and a char boxed to an object I have an enum that looks as follows: public enum TransactionStatus { Open = 'O', Closed = 'C'}; and I'm pulling data from the database with a single character indicating - you guessed it - whether 'O' the transaction is open or 'C' the transaction is closed. now because the data comes out of the database as an object I am having a heck of a time writing comparison code. The best I can do is to write: protected bool CharEnumEqualsCharObj(TransactionStatus enum_status, object obj_status) { return ((char)enum_status).ToString() == obj_status.ToString(); } However, this is not the only character enum that I have to deal with, I have 5 or 6 and writting the same method for them is annoying to say the least. Supposedly all enums inherit from System.Enum but if I try to set that as the input type I get compilation errors. This is also in .NET 1.1 so generics are out of the question. I've been struggling with this for a while. Does anyone have a better way of writing this method? Also, can anyone clarify the whole enums inherit from System.Enum but are not polymorphic thing? A: static void Main(string[] args) { object val = 'O'; Console.WriteLine(EnumEqual(TransactionStatus.Open, val)); val = 'R'; Console.WriteLine(EnumEqual(DirectionStatus.Left, val)); Console.ReadLine(); } public static bool EnumEqual(Enum e, object boxedValue) { return e.Equals(Enum.ToObject(e.GetType(), (char)boxedValue)); } public enum TransactionStatus { Open = 'O', Closed = 'C' }; public enum DirectionStatus { Left = 'L', Right = 'R' }; A: Enums are generally messy in C# so when using .NET 2.0 its common to wrap the syntax with generics to avoid having to write such clumsy code. In .NET 1.1 you can do something like the below, although it's not much tidier than the original snippet: protected bool CharEnumEqualsCharObj(TransactionStatus enum_status, object obj_status) { return (enum_status == Enum.Parse(typeof(TransactionStatus), obj_status.ToString())); } This is about the same amount of code but you are now doing enum rather than string comparison. You could also use the debugger/documentation to see if obj_status really is an object or whether you can safely cast it to a string.
I need a helper method to compare a char Enum and a char boxed to an object
I have an enum that looks as follows: public enum TransactionStatus { Open = 'O', Closed = 'C'}; and I'm pulling data from the database with a single character indicating - you guessed it - whether 'O' the transaction is open or 'C' the transaction is closed. now because the data comes out of the database as an object I am having a heck of a time writing comparison code. The best I can do is to write: protected bool CharEnumEqualsCharObj(TransactionStatus enum_status, object obj_status) { return ((char)enum_status).ToString() == obj_status.ToString(); } However, this is not the only character enum that I have to deal with, I have 5 or 6 and writting the same method for them is annoying to say the least. Supposedly all enums inherit from System.Enum but if I try to set that as the input type I get compilation errors. This is also in .NET 1.1 so generics are out of the question. I've been struggling with this for a while. Does anyone have a better way of writing this method? Also, can anyone clarify the whole enums inherit from System.Enum but are not polymorphic thing?
[ "static void Main(string[] args)\n{\n object val = 'O';\n Console.WriteLine(EnumEqual(TransactionStatus.Open, val));\n\n val = 'R';\n Console.WriteLine(EnumEqual(DirectionStatus.Left, val));\n\n Console.ReadLine();\n}\n\npublic static bool EnumEqual(Enum e, object boxedValue)\n{ \n return e.Equals(Enum.ToObject(e.GetType(), (char)boxedValue));\n}\n\npublic enum TransactionStatus { Open = 'O', Closed = 'C' };\npublic enum DirectionStatus { Left = 'L', Right = 'R' };\n\n", "Enums are generally messy in C# so when using .NET 2.0 its common to wrap the syntax with generics to avoid having to write such clumsy code.\nIn .NET 1.1 you can do something like the below, although it's not much tidier than the original snippet:\nprotected bool CharEnumEqualsCharObj(TransactionStatus enum_status, object obj_status)\n{\n return (enum_status == Enum.Parse(typeof(TransactionStatus), obj_status.ToString()));\n}\n\nThis is about the same amount of code but you are now doing enum rather than string comparison.\nYou could also use the debugger/documentation to see if obj_status really is an object or whether you can safely cast it to a string.\n" ]
[ 4, 0 ]
[ "I would take a look at Enum.Parse. It will let you parse your char back into the proper enum. I believe it works all the way back to C# 1.0. Your code would look a bit like this:\nTransactionStatus status = (TransactionStatus)Enum.Parse(typeof(TransactionStatus), obj.ToString());\n\n", "If you just have to compare values you can use something like:\nprotected bool CharEnumEqualsCharObj(TransactionStatus enum_status, object obj_status) {\n return (char)enum_status == (char)obj_status;\n}\n\n" ]
[ -1, -2 ]
[ "c#", "casting", "enums" ]
stackoverflow_0000105609_c#_casting_enums.txt
Q: PHP/JS - Create thumbnails on the fly or store as files For an image hosting web application: For my stored images, is it feasible to create thumbnails on the fly using PHP (or whatever), or should I save 1 or more different sized thumbnails to disk and just load those? Any help is appreciated. A: Save thumbnails to disk. Image processing takes a lot of resources and, depending on the size of the image, might exceed the default allowed memory limit for php. It is less of a concern if you have your own server with only your application running but it still takes a lot of cpu power and memory to resize images. If you're considering creating thumbnails on the fly anyway, you don't have to change much - upon the first request, create the thumbnail from the source file, save it to disk and upon subsequent requests just read it off the disk. A: I use phpThumb, as it's the best of both worlds. You can create thumbnails on the fly, but it automatically caches the images to speed up future requests. It creates a nice wrapper around the GD and ImageMagick libraries. Worth a look! A: It would be much better to cache the thumbnails. Generating them on the fly would be very taxing on the system. A: It depends on the usage pattern of the site, but, basically, how many times do you expect each image to be viewed? In the case of thumbnails, they're most likely to be around for quite a while (the image is uploaded once and never changed, so the thumbnail doesn't change either), so it's generally worthwhile to generate when the full image is uploaded and store them for later. Unless the site is completely dead, they'll be viewed many (hundreds or thousands of) times over their lifetime and disk is a lot cheaper than latency these days. This also becomes more significant as load on the server increases, of course. Conversely, for something like stock charts that get updated every hour (if not more frequently), that would be a situation where you'd do better to create them on the fly, so as to avoid wasting CPU time on constantly generating images which no user will ever see. Or, if you want to get fancy, you can optimize to handle either access pattern by generating the images on the fly the first time they're needed and then showing the pre-generated one afterwards, up until the data it's generated from changes, at which point you delete it so that it will be regenerated the next time it's needed. But that would be overkill for something as static as thumbnails, IMO. A: check out the gd library and imagemagick
PHP/JS - Create thumbnails on the fly or store as files
For an image hosting web application: For my stored images, is it feasible to create thumbnails on the fly using PHP (or whatever), or should I save 1 or more different sized thumbnails to disk and just load those? Any help is appreciated.
[ "Save thumbnails to disk. Image processing takes a lot of resources and, depending on the size of the image, might exceed the default allowed memory limit for php. It is less of a concern if you have your own server with only your application running but it still takes a lot of cpu power and memory to resize images. If you're considering creating thumbnails on the fly anyway, you don't have to change much - upon the first request, create the thumbnail from the source file, save it to disk and upon subsequent requests just read it off the disk.\n", "I use phpThumb, as it's the best of both worlds. You can create thumbnails on the fly, but it automatically caches the images to speed up future requests. It creates a nice wrapper around the GD and ImageMagick libraries. Worth a look!\n", "It would be much better to cache the thumbnails. Generating them on the fly would be very taxing on the system.\n", "It depends on the usage pattern of the site, but, basically, how many times do you expect each image to be viewed?\nIn the case of thumbnails, they're most likely to be around for quite a while (the image is uploaded once and never changed, so the thumbnail doesn't change either), so it's generally worthwhile to generate when the full image is uploaded and store them for later. Unless the site is completely dead, they'll be viewed many (hundreds or thousands of) times over their lifetime and disk is a lot cheaper than latency these days. This also becomes more significant as load on the server increases, of course.\nConversely, for something like stock charts that get updated every hour (if not more frequently), that would be a situation where you'd do better to create them on the fly, so as to avoid wasting CPU time on constantly generating images which no user will ever see.\nOr, if you want to get fancy, you can optimize to handle either access pattern by generating the images on the fly the first time they're needed and then showing the pre-generated one afterwards, up until the data it's generated from changes, at which point you delete it so that it will be regenerated the next time it's needed. But that would be overkill for something as static as thumbnails, IMO.\n", "check out the gd library and imagemagick\n" ]
[ 12, 2, 1, 1, 0 ]
[]
[]
[ "image", "image_manipulation", "javascript", "php", "thumbnails" ]
stackoverflow_0000103707_image_image_manipulation_javascript_php_thumbnails.txt
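A sketch of the "create on first request, then serve from disk" approach described in the first answer, using GD. The cache directory, the JPEG-only assumption and the fixed 150px width are placeholders chosen for illustration.

<?php
// Return the path of a cached thumbnail, generating it on the first request only.
function thumbnail($source, $cacheDir = 'thumbs', $width = 150)
{
    $thumbPath = $cacheDir . '/' . md5($source) . '.jpg';
    if (!file_exists($thumbPath)) {
        list($w, $h) = getimagesize($source);
        $height = (int) round($h * $width / $w);       // keep the aspect ratio
        $src = imagecreatefromjpeg($source);           // assumes a JPEG source image
        $dst = imagecreatetruecolor($width, $height);
        imagecopyresampled($dst, $src, 0, 0, 0, 0, $width, $height, $w, $h);
        imagejpeg($dst, $thumbPath, 85);               // 85 = JPEG quality
        imagedestroy($src);
        imagedestroy($dst);
    }
    return $thumbPath;
}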
Q: Database safety: Intermediary "to_be_deleted" column/table? Everyone has accidentally forgotten the WHERE clause on a DELETE query and blasted some un-backed up data once or twice. I was pondering that problem, and I was wondering if the solution I came up with is practical. What if, in place of actual DELETE queries, the application and maintenance scripts did something like: UPDATE foo SET to_be_deleted=1 WHERE blah = 50; And then a cron job was set to go through and actually delete everything with the flag? The downside would be that pretty much every other query would need to have WHERE to_be_deleted != 1 appended to it, but the upside would be that you'd never mistakenly lose data again. You could see "2,349,325 rows affected" and say, "Hmm, looks like I forgot the WHERE clause," and reset the flags. You could even make the to_be_deleted field a DATE column, so the cron job would check to see if a row's time had come yet. Also, you could remove DELETE permission from the production database user, so even if someone managed to inject some SQL into your site, they wouldn't be able to remove anything. So, my question is: Is this a good idea, or are there pitfalls I'm not seeing? A: That is fine if you want to do that, but it seems like a lot of work. How many people are manually changing the database? It should be very few, especially if your users have an app to work with. When I work on the production db I put EVERYTHING I do in a transaction so if I mess up I can rollback. Just having a standard practice like that for me has helped me. I don't see anything really wrong with that though other than ever single point of data manipulation in each applicaiton will have to be aware of this functionality and not just the data it wants. A: This would be fine as long as your appliction does not require that the data is immediately deleted since you have to wait for the next interval of the cron job. I think a better solution and the more common practice is to use a development server and a production server. If your development database gets blown out, simply reload it. No harm done. If you're testing code on your production database, you deserve anything bad that happens. A: A lot of people have a delete flag or a row status flag. But if someone is doing a change through the back end (and they will be doing it since often people need batch changes done that can't be accomplished through the front end) and they make a mistake they will still often go for delete. Ultimately this is no substitute for testing the script before applying it to a production environment. Also...what happens if the following query gets executed "UPDATE foo SET to_be_deleted=1" because they left off the where clause. Unless you have auditing columns with a time stamp how do you know which columns were deleted and which ones were done in error? But even if you have auditing columns with a time stamp, if the auditing is done via a stored procedure or programmer convention then these back end queries may not supply information letting you know that they were just applied. A: Too complicated. The standard approach to this is to do all your work inside a transaction, so if you screw up and forget a WHERE clause, then you simply roll back when you see the "2,349,325 rows affected" result. A: It may be easier to create a parallel table for deleted rows. A DELETE trigger (and UPDATE too if you want to undo changes as well) on the original table could copy the affected rows to the parallel table. 
Adding a datetime column to the parallel table to record the date & time of the change would let you permanently remove rows past a certain age using your cron job. That way, you'd use normal DELETE statements on the original table, so there's no chance you'll forget to run your special "DELETE" statement. You also sidestep the to_be_deleted != 1 expression, which is just a bug waiting to happen when someone inevitably forgets. A: It looks like you're describing three cases here. Case 1 - maintenance scripts. Risk can be minimized by developing them and testing them in an environment other than your production box. For quick maintenance, do the maintenance in a single transaction, and check everything before committing. If you made a mistake, issue the rollback command. For more serious maintenance that you can't necessarily wait around for, or do in a single transaction, consider taking a backup directly before running the maintenance job, so that you can always restore back to the point before you ran your script if you encounter serious problems. Case 2 - SQL Injection. This is an architecture issue. Your application shouldn't pass SQL into the database, access should be controlled through packages / stored procedures / functions, and values that are going to come from the UI and be used in a DDL statement should be applied using bind variables, rather than by creating dynamic SQL by appending strings together. Case 3 - Regular batch jobs. These should have been tested before being deployed to production. If you delete too much, you have a bug, and are going to have to rely on your backup strategy. A: Everyone has accidentally forgotten the WHERE clause on a DELETE query and blasted some un-backed up data once or twice. No. I always prototype my DELETEs as SELECTs and only if the latter gives the results I want to delete change the statement before WHERE to a DELETE. This let's me inspect in any needed detail the rows I want to affect before doing anything. A: You could set up a view on that table that selects WHERE to_be_deleted != 1, and all of your normal selects are done on that view - that avoids having to put the WHERE on all of your queries. A: The pitfall is that it's unnecessarily complicated and someone will inadvertently forget too check the flag in their query. There's also the issue of potentially needing to delete something immediately instead of wait for the scheduled job to run. A: To avoid the to_be_deleted WHERE clause you could create a trigger before the delete command fires off to insert the deleted rows into a separate table. This table could be cleared out when you're sure everything in it really needs to be deleted, or you could keep it around for archive purposes. A: You also get a "soft delete" feature so you can give the(certain) end-users the power of "undo" - there would have to be a pretty strong downside in the mix to cancel the benefits of soft deleting. A: The "WHERE to_be_deleted <> 1" on every other query is a huge one. Another is once you've ran your accidentally rogue query, how will you determine which of the 2,349,325 were previously marked as deleted? I think the practical solution is regular backups, and failing that, perhaps a delete trigger that captures the tuples to be axed. A: The other option would be to create a delete trigger on each table. When anything is deleted, it would insert that "to be deleted" record into another table, ideally named TABLENAME_deleted. The downside would be that the db would have twice as many tables. 
I don't recommend triggers in general, but it might be what you are looking for. A: This is why, whenever you are editing data by hand, you should BEGIN TRAN, edit your data, check that it looks good (for instance that you didn't delete more data than you were expecting) and then END TRAN. If you're using Postgres then you want to create lots of savepoints as well so that a typo doesn't wipe out your intermediate work. But that said, in many applications it does make sense to have software mark records as invalid rather than deleting them. Add a last_modified date that is automatically updated, and you are all prepared to set up incremental updates into a data warehouse. Even if you don't have a data warehouse now, it never hurts to prepare for the future when preparing is cheap. Plus in the event of manual mistakes you still have the data, and can just find all of the records that got "deleted" when you made your mistake and fix them. (You should still use transactions though.)
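A sketch of the "parallel table plus DELETE trigger" idea from the answers above, in SQL Server syntax. Table and column names are placeholders; the point is that ordinary DELETE statements keep working while removed rows are copied aside with a timestamp for a later cleanup job.

CREATE TABLE foo_deleted
(
    id         int      NOT NULL,
    blah       int      NULL,
    deleted_at datetime NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_foo_delete ON foo
FOR DELETE
AS
    SET NOCOUNT ON;
    INSERT INTO foo_deleted (id, blah)
    SELECT id, blah FROM deleted;
GO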
Database safety: Intermediary "to_be_deleted" column/table?
Everyone has accidentally forgotten the WHERE clause on a DELETE query and blasted some un-backed up data once or twice. I was pondering that problem, and I was wondering if the solution I came up with is practical. What if, in place of actual DELETE queries, the application and maintenance scripts did something like: UPDATE foo SET to_be_deleted=1 WHERE blah = 50; And then a cron job was set to go through and actually delete everything with the flag? The downside would be that pretty much every other query would need to have WHERE to_be_deleted != 1 appended to it, but the upside would be that you'd never mistakenly lose data again. You could see "2,349,325 rows affected" and say, "Hmm, looks like I forgot the WHERE clause," and reset the flags. You could even make the to_be_deleted field a DATE column, so the cron job would check to see if a row's time had come yet. Also, you could remove DELETE permission from the production database user, so even if someone managed to inject some SQL into your site, they wouldn't be able to remove anything. So, my question is: Is this a good idea, or are there pitfalls I'm not seeing?
[ "That is fine if you want to do that, but it seems like a lot of work. How many people are manually changing the database? It should be very few, especially if your users have an app to work with.\nWhen I work on the production db I put EVERYTHING I do in a transaction so if I mess up I can rollback. Just having a standard practice like that for me has helped me.\nI don't see anything really wrong with that though other than ever single point of data manipulation in each applicaiton will have to be aware of this functionality and not just the data it wants.\n", "This would be fine as long as your appliction does not require that the data is immediately deleted since you have to wait for the next interval of the cron job.\nI think a better solution and the more common practice is to use a development server and a production server. If your development database gets blown out, simply reload it. No harm done. If you're testing code on your production database, you deserve anything bad that happens.\n", "A lot of people have a delete flag or a row status flag. But if someone is doing a change through the back end (and they will be doing it since often people need batch changes done that can't be accomplished through the front end) and they make a mistake they will still often go for delete. Ultimately this is no substitute for testing the script before applying it to a production environment.\nAlso...what happens if the following query gets executed \"UPDATE foo SET to_be_deleted=1\" because they left off the where clause. Unless you have auditing columns with a time stamp how do you know which columns were deleted and which ones were done in error? But even if you have auditing columns with a time stamp, if the auditing is done via a stored procedure or programmer convention then these back end queries may not supply information letting you know that they were just applied.\n", "Too complicated. The standard approach to this is to do all your work inside a transaction, so if you screw up and forget a WHERE clause, then you simply roll back when you see the \"2,349,325 rows affected\" result.\n", "It may be easier to create a parallel table for deleted rows. A DELETE trigger (and UPDATE too if you want to undo changes as well) on the original table could copy the affected rows to the parallel table. Adding a datetime column to the parallel table to record the date & time of the change would let you permanently remove rows past a certain age using your cron job.\nThat way, you'd use normal DELETE statements on the original table, so there's no chance you'll forget to run your special \"DELETE\" statement. You also sidestep the to_be_deleted != 1 expression, which is just a bug waiting to happen when someone inevitably forgets.\n", "It looks like you're describing three cases here.\n\nCase 1 - maintenance scripts. Risk can be minimized by developing them and testing them in an environment other than your production box. For quick maintenance, do the maintenance in a single transaction, and check everything before committing. If you made a mistake, issue the rollback command. For more serious maintenance that you can't necessarily wait around for, or do in a single transaction, consider taking a backup directly before running the maintenance job, so that you can always restore back to the point before you ran your script if you encounter serious problems.\nCase 2 - SQL Injection. This is an architecture issue. 
Your application shouldn't pass SQL into the database, access should be controlled through packages / stored procedures / functions, and values that are going to come from the UI and be used in a DDL statement should be applied using bind variables, rather than by creating dynamic SQL by appending strings together.\nCase 3 - Regular batch jobs. These should have been tested before being deployed to production. If you delete too much, you have a bug, and are going to have to rely on your backup strategy.\n\n", "\nEveryone has accidentally forgotten\n the WHERE clause on a DELETE query and\n blasted some un-backed up data once or\n twice.\n\nNo. I always prototype my DELETEs as SELECTs and only if the latter gives the results I want to delete change the statement before WHERE to a DELETE. This let's me inspect in any needed detail the rows I want to affect before doing anything.\n", "You could set up a view on that table that selects WHERE to_be_deleted != 1, and all of your normal selects are done on that view - that avoids having to put the WHERE on all of your queries.\n", "The pitfall is that it's unnecessarily complicated and someone will inadvertently forget too check the flag in their query. There's also the issue of potentially needing to delete something immediately instead of wait for the scheduled job to run.\n", "To avoid the to_be_deleted WHERE clause you could create a trigger before the delete command fires off to insert the deleted rows into a separate table. This table could be cleared out when you're sure everything in it really needs to be deleted, or you could keep it around for archive purposes.\n", "You also get a \"soft delete\" feature so you can give the(certain) end-users the power of \"undo\" - there would have to be a pretty strong downside in the mix to cancel the benefits of soft deleting.\n", "The \"WHERE to_be_deleted <> 1\" on every other query is a huge one. Another is once you've ran your accidentally rogue query, how will you determine which of the 2,349,325 were previously marked as deleted?\nI think the practical solution is regular backups, and failing that, perhaps a delete trigger that captures the tuples to be axed.\n", "The other option would be to create a delete trigger on each table. When anything is deleted, it would insert that \"to be deleted\" record into another table, ideally named TABLENAME_deleted. \nThe downside would be that the db would have twice as many tables. \nI don't recommend triggers in general, but it might be what you are looking for.\n", "This is why, whenever you are editing data by hand, you should BEGIN TRAN, edit your data, check that it looks good (for instance that you didn't delete more data than you were expecting) and then END TRAN. If you're using Postgres then you want to create lots of savepoints as well so that a typo doesn't wipe out your intermediate work.\nBut that said, in many applications it does make sense to have software mark records as invalid rather than deleting them. Add a last_modified date that is automatically updated, and you are all prepared to set up incremental updates into a data warehouse. Even if you don't have a data warehouse now, it never hurts to prepare for the future when preparing is cheap. Plus in the event of manual mistakes you still have the data, and can just find all of the records that got \"deleted\" when you made your mistake and fix them. (You should still use transactions though.)\n" ]
[ 4, 2, 2, 2, 2, 2, 2, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "database", "sql" ]
stackoverflow_0000102759_database_sql.txt
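Several of the answers above boil down to "wrap manual data fixes in a transaction and sanity-check the row count before committing." Below is a minimal C# sketch of that habit; the connection string, table name (foo), column (blah), and the expected row count are assumptions for illustration only, not part of the original question.

using System;
using System.Data.SqlClient;

class ManualDeleteWithTransaction
{
    static void Main()
    {
        // Placeholder connection string; point it at your own server/database.
        using (var conn = new SqlConnection("Server=.;Database=AppDb;Integrated Security=true"))
        {
            conn.Open();
            using (SqlTransaction tran = conn.BeginTransaction())
            {
                var cmd = new SqlCommand("DELETE FROM foo WHERE blah = @blah", conn, tran);
                cmd.Parameters.AddWithValue("@blah", 50);

                int affected = cmd.ExecuteNonQuery();
                Console.WriteLine("{0} rows affected", affected);

                if (affected <= 100)       // the change was roughly the size you expected
                    tran.Commit();
                else
                    tran.Rollback();       // "2,349,325 rows affected" -> undo it
            }
        }
    }
}

The same pattern works interactively in Management Studio with BEGIN TRAN / COMMIT / ROLLBACK, which is what several of the answers recommend for hand-run scripts.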
Q: What are indexes and how can I use them to optimize queries in my database? I am maintaining a pretty sizable application and database and am noticing some poor database performance in a few of our stored procedures. I always hear that "adding an index" can be done to help performance. I am certainly no DBA, and I do not understand what indexes are, why they help, and how to create them. I basically need an indexes 101. Can anyone give me resources so that I can learn? A: As a rule of thumb, indexes should be on any fields that you use in joins or where clauses (if they have enough different values to make using an index worthwhile, field with only a few possible values doesn't benefit from an index which is why it is pointless to try to index a bit field). If your structure has formally created primary keys (which it should, I never create a table without a primary key), those are by definition indexed becasue a primary key is required to have a unique index on it. People often forget that they have to index the foreign keys becasue an index is not automatically created when you set up the foreign key relationsship. Since the purpose of a foreign key is to give you a field to join on, most foreign keys should probably be indexed. Indexes once created need to be maintained. If you have a lot of data change activity, they can get fragmented and slow performance and need to be refreshed. Read in Books online about indexes. You can also find the syntax for the create index statement there. Indexes are a balancing act, every index you add usually will add time to data inserts, updates and deletes but can potentially speed up selects and joins in complex inserts, updates and deletes. There is no one formula for what are the best indexes although the rule of thumb above is a good place to start. A: Think of an index similar to a card catalog in the library. An index keeps you from having to search through every isle or shelf for a book. Instead, you may be able to find the items you want from a commonly used field, such as and ID, Name, etc. When you build an index the database basically creates something separate that a query could hit to rather than scanning the entire table. You speed up the query by allowing it to search a smaller subset of data, or an optimized set of data. A: Indexes is a method which database systems use to quickly find data. The real world analogy are indexes in books. If an author/publisher does a good job at indexing their book, it becomes pretty easy for the reader to directly go to the page they want to read simply by looking at the index. Same goes for a database. If an index is created on a field, the database pre-sorts the data. When a request is made on the data, the database uses the index to identify which location the data is stored in on the hard disk, and directly goes there. If there are no indexes, the database needs to look at every record in order to find out if it meets the criteria(s) of your query. A simple way to look at indexes is by thinking of a deck of cards. A database which is not indexed is like a deck a cards which have been shuffled. If you want to find the king of spades, you need to look at every card one by one to find it. You might be lucky and it can be the first one, or you might be unlucky and it can be the last one. A database which is indexed, has all the cards in the deck ordered from ace to king and each suite is set aside in its own pile. 
Looking for the king of spades is much simpler now because you simply need to look at the bottom of the pile of cards which contains the spades. I hope this helps. Be warned though that although indexes are necessary in a relational database system, they can counter productive if you write too many of them. There's a ton of great articles on the web that you can read up on indexes. I'd suggest doing some reading before you dive into them. A: An index basically sorts your data on the given columns and then stores that order, so when you want to find an item, the database can optimize by using binary search (or some other optimized way of searching), rather than looking at each individual row. Thus, if the amount of data you are searching through is large, you will absolutely want to add some indexes. Most databases have a tool to explain how your query will work (for db2, it's db2expln, something similar probably for sqlserver), and a tool to suggest indexes and other optimizations (db2advis for db2, again probably something similar for sqlserver). A: As previously stated, you can have a clustered index and multiple non-clustered indexes. In SQL 2005, you can also add additional columns to a non-clustered index, which can improve performance where a few commonly retrieved columns are included with the index but not part of the key, which eliminates a trip to the table altogether. Your #1 tool for determining what your SQL Server database is doing is the profiler. You can profile entire workloads and then see what indexes it recommends. You can also look at execution plans to see what effects an index has. The too-many indexes problem is due to writing into a database, and having to update all the indexes which would have a record for that row. If you're having read performance, it's probably not because of too many indexes, but too few, or too unsuitable. A: An index can be explained as a sorted list of the items in a register. It is very quick to lookup the position of the item in the register, by looking for it's key in the index. Next the the key in the index is a pointer to the position in the register where the rest of the record can be found. You can have many indexes on a register, but the more you have, the slower inserting new records will be (because each index needs a new record as well - in a sorted order, which also adds time). A: Indices are created in an existing table to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of a table, and each index is given a name. The users cannot see the indexes, they are just used to speed up queries. Basically, your DBMS will create some sort of tree structure which points to the data (from one column) in a sorted manner. This way it is easier to search for data on that column(s). http://en.wikipedia.org/wiki/Index_(database) A: Some more index information! Clustered indexes are the actual physical layout of the records in the table. Hence, you can only have one per table. Nonclustered indexes are the aforementioned card catalog. Sure, the books are arranged in a particular order, but you can arrange the cards in the catalog by book size, or maybe by number of pages, or by alphabetical last name. Something to think about -- creating too many indexes is a common pitfall. Every time your data gets updated your DB has to seek through that index and update it, inserting a record into every index on that table for that new row. In transactional systems (think: NYSE's stock transactions!) 
that could be an application killer. A: for mssql (and maybe others) the syntax looks like: create index <indexname> on <tablename>(<column1>[,<column2>...])
What are indexes and how can I use them to optimize queries in my database?
I am maintaining a pretty sizable application and database and am noticing some poor database performance in a few of our stored procedures. I always hear that "adding an index" can be done to help performance. I am certainly no DBA, and I do not understand what indexes are, why they help, and how to create them. I basically need an indexes 101. Can anyone give me resources so that I can learn?
[ "As a rule of thumb, indexes should be on any fields that you use in joins or where clauses (if they have enough different values to make using an index worthwhile, field with only a few possible values doesn't benefit from an index which is why it is pointless to try to index a bit field). \nIf your structure has formally created primary keys (which it should, I never create a table without a primary key), those are by definition indexed becasue a primary key is required to have a unique index on it. People often forget that they have to index the foreign keys becasue an index is not automatically created when you set up the foreign key relationsship. Since the purpose of a foreign key is to give you a field to join on, most foreign keys should probably be indexed.\nIndexes once created need to be maintained. If you have a lot of data change activity, they can get fragmented and slow performance and need to be refreshed. Read in Books online about indexes. You can also find the syntax for the create index statement there.\nIndexes are a balancing act, every index you add usually will add time to data inserts, updates and deletes but can potentially speed up selects and joins in complex inserts, updates and deletes. There is no one formula for what are the best indexes although the rule of thumb above is a good place to start.\n", "Think of an index similar to a card catalog in the library. An index keeps you from having to search through every isle or shelf for a book. Instead, you may be able to find the items you want from a commonly used field, such as and ID, Name, etc. When you build an index the database basically creates something separate that a query could hit to rather than scanning the entire table. You speed up the query by allowing it to search a smaller subset of data, or an optimized set of data. \n", "Indexes is a method which database systems use to quickly find data. The real world analogy are indexes in books. If an author/publisher does a good job at indexing their book, it becomes pretty easy for the reader to directly go to the page they want to read simply by looking at the index. Same goes for a database. If an index is created on a field, the database pre-sorts the data. When a request is made on the data, the database uses the index to identify which location the data is stored in on the hard disk, and directly goes there. If there are no indexes, the database needs to look at every record in order to find out if it meets the criteria(s) of your query.\nA simple way to look at indexes is by thinking of a deck of cards. A database which is not indexed is like a deck a cards which have been shuffled. If you want to find the king of spades, you need to look at every card one by one to find it. You might be lucky and it can be the first one, or you might be unlucky and it can be the last one. \nA database which is indexed, has all the cards in the deck ordered from ace to king and each suite is set aside in its own pile. Looking for the king of spades is much simpler now because you simply need to look at the bottom of the pile of cards which contains the spades.\nI hope this helps. Be warned though that although indexes are necessary in a relational database system, they can counter productive if you write too many of them. There's a ton of great articles on the web that you can read up on indexes. 
I'd suggest doing some reading before you dive into them.\n", "An index basically sorts your data on the given columns and then stores that order, so when you want to find an item, the database can optimize by using binary search (or some other optimized way of searching), rather than looking at each individual row.\nThus, if the amount of data you are searching through is large, you will absolutely want to add some indexes.\nMost databases have a tool to explain how your query will work (for db2, it's db2expln, something similar probably for sqlserver), and a tool to suggest indexes and other optimizations (db2advis for db2, again probably something similar for sqlserver).\n", "As previously stated, you can have a clustered index and multiple non-clustered indexes. In SQL 2005, you can also add additional columns to a non-clustered index, which can improve performance where a few commonly retrieved columns are included with the index but not part of the key, which eliminates a trip to the table altogether.\nYour #1 tool for determining what your SQL Server database is doing is the profiler. You can profile entire workloads and then see what indexes it recommends. You can also look at execution plans to see what effects an index has.\nThe too-many indexes problem is due to writing into a database, and having to update all the indexes which would have a record for that row. If you're having read performance, it's probably not because of too many indexes, but too few, or too unsuitable.\n", "An index can be explained as a sorted list of the items in a register. It is very quick to lookup the position of the item in the register, by looking for it's key in the index. Next the the key in the index is a pointer to the position in the register where the rest of the record can be found.\nYou can have many indexes on a register, but the more you have, the slower inserting new records will be (because each index needs a new record as well - in a sorted order, which also adds time).\n", "Indices are created in an existing table to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of a table, and each index is given a name. The users cannot see the indexes, they are just used to speed up queries.\nBasically, your DBMS will create some sort of tree structure which points to the data (from one column) in a sorted manner. This way it is easier to search for data on that column(s).\nhttp://en.wikipedia.org/wiki/Index_(database)\n", "Some more index information!\nClustered indexes are the actual physical layout of the records in the table. Hence, you can only have one per table.\nNonclustered indexes are the aforementioned card catalog. Sure, the books are arranged in a particular order, but you can arrange the cards in the catalog by book size, or maybe by number of pages, or by alphabetical last name.\nSomething to think about -- creating too many indexes is a common pitfall. Every time your data gets updated your DB has to seek through that index and update it, inserting a record into every index on that table for that new row. In transactional systems (think: NYSE's stock transactions!) that could be an application killer.\n", "for mssql (and maybe others) the syntax looks like:\ncreate index <indexname> on <tablename>(<column1>[,<column2>...])\n\n" ]
[ 34, 25, 6, 5, 3, 1, 1, 1, 0 ]
[]
[]
[ "database_design", "sql", "sql_server" ]
stackoverflow_0000105400_database_design_sql_sql_server.txt
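The last answer above gives the raw CREATE INDEX syntax; as a hedged illustration of the earlier advice to index the columns you join and filter on, the sketch below issues such a statement from .NET. The Orders table, CustomerId column, index name, and connection string are all hypothetical.

using System.Data.SqlClient;

class CreateIndexFromCode
{
    static void Main()
    {
        // CustomerId is assumed to be a foreign key used in joins and WHERE clauses.
        const string ddl =
            "CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId)";

        using (var conn = new SqlConnection("Server=.;Database=AppDb;Integrated Security=true"))
        {
            conn.Open();
            using (var cmd = new SqlCommand(ddl, conn))
            {
                cmd.ExecuteNonQuery();   // DDL executes like any other command
            }
        }
    }
}

In practice you would usually run the DDL from a migration script or Management Studio rather than application code, but the statement is the same either way.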
Q: how do I query multiple SQL tables for a specific key-value pair? Situation: A PHP application with multiple installable modules creates a new table in database for each, in the style of mod_A, mod_B, mod_C etc. Each has the column section_id. Now, I am looking for all entries for a specific section_id, and I'm hoping there's another way besides "Select * from mod_a, mod_b, mod_c ... mod_xyzzy where section_id=value"... or even worse, using a separate query for each module. A: What about? SELECT * FROM mod_a WHERE section_id=value UNION ALL SELECT * FROM mod_b WHERE section_id=value UNION ALL SELECT * FROM mod_c WHERE section_id=value A: If the tables are changing over time, you can inline code gen your solution in an SP (pseudo code - you'll have to fill in): SET @sql = '' DECLARE CURSOR FOR SELECT t.[name] AS TABLE_NAME FROM sys.tables t WHERE t.[name] LIKE 'SOME_PATTERN_TO_IDENTIFY_THE_TABLES' -- or this DECLARE CURSOR FOR SELECT t.[name] AS TABLE_NAME FROM TABLE_OF_TABLES_TO_SEACRH t START LOOP IF @sql <> '' SET @sql = @sql + 'UNION ALL ' SET @sql = 'SELECT * FROM [' + @TABLE_NAME + '] WHERE section_id=value ' END LOOP EXEC(@sql) I've used this technique occasionally, when there just isn't any obvious way to make it future-proof without dynamic SQL. Note: In your loop, you can use the COALESCE/NULL propagation trick and leave the string as NULL before the loop, but it's not as clear if you are unfamiliar with the idiom: SET @sql = COALESCE(@sql + ' UNION ALL ', '') + 'SELECT * FROM [' + @TABLE_NAME + '] WHERE section_id=value ' A: I have two suggestions. Perhaps you need to consolidate all your tables. If they all contain the same structure, then why not have one "master" module table, that just adds one new column identifying the module ("A", "B", "C", ....) If your module tables are mostly the same, but you have a few columns that are different, you might still be able to consolidate all the common information into one table, and keep smaller module-specific tables with those differences. Then you would just need to do a join on them. This suggestion assumes that your query on the column section_id you mention is super-critical to look up quickly. With one query you get all the common information, and with a second you would get any specific information if you needed it. (And you might not -- for instance if you were trying to validate the existense of the section, then finding it in the common table would be enough) Alternatively you can add another table that maps section_id's to the modules that they are in. section_id | module -----------+------- 1 | A 2 | B 3 | A ... | ... This does mean though that you have to run two queries, one against this mapping table, and another against the module table to pull out any useful data. You can extend this table with other columns and indices on those columns if you need to look up other columns that are common to all modules. This method has the definite disadvanage that the data is duplicated. A: I was going to suggest the same think as borjab. The only problem with that is that you will have to update all of these queries if you add another table. The only other option I see is a stored procedure. I did think of another option here, or at least an easier way to present this. You can also use a view to these multiple tables to make them appear as one, and then your query would look cleaner, be easier to understand and you wouldn't have to rewrite a long union query when you wanted to do other queries on these multiple tables. 
A: Perhaps some additional info would help, but it sounds like you have the solution already. You will have to select from all the tables with a section_id. You could use joins instead of a table list, joining on section_id. For example select a.some_field, b.some_field.... from mod_a a inner join mod_b b on a.section_id = b.section_id ... where a.section_id = <parameter> You could also package this up as a view. Also notice the field list instead of *, which I would recommend if you were intending to actually use *. A: Well, there are only so many ways to aggregate information from multiple tables. You can join, like you mentioned in your example, or you can run multiple queries and union them together as in borjab's answer. I don't know if some idea of creating a table that intersects all the module tables would be useful to you, but if section_id was on a table like that you'd be able to get everything from a single query. Otherwise, I applaud your laziness, but am afraid to say, I don't see any way to make that job eaiser :) A: SELECT * FROM ( SELECT * FROM table1 UNION ALL SELECT * FROM table2 UNION ALL SELECT * FROM table3 ) subQry WHERE field=value A: An option from the database side would be to create a view of the UNION ALL of the various tables. When you add a table, you would need to add it to the view, but otherwise it would look like a single table. CREATE VIEW modules AS ( SELECT * FROM mod_A UNION ALL SELECT * FROM mod_B UNION ALL SELECT * FROM mod_C ); select * from modules where section_id=value;
how do I query multiple SQL tables for a specific key-value pair?
Situation: A PHP application with multiple installable modules creates a new table in database for each, in the style of mod_A, mod_B, mod_C etc. Each has the column section_id. Now, I am looking for all entries for a specific section_id, and I'm hoping there's another way besides "Select * from mod_a, mod_b, mod_c ... mod_xyzzy where section_id=value"... or even worse, using a separate query for each module.
[ "What about?\nSELECT * FROM mod_a WHERE section_id=value\nUNION ALL\nSELECT * FROM mod_b WHERE section_id=value\nUNION ALL\nSELECT * FROM mod_c WHERE section_id=value\n\n", "If the tables are changing over time, you can inline code gen your solution in an SP (pseudo code - you'll have to fill in):\nSET @sql = ''\n\nDECLARE CURSOR FOR\nSELECT t.[name] AS TABLE_NAME\nFROM sys.tables t\nWHERE t.[name] LIKE 'SOME_PATTERN_TO_IDENTIFY_THE_TABLES'\n\n-- or this\nDECLARE CURSOR FOR\nSELECT t.[name] AS TABLE_NAME\nFROM TABLE_OF_TABLES_TO_SEACRH t\n\nSTART LOOP\n\nIF @sql <> '' SET @sql = @sql + 'UNION ALL '\nSET @sql = 'SELECT * FROM [' + @TABLE_NAME + '] WHERE section_id=value '\n\nEND LOOP\n\nEXEC(@sql)\n\nI've used this technique occasionally, when there just isn't any obvious way to make it future-proof without dynamic SQL.\nNote: In your loop, you can use the COALESCE/NULL propagation trick and leave the string as NULL before the loop, but it's not as clear if you are unfamiliar with the idiom:\nSET @sql = COALESCE(@sql + ' UNION ALL ', '')\n + 'SELECT * FROM [' + @TABLE_NAME + '] WHERE section_id=value '\n\n", "I have two suggestions.\n\nPerhaps you need to consolidate all your tables. If they all contain the same structure, then why not have one \"master\" module table, that just adds one new column identifying the module (\"A\", \"B\", \"C\", ....)\nIf your module tables are mostly the same, but you have a few columns that are different, you might still be able to consolidate all the common information into one table, and keep smaller module-specific tables with those differences. Then you would just need to do a join on them.\nThis suggestion assumes that your query on the column section_id you mention is super-critical to look up quickly. With one query you get all the common information, and with a second you would get any specific information if you needed it. (And you might not -- for instance if you were trying to validate the existense of the section, then finding it in the common table would be enough)\nAlternatively you can add another table that maps section_id's to the modules that they are in.\n\nsection_id | module\n-----------+-------\n 1 | A\n 2 | B\n 3 | A\n ... | ...\n\nThis does mean though that you have to run two queries, one against this mapping table, and another against the module table to pull out any useful data.\nYou can extend this table with other columns and indices on those columns if you need to look up other columns that are common to all modules.\nThis method has the definite disadvanage that the data is duplicated.\n\n", "I was going to suggest the same think as borjab. The only problem with that is that you will have to update all of these queries if you add another table. The only other option I see is a stored procedure.\nI did think of another option here, or at least an easier way to present this. You can also use a view to these multiple tables to make them appear as one, and then your query would look cleaner, be easier to understand and you wouldn't have to rewrite a long union query when you wanted to do other queries on these multiple tables.\n", "Perhaps some additional info would help, but it sounds like you have the solution already. You will have to select from all the tables with a section_id. You could use joins instead of a table list, joining on section_id. For example\nselect a.some_field, b.some_field.... 
\nfrom mod_a a\ninner join mod_b b on a.section_id = b.section_id\n...\nwhere a.section_id = <parameter>\n\nYou could also package this up as a view.\nAlso notice the field list instead of *, which I would recommend if you were intending to actually use *.\n", "Well, there are only so many ways to aggregate information from multiple tables. You can join, like you mentioned in your example, or you can run multiple queries and union them together as in borjab's answer. I don't know if some idea of creating a table that intersects all the module tables would be useful to you, but if section_id was on a table like that you'd be able to get everything from a single query. Otherwise, I applaud your laziness, but am afraid to say, I don't see any way to make that job eaiser :)\n", "SELECT * FROM (\n SELECT * FROM table1\n UNION ALL\n SELECT * FROM table2\n UNION ALL\n SELECT * FROM table3\n) subQry\nWHERE field=value\n\n", "An option from the database side would be to create a view of the UNION ALL of the various tables. When you add a table, you would need to add it to the view, but otherwise it would look like a single table.\nCREATE VIEW modules AS (\n SELECT * FROM mod_A\n UNION ALL \n SELECT * FROM mod_B\n UNION ALL \n SELECT * FROM mod_C\n);\n\nselect * from modules where section_id=value;\n\n" ]
[ 1, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "lazy_evaluation", "sql" ]
stackoverflow_0000104230_lazy_evaluation_sql.txt
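Because the set of mod_* tables grows as modules are installed, the UNION ALL and dynamic-SQL answers above can also be combined in application code: build the query from a list of module table names kept in the application's own registry (never from user input), and bind section_id as a parameter. This is only a sketch; the table list and parameter name are assumptions.

using System;
using System.Collections.Generic;
using System.Linq;

class ModuleUnionQuery
{
    static void Main()
    {
        // Assumed to come from the installer's module registry, not the request.
        var moduleTables = new List<string> { "mod_a", "mod_b", "mod_c" };

        string sql = string.Join(
            " UNION ALL ",
            moduleTables.Select(t =>
                string.Format("SELECT * FROM {0} WHERE section_id = @sectionId", t)));

        Console.WriteLine(sql);
        // Execute with @sectionId bound as a parameter; only trusted table names
        // are ever concatenated into the statement.
    }
}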
Q: SEO and hard links with dynamic URLs With ASP.NET MVC (or using HttpHandlers) you can dynamically generate URLs, like the one in this question, which includes the title. What happens if the title changes (for example, editing it) and there's a link pointing to the page from another site, or Google's Pagerank was calculated for that URL? I guess it's all lost right? (The link points to nowhere and the pagerank calculated is lost) If so, is there a way to avoid it? A: I use the same system as is in place here, everything after the number in the URL is not used in the db query, then I 301 redirect anything else to be the title. In other words, if the title changed, then it would redirect to the correct place. I do it in PHP rather than htaccess as it's easier to manage more complex ideas. A: I think you're generally best off having the server send a permanent redirect to the new location, if possible. That way any rank which is gained from third party links should, in theory, be transferred to the new location. I'm not convinced whether this happens in practice, but it should. A: Yes, all SEO is lost upon a url change -- it forks to an entirely new record. The way to handle that is to leave a 301 redirect at the old title to the new one, and some search engines (read: Google) is smart enough to pick that up. EDIT: Fixed to 301 redirect! A: The way Stackoverflow seems to be implemented everything after the question number is superfluous as far as linking to the question goes. For instance: SEO and hard links with dynamic URLs links to this question, despite the fact that I just made up the 'question title' part out of thin air. So the link will not point to nowhere and the PageRank is not lost (though it may be split between the two URLs, depending on whether or not Google can canonicalize them into a single URL). A: Have your app redirect the old URL via a 301 Redirect. This will tell Google to transfer the pagerank to the new URL. A: If a document is moved to a different URL, the server should be configured to return a HTTP status code of 301 (Moved Permanently) for the old URL to tell the client where the document has been moved to. With Apache, this is done using mod_rewrite and RewriteRule. A: The best thing to help Google in this instance is to return a permanent redirect on the old URL to the new one. I'm not an ASP.NET hacker - so I can't recommend the best way to implement this - but Googling the topic looks fairly productive :-)
SEO and hard links with dynamic URLs
With ASP.NET MVC (or using HttpHandlers) you can dynamically generate URLs, like the one in this question, which includes the title. What happens if the title changes (for example, editing it) and there's a link pointing to the page from another site, or Google's Pagerank was calculated for that URL? I guess it's all lost right? (The link points to nowhere and the pagerank calculated is lost) If so, is there a way to avoid it?
[ "I use the same system as is in place here, everything after the number in the URL is not used in the db query, then I 301 redirect anything else to be the title.\nIn other words, if the title changed, then it would redirect to the correct place. I do it in PHP rather than htaccess as it's easier to manage more complex ideas.\n", "I think you're generally best off having the server send a permanent redirect to the new location, if possible.\nThat way any rank which is gained from third party links should, in theory, be transferred to the new location. I'm not convinced whether this happens in practice, but it should.\n", "Yes, all SEO is lost upon a url change -- it forks to an entirely new record. The way to handle that is to leave a 301 redirect at the old title to the new one, and some search engines (read: Google) is smart enough to pick that up.\nEDIT: Fixed to 301 redirect!\n", "The way Stackoverflow seems to be implemented everything after the question number is superfluous as far as linking to the question goes. For instance:\nSEO and hard links with dynamic URLs\nlinks to this question, despite the fact that I just made up the 'question title' part out of thin air. So the link will not point to nowhere and the PageRank is not lost (though it may be split between the two URLs, depending on whether or not Google can canonicalize them into a single URL).\n", "Have your app redirect the old URL via a 301 Redirect. This will tell Google to transfer the pagerank to the new URL.\n", "If a document is moved to a different URL, the server should be configured to return a HTTP status code of 301 (Moved Permanently) for the old URL to tell the client where the document has been moved to. With Apache, this is done using mod_rewrite and RewriteRule.\n", "The best thing to help Google in this instance is to return a permanent redirect on the old URL to the new one.\nI'm not an ASP.NET hacker - so I can't recommend the best way to implement this - but Googling the topic looks fairly productive :-)\n" ]
[ 4, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "dynamic_links", "dynamic_url", "httphandler", "httpmodule", "seo" ]
stackoverflow_0000105830_dynamic_links_dynamic_url_httphandler_httpmodule_seo.txt
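To make the "permanent redirect" advice above concrete on the ASP.NET side, a handler or controller can compare the requested URL with the canonical one built from the current title and issue a 301 before doing anything else. The helper below is a sketch; the method name and the way the canonical URL is obtained are assumptions.

using System;
using System.Web;

public static class SlugRedirect
{
    // Call early in request processing, before rendering the page.
    public static void RedirectIfOutdated(HttpContext context, string canonicalUrl)
    {
        string requested = context.Request.Url.AbsoluteUri;

        if (!string.Equals(requested, canonicalUrl, StringComparison.OrdinalIgnoreCase))
        {
            context.Response.StatusCode = 301;                   // Moved Permanently
            context.Response.AddHeader("Location", canonicalUrl);
            context.Response.End();                              // stop further processing
        }
    }
}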
Q: Formatting data in a Fitnesse RowFixture I've got a Fitnesse RowFixture that returns a list of business objects. The object has a field which is a float representing a percentage between 0 and 1. The consumer of the business object will be a web page or report that comes from a designer, so the formatting of the percentage will be up to the designer rather than the business object. It would be nicer if the page could emulate the designer when converting the number to a percentage, i.e. instead of displaying 0.5, it should display 50%. But I'd rather not pollute the business object with the display code. Is there a way to specify a format string in the RowFixture? A: You certainly don't want to modify your Business Logic just to make your tests look better. Good news however, there is a way to accomplish this that is not difficult, but not as easy as passing in a format specifier. Try to think of your Fit Fixture as a service boundary between FitNesse and your application code. You want to define a contract that doesn't necessarily have to change if the implementation details of your SUT (System Under Test) change. Lets look at a simplified version of your Business Object: public class BusinessObject { public float Percent { get; private set; } } Becuase of the way that a RowFixture works we need to define a simple object that will work as the contract. Ordinarily we would use an interface, but that isn't going to serve our purpose here so a simple DTO (Data Transfer Object) will suffice. Something Like This: public class ReturnRowDTO { public String Percent { get; set; } } Now we can define a RowFixture that will return a list of our custom DTO objects. We also need to create a way to convert BusinessObjects to ReturnRowDTOs. We end up with a Fixture that looks something like this. public class ExampleRowFixture: fit.RowFixture { private ISomeService _someService; public override object[] Query() { BusinessObject[] list = _someService.GetBusinessObjects(); return Array.ConvertAll(list, new Converter<BusinessObject, ReturnRowDTO>(ConvertBusinessObjectToDTO)); } public override Type GetTargetClass() { return typeof (ReturnRowDTO); } public ReturnRowDTO ConvertBusinessObjectToDTO(BusinessObject businessObject) { return new ReturnRowDTO() {Percent = businessObject.Percent.ToString("%")}; } } You can now change your underlying BusinessObjects around without breaking your actual Fit Tests. Hope this helps. A: I'm not sure what the "polution" is. Either the requirement is that your Business Object returns a value expressed as a percentage, in which case your business object should offer that -OR- you are testing the true value of the response as float, which you have now. Trying to get fitnesse to massage the value for readability seems a bit odd.
Formatting data in a Fitnesse RowFixture
I've got a Fitnesse RowFixture that returns a list of business objects. The object has a field which is a float representing a percentage between 0 and 1. The consumer of the business object will be a web page or report that comes from a designer, so the formatting of the percentage will be up to the designer rather than the business object. It would be nicer if the page could emulate the designer when converting the number to a percentage, i.e. instead of displaying 0.5, it should display 50%. But I'd rather not pollute the business object with the display code. Is there a way to specify a format string in the RowFixture?
[ "You certainly don't want to modify your Business Logic just to make your tests look better. Good news however, there is a way to accomplish this that is not difficult, but not as easy as passing in a format specifier.\nTry to think of your Fit Fixture as a service boundary between FitNesse and your application code. You want to define a contract that doesn't necessarily have to change if the implementation details of your SUT (System Under Test) change.\nLets look at a simplified version of your Business Object:\npublic class BusinessObject\n{\n public float Percent { get; private set; }\n}\n\nBecuase of the way that a RowFixture works we need to define a simple object that will work as the contract. Ordinarily we would use an interface, but that isn't going to serve our purpose here so a simple DTO (Data Transfer Object) will suffice.\nSomething Like This:\npublic class ReturnRowDTO\n{\n public String Percent { get; set; }\n}\n\nNow we can define a RowFixture that will return a list of our custom DTO objects. We also need to create a way to convert BusinessObjects to ReturnRowDTOs. We end up with a Fixture that looks something like this.\npublic class ExampleRowFixture: fit.RowFixture\n {\n private ISomeService _someService;\n\n public override object[] Query()\n {\n BusinessObject[] list = _someService.GetBusinessObjects();\n\n return Array.ConvertAll(list, new Converter<BusinessObject, ReturnRowDTO>(ConvertBusinessObjectToDTO));\n }\n\n public override Type GetTargetClass()\n {\n return typeof (ReturnRowDTO);\n }\n\n public ReturnRowDTO ConvertBusinessObjectToDTO(BusinessObject businessObject)\n {\n return new ReturnRowDTO() {Percent = businessObject.Percent.ToString(\"%\")};\n }\n }\n\nYou can now change your underlying BusinessObjects around without breaking your actual Fit Tests. Hope this helps.\n", "I'm not sure what the \"polution\" is. Either the requirement is that your Business Object returns a value expressed as a percentage, in which case your business object should offer that -OR- you are testing the true value of the response as float, which you have now. \nTrying to get fitnesse to massage the value for readability seems a bit odd.\n" ]
[ 4, 0 ]
[]
[]
[ "fitnesse" ]
stackoverflow_0000102057_fitnesse.txt
Q: How do I set the default database in Sql Server from code? I can't seem to figure out how to set the default database in Sql Server from code. This can be either .Net code or T-Sql (T-Sql would be nice since it would be easy to use in any language). I searched Google and could only find how to do it in Sql Server Management Studio. A: ALTER LOGIN should be used for SQL Server 2005 or later: http://technet.microsoft.com/en-us/library/ms189828.aspx ALTER LOGIN <login_name> WITH DEFAULT_DATABASE = <default_database> sp_defaultdb eventually will be removed from SQL Server: http://technet.microsoft.com/en-us/library/ms181738.aspx A: from: http://doc.ddart.net/mssql/sql70/sp_da-di_6.htm sp_defaultdb [@loginame =] 'login' , [@defdb =] 'database' A: Thanks Stephen. As a note, if you are using Windows Authentication, the @loginname is YourDomain\YourLogin (probably obvious to everybody else, but took me a couple tries. sp_defaultdb @loginame='YourDomain\YourLogin', @defdb='YourDatabase' A: If you're trying to change which database you are using after you are logged in, you can use the USE command. E.g. USE Northwind. https://www.tutorialspoint.com/sql/sql-select-database.htm
How do I set the default database in Sql Server from code?
I can't seem to figure out how to set the default database in Sql Server from code. This can be either .Net code or T-Sql (T-Sql would be nice since it would be easy to use in any language). I searched Google and could only find how to do it in Sql Server Management Studio.
[ "ALTER LOGIN should be used for SQL Server 2005 or later:\nhttp://technet.microsoft.com/en-us/library/ms189828.aspx\nALTER LOGIN <login_name> WITH DEFAULT_DATABASE = <default_database>\n\nsp_defaultdb eventually will be removed from SQL Server:\nhttp://technet.microsoft.com/en-us/library/ms181738.aspx\n", "from: http://doc.ddart.net/mssql/sql70/sp_da-di_6.htm\nsp_defaultdb [@loginame =] 'login' , [@defdb =] 'database'\n\n", "Thanks Stephen. \nAs a note, if you are using Windows Authentication, the @loginname is YourDomain\\YourLogin (probably obvious to everybody else, but took me a couple tries.\nsp_defaultdb @loginame='YourDomain\\YourLogin', @defdb='YourDatabase'\n\n", "If you're trying to change which database you are using after you are logged in, you can use the USE command. E.g. USE Northwind.\nhttps://www.tutorialspoint.com/sql/sql-select-database.htm\n" ]
[ 30, 14, 2, 1 ]
[]
[]
[ "database", "sql_server", "sql_server_2000" ]
stackoverflow_0000105950_database_sql_server_sql_server_2000.txt
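Since the question also asks for a .NET route, the ALTER LOGIN statement shown above can simply be executed through a SqlCommand. The login name, database name, and connection string below are placeholders; the connection must have permission to alter logins.

using System.Data.SqlClient;

class SetDefaultDatabase
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
        {
            conn.Open();
            const string sql =
                "ALTER LOGIN [YourDomain\\YourLogin] WITH DEFAULT_DATABASE = [YourDatabase]";
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}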
Q: Fastest way to find objects from a collection matched by condition on string member Suppose I have a collection (be it an array, generic List, or whatever is the fastest solution to this problem) of a certain class, let's call it ClassFoo: class ClassFoo { public string word; public float score; //... etc ... } Assume there's going to be like 50.000 items in the collection, all in memory. Now I want to obtain as fast as possible all the instances in the collection that obey a condition on its bar member, for example like this: List<ClassFoo> result = new List<ClassFoo>(); foreach (ClassFoo cf in collection) { if (cf.word.StartsWith(query) || cf.word.EndsWith(query)) result.Add(cf); } How do I get the results as fast as possible? Should I consider some advanced indexing techniques and datastructures? The application domain for this problem is an autocompleter, that gets a query and gives a collection of suggestions as a result. Assume that the condition doesn't get any more complex than this. Assume also that there's going to be a lot of searches. A: With the constraint that the condition clause can be "anything", then you're limited to scanning the entire list and applying the condition. If there are limitations on the condition clause, then you can look at organizing the data to more efficiently handle the queries. For example, the code sample with the "byFirstLetter" dictionary doesn't help at all with an "endsWith" query. So, it really comes down to what queries you want to do against that data. In Databases, this problem is the burden of the "query optimizer". In a typical database, if you have a database with no indexes, obviously every query is going to be a table scan. As you add indexes to the table, the optimizer can use that data to make more sophisticated query plans to better get to the data. That's essentially the problem you're describing. Once you have a more concrete subset of the types of queries then you can make a better decision as to what structure is best. Also, you need to consider the amount of data. If you have a list of 10 elements each less than 100 byte, a scan of everything may well be the fastest thing you can do since you have such a small amount of data. Obviously that doesn't scale to a 1M elements, but even clever access techniques carry a cost in setup, maintenance (like index maintenance), and memory. EDIT, based on the comment If it's an auto completer, if the data is static, then sort it and use a binary search. You're really not going to get faster than that. If the data is dynamic, then store it in a balanced tree, and search that. That's effectively a binary search, and it lets you keep add the data randomly. Anything else is some specialization on these concepts. A: var Answers = myList.Where(item => item.bar.StartsWith(query) || item.bar.EndsWith(query)); that's the easiest in my opinion, should execute rather quickly. A: Not sure I understand... All you can really do is optimize the rule, that's the part that needs to be fastest. You can't speed up the loop without just throwing more hardware at it. You could parallelize if you have multiple cores or machines. A: I'm not up on my Java right now, but I would think about the following things. How you are creating your list? Perhaps you can create it already ordered in a way which cuts down on comparison time. If you are just doing a straight loop through your collection, you won't see much difference between storing it as an array or as a linked list. 
For storing the results, depending on how you are collecting them, the structure could make a difference (but assuming Java's generic structures are smart, it won't). As I said, I'm not up on my Java, but I assume that the generic linked list would keep a tail pointer. In this case, it wouldn't really make a difference. Someone with more knowledge of the underlying array vs linked list implementation and how it ends up looking in the byte code could probably tell you whether appending to a linked list with a tail pointer or inserting into an array is faster (my guess would be the array). On the other hand, you would need to know the size of your result set or sacrifice some storage space and make it as big as the whole collection you are iterating through if you wanted to use an array. Optimizing your comparison query by figuring out which comparison is most likely to be true and doing that one first could also help. ie: If in general 10% of the time a member of the collection starts with your query, and 30% of the time a member ends with the query, you would want to do the end comparison first. A: For your particular example, sorting the collection would help as you could binarychop to the first item that starts with query and terminate early when you reach the next one that doesn't; you could also produce a table of pointers to collection items sorted by the reverse of each string for the second clause. In general, if you know the structure of the query in advance, you can sort your collection (or build several sorted indexes for your collection if there are multiple clauses) appropriately; if you do not, you will not be able to do better than linear search. A: If it's something where you populate the list once and then do many lookups (thousands or more) then you could create some kind of lookup dictionary that maps starts with/ends with values to their actual values. That would be a fast lookup, but would use much more memory. If you aren't doing that many lookups or know you're going to be repopulating the list at least semi-frequently I'd go with the LINQ query that CQ suggested. A: You can create some sort of index and it might get faster. We can build a index like this: Dictionary<char, List<ClassFoo>> indexByFirstLetter; foreach (var cf in collection) { indexByFirstLetter[cf.bar[0]] = indexByFirstLetter[cf.bar[0]] ?? new List<ClassFoo>(); indexByFirstLetter[cf.bar[0]].Add(cf); indexByFirstLetter[cf.bar[cf.bar.length - 1]] = indexByFirstLetter[cf.bar[cf.bar.Length - 1]] ?? new List<ClassFoo>(); indexByFirstLetter[cf.bar[cf.bar.Length - 1]].Add(cf); } Then use the it like this: foreach (ClasssFoo cf in indexByFirstLetter[query[0]]) { if (cf.bar.StartsWith(query) || cf.bar.EndsWith(query)) result.Add(cf); } Now we possibly do not have to loop through as many ClassFoo as in your example, but then again we have to keep the index up to date. There is no guarantee that it is faster, but it is definately more complicated. A: Depends. Are all your objects always going to be loaded in memory? Do you have a finite limit of objects that may be loaded? Will your queries have to consider objects that haven't been loaded yet? If the collection will get large, I would definitely use an index. In fact, if the collection can grow to an arbitrary size and you're not sure that you will be able to fit it all in memory, I'd look into an ORM, an in-memory database, or another embedded database. XPO from DevExpress for ORM or SQLite.Net for in-memory database comes to mind. 
If you don't want to go this far, make a simple index consisting of the "bar" member references mapping to class references. A: If the set of possible criteria is fixed and small, you can assign a bitmask to each element in the list. The size of the bitmask is the size of the set of the criteria. When you create an element/add it to the list, you check which criteria it satisfies and then set the corresponding bits in the bitmask of this element. Matching the elements from the list will be as easy as matching their bitmasks with the target bitmask. A more general method is the Bloom filter.
Fastest way to find objects from a collection matched by condition on string member
Suppose I have a collection (be it an array, generic List, or whatever is the fastest solution to this problem) of a certain class, let's call it ClassFoo: class ClassFoo { public string word; public float score; //... etc ... } Assume there's going to be like 50.000 items in the collection, all in memory. Now I want to obtain as fast as possible all the instances in the collection that obey a condition on its bar member, for example like this: List<ClassFoo> result = new List<ClassFoo>(); foreach (ClassFoo cf in collection) { if (cf.word.StartsWith(query) || cf.word.EndsWith(query)) result.Add(cf); } How do I get the results as fast as possible? Should I consider some advanced indexing techniques and datastructures? The application domain for this problem is an autocompleter, that gets a query and gives a collection of suggestions as a result. Assume that the condition doesn't get any more complex than this. Assume also that there's going to be a lot of searches.
[ "With the constraint that the condition clause can be \"anything\", then you're limited to scanning the entire list and applying the condition.\nIf there are limitations on the condition clause, then you can look at organizing the data to more efficiently handle the queries.\nFor example, the code sample with the \"byFirstLetter\" dictionary doesn't help at all with an \"endsWith\" query.\nSo, it really comes down to what queries you want to do against that data.\nIn Databases, this problem is the burden of the \"query optimizer\". In a typical database, if you have a database with no indexes, obviously every query is going to be a table scan. As you add indexes to the table, the optimizer can use that data to make more sophisticated query plans to better get to the data. That's essentially the problem you're describing.\nOnce you have a more concrete subset of the types of queries then you can make a better decision as to what structure is best. Also, you need to consider the amount of data. If you have a list of 10 elements each less than 100 byte, a scan of everything may well be the fastest thing you can do since you have such a small amount of data. Obviously that doesn't scale to a 1M elements, but even clever access techniques carry a cost in setup, maintenance (like index maintenance), and memory.\nEDIT, based on the comment\nIf it's an auto completer, if the data is static, then sort it and use a binary search. You're really not going to get faster than that.\nIf the data is dynamic, then store it in a balanced tree, and search that. That's effectively a binary search, and it lets you keep add the data randomly.\nAnything else is some specialization on these concepts.\n", "var Answers = myList.Where(item => item.bar.StartsWith(query) || item.bar.EndsWith(query));\nthat's the easiest in my opinion, should execute rather quickly.\n", "Not sure I understand... All you can really do is optimize the rule, that's the part that needs to be fastest. You can't speed up the loop without just throwing more hardware at it. \nYou could parallelize if you have multiple cores or machines.\n", "I'm not up on my Java right now, but I would think about the following things.\nHow you are creating your list? Perhaps you can create it already ordered in a way which cuts down on comparison time.\nIf you are just doing a straight loop through your collection, you won't see much difference between storing it as an array or as a linked list. \nFor storing the results, depending on how you are collecting them, the structure could make a difference (but assuming Java's generic structures are smart, it won't). As I said, I'm not up on my Java, but I assume that the generic linked list would keep a tail pointer. In this case, it wouldn't really make a difference. Someone with more knowledge of the underlying array vs linked list implementation and how it ends up looking in the byte code could probably tell you whether appending to a linked list with a tail pointer or inserting into an array is faster (my guess would be the array). On the other hand, you would need to know the size of your result set or sacrifice some storage space and make it as big as the whole collection you are iterating through if you wanted to use an array.\nOptimizing your comparison query by figuring out which comparison is most likely to be true and doing that one first could also help. 
ie: If in general 10% of the time a member of the collection starts with your query, and 30% of the time a member ends with the query, you would want to do the end comparison first.\n", "For your particular example, sorting the collection would help as you could binarychop to the first item that starts with query and terminate early when you reach the next one that doesn't; you could also produce a table of pointers to collection items sorted by the reverse of each string for the second clause.\nIn general, if you know the structure of the query in advance, you can sort your collection (or build several sorted indexes for your collection if there are multiple clauses) appropriately; if you do not, you will not be able to do better than linear search.\n", "If it's something where you populate the list once and then do many lookups (thousands or more) then you could create some kind of lookup dictionary that maps starts with/ends with values to their actual values. That would be a fast lookup, but would use much more memory. If you aren't doing that many lookups or know you're going to be repopulating the list at least semi-frequently I'd go with the LINQ query that CQ suggested.\n", "You can create some sort of index and it might get faster.\nWe can build a index like this:\nDictionary<char, List<ClassFoo>> indexByFirstLetter;\nforeach (var cf in collection) {\n indexByFirstLetter[cf.bar[0]] = indexByFirstLetter[cf.bar[0]] ?? new List<ClassFoo>();\n indexByFirstLetter[cf.bar[0]].Add(cf);\n indexByFirstLetter[cf.bar[cf.bar.length - 1]] = indexByFirstLetter[cf.bar[cf.bar.Length - 1]] ?? new List<ClassFoo>();\n indexByFirstLetter[cf.bar[cf.bar.Length - 1]].Add(cf);\n}\n\nThen use the it like this:\nforeach (ClasssFoo cf in indexByFirstLetter[query[0]]) {\n if (cf.bar.StartsWith(query) || cf.bar.EndsWith(query))\n result.Add(cf);\n}\n\nNow we possibly do not have to loop through as many ClassFoo as in your example, but then again we have to keep the index up to date. There is no guarantee that it is faster, but it is definately more complicated.\n", "Depends. Are all your objects always going to be loaded in memory? Do you have a finite limit of objects that may be loaded? Will your queries have to consider objects that haven't been loaded yet?\nIf the collection will get large, I would definitely use an index.\nIn fact, if the collection can grow to an arbitrary size and you're not sure that you will be able to fit it all in memory, I'd look into an ORM, an in-memory database, or another embedded database. XPO from DevExpress for ORM or SQLite.Net for in-memory database comes to mind.\nIf you don't want to go this far, make a simple index consisting of the \"bar\" member references mapping to class references.\n", "If the set of possible criteria is fixed and small, you can assign a bitmask to each element in the list. The size of the bitmask is the size of the set of the criteria. When you create an element/add it to the list, you check which criteria it satisfies and then set the corresponding bits in the bitmask of this element. Matching the elements from the list will be as easy as matching their bitmasks with the target bitmask. A more general method is the Bloom filter.\n" ]
[ 2, 1, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "arrays", "c#", "collections", "performance", "string" ]
stackoverflow_0000097329_arrays_c#_collections_performance_string.txt
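For the autocompleter question above, the first answer's suggestion to "sort it and use a binary search" can be sketched in C# roughly as follows. This is only an illustrative sketch, not code from the thread: the SuggestionIndex class name is invented, it assumes the bar strings have already been pulled out of the ClassFoo objects, and it uses ordinal comparison throughout. EndsWith reuses the same trick on reversed copies of the strings, at the cost of keeping a second sorted list.

using System;
using System.Collections.Generic;

class SuggestionIndex
{
    private readonly List<string> sorted;          // all words, sorted once
    private readonly List<string> sortedReversed;  // reversed words, for EndsWith lookups

    public SuggestionIndex(IEnumerable<string> words)
    {
        sorted = new List<string>(words);
        sorted.Sort(StringComparer.Ordinal);

        sortedReversed = new List<string>();
        foreach (string w in sorted)
            sortedReversed.Add(Reverse(w));
        sortedReversed.Sort(StringComparer.Ordinal);
    }

    public List<string> StartingWith(string query)
    {
        return Scan(sorted, query);
    }

    public List<string> EndingWith(string query)
    {
        List<string> hits = Scan(sortedReversed, Reverse(query));
        for (int i = 0; i < hits.Count; i++)
            hits[i] = Reverse(hits[i]);
        return hits;
    }

    // binary-search to the first element >= prefix, then walk forward while it still matches
    private static List<string> Scan(List<string> list, string prefix)
    {
        var result = new List<string>();
        int i = list.BinarySearch(prefix, StringComparer.Ordinal);
        if (i < 0) i = ~i;  // bitwise complement of a negative BinarySearch result is the insertion point
        while (i < list.Count && list[i].StartsWith(prefix, StringComparison.Ordinal))
            result.Add(list[i++]);
        return result;
    }

    private static string Reverse(string s)
    {
        char[] c = s.ToCharArray();
        Array.Reverse(c);
        return new string(c);
    }
}

Both lists are built once, so this fits the load-once, search-many situation the question describes; for data that changes often, a balanced tree (as the same answer notes) is the better fit.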
Q: Extra tables or non-specific foreign keys? There are several types of objects in a system, and each has it's own table in the database. A user should be able to comment on any of them. How would you design the comments table(s)? I can think of a few options: One comments table, with a FK column for each object type (ObjectAID, ObjectBID, etc) Several comments tables, one for each object type (ObjectAComments, ObjectBComments, etc) One generic FK (ParentObjectID) with another column to indicate the type ("ObjectA") Which would you choose? Is there a better method I'm not thinking of? A: Is it feasible to design the schema so that the commentable (for lack of a better word) tables follow one of the standard inheritance-modeling patterns? If so, you can have the comment table's FK point to the common parent table. A: @palmsey Pretty much, but the variation on that pattern that I've seen most often gets rid of ObjectAID et al. ParentID becomes both the PK and the FK to Parents. That gets you something like: Parents ParentID ObjectA ParentID (FK and PK) ColumnFromA NOT NULL ObjectB ParentID (FK and PK) ColumnFromB NOT NULL Comments would remain the same. Then you just need to constrain ID generation so that you don't accidentally wind up with an ObjectA row and an ObjectB row that both point to the same Parents row; the easiest way to do that is to use the same sequence (or whatever) that you're using for Parents for ObjectA and ObjectB. You also see a lot of schemas with something like: Parents ID SubclassDiscriminator ColumnFromA (nullable) ColumnFromB (nullable) and Comments would remain unchanged. But now you can't enforce all of your business constraints (the subclasses' properties are all nullable) without writing triggers or doing it at a different layer. A: @Hank Gay So something like: ObjectA ObjectAID ParentID ObjectB ObjectBID ParentID Comments CommentID ParentID Parents ParentID A: Be careful with generic foreign keys that don't point to exactly one table. Query performance suffers dramatically if you have to split the where condition on a type and point to several different tables. If you only have a few types, and the number of types will not grow, it's Ok to have separate nullable foreign keys to the different tables, but if you will have more types, it's better to come up with a different data model (like @palmsey's suggestion). A: One of the things I like to do is have a separate tables that link the generic/common table to all of the individualized tables. So, for objects Foo and Bar and then comments on Foo & Bar, you'd have something like this: Foo Foo ID (PK) Bar Bar ID (PK) Comment Comment ID (PK) Comment Text FooComment Foo ID (PK FK) Comment ID (PK FK) BarComment Bar ID (PK FK) Comment ID (PK FK) This structure: Lets you have a common Comments table Doesn't require a DB with table inheritance Doesn't pollute the Foo and Bar tables with Comment-related information Lets you attach a Comment to multiple objects (which can be desireable) Lets you attach other properties to the junction of Foo/Bar and Comment if so desired. Still preserves the relations with standard (ie: fast, simple, reliable) foreign keys
Extra tables or non-specific foreign keys?
There are several types of objects in a system, and each has its own table in the database. A user should be able to comment on any of them. How would you design the comments table(s)? I can think of a few options: One comments table, with a FK column for each object type (ObjectAID, ObjectBID, etc) Several comments tables, one for each object type (ObjectAComments, ObjectBComments, etc) One generic FK (ParentObjectID) with another column to indicate the type ("ObjectA") Which would you choose? Is there a better method I'm not thinking of?
[ "Is it feasible to design the schema so that the commentable (for lack of a better word) tables follow one of the standard inheritance-modeling patterns? If so, you can have the comment table's FK point to the common parent table.\n", "@palmsey\nPretty much, but the variation on that pattern that I've seen most often gets rid of ObjectAID et al. ParentID becomes both the PK and the FK to Parents. That gets you something like:\n\nParents\n\nParentID\n\nObjectA\n\nParentID (FK and PK)\nColumnFromA NOT NULL\n\nObjectB\n\nParentID (FK and PK)\nColumnFromB NOT NULL\n\n\nComments would remain the same. Then you just need to constrain ID generation so that you don't accidentally wind up with an ObjectA row and an ObjectB row that both point to the same Parents row; the easiest way to do that is to use the same sequence (or whatever) that you're using for Parents for ObjectA and ObjectB.\nYou also see a lot of schemas with something like:\n\nParents\n\nID\nSubclassDiscriminator\nColumnFromA (nullable)\nColumnFromB (nullable)\n\n\nand Comments would remain unchanged. But now you can't enforce all of your business constraints (the subclasses' properties are all nullable) without writing triggers or doing it at a different layer.\n", "@Hank Gay\nSo something like:\n\nObjectA\n\n\nObjectAID\nParentID\n\nObjectB\n\n\nObjectBID\nParentID\n\nComments\n\n\nCommentID\nParentID\n\nParents\n\n\nParentID\n\n\n", "Be careful with generic foreign keys that don't point to exactly one table. Query performance suffers dramatically if you have to split the where condition on a type and point to several different tables. If you only have a few types, and the number of types will not grow, it's Ok to have separate nullable foreign keys to the different tables, but if you will have more types, it's better to come up with a different data model (like @palmsey's suggestion).\n", "One of the things I like to do is have a separate tables that link the generic/common table to all of the individualized tables. \nSo, for objects Foo and Bar and then comments on Foo & Bar, you'd have something like this:\n\nFoo\n\n\nFoo ID (PK)\n\nBar\n\n\nBar ID (PK)\n\nComment\n\n\nComment ID (PK)\nComment Text\n\nFooComment\n\n\nFoo ID (PK FK)\nComment ID (PK FK)\n\nBarComment\n\n\nBar ID (PK FK)\nComment ID (PK FK)\n\n\nThis structure:\n\nLets you have a common Comments table\nDoesn't require a DB with table inheritance\nDoesn't pollute the Foo and Bar tables with Comment-related information\nLets you attach a Comment to multiple objects (which can be desireable)\nLets you attach other properties to the junction of Foo/Bar and Comment if so desired.\nStill preserves the relations with standard (ie: fast, simple, reliable) foreign keys\n\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "database_design", "foreign_keys", "normalization" ]
stackoverflow_0000039915_database_design_foreign_keys_normalization.txt
Q: When using an ORM, how to safely send loaded entities across the tiers When a system has N tiers, and when using an ORM, how do you send loaded entities across the tiers ? Do you use DTO ? When DTO are not used and the entities are directly sent, how do you protect againt the uninitialized lazy loaded relationship errors ? Note : this is not a "should N tiers be used ?" question. I assume that the system already has N tiers. A: Well I don't know if there is a better way, but when we use Hibernate we just turn lazy loading off so that it loads everything. It obviously costs more to do this, but I wasn't sure how to get away from the lazy loading methods that Hibernate would create. If a Containers has sets of data that are not used often then they will not be loaded and it is up to the requesting UI Form to call it and send it for update. (We built update classes to pass all the information together) In the case of UI Forms that loaded lots of Containers we just make special classes and fill in what we need for them. They are sort of read-only containers that aren't used for persistence. There may be better ways.. but I am learning :) A: I'm just trying to find my way with ORMs. It's an appealing concept. Like you I don't want other tiers in the application to know that the ORM exists. What I'm looking at currently is using interfaces that I design and using partial classes (a C#/.net thing, without partial classes I guess I'd write a wrapper) to add the implementation of the interface onto the types that are generated by the ORM. As far as lazy loading / deferred execution goes, that also should be invisible to the application. It's a nice service for the ORM to provide and I'm happy that it does but my application should not need to know or care about it. So if the ORM doesn't hide that from you then again I'd look at a wrapper that took care of this so that the application does not need to know or care.
When using an ORM, how to safely send loaded entities across the tiers
When a system has N tiers, and when using an ORM, how do you send loaded entities across the tiers ? Do you use DTO ? When DTO are not used and the entities are directly sent, how do you protect against the uninitialized lazy loaded relationship errors ? Note : this is not a "should N tiers be used ?" question. I assume that the system already has N tiers.
[ "Well I don't know if there is a better way, but when we use Hibernate we just turn lazy loading off so that it loads everything. It obviously costs more to do this, but I wasn't sure how to get away from the lazy loading methods that Hibernate would create.\nIf a Containers has sets of data that are not used often then they will not be loaded and it is up to the requesting UI Form to call it and send it for update. (We built update classes to pass all the information together)\nIn the case of UI Forms that loaded lots of Containers we just make special classes and fill in what we need for them. They are sort of read-only containers that aren't used for persistence.\nThere may be better ways.. but I am learning :)\n", "I'm just trying to find my way with ORMs. \nIt's an appealing concept. Like you I don't want other tiers in the application to know that the ORM exists. \nWhat I'm looking at currently is using interfaces that I design and using partial classes (a C#/.net thing, without partial classes I guess I'd write a wrapper) to add the implementation of the interface onto the types that are generated by the ORM. \nAs far as lazy loading / deferred execution goes, that also should be invisible to the application. It's a nice service for the ORM to provide and I'm happy that it does but my application should not need to know or care about it. So if the ORM doesn't hide that from you then again I'd look at a wrapper that took care of this so that the application does not need to know or care.\n" ]
[ 1, 0 ]
[]
[]
[ "architecture", "orm" ]
stackoverflow_0000088192_architecture_orm.txt
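To make the DTO option from the ORM question above a little more concrete, here is a rough C# sketch. None of it comes from the answers; the Order, Customer and OrderLine classes and the mapper are invented stand-ins for whatever the real entities are. The idea is simply to copy the fields the upper tiers need while the ORM session is still open, so no half-initialized lazy proxy ever crosses a tier boundary.

using System.Collections.Generic;

// invented entity classes standing in for real ORM-mapped entities
public class Customer { public string Name; }
public class OrderLine { public string Description; }
public class Order
{
    public int Id;
    public Customer Customer;
    public List<OrderLine> Lines = new List<OrderLine>();
}

// the DTO: plain fields only, nothing lazy, safe to hand to any tier
public class OrderDto
{
    public int Id;
    public string CustomerName;
    public List<string> LineDescriptions = new List<string>();
}

public static class OrderMapper
{
    // call this inside the service/data tier, before the ORM session or context is closed
    public static OrderDto ToDto(Order entity)
    {
        var dto = new OrderDto
        {
            Id = entity.Id,
            CustomerName = entity.Customer.Name   // navigation property resolved here, not in the UI tier
        };
        foreach (OrderLine line in entity.Lines)  // enumerating the collection forces it to load now
            dto.LineDescriptions.Add(line.Description);
        return dto;
    }
}

The cost is the mapping boilerplate; the benefit is that the tiers above only ever see plain objects, which also sidesteps the uninitialized lazy-load errors the question asks about.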
Q: Best approach to web service powered by a daemon I am relatively new to web services and am wondering what the standard "best approach" is. Basically, the way things work is I need to have a task running in the background constantly. The web service will connect to the daemon and return with an appropriate response. Currently, the communication is over unix domain sockets (Linux is the expected server platform). Is this the "right" way to do this? Or is there a more proper way to have a background task that your web-server is based on? A: That's pretty much the best practice. You may be familiar with this pattern from other web applications: The daemon is frequently a database. :)
Best approach to web service powered by a daemon
I am relatively new to web services and am wondering what the standard "best approach" is. Basically, the way things work is I need to have a task running in the background constantly. The web service will connect to the daemon and return with an appropriate response. Currently, the communication is over unix domain sockets (Linux is the expected server platform). Is this the "right" way to do this? Or is there a more proper way to have a background task that your web-server is based on?
[ "That's pretty much the best practice. You may be familiar with this pattern from other web applications: The daemon is frequently a database. :)\n" ]
[ 3 ]
[]
[]
[ "linux", "web_services" ]
stackoverflow_0000106045_linux_web_services.txt
Q: how can I debug an exe with some switch flags from the command prompt For example, from the command prompt I need to launch the exe with some switch flags under a debugger. How do I do it? This is a C/C++ exe built in the VS2005 environment that I need to debug. I pass some flags to this exe to perform some stuff. A: You'll need to give more information about your development environment to get a specific answer. For example, with a C# project in Visual Studio, you can right-click the project->Properties and then fill out the "Command line arguments" field in the "Debug" tab. A: I think I have it worked. Right click on the project->Properties and then fill out the "Command line arguments" field in the "Debug" tab. bkane solution worked. thx.
how can I debug an exe with some switch flags from the command prompt
For example, from the command prompt I need to launch the exe with some switch flags under a debugger. How do I do it? This is a C/C++ exe built in the VS2005 environment that I need to debug. I pass some flags to this exe to perform some stuff.
[ "You'll need to give more information about your development environment to get a specific answer.\nFor example, with a C# project in Visual Studio, you can right-click the project->Properties and then fill out the \"Command line arguments\" field in the \"Debug\" tab.\n", "I think I have it worked. Right click on the project->Properties and then fill out the \"Command line arguments\" field in the \"Debug\" tab. bkane solution worked. thx.\n" ]
[ 3, 1 ]
[]
[]
[ "batch_file", "command", "debugging", "prompt" ]
stackoverflow_0000106038_batch_file_command_debugging_prompt.txt
Q: Parameterized SQL Columns? I have some code which utilizes parameterized queries to prevent against injection, but I also need to be able to dynamically construct the query regardless of the structure of the table. What is the proper way to do this? Here's an example, say I have a table with columns Name, Address, Telephone. I have a web page where I run Show Columns and populate a select drop-down with them as options. Next, I have a textbox called Search. This textbox is used as the parameter. Currently my code looks something like this: result = pquery('SELECT * FROM contacts WHERE `' + escape(column) + '`=?', search); I get an icky feeling from it though. The reason I'm using parameterized queries is to avoid using escape. Also, escape is likely not designed for escaping column names. How can I make sure this works the way I intend? Edit: The reason I require dynamic queries is that the schema is user-configurable, and I will not be around to fix anything hard-coded. A: Instead of passing the column names, just pass an identifier that you code will translate to a column name using a hardcoded table. This means you don't need to worry about malicious data being passed, since all the data is either translated legally, or is known to be invalid. Psudoish code: @columns = qw/Name Address Telephone/; if ($columns[$param]) { $query = "select * from contacts where $columns[$param] = ?"; } else { die "Invalid column!"; } run_sql($query, $search); A: The trick is to be confident in your escaping and validating routines. I use my own SQL escape function that is overloaded for literals of different types. Nowhere do I insert expressions (as opposed to quoted literal values) directly from user input. Still, it can be done, I recommend a separate — and strict — function for validating the column name. Allow it to accept only a single identifier, something like /^\w[\w\d_]*$/ You'll have to rely on assumptions you can make about your own column names. A: I use ADO.NET and the use of SQL Commands and SQLParameters to those commands which take care of the Escape problem. So if you are in a Microsoft-tool environment as well, I can say that I use this very sucesfully to build dynamic SQL and yet protect my parameters best of luck A: Make the column based on the results of another query to a table that enumerates the possible schema values. In that second query you can hardcode the select to the column name that is used to define the schema. if no rows are returned then the entered column is invalid. A: In standard SQL, you enclose delimited identifiers in double quotes. This means that: SELECT * FROM "SomeTable" WHERE "SomeColumn" = ? will select from a table called SomeTable with the shown capitalization (not a case-converted version of the name), and will apply a condition to a column called SomeColumn with the shown capitalization. Of itself, that's not very helpful, but...if you can apply the escape() technique with double quotes to the names entered via your web form, then you can build up your query reasonably confidently. Of course, you said you wanted to avoid using escape - and indeed you don't have to use it on the parameters where you provide the ? place-holders. But where you are putting user-provided data into the query, you need to protect yourself from malicious people. Different DBMS have different ways of providing delimited identifiers. MS SQL Server, for instance, seems to use square brackets [SomeTable] instead of double quotes. 
A: Column names in some databases can contain spaces, which mean you'd have to quote the column name, but if your database contains no such columns, just run the column name through a regular expression or some sort of check before splicing into the SQL: if ( $column !~ /^\w+$/ ) { die "Bad column name [$column]"; }
Parameterized SQL Columns?
I have some code which utilizes parameterized queries to prevent against injection, but I also need to be able to dynamically construct the query regardless of the structure of the table. What is the proper way to do this? Here's an example, say I have a table with columns Name, Address, Telephone. I have a web page where I run Show Columns and populate a select drop-down with them as options. Next, I have a textbox called Search. This textbox is used as the parameter. Currently my code looks something like this: result = pquery('SELECT * FROM contacts WHERE `' + escape(column) + '`=?', search); I get an icky feeling from it though. The reason I'm using parameterized queries is to avoid using escape. Also, escape is likely not designed for escaping column names. How can I make sure this works the way I intend? Edit: The reason I require dynamic queries is that the schema is user-configurable, and I will not be around to fix anything hard-coded.
[ "Instead of passing the column names, just pass an identifier that you code will translate to a column name using a hardcoded table. This means you don't need to worry about malicious data being passed, since all the data is either translated legally, or is known to be invalid. Psudoish code:\n@columns = qw/Name Address Telephone/;\nif ($columns[$param]) {\n $query = \"select * from contacts where $columns[$param] = ?\";\n} else {\n die \"Invalid column!\";\n}\n\nrun_sql($query, $search);\n\n", "The trick is to be confident in your escaping and validating routines. I use my own SQL escape function that is overloaded for literals of different types. Nowhere do I insert expressions (as opposed to quoted literal values) directly from user input.\nStill, it can be done, I recommend a separate — and strict — function for validating the column name. Allow it to accept only a single identifier, something like\n\n/^\\w[\\w\\d_]*$/\n\nYou'll have to rely on assumptions you can make about your own column names.\n", "I use ADO.NET and the use of SQL Commands and SQLParameters to those commands which take care of the Escape problem. So if you are in a Microsoft-tool environment as well, I can say that I use this very sucesfully to build dynamic SQL and yet protect my parameters\nbest of luck\n", "Make the column based on the results of another query to a table that enumerates the possible schema values. In that second query you can hardcode the select to the column name that is used to define the schema. if no rows are returned then the entered column is invalid.\n", "In standard SQL, you enclose delimited identifiers in double quotes. This means that:\nSELECT * FROM \"SomeTable\" WHERE \"SomeColumn\" = ?\n\nwill select from a table called SomeTable with the shown capitalization (not a case-converted version of the name), and will apply a condition to a column called SomeColumn with the shown capitalization.\nOf itself, that's not very helpful, but...if you can apply the escape() technique with double quotes to the names entered via your web form, then you can build up your query reasonably confidently.\nOf course, you said you wanted to avoid using escape - and indeed you don't have to use it on the parameters where you provide the ? place-holders. But where you are putting user-provided data into the query, you need to protect yourself from malicious people.\nDifferent DBMS have different ways of providing delimited identifiers. MS SQL Server, for instance, seems to use square brackets [SomeTable] instead of double quotes.\n", "Column names in some databases can contain spaces, which mean you'd have to quote the column name, but if your database contains no such columns, just run the column name through a regular expression or some sort of check before splicing into the SQL:\nif ( $column !~ /^\\w+$/ ) {\n die \"Bad column name [$column]\";\n}\n\n" ]
[ 7, 0, 0, 0, 0, 0 ]
[]
[]
[ "parameterized", "sql_injection" ]
stackoverflow_0000106001_parameterized_sql_injection.txt
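The whitelist idea in the first answer to the parameterized-columns question, rewritten as a rough C#/ADO.NET sketch. The Contacts table, the column names and the @search parameter are placeholders for illustration, and the question's own stack may look different; the point is that the browser only ever sends a key, the key is mapped to a known column name, and the search text itself stays a real parameter.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

static class ContactSearch
{
    // only these keys are accepted from the browser; the values are the real column names
    static readonly Dictionary<string, string> AllowedColumns =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "name", "Name" },
            { "address", "Address" },
            { "telephone", "Telephone" }
        };

    public static SqlCommand BuildQuery(SqlConnection conn, string requestedColumn, string search)
    {
        string column;
        if (!AllowedColumns.TryGetValue(requestedColumn, out column))
            throw new ArgumentException("Unknown search column: " + requestedColumn);

        // the column name is spliced in only after the whitelist check;
        // the user-typed search text never leaves its parameter
        var cmd = new SqlCommand("SELECT * FROM Contacts WHERE [" + column + "] = @search", conn);
        cmd.Parameters.AddWithValue("@search", search);
        return cmd;
    }
}

Since the question says the schema is user-configurable, the dictionary could just as well be loaded at startup from the schema metadata (the same place the Show Columns drop-down comes from) instead of being hardcoded; what matters is that anything spliced into the SQL text has been matched against a list the application controls.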
Q: In OOP, In what cases do you act on an object instead of letting the object act? In what cases, or for what kind of algorithms, do you start using your objects as data structures with methods outside of the objects (ie : Tree Walking, etc...). What scheme do you use ? (Visitor ? pattern-matching ?) Or do you think an object should always be the only one allowed to act on its own data ? A: Objects should have a single responsibility. If the operation you're doing is acting on an object but has nothing to do with the responsibility of that object. It's better to put it outside that object. A: Or do you think an object should always be the only one allowed to act on its own data? That is my philosophy (except for objects that are only entities; ie: they map something else, like an xml file or something and only contain properties)
In OOP, In what cases do you act on an object instead of letting the object act?
In what cases, or for what kind of algorithms, do you start using your objects as data structures with methods outside of the objects (ie : Tree Walking, etc...). What scheme do you use ? (Visitor ? pattern-matching ?) Or do you think an object should always be the only one allowed to act on its own data ?
[ "Objects should have a single responsibility. If the operation you're doing is acting on an object but has nothing to do with the responsibility of that object. It's better to put it outside that object.\n", "\nOr do you think an object should always be the only one allowed to act on its own data?\n\nThat is my philosophy (except for objects that are only entities; ie: they map something else, like an xml file or something and only contain properties)\n" ]
[ 1, 0 ]
[]
[]
[ "oop" ]
stackoverflow_0000105988_oop.txt
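Since the OOP question above floats the Visitor pattern as one option, here is a bare-bones C# illustration of what acting on an object from the outside looks like with it. Every name here (Node, LeafNode, BranchNode, SumVisitor) is invented for the example and none of it comes from the answers. The nodes only know how to accept a visitor; the actual tree-walking operation lives entirely outside them.

using System.Collections.Generic;

interface INodeVisitor
{
    void Visit(LeafNode node);
    void Visit(BranchNode node);
}

abstract class Node
{
    public abstract void Accept(INodeVisitor visitor);
}

class LeafNode : Node
{
    public int Value;
    public override void Accept(INodeVisitor visitor) { visitor.Visit(this); }
}

class BranchNode : Node
{
    public readonly List<Node> Children = new List<Node>();
    public override void Accept(INodeVisitor visitor)
    {
        visitor.Visit(this);
        foreach (Node child in Children)
            child.Accept(visitor);   // the structure drives the walk, the visitor supplies the behaviour
    }
}

// the "outside" operation: sums leaf values without the node classes knowing anything about summing
class SumVisitor : INodeVisitor
{
    public int Total;
    public void Visit(LeafNode node) { Total += node.Value; }
    public void Visit(BranchNode node) { /* nothing to do for branches */ }
}

Whether that beats a Sum() method on the nodes themselves comes back to the single-responsibility point in the first answer: if summing is not the node's responsibility, the visitor keeps it out of the node.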
Q: How do I access a char ** through ffi in plt-scheme? I'm mocking about with plt-scheme's ffi and I have a C-function that returns a char ** (array of strings). If I declare my function as (_fun _pointer -> _pointer), how do I convert the result to a list of strings in scheme? Here are the relevant C-declarations: typedef char **MYSQL_ROW; /* return data as array of strings */ // ... MYSQL_ROW STDCALL mysql_fetch_row(MYSQL_RES *result); A: I think that what you want is the cvector: http://docs.plt-scheme.org/foreign/Derived_Utilities.html#(part._foreign~3acvector) A cvector of _string/utf-8 or whichever encoding you need seems reasanable. But that's from a quick survey of the docs - I haven't tried this myself. Please let me know if it works! A: I know it's not exactly what you are looking for, but it might help a little bit. I've done some work on a basic Gambit Scheme FFI for MySQL. I don't know how PLT Scheme and Gambit differ in terms of their FFI implementation (I'd venture with "quite a bit") but maybe you can get something out of it: http://bunny.jonnay.net/zengarden/trunk/lib/mysql/mysql-ffi.scm A: Aha, I figured it out myself. I have to use the _cpointer procedure, described at the page that mike linked to: (_fun _pointer -> (_cpointer/null 'mysql-row (make-ctype _pointer #f #f))) It also seems that someone already beat me to creating a ffi to mysqlclient. Not to worry; My main goal is understanding the ffi api, and it's going forward.
How do I access a char ** through ffi in plt-scheme?
I'm mocking about with plt-scheme's ffi and I have a C-function that returns a char ** (array of strings). If I declare my function as (_fun _pointer -> _pointer), how do I convert the result to a list of strings in scheme? Here are the relevant C-declarations: typedef char **MYSQL_ROW; /* return data as array of strings */ // ... MYSQL_ROW STDCALL mysql_fetch_row(MYSQL_RES *result);
[ "I think that what you want is the cvector:\nhttp://docs.plt-scheme.org/foreign/Derived_Utilities.html#(part._foreign~3acvector)\nA cvector of _string/utf-8 or whichever encoding you need seems reasanable.\nBut that's from a quick survey of the docs - I haven't tried this myself. Please let me know if it works!\n", "I know it's not exactly what you are looking for, but it might help a little bit. I've done some work on a basic Gambit Scheme FFI for MySQL. I don't know how PLT Scheme and Gambit differ in terms of their FFI implementation (I'd venture with \"quite a bit\") but maybe you can get something out of it:\nhttp://bunny.jonnay.net/zengarden/trunk/lib/mysql/mysql-ffi.scm\n", "Aha, I figured it out myself.\nI have to use the _cpointer procedure, described at the page that mike linked to:\n(_fun _pointer -> (_cpointer/null 'mysql-row (make-ctype _pointer #f #f)))\n\nIt also seems that someone already beat me to creating a ffi to mysqlclient. Not to worry; My main goal is understanding the ffi api, and it's going forward.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "c", "lisp", "mysql", "scheme" ]
stackoverflow_0000105816_c_lisp_mysql_scheme.txt
Q: Running an MVC application through IIS results in "Directory listing denied" I have a .NET MVC application which runs fine if I use the built-in Visual Studio web server. If I use the project's property pages to switch to IIS as the web server and create a virtual directory for my project, any request I send to the server results in a "Directory listing denied" failure. Does anyone know a solution for this? A: Might be the IIS does not have default.aspx set up as a start page. A: It's an issue with 'extensionless' urls prior to IIS7. It needs either an ISAPI filter or duplicate routes in the routing table with a .mvc extension. Try ScottGu's blog at weblogs.asp.net.
Running an MVC application through IIS results in "Directory listing denied"
I have a .NET MVC application which runs fine if I use the built-in Visual Studio web server. If I use the project's property pages to switch to IIS as the web server and create a virtual directory for my project, any request I send to the server results in a "Directory listing denied" failure. Does anyone know a solution for this?
[ "Might be the IIS does not have default.aspx set up as a start page.\n", "It's an issue with 'extensionless' urls prior to IIS7.\nIt needs either an ISAPI filter or duplicate routes in the routing table with a .mvc extension. Try ScottGu's blog at weblogs.asp.net. \n" ]
[ 0, 0 ]
[]
[]
[ "asp.net_mvc", "iis", "visual_studio", "webproject" ]
stackoverflow_0000105884_asp.net_mvc_iis_visual_studio_webproject.txt
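To make the ".mvc extension" workaround from the second answer more concrete, the route registration would look roughly like the sketch below. This is recalled from the approach ScottGu described for pre-IIS7 servers rather than taken from the answer itself; the route names and defaults are illustrative, and on IIS 6 it still needs the .mvc extension mapped to aspnet_isapi.dll in the virtual directory's script mappings.

using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // duplicate route carrying a real extension, so IIS 6 hands the request to ASP.NET
        routes.MapRoute(
            "DefaultWithExtension",
            "{controller}.mvc/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" });

        // normal extensionless route, used by the built-in web server and IIS 7
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" });
    }
}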
Q: How to add a version number to an Access file in a .msi I'm building an install using VS 2003. The install has an Excel workbook and two Access databases. I need to force the Access files to load regardless of the create/mod date of the existing databases on the user's computer. I currently use ORCA to force in a Version number on the two files, but would like to find a simpler, more elegant solution (hand editing a .msi file is not something I see as "best practice". Is there a way to add a version number to the databases using Access that would then be used in the install? Is there a better way for me to do this? A: @LanceSc I don't think MsiFileHash table will help here. See this excellent post by Aaron Stebner. Most likely last modified date of Access database on client computer will be different from its creation date. Windows Installer will correctly assume that the file has changed since installation and will not replace it. The right way to solve this (as question author pointed out) is to set Version field in File table. Unfortunately setup projects in Visual Studio are very limited. You can create simple VBS script that would modify records in File table (using SQL) but I suggest looking at alternative setup authoring tools instead, such as WiX, InstallShield or Wise. WiX in my opinion is the best. A: Since it sounds like you don't have properly versioned resources, have you tried changing the REINSTALLMODE property? IIRC, in the default value of 'omus', it's the 'o' flag that's only allowing you to install if you have an older version. You may try changing this from 'o' to 'e'. Be warned that this will overwrite missing, older AND equally versioned files. Manually adding in versions was the wrong way to start, but this should ensure that you don't have to manually bump up the version numbers to get them to install. A: Look into Build Events for your project. It may be possible to rev the versions of the files during a build event. [Just don't quote me on that]. I am not sure if you can or not, but that would be the place I would start investigating first. A: You should populate the MsiFileHash table for these files. Look at WiFilVer.vbs thtat is part of the Microsoft Platform SDK to see how to do this. My other suggestion would be to look at WiX instead of Visual Studio 2003 for doing installs. Visual Studio 2003 has very limited MSI support and you can end up spending a lot of time fighting it, rather than getting useful work don.
How to add a version number to an Access file in a .msi
I'm building an install using VS 2003. The install has an Excel workbook and two Access databases. I need to force the Access files to load regardless of the create/mod date of the existing databases on the user's computer. I currently use ORCA to force in a Version number on the two files, but would like to find a simpler, more elegant solution (hand editing a .msi file is not something I see as "best practice". Is there a way to add a version number to the databases using Access that would then be used in the install? Is there a better way for me to do this?
[ "@LanceSc\nI don't think MsiFileHash table will help here. See this excellent post by Aaron Stebner. Most likely last modified date of Access database on client computer will be different from its creation date. Windows Installer will correctly assume that the file has changed since installation and will not replace it.\nThe right way to solve this (as question author pointed out) is to set Version field in File table.\nUnfortunately setup projects in Visual Studio are very limited. You can create simple VBS script that would modify records in File table (using SQL) but I suggest looking at alternative setup authoring tools instead, such as WiX, InstallShield or Wise. WiX in my opinion is the best.\n", "Since it sounds like you don't have properly versioned resources, have you tried changing the REINSTALLMODE property?\nIIRC, in the default value of 'omus', it's the 'o' flag that's only allowing you to install if you have an older version. You may try changing this from 'o' to 'e'. Be warned that this will overwrite missing, older AND equally versioned files. \nManually adding in versions was the wrong way to start, but this should ensure that you don't have to manually bump up the version numbers to get them to install.\n", "Look into Build Events for your project. It may be possible to rev the versions of the files during a build event. [Just don't quote me on that]. I am not sure if you can or not, but that would be the place I would start investigating first.\n", "You should populate the MsiFileHash table for these files. Look at WiFilVer.vbs thtat is part of the Microsoft Platform SDK to see how to do this. \nMy other suggestion would be to look at WiX instead of Visual Studio 2003 for doing installs. Visual Studio 2003 has very limited MSI support and you can end up spending a lot of time fighting it, rather than getting useful work don.\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "ms_access", "version", "windows_installer" ]
stackoverflow_0000102374_ms_access_version_windows_installer.txt
Q: Web designer for simple reports Users of my web app need to edit and "save as" their reports and then execute and export them to PDF or Excel files. I need to know if there is a designer (web) for simple reports (open source would be better). Reports are not complex: just data fields, master-detail, labels, simple formulas, lines, static images... Is there any? (too much to ask?) Thanks A: I'd just produce a csv file from the information and save that for the excel side of things. In PHP, something like this: <?php // load info from database into an array header("Content-type: application/vnd.ms-excel"); header( "Content-disposition: report.csv"); // loop through array and export each entry as so echo ($item[1].",".$item[2].",".$item[3]."\n"); // end loop ?> Obviously, that's just the barebones, but you can see what I'm getting at. Alternatively there are libraries in PEAR for PHP that will let you save as an xls or pdf, but I've always preferred simplicity over complex libraries when I can get away with it!
Web designer for simple reports
Users of my web app need to edit and "save as" their reports and then execute and export them to PDF or Excel files. I need to know if there is a designer (web) for simple reports (open source would be better). Reports are not complex: just data fields, master-detail, labels, simple formulas, lines, static images... Is there any? (too much to ask?) Thanks
[ "I'd just produce a csv file from the information and save that for the excel side of things.\nIn PHP, something like this:\n<?php\n// load info from database into an array\nheader(\"Content-type: application/vnd.ms-excel\");\nheader( \"Content-disposition: report.csv\");\n\n// loop through array and export each entry as so\necho ($item[1].\",\".$item[2].\",\".$item[3].\"\\n\");\n// end loop\n?>\n\nObviously, that's just the barebones, but you can see what I'm getting at.\nAlternatively there are libraries in PEAR for PHP that will let you save as an xls or pdf, but I've always preferred simplicity over complex libraries when I can get away with it!\n" ]
[ 1 ]
[]
[]
[ "designer", "report" ]
stackoverflow_0000105716_designer_report.txt
Q: How do I synchronize the address book in my app using MAPI? The system I'm working on contains an address book. I am looking for sample code that will synchronize addresses with the current users address book through MAPI. I need two-way sync. If you know of any open-source library with easy to use functions for this, I'd be glad to hear about it. If you know of a library that is not open-source, well, that is fine too. The best would be a library which license will allow me to use it in our own solution. And if you, god forbid, know of a library that will make it easy for me to publish my address book in a MAPI provider - well, then I'm dying to hear about it! Using an external address book and ditching our own is not an option that would serve our customers. A good, working code sample using vanilla MAPI is of course also acceptable. ;-) A: Zarafa just released their 100% MAPI compatible groupware suite as GPL. Maybe that's useful for you? EDIT: The link is slashdotted. More info here.
How do I synchronize the address book in my app using MAPI?
The system I'm working on contains an address book. I am looking for sample code that will synchronize addresses with the current users address book through MAPI. I need two-way sync. If you know of any open-source library with easy to use functions for this, I'd be glad to hear about it. If you know of a library that is not open-source, well, that is fine too. The best would be a library which license will allow me to use it in our own solution. And if you, god forbid, know of a library that will make it easy for me to publish my address book in a MAPI provider - well, then I'm dying to hear about it! Using an external address book and ditching our own is not an option that would serve our customers. A good, working code sample using vanilla MAPI is of course also acceptable. ;-)
[ "Zarafa just released their 100% MAPI compatible groupware suite as GPL. Maybe that's useful for you?\nEDIT: The link is slashdotted. More info here.\n" ]
[ 1 ]
[]
[]
[ "c++", "mapi" ]
stackoverflow_0000106243_c++_mapi.txt
Q: How to get scientific results from non-experimental data (datamining?) I want to obtain maximum performance out of a process with many variables, many of which cannot be controlled. I cannot run thousands of experiments, so it'd be nice if I could run hundreds of experiments and vary many controllable parameters collect data on many parameters indicating performance 'correct,' as much as possible, for those parameters I couldn't control Tease out the 'best' values for those things I can control, and start all over again It feels like this would be called data mining, where you're going through tons of data which doesn't immediately appear to relate, but does show correlation after some effort. So... Where do I start looking at algorithms, concepts, theory of this sort of thing? Even related terms for purposes of search would be useful. Background: I like to do ultra-marathon cycling, and keep logs of each ride. I'd like to keep more data, and after hundreds of rides be able to pull out information about how I perform. However, everything varies - routes, environment (temp, pres., hum., sun load, wind, precip., etc), fuel, attitude, weight, water load, etc, etc, etc. I can control a few things, but running the same route 20 times to test out a new fuel regime would just be depressing, and take years to perform all the experiments that I'd like to do. I can, however, record all these things and more(telemetry on bicycle FTW). A: It sounds like you want to do some regression analysis. You certainly have plenty of data! Regression analysis is an extremely common modeling technique in statistics and science. (It could be argued that statistics is the art and science of regression analysis.) There are many statistics packages out there to do the computation you'll need. (I'd recommend one, but I'm years out of date.) Data mining has gotten a bad name because far too often people assume correlation equals causation. I found that a good technique is to start with variables you know have an influence and build a statistical model around them first. So you know that wind, weight and climb have an influence on how fast you can travel and statistical software can take your dataset and calculate what the correlation between those factors are. That will give you a statistical model or linear equation: speed = x*weight + y*wind + z*climb + constant When you explore new variables, you will be able to see if the model is improved or not by comparing a goodness of fit metric like R-squared. So you might check if temperature or time of day adds anything to the model. You may want to apply a transformation to you data. For instance, you might find that you perform better on colder days. But really cold days and really hot days might hurt performance. In that case, you could assign temperatures to bins or segments: < 0°C; 0°C to 40°C; > 40°C, or some such. The key is to transform the data in a way that matches a rational model of what is going on in the real world, not just the data itself. In case someone thinks this is not a programming related topic, notice that you can use these same techniques to analyze system performance. A: With that many variables you have too many dimensions and you may want to look at Principal Component Analysis. It takes some of the "art" out of regression analysis and lets the data speak for itself. Some software to do that sort of analysis is shown at the bottom of the link. A: I have used the Perl module Statistics::Regression for somewhat similar problems in the past. 
Be warned, however, that regression analysis is definitely an art. As the warning in the Perl module says, it won't make sense to you if you haven't learned the appropriate math.
How to get scientific results from non-experimental data (datamining?)
I want to obtain maximum performance out of a process with many variables, many of which cannot be controlled. I cannot run thousands of experiments, so it'd be nice if I could run hundreds of experiments and vary many controllable parameters collect data on many parameters indicating performance 'correct,' as much as possible, for those parameters I couldn't control Tease out the 'best' values for those things I can control, and start all over again It feels like this would be called data mining, where you're going through tons of data which doesn't immediately appear to relate, but does show correlation after some effort. So... Where do I start looking at algorithms, concepts, theory of this sort of thing? Even related terms for purposes of search would be useful. Background: I like to do ultra-marathon cycling, and keep logs of each ride. I'd like to keep more data, and after hundreds of rides be able to pull out information about how I perform. However, everything varies - routes, environment (temp, pres., hum., sun load, wind, precip., etc), fuel, attitude, weight, water load, etc, etc, etc. I can control a few things, but running the same route 20 times to test out a new fuel regime would just be depressing, and take years to perform all the experiments that I'd like to do. I can, however, record all these things and more(telemetry on bicycle FTW).
[ "It sounds like you want to do some regression analysis. You certainly have plenty of data!\n\nRegression analysis is an extremely common modeling technique in statistics and science. (It could be argued that statistics is the art and science of regression analysis.) There are many statistics packages out there to do the computation you'll need. (I'd recommend one, but I'm years out of date.)\nData mining has gotten a bad name because far too often people assume correlation equals causation. I found that a good technique is to start with variables you know have an influence and build a statistical model around them first. So you know that wind, weight and climb have an influence on how fast you can travel and statistical software can take your dataset and calculate what the correlation between those factors are. That will give you a statistical model or linear equation:\nspeed = x*weight + y*wind + z*climb + constant\n\nWhen you explore new variables, you will be able to see if the model is improved or not by comparing a goodness of fit metric like R-squared. So you might check if temperature or time of day adds anything to the model.\nYou may want to apply a transformation to you data. For instance, you might find that you perform better on colder days. But really cold days and really hot days might hurt performance. In that case, you could assign temperatures to bins or segments: < 0°C; 0°C to 40°C; > 40°C, or some such. The key is to transform the data in a way that matches a rational model of what is going on in the real world, not just the data itself.\n\nIn case someone thinks this is not a programming related topic, notice that you can use these same techniques to analyze system performance.\n", "With that many variables you have too many dimensions and you may want to look at Principal Component Analysis. It takes some of the \"art\" out of regression analysis and lets the data speak for itself. Some software to do that sort of analysis is shown at the bottom of the link.\n", "I have used the Perl module Statistics::Regression for somewhat similar problems in the past. Be warned, however, that regression analysis is definitely an art. As the warning in the Perl module says, it won't make sense to you if you haven't learned the appropriate math.\n" ]
[ 2, 2, 1 ]
[]
[]
[ "algorithm", "data_mining" ]
stackoverflow_0000105996_algorithm_data_mining.txt
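As a small, programming-flavoured illustration of the speed = x*weight + y*wind + z*climb + constant idea in the first answer, here is the one-predictor case (ordinary least squares for a single variable) in C#. It is purely a sketch with invented names, not something from the answers, and real multi-variable work is better left to a statistics package as the answer suggests.

using System;

static class SimpleRegression
{
    // fits y = slope * x + intercept by minimizing the sum of squared errors,
    // e.g. x = metres climbed per ride, y = average speed for that ride
    public static void Fit(double[] x, double[] y, out double slope, out double intercept)
    {
        if (x.Length != y.Length || x.Length < 2)
            throw new ArgumentException("need two or more paired observations");

        double meanX = 0, meanY = 0;
        for (int i = 0; i < x.Length; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= x.Length;
        meanY /= x.Length;

        double covXY = 0, varX = 0;
        for (int i = 0; i < x.Length; i++)
        {
            covXY += (x[i] - meanX) * (y[i] - meanY);
            varX  += (x[i] - meanX) * (x[i] - meanX);
        }

        slope = covXY / varX;           // how much the outcome changes per unit of the predictor
        intercept = meanY - slope * meanX;
    }
}

Comparing the residuals (or an R-squared computed from them) before and after adding a candidate variable is the does-the-model-improve check the answer describes.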
Q: Which thread should I process the RxTx SerialEvent.DATA_AVAILABLE event? I'm using the RxTx library over usbserial on a Linux distro. The RxTx lib seems to behave quite differently (in a bad way) than how it works over serial. My application has several threads and one of my biggest problems is that out of nowhere, I seem to be getting one to two extra bytes on my stream. I can't figure out where they come from or why. This problem seems to occur a lot more frequently when I write to the RxTx stream using another thread. So I was wonder if I should process the read on the current RxTx thread or should I process the read on another thread when I get the DATA_AVAILABLE event. I'm hoping someone might have good or bad reasons for doing it one way or the other. A: This is just a guess, but it may give you a clue. Is it possible that the send and receive shares a buffer, or that when you send, the bytes are also received on the input somehow - I have seen this before on some embedded systems. You may find the best thing to do is initially keep both send and receive on the same thread. Another thing may be to make sure the output drains before trying a read. Hopefully this may give you some clue.
Which thread should I process the RxTx SerialEvent.DATA_AVAILABLE event?
I'm using the RxTx library over usbserial on a Linux distro. The RxTx lib seems to behave quite differently (in a bad way) than how it works over serial. My application has several threads and one of my biggest problems is that out of nowhere, I seem to be getting one to two extra bytes on my stream. I can't figure out where they come from or why. This problem seems to occur a lot more frequently when I write to the RxTx stream using another thread. So I was wonder if I should process the read on the current RxTx thread or should I process the read on another thread when I get the DATA_AVAILABLE event. I'm hoping someone might have good or bad reasons for doing it one way or the other.
[ "This is just a guess, but it may give you a clue.\nIs it possible that the send and receive shares a buffer, or that when you send, the bytes are also received on the input somehow - I have seen this before on some embedded systems. \nYou may find the best thing to do is initially keep both send and receive on the same thread. Another thing may be to make sure the output drains before trying a read.\nHopefully this may give you some clue.\n" ]
[ 1 ]
[]
[]
[ "linux", "rxtx" ]
stackoverflow_0000084755_linux_rxtx.txt
Q: File format for generating dynamic reports in applications We generate dynamic reports in all of our business web applications written for .Net and J2EE. On the server side we use ActiveReports.Net and JasperReports to generate the reports. We then export them to PDF to send down to the browser. Our clients all use Adobe Reader. We have endless problems with the different versions of Adobe Reader and how they are setup on the client. What file format/readers are others using for their dynamic reports? We need something that allows for precise layout as many of the reports are forms that are printed with data from out systems. HTML is not expressive enough. A: I've used SQL Reporting Services for this purpose. You can design a report template in Visual Studio or generate the XML for the report on the fly in code. You can then have SSRS export the report to about 10 different formats and send to the client including pdf, excel, html, etc. You can also write your own plugin to export to your own format. Crystal Reports has a similar product thats more expensive but has a better report designer. A: I've always had the most success using PDFs to accomplish this. I can't think of a more universally acceptable format that does what you are trying to do. Rather than looking for another format, perhaps it would be better to try to understand how to overcome the problems that you are experiencing with Acrobat on the client side. Can you provide some more information on the types of problems that you are experiencing with Acrobat? A: I does know only 3(4) possible viewer(formats) for reporting in browser. PDF Flash Java (Silverlihgt) For all 3 there are reporting solutions. Silverlight are to new and I does not know a solution. You can test how flash and Java in your intranet work and then search a reporting solution. I think PDF should be made the few problems if you use the newest readers. The old readers has many bad bugs.
File format for generating dynamic reports in applications
We generate dynamic reports in all of our business web applications written for .Net and J2EE. On the server side we use ActiveReports.Net and JasperReports to generate the reports. We then export them to PDF to send down to the browser. Our clients all use Adobe Reader. We have endless problems with the different versions of Adobe Reader and how they are set up on the client. What file format/readers are others using for their dynamic reports? We need something that allows for precise layout as many of the reports are forms that are printed with data from our systems. HTML is not expressive enough.
[ "I've used SQL Reporting Services for this purpose. You can design a report template in Visual Studio or generate the XML for the report on the fly in code. You can then have SSRS export the report to about 10 different formats and send to the client including pdf, excel, html, etc. You can also write your own plugin to export to your own format.\nCrystal Reports has a similar product thats more expensive but has a better report designer.\n", "I've always had the most success using PDFs to accomplish this. I can't think of a more universally acceptable format that does what you are trying to do. Rather than looking for another format, perhaps it would be better to try to understand how to overcome the problems that you are experiencing with Acrobat on the client side. Can you provide some more information on the types of problems that you are experiencing with Acrobat? \n", "I does know only 3(4) possible viewer(formats) for reporting in browser.\n\nPDF\nFlash\nJava\n(Silverlihgt)\n\nFor all 3 there are reporting solutions. Silverlight are to new and I does not know a solution. You can test how flash and Java in your intranet work and then search a reporting solution. I think PDF should be made the few problems if you use the newest readers. The old readers has many bad bugs.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "adobe_reader", "pdf" ]
stackoverflow_0000046133_adobe_reader_pdf.txt
Q: Detecting when ChucK child shred has finished Is it possible to determine when a ChucK child shred has finished executing if you have a reference to the child shred? For example, in this code: // define function go() fun void go() { // insert code } // spork another, store reference to new shred in offspring spork ~ go() => Shred @ offspring; Is it possible to determine when offspring is done executing? A: I'd say so, let me quote from the "VERSIONS" file from the latest release; - (added) int Shred.done() // is the shred done? int Shred.running() // is the shred running? I'm not 100% sure what "running" is supposed to refer to (perhaps I misunderstand it?) but "done" seems to suit your needs; ================== 8<================ fun void foo() { second => now; } spork ~ foo() @=> Shred bar; <<<bar.done()>>>; <<<bar.running()>>>; // why is this 0? Bug? 2::second => now; <<<bar.done()>>>; <<<bar.running()>>>; ==========8<====================== Please note that calling these on a Shred object with no shred process attached to it will return more or less random numbers which is probably a bug. ---Answer from Kassen on chuck-users mailing list.
Detecting when ChucK child shred has finished
Is it possible to determine when a ChucK child shred has finished executing if you have a reference to the child shred? For example, in this code: // define function go() fun void go() { // insert code } // spork another, store reference to new shred in offspring spork ~ go() => Shred @ offspring; Is it possible to determine when offspring is done executing?
[ "I'd say so, let me quote from the \"VERSIONS\" file from the latest release;\n - (added) int Shred.done() // is the shred done?\n int Shred.running() // is the shred running? \n\nI'm not 100% sure what \"running\" is supposed to refer to (perhaps I misunderstand it?) but \"done\" seems to suit your needs;\n================== 8<================\nfun void foo()\n {\n second => now;\n }\n\nspork ~ foo() @=> Shred bar;\n\n<<<bar.done()>>>;\n<<<bar.running()>>>; // why is this 0? Bug?\n2::second => now;\n<<<bar.done()>>>;\n<<<bar.running()>>>;\n\n==========8<======================\nPlease note that calling these on a Shred object with no shred process attached to it will return more or less random numbers which is probably a bug.\n---Answer from Kassen on chuck-users mailing list.\n" ]
[ 4 ]
[]
[]
[ "chuck" ]
stackoverflow_0000105771_chuck.txt
Q: Can anyone recommend a Silverlight 2 book? Even though Silverlight2 is still in its infancy, can anyone recommend a book to get started with? One that has more of a developer focus than a designer one? A: I have this one pre-ordered: "Programming Silverlight 2" by Jesse Liberty and Tim Heuer. The authors are both employed by Microsoft working on Silverlight 2, and their blogs are great, so I expect the book (to be released after RTM) to be up to date. A: I have seen some of the work done by Laurent... wait for his book (source: galasoft.ch) Sams Silverlight 2 Unleashed A: I'm currently working my way through Introducing Microsoft Silverlight 2, so far so good. It's a typical Microsoft book serving up the koolaid, but gives a good introduction. I saw the guy speak at one of the local .NET User Groups in the Metro Detroit area and was impressed. alt text http://ecx.images-amazon.com/images/I/51YD6H7PQyL._SL500_BO2,204,203,200_PIsitb-dp-500-arrow,TopRight,45,-64_OU01_AA240_SH20_.jpg A: I've started reading Silverlight 2 in Action which seems pretty ok. One good thing is that because they have an early access program, you can get most of the book as an electronic copy already now. Even though the full book is not due to be released until sometime in October. A: A lot of the books available are simply a rehash of the help files. Your best bet would be to start at silverlight.net and read the Help files that get installed with the SDK. Then when Jesse Liberty's book is released read that.
Can anyone recommend a Silverlight 2 book?
Even though Silverlight2 is still in its infancy, can anyone recommend a book to get started with? One that has more of a developer focus than a designer one?
[ "I have this one pre-ordered: \"Programming Silverlight 2\"\nby Jesse Liberty and Tim Heuer. The authors are both employed by Microsoft working on Silverlight 2, and their blogs are great, so I expect the book (to be released after RTM) to be up to date.\n", "I have seen some of the work done by Laurent... wait for his book\n\n(source: galasoft.ch)\nSams Silverlight 2 Unleashed\n", "I'm currently working my way through Introducing Microsoft Silverlight 2, so far so good. \nIt's a typical Microsoft book serving up the koolaid, but gives a good introduction. I saw the guy speak at one of the local .NET User Groups in the Metro Detroit area and was impressed.\nalt text http://ecx.images-amazon.com/images/I/51YD6H7PQyL._SL500_BO2,204,203,200_PIsitb-dp-500-arrow,TopRight,45,-64_OU01_AA240_SH20_.jpg\n", "I've started reading Silverlight 2 in Action which seems pretty ok. One good thing is that because they have an early access program, you can get most of the book as an electronic copy already now. Even though the full book is not due to be released until sometime in October.\n", "A lot of the books available are simply a rehash of the help files. Your best bet would be to start at silverlight.net and read the Help files that get installed with the SDK. Then when Jesse Liberty's book is released read that.\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ "silverlight" ]
stackoverflow_0000060822_silverlight.txt
Q: Most efficient way to see if an item is or is not in a listbox control This request is based in MS Access VBA. I would like to know what the most efficient way is, to see if an item exists in a listbox control. A: Here is a sample function that might be adapted to suit. Function CheckForItem(strItem, ListB As ListBox) As Boolean Dim rs As DAO.Recordset Dim db As Database Dim tdf As TableDef Set db = CurrentDb CheckForItem = False Select Case ListB.RowSourceType Case "Value List" CheckForItem = InStr(ListB.RowSource, strItem) > 0 Case "Table/Query" Set rs = db.OpenRecordset(ListB.RowSource) For i = 0 To rs.Fields.Count - 1 strList = strList & " & "","" & " & rs.Fields(i).Name Next rs.FindFirst "Instr(" & Mid(strList, 10) & ",'" & strItem & "')>0" If Not rs.EOF Then CheckForItem = True Case "Field List" Set tdf = db.TableDefs(ListB.RowSource) For Each itm In tdf.Fields If itm.Name = strItem Then CheckForItem = True Next End Select End Function A: Unfortunately there is no more efficient way than a linear search, unless you know that your listbox is sorted or indexed in some particular fashion. For i = 1 To TheComboBoxControl.ListCount if TheComboBoxControl.ItemData(i) = "Item to search for" Then do_something() Next i A: If you don't mind resorting to the Windows API you can search for a string like this: Private Declare Function SendMessage Lib "user32" Alias "SendMessageA" (ByVal hwnd As Long, ByVal wMsg As Long, ByVal wParam As Long, lParam As Any) As Long Private Const LB_FINDSTRINGEXACT = &H1A2 Dim index as Integer Dim searchString as String searchString = "Target" & Chr(0) index = SendMessage(ListBox1.hWnd, LB_FINDSTRINGEXACT , -1, searchString) Which should return the index of the row that contains the target string.
Most efficient way to see if an item is or is not in a listbox control
This request is based in MS Access VBA. I would like to know what the most efficient way is, to see if an item exists in a listbox control.
[ "Here is a sample function that might be adapted to suit.\nFunction CheckForItem(strItem, ListB As ListBox) As Boolean\nDim rs As DAO.Recordset\nDim db As Database\nDim tdf As TableDef\n\n Set db = CurrentDb\n\n CheckForItem = False\n\n Select Case ListB.RowSourceType\n Case \"Value List\"\n CheckForItem = InStr(ListB.RowSource, strItem) > 0\n\n Case \"Table/Query\"\n Set rs = db.OpenRecordset(ListB.RowSource)\n\n For i = 0 To rs.Fields.Count - 1\n strList = strList & \" & \"\",\"\" & \" & rs.Fields(i).Name\n Next\n\n rs.FindFirst \"Instr(\" & Mid(strList, 10) & \",'\" & strItem & \"')>0\"\n\n If Not rs.EOF Then CheckForItem = True\n\n Case \"Field List\"\n\n Set tdf = db.TableDefs(ListB.RowSource)\n\n For Each itm In tdf.Fields\n If itm.Name = strItem Then CheckForItem = True\n Next\n\n End Select\n\nEnd Function\n\n", "Unfortunately there is no more efficient way than a linear search, unless you know that your listbox is sorted or indexed in some particular fashion.\nFor i = 1 To TheComboBoxControl.ListCount\n if TheComboBoxControl.ItemData(i) = \"Item to search for\" Then do_something()\nNext i\n\n", "If you don't mind resorting to the Windows API you can search for a string like this: \nPrivate Declare Function SendMessage Lib \"user32\" Alias \"SendMessageA\" (ByVal hwnd As Long, ByVal wMsg As Long, ByVal wParam As Long, lParam As Any) As Long \nPrivate Const LB_FINDSTRINGEXACT = &H1A2\n\nDim index as Integer\nDim searchString as String\nsearchString = \"Target\" & Chr(0)\n\nindex = SendMessage(ListBox1.hWnd, LB_FINDSTRINGEXACT , -1, searchString)\n\nWhich should return the index of the row that contains the target string.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "controls", "listbox", "ms_access", "vba" ]
stackoverflow_0000105935_controls_listbox_ms_access_vba.txt
Q: How can I make a batch file to act like a simple grep using Perl? I already know the obvious answer to this question: "just download <insert favorite windows grep or grep-like tool here>". However, I work in an environment with strict controls by the local IT staff as to what we're allowed to have on our computers. Suffice it to say: I have access to Perl on Windows XP. Here's a quick Perl script I came up with that does what I want, but I haven't figured up how to set up a batch file such that I can either pipe a command output into it, or pass a file (or list of files?) as an argument after the "expression to grep": perl -n -e "print $_ if (m![expression]!);" [filename] How do I write a batch script that I can do something like, for example: dir | grep.bat mypattern grep.bat mypattern myfile.txt EDIT: Even though I marked another "answer", I wanted to give kudos to Ray Hayes answer, as it is really the "Windows Way" to do it, even if another answer is technically closer to what I wanted. A: Most of the power of grep is already available on your machine in the Windows application FindStr.exe which is part of all Windows 2000, XP and Vista machines! It offers RegExpr etc. Far easier than a batch file which in turn calls Perl! c:\>FindStr /? Searches for strings in files. FINDSTR [/B] [/E] [/L] [/R] [/S] [/I] [/X] [/V] [/N] [/M] [/O] [/P] [/F:file] [/C:string] [/G:file] [/D:dir list] [/A:color attributes] [/OFF[LINE]] strings [[drive:][path]filename[ ...]] /B Matches pattern if at the beginning of a line. /E Matches pattern if at the end of a line. /L Uses search strings literally. /R Uses search strings as regular expressions. /S Searches for matching files in the current directory and all subdirectories. /I Specifies that the search is not to be case-sensitive. /X Prints lines that match exactly. /V Prints only lines that do not contain a match. /N Prints the line number before each line that matches. /M Prints only the filename if a file contains a match. /O Prints character offset before each matching line. /P Skip files with non-printable characters. /OFF[LINE] Do not skip files with offline attribute set. /A:attr Specifies color attribute with two hex digits. See "color /?" /F:file Reads file list from the specified file(/ stands for console). /C:string Uses specified string as a literal search string. /G:file Gets search strings from the specified file(/ stands for console). /D:dir Search a semicolon delimited list of directories strings Text to be searched for. [drive:][path]filename Specifies a file or files to search. Use spaces to separate multiple search strings unless the argument is prefixed with /C. For example, 'FINDSTR "hello there" x.y' searches for "hello" or "there" in file x.y. 'FINDSTR /C:"hello there" x.y' searches for "hello there" in file x.y. Regular expression quick reference: . Wildcard: any character * Repeat: zero or more occurances of previous character or class ^ Line position: beginning of line $ Line position: end of line [class] Character class: any one character in set [^class] Inverse class: any one character not in set [x-y] Range: any characters within the specified range \x Escape: literal use of metacharacter x \<xyz Word position: beginning of word xyz\> Word position: end of word A: Download and install ack. It's a superior replacement to grep and - thanks to Perl's magic dual mode .BAT / Perl script magic - it'll work on the command line for you. 
A: I wrote this a while back: @rem = '--*-Perl-*-- @echo off perl -x -S %0 %* goto endofperl @rem -- BEGIN PERL -- '; #!d:/Perl/bin/perl.exe -w #line 10 use strict; #use Test::Setup; use Getopt::Long; Getopt::Long::Configure ("bundling"); my $ignore_case = 0; my $number_line = 0; my $invert_results = 0; my $verbose = 0; my $result = GetOptions( 'i|ignore_case' => \$ignore_case, 'n|number' => \$number_line, 'v|invert' => \$invert_results, 'verbose' => \$verbose, ); my $regex = shift; if ( $ignore_case ) { $regex = "(?i:$regex)"; } $regex = qr/$regex/; print "\$regex=$regex\n"; if ( $verbose ) { print "Verbose: Ignoring case.\n" if $ignore_case; print "Verbose: Printing file name and line number.\n" if $number_line; print "Verbose: Inverting result set.\n" if $invert_results; print "\n"; } @ARGV = map { glob "$_" } @ARGV; while ( <> ) { my $matches = m/$regex/; next unless $matches ^ $invert_results; print "$ARGV\:$.:" if $number_line; print; } __END__ :endofperl A: First, turn it into a real script instead of a one-liner: use strict; use warnings; my $pattern = shift or die "Usage: $0 <pattern> [files|-]\n"; while (<>) { print if /$pattern/ } Then turn it into a batch file using pl2bat: pl2bat mygrep.pl This will create "mygrep.bat". For a full-featured grep (and many other Unix applications) written completely in Perl, see the Perl Power Tools project. While the Perl Power Tools are good if you can only run Perl, I generally prefer the set of GnuWin32 tools. They don't require installation. (You don't need administrative privileges, just a directory you can write to.) A: You need to do something like this: @echo off perl -x -S script.pl %1 The "%1" will pass the argument to the Perl script. Save it as a .bat file, and you're good to go. A: I agree with Axeman and Mr. Hayes about using a better tool for the job. That said, you could try something like this in your batch file to run your custom script against a file wildcard expression: @echo off for /f "usebackq delims==" %%f in (`dir /w /b %2`) do ( perl -n -e "print $_ if (m!%1!);" "%%f" REM or something like: myperlscript.pl %1 "%%f" ) In this way, you can do things like "grep mypattern myfile.txt", "grep mypattern .", "grep mypattern *.doc", etc.
How can I make a batch file to act like a simple grep using Perl?
I already know the obvious answer to this question: "just download <insert favorite windows grep or grep-like tool here>". However, I work in an environment with strict controls by the local IT staff as to what we're allowed to have on our computers. Suffice it to say: I have access to Perl on Windows XP. Here's a quick Perl script I came up with that does what I want, but I haven't figured up how to set up a batch file such that I can either pipe a command output into it, or pass a file (or list of files?) as an argument after the "expression to grep": perl -n -e "print $_ if (m![expression]!);" [filename] How do I write a batch script that I can do something like, for example: dir | grep.bat mypattern grep.bat mypattern myfile.txt EDIT: Even though I marked another "answer", I wanted to give kudos to Ray Hayes answer, as it is really the "Windows Way" to do it, even if another answer is technically closer to what I wanted.
[ "Most of the power of grep is already available on your machine in the Windows application FindStr.exe which is part of all Windows 2000, XP and Vista machines! It offers RegExpr etc.\nFar easier than a batch file which in turn calls Perl!\nc:\\>FindStr /? \nSearches for strings in files.\n\nFINDSTR [/B] [/E] [/L] [/R] [/S] [/I] [/X] [/V] [/N] [/M] [/O] [/P] [/F:file]\n [/C:string] [/G:file] [/D:dir list] [/A:color attributes] [/OFF[LINE]]\n strings [[drive:][path]filename[ ...]]\n\n /B Matches pattern if at the beginning of a line.\n /E Matches pattern if at the end of a line.\n /L Uses search strings literally.\n /R Uses search strings as regular expressions.\n /S Searches for matching files in the current directory and all\n subdirectories.\n /I Specifies that the search is not to be case-sensitive.\n /X Prints lines that match exactly.\n /V Prints only lines that do not contain a match.\n /N Prints the line number before each line that matches.\n /M Prints only the filename if a file contains a match.\n /O Prints character offset before each matching line.\n /P Skip files with non-printable characters.\n /OFF[LINE] Do not skip files with offline attribute set.\n /A:attr Specifies color attribute with two hex digits. See \"color /?\"\n /F:file Reads file list from the specified file(/ stands for console).\n /C:string Uses specified string as a literal search string.\n /G:file Gets search strings from the specified file(/ stands for console).\n /D:dir Search a semicolon delimited list of directories\n strings Text to be searched for.\n [drive:][path]filename\n Specifies a file or files to search.\n\nUse spaces to separate multiple search strings unless the argument is prefixed\nwith /C. For example, 'FINDSTR \"hello there\" x.y' searches for \"hello\" or\n\"there\" in file x.y. 'FINDSTR /C:\"hello there\" x.y' searches for\n\"hello there\" in file x.y.\n\nRegular expression quick reference:\n . Wildcard: any character\n * Repeat: zero or more occurances of previous character or class\n ^ Line position: beginning of line\n $ Line position: end of line\n [class] Character class: any one character in set\n [^class] Inverse class: any one character not in set\n [x-y] Range: any characters within the specified range\n \\x Escape: literal use of metacharacter x\n \\<xyz Word position: beginning of word\n xyz\\> Word position: end of word\n\n", "Download and install ack. 
It's a superior replacement to grep and - thanks to Perl's magic dual mode .BAT / Perl script magic - it'll work on the command line for you.\n", "I wrote this a while back:\n@rem = '--*-Perl-*--\n@echo off\nperl -x -S %0 %*\ngoto endofperl\n\n\n@rem -- BEGIN PERL -- ';\n#!d:/Perl/bin/perl.exe -w\n#line 10\nuse strict; \n#use Test::Setup;\nuse Getopt::Long;\n\nGetopt::Long::Configure (\"bundling\");\n\nmy $ignore_case = 0;\nmy $number_line = 0;\nmy $invert_results = 0;\nmy $verbose = 0;\n\nmy $result = GetOptions( \n 'i|ignore_case' => \\$ignore_case, \n 'n|number' => \\$number_line,\n 'v|invert' => \\$invert_results,\n 'verbose' => \\$verbose,\n);\nmy $regex = shift;\n\nif ( $ignore_case ) { \n $regex = \"(?i:$regex)\";\n}\n$regex = qr/$regex/;\nprint \"\\$regex=$regex\\n\";\nif ( $verbose ) { \n print \"Verbose: Ignoring case.\\n\" if $ignore_case;\n print \"Verbose: Printing file name and line number.\\n\" if $number_line;\n print \"Verbose: Inverting result set.\\n\" if $invert_results;\n print \"\\n\";\n}\n\n@ARGV = map { glob \"$_\" } @ARGV;\n\nwhile ( <> ) { \n my $matches = m/$regex/;\n next unless $matches ^ $invert_results;\n print \"$ARGV\\:$.:\" if $number_line;\n print;\n}\n\n__END__\n:endofperl\n\n", "First, turn it into a real script instead of a one-liner:\nuse strict;\nuse warnings;\n\nmy $pattern = shift or die \"Usage: $0 <pattern> [files|-]\\n\";\nwhile (<>) { print if /$pattern/ }\n\nThen turn it into a batch file using pl2bat:\npl2bat mygrep.pl\n\nThis will create \"mygrep.bat\".\nFor a full-featured grep (and many other Unix applications) written completely in Perl, see the Perl Power Tools project.\nWhile the Perl Power Tools are good if you can only run Perl, I generally prefer the set of GnuWin32 tools. They don't require installation. (You don't need administrative privileges, just a directory you can write to.)\n", "You need to do something like this:\n@echo off\nperl -x -S script.pl %1\n\nThe \"%1\" will pass the argument to the Perl script. Save it as a .bat file, and you're good to go.\n", "I agree with Axeman and Mr. Hayes about using a better tool for the job. That said, you could try something like this in your batch file to run your custom script against a file wildcard expression:\n@echo off\n\nfor /f \"usebackq delims==\" %%f in (`dir /w /b %2`) do (\n perl -n -e \"print $_ if (m!%1!);\" \"%%f\"\n REM or something like: myperlscript.pl %1 \"%%f\"\n)\n\nIn this way, you can do things like \"grep mypattern myfile.txt\", \"grep mypattern .\", \"grep mypattern *.doc\", etc.\n" ]
[ 27, 12, 5, 5, 1, 1 ]
[]
[]
[ "batch_file", "perl", "windows" ]
stackoverflow_0000106053_batch_file_perl_windows.txt
Q: What is the most stable, least intrusive way to track web traffic between two sites? I need to track traffic between a specific set of web sites. I would then store the number of clicks in a database table with the fields fromSite, toSite, day, noOfClicks. The complete urls are unimportant - only web site identity is needed. I've ruled out redirects since I don't want my server to be a single point of failure. I want the links to work even if the tracking application or server is down or overloaded. Another goal is to minimize the work each participating site has to do in order for the tracking to work. What would be the best way to solve this problem? A: The best way is to use an analytics program like Google Analytics, and to review the reports for each domain. A: Do you have access to the logs on all of the sites in question? If so, you should be able to extract that data from the log files (Referer header). A: You could place an onclick event on to each link that goes between each site. the onclick would then call some javascript which could do a server call back to register the click.
What is the most stable, least intrusive way to track web traffic between two sites?
I need to track traffic between a specific set of web sites. I would then store the number of clicks in a database table with the fields fromSite, toSite, day, noOfClicks. The complete urls are unimportant - only web site identity is needed. I've ruled out redirects since I don't want my server to be a single point of failure. I want the links to work even if the tracking application or server is down or overloaded. Another goal is to minimize the work each participating site has to do in order for the tracking to work. What would be the best way to solve this problem?
[ "The best way is to use an analytics program like Google Analytics, and to review the reports for each domain.\n", "Do you have access to the logs on all of the sites in question? If so, you should be able to extract that data from the log files (Referer header).\n", "You could place an onclick event on to each link that goes between each site. the onclick would then call some javascript which could do a server call back to register the click.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "tracking", "traffic" ]
stackoverflow_0000106285_tracking_traffic.txt
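If the log-file route suggested above is workable, the counting itself is a small job. The following is only a rough sketch of that idea — the combined log format, the field positions, and the ReferrerCounter class name are assumptions made for illustration, not details taken from the question or answers:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

// Rough sketch: tally clicks between sites from a combined-format access log,
// keyed by (fromSite -> toSite, day). The log layout is an assumption.
public class ReferrerCounter {
    public static void main(String[] args) throws Exception {
        String toSite = args[0];                      // the site whose log this is
        Map<String, Integer> clicks = new HashMap<String, Integer>();

        BufferedReader in = new BufferedReader(new FileReader(args[1]));
        String line;
        while ((line = in.readLine()) != null) {
            // combined format: host ident user [date] "request" status bytes "referer" "agent"
            String[] quoted = line.split("\"");
            if (quoted.length < 4) continue;
            String referer = quoted[3];
            if (referer.length() == 0 || referer.equals("-")) continue;

            int open = line.indexOf('[');
            int colon = (open < 0) ? -1 : line.indexOf(':', open);
            if (colon < 0) continue;
            String day = line.substring(open + 1, colon); // e.g. 10/Oct/2008

            String fromSite;
            try {
                fromSite = new java.net.URI(referer).getHost();
            } catch (java.net.URISyntaxException badReferer) {
                continue;                             // skip malformed referer values
            }
            if (fromSite == null) continue;

            String key = fromSite + " -> " + toSite + " " + day;
            Integer n = clicks.get(key);
            clicks.put(key, n == null ? 1 : n + 1);
        }
        in.close();

        for (Map.Entry<String, Integer> entry : clicks.entrySet()) {
            System.out.println(entry.getKey() + "  " + entry.getValue());
        }
    }
}

Each (fromSite, toSite, day) count produced this way could then be written into the fromSite/toSite/day/noOfClicks table described in the question.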
Q: How do I change the colors displayed in cygwin rxvt? When I print "\[\e[34m\]sometext" I get some text in blue, but can I specify the shade of blue somewhere? A: If your rxvt has 256 colors enabled, you can use extended color codes (e.g. "^[[38;5;36m"). Try this script to test if your terminal has 256 colors enabled. Probably you need to patch/recompile it or download another version. I recommend this tutorial. A: You're using an ANSI escape sequence, which has very limited color options. I use an .Xdefaults file (explained in this tutorial). These options won't make your shell prompt all colorful, but is used by editors such as vim. Be aware that .Xdefaults can be picky about UNIX (CR) vs Windows (CRLF) line endings. Use d2u or u2d to switch line endings as necessary.
How do I change the colors displayed in cygwin rxvt?
When I print "\[\e[34m\]sometext" I get some text in blue, but can I specify the shade of blue somewhere?
[ "If your rxvt has 256 colors enabled, you can use extended color codes (e.g. \"^[[38;5;36m\"). Try this script to test if your terminal has 256 colors enabled. Probably you need to patch/recompile it or download another version. I recommend this tutorial.\n", "You're using an ANSI escape sequence, which has very limited color options. I use an .Xdefaults file (explained in this tutorial). These options won't make your shell prompt all colorful, but is used by editors such as vim.\nBe aware that .Xdefaults can be picky about UNIX (CR) vs Windows (CRLF) line endings. Use d2u or u2d to switch line endings as necessary.\n" ]
[ 3, 1 ]
[]
[]
[ "bash", "cygwin", "rxvt" ]
stackoverflow_0000099348_bash_cygwin_rxvt.txt
Q: Polymorphic Models in Ruby on Rails? For a project I'm working on, the store has two types of products - a real product and a group of products. For this discussion, let's call them "1 T shirt" and "a box of T shirts". For one t-shirt, I need to store the normal attributes - price, sku, size, color, description, etc. For the box of t-shirts I need to have a price, sku, description, and a list of t-shirts that are included. So right now, I'm representing this with the Shirt and ShirtCollection models. I can see this causing difficulty down the road when I need to do reporting and order management and making sure SKUs are unique. So what's the best way of representing this? A: You can have a Tshirt table and then self reference it with a has_many :through association. Tshirt - id, sku, price, size, color, description, is_box TshirtBox - parent_tshirt (id that references tshirt), child_tshirt (id that references tshirt) Check out this link for more on self referential has_many :through http://www.aldenta.com/2006/11/10/has_many-through-self-referential-example/ A: I would have the following models Tshirt TshirtBox has_many TshirtItems TshirtBoxItems (This is basically a join table with an id tshirt_box_id and tshirt_id) belongs_to TshirtBox TshirtBoxItems is a way to link a Tshirt with a box and potentially other things in the future.
Polymorphic Models in Ruby on Rails?
For a project I'm working on, the store has two types of products - a real product and a group of products. For this discussion, let's call them "1 T shirt" and "a box of T shirts". For one t-shirt, I need to store the normal attributes - price, sku, size, color, description, etc. For the box of t-shirts I need to have a price, sku, description, and a list of t-shirts that are included. So right now, I'm representing this with the Shirt and ShirtCollection models. I can see this causing difficulty down the road when I need to do reporting and order management and making sure SKUs are unique. So what's the best way of representing this?
[ "You can have a Tshirt table and then self reference it with a has_many :through association.\nTshirt - id, sku, price, size, color, description, is_box\nTshirtBox - parent_tshirt (id that references tshirt), child_tshirt (id that references tshirt)\nCheck out this link for more on self referential has_many :through http://www.aldenta.com/2006/11/10/has_many-through-self-referential-example/\n", "I would have the following models\nTshirt\nTshirtBox has_many TshirtItems\nTshirtBoxItems (This is basically a join table with an id tshirt_box_id and tshirt_id) belongs_to TshirtBox\nTshirtBoxItems is a way to link a Tshirt with a box and potentially other things in the future.\n" ]
[ 2, 1 ]
[]
[]
[ "ruby", "ruby_on_rails" ]
stackoverflow_0000104086_ruby_ruby_on_rails.txt
Q: Communicating between Java and Flash without a Flash-specific server I have Java and Flash client applications. What is the best way for the two to communicate without special Flash-specific servers such as BlazeDS or Red5? I am looking for a light client-only solution. A: Well, you can make http requests from flash to any url... so if your java server has a point where it can listen to incoming requests and process XML or JSON, your flash client can just make the request to that url. BlazeDS and Red5 just aim to make it simpler by handling the translation for you making it possible to call the server-side functions transparently. A: Are they running in a browser (applet and SWF), or are they standalone apps? If they're running in a browser then you can use javascript. Both Flash and Java are can access javascript. It's fragile, but it works. If they're running as actual applications then you can have Java open a socket connection on some port. Then Flash can connect to that and they can send XML data back and forth. I've done both of these, so I know they both work. The javascript thing is fragile, but the socket stuff has worked great. A: WebORB for Java may be of some help to you. It integrates with your J2EE code. For more info: http://www.themidnightcoders.com/weborb/java/ I'm sorry, I reread your question that you are only looking for a client side solution. In this case, WebORB will not help you. Sorry for the misunderstanding. A: Merapi Bridge API Merapi allows developers to connect Adobe AIR applications, written in Adobe Flex to Java applications running on the user's local computer. A: There's a Flash implementation of Caucho's Hessian web service protocol. This approach would be similar to using JSon or XML, but is more performant, since Hessian is a binary protocol. If you happen to be using Spring on your server, you can use the Spring/Hessian binding to call you Spring services directly from your Flash application with minimal work.
Communicating between Java and Flash without a Flash-specific server
I have Java and Flash client applications. What is the best way for the two to communicate without special Flash-specific servers such as BlazeDS or Red5? I am looking for a light client-only solution.
[ "Well, you can make http requests from flash to any url... so if your java server has a point where it can listen to incoming requests and process XML or JSON, your flash client can just make the request to that url. BlazeDS and Red5 just aim to make it simpler by handling the translation for you making it possible to call the server-side functions transparently.\n", "Are they running in a browser (applet and SWF), or are they standalone apps?\nIf they're running in a browser then you can use javascript. Both Flash and Java are can access javascript. It's fragile, but it works.\nIf they're running as actual applications then you can have Java open a socket connection on some port. Then Flash can connect to that and they can send XML data back and forth.\nI've done both of these, so I know they both work. The javascript thing is fragile, but the socket stuff has worked great.\n", "WebORB for Java may be of some help to you. It integrates with your J2EE code.\nFor more info:\nhttp://www.themidnightcoders.com/weborb/java/\nI'm sorry, I reread your question that you are only looking for a client side solution. In this case, WebORB will not help you. Sorry for the misunderstanding.\n", "Merapi Bridge API\n\nMerapi allows developers to connect Adobe AIR applications, written in Adobe Flex to Java applications running on the user's local computer.\n\n", "There's a Flash implementation of Caucho's Hessian web service protocol. This approach would be similar to using JSon or XML, but is more performant, since Hessian is a binary protocol. If you happen to be using Spring on your server, you can use the Spring/Hessian binding to call you Spring services directly from your Flash application with minimal work.\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "flash", "java" ]
stackoverflow_0000103861_flash_java.txt
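For the standalone-application case described above — Java listening on a socket, Flash connecting with XMLSocket and exchanging XML — a bare-bones Java side might look like the sketch below. This is an illustration only: the port number, reply format, and class name are assumptions rather than details from the answers; the one fixed point is that Flash's XMLSocket ends every message with a null byte, so the server reads up to that byte and terminates its reply the same way.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Bare-bones sketch of the Java end of a Flash XMLSocket connection.
// Each incoming message ends with a 0x00 byte; replies must end the same way.
public class FlashSocketBridge {
    public static void main(String[] args) throws Exception {
        int port = 8787;                       // hypothetical port, must match the Flash client
        ServerSocket server = new ServerSocket(port);
        while (true) {
            Socket client = server.accept();
            try {
                InputStream in = client.getInputStream();
                OutputStream out = client.getOutputStream();

                StringBuilder message = new StringBuilder();
                int b;
                while ((b = in.read()) > 0) {  // stop at 0x00 or end of stream
                    message.append((char) b);  // fine for ASCII payloads; use a real decoder otherwise
                }
                System.out.println("Received from Flash: " + message);

                String reply = "<response status=\"ok\"/>";
                out.write(reply.getBytes("UTF-8"));
                out.write(0);                  // null terminator expected by XMLSocket
                out.flush();
            } finally {
                client.close();
            }
        }
    }
}

In the browser case the JavaScript bridge mentioned in the answers needs no socket at all; this sketch only applies when both sides run as real desktop applications.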
Q: Accessing an image in the projects resources? How do I access an image during run time that I have added to the projects resources? I would like to be able to do something like this: if (value) { picBox1.image = Resources.imageA; } else { picBox2.image = Resources.imageB; } A: something.Image = Namespace.ProjectName.Properties.Resources.ImageName;
Accessing an image in the projects resources?
How do I access an image during run time that I have added to the projects resources? I would like to be able to do something like this: if (value) { picBox1.image = Resources.imageA; } else { picBox2.image = Resources.imageB; }
[ "something.Image = Namespace.ProjectName.Properties.Resources.ImageName;\n" ]
[ 4 ]
[]
[]
[ ".net", "c#" ]
stackoverflow_0000106396_.net_c#.txt
Q: Anyone used Dabo for a medium-big project? We're at the beginning of a new ERP-ish client-server application, developed as a Python rich client. We're currently evaluating Dabo as our main framework and it looks quite nice and easy to use, but I was wondering, has anyone used it for medium-to-big sized projects? Thanks for your time! A: I'm one of the authors of the Dabo framework. One of our users pointed out to me the extremely negative answer you received, and so I thought I had better chime in and clear up some of the incorrect assumptions in the first reply. Dabo is indeed well-known in the Python community. I have presented it at 3 of the last 4 US PyCons, and we have several hundred users who subscribe to our email lists. Our website (http://dabodev.com) has not had any service interruptions; I don't know why the first responder claimed to have trouble. Support is through our email lists, and we pride ourselves on helping people quickly and efficiently. Many of the newbie questions help us to identify places where our docs are lacking, so we strongly encourage newcomers to ask questions! Dabo has been around for 4 years. The fact that it is still a few days away from a 0.9 release is more of a reflection of the rather conservative version numbering of my partner, Paul McNett, than any instabilities in the framework. I know of Dabo apps that have been in production since 2006; I have used it for my own projects since 2004. Whatever importance you attach to release numbers, we are at revision 4522, with consistent work being done to add more and more stuff to the framework; refactor and streamline some of the older code, and yes, clean up some bugs. Please sign up for our free email support list: http://leafe.com/mailman/listinfo/dabo-users ...and ask any questions you may have about Dabo there. Not many people have discovered Stack Overflow yet, so I wouldn't expect very informed answers here yet. There are several regular contributors there who use Dabo on a daily basis, and are usually more than happy to offer their opinions and their help. A: I have no Dabo experience at all but this question is on the top of the list fo such a long time that I decided to give it a shot: Framework selection Assumptions: medium-to-big project: we're talking about a team of more than 20 people working on something for about a year for the first phase. This is usually an expensive and very important effort for the client. this project will have significant amount of users (around a hundred) so performance is essential it's an ERP project so the application will work with large amounts of information you have no prior Dabo experience in your team Considerations: I could not open Dabo project site right now. There seems to be some server problem. That alone would make me think twice about using it for a big project. It's not a well-known framework. Typing Dabo in Google returns almost no useful results, it does not have a Wikipedia page, all-in-all it's quite obscure. It means that when you will have problems with it (and you will have problems with it) you will have almost no place to go. Your question was unanswered for 8 days on SO, this alone would make me re-consider. If you base your project on an obscure technology you have no previous experience with - it's a huge risk. You don't have people who know that framework in your team. It means that you have to learn it to get any results at all and to master it will require quite significant amount of time. 
You will have to factor that time into your project plan. Do you really need it? What does this framework give you that you cannot do yourself? Quite a lot of time my team tried to use some third-party component or tool only to find that building a custom one would be faster than dealing with third-party problems and limitations. There are brilliant tools available to people nowadays and we would be lost without them - but you have to carefully consider if this tool is one of them. Dabo project version is 0.84. Do you know if they spend time optimising their code for performance at this stage? Did you run any tests to see if it will sustain the load you have in your NFRs? Hope that helps :) Good luck with your project
Anyone used Dabo for a medium-big project?
We're at the beginning of a new ERP-ish client-server application, developed as a Python rich client. We're currently evaluating Dabo as our main framework and it looks quite nice and easy to use, but I was wondering, has anyone used it for medium-to-big sized projects? Thanks for your time!
[ "I'm one of the authors of the Dabo framework. One of our users pointed out to me the extremely negative answer you received, and so I thought I had better chime in and clear up some of the incorrect assumptions in the first reply.\nDabo is indeed well-known in the Python community. I have presented it at 3 of the last 4 US PyCons, and we have several hundred users who subscribe to our email lists. Our website (http://dabodev.com) has not had any service interruptions; I don't know why the first responder claimed to have trouble. Support is through our email lists, and we pride ourselves on helping people quickly and efficiently. Many of the newbie questions help us to identify places where our docs are lacking, so we strongly encourage newcomers to ask questions!\nDabo has been around for 4 years. The fact that it is still a few days away from a 0.9 release is more of a reflection of the rather conservative version numbering of my partner, Paul McNett, than any instabilities in the framework. I know of Dabo apps that have been in production since 2006; I have used it for my own projects since 2004. Whatever importance you attach to release numbers, we are at revision 4522, with consistent work being done to add more and more stuff to the framework; refactor and streamline some of the older code, and yes, clean up some bugs.\nPlease sign up for our free email support list:\nhttp://leafe.com/mailman/listinfo/dabo-users\n...and ask any questions you may have about Dabo there. Not many people have discovered Stack Overflow yet, so I wouldn't expect very informed answers here yet. There are several regular contributors there who use Dabo on a daily basis, and are usually more than happy to offer their opinions and their help.\n", "I have no Dabo experience at all but this question is on the top of the list fo such a long time that I decided to give it a shot:\nFramework selection\nAssumptions:\n\nmedium-to-big project: we're talking about a team of more than 20 people working on something for about a year for the first phase. This is usually an expensive and very important effort for the client.\nthis project will have significant amount of users (around a hundred) so performance is essential\nit's an ERP project so the application will work with large amounts of information\nyou have no prior Dabo experience in your team\n\nConsiderations:\n\nI could not open Dabo project site right now. There seems to be some server problem. That alone would make me think twice about using it for a big project.\nIt's not a well-known framework. Typing Dabo in Google returns almost no useful results, it does not have a Wikipedia page, all-in-all it's quite obscure. It means that when you will have problems with it (and you will have problems with it) you will have almost no place to go. Your question was unanswered for 8 days on SO, this alone would make me re-consider. If you base your project on an obscure technology you have no previous experience with - it's a huge risk.\nYou don't have people who know that framework in your team. It means that you have to learn it to get any results at all and to master it will require quite significant amount of time. You will have to factor that time into your project plan. Do you really need it?\nWhat does this framework give you that you cannot do yourself? Quite a lot of time my team tried to use some third-party component or tool only to find that building a custom one would be faster than dealing with third-party problems and limitations. 
There are brilliant tools available to people nowadays and we would be lost without them - but you have to carefully consider if this tool is one of them\nDabo project version is 0.84. Do you know if they spend time optimising their code for performance at this stage? Did you run any tests to see if it will sustain the load you have in your NFRs?\n\nHope that helps :) Good luck with your project\n" ]
[ 25, 2 ]
[]
[]
[ "dabo", "erp", "python" ]
stackoverflow_0000056417_dabo_erp_python.txt
Q: sorting hashes/arrays in awk Is there an easy way to do any of the following things in awk? Sorting array/hash by its data Sorting a hash by its string key A: Here's someone else's answer to a very similar problem: http://www.computing.net/answers/unix/urgent-help-with-sorting-in-awk/4442.html Which is supposed to be something like this: gawk 'BEGIN {c=1} { array[c] = sprintf ("%s %s", $2, $1); c++ } END { asort(array); for (x=1;x<c;x++) { print array[x] } }' Note that I used 'gawk'. If you want built-in sorting, use gawk. That example takes a 'space-separated' input of key value pairs and sorts them based on the second value (of course it prints them out in value/key format, but you see what I'm doing there.) In order to do that to an array extant in gawk, you'd use something similar. If using awk or mawk, you'll have to use one of the many sort functions available in man pages to accomplish the sort. From the gawk manpage: All arrays in AWK are associative, i.e. indexed by string values. The special operator in may be used in an if or while statement to see if an array has an index consisting of a particular value. if (val in array) print array[val] If the array has multiple subscripts, use (i, j) in array.
sorting hashes/arrays in awk
Is there an easy way to do any of the following things in awk? Sorting array/hash by its data Sorting a hash by its string key
[ "Here's someone else's answer to a very similar problem:\nhttp://www.computing.net/answers/unix/urgent-help-with-sorting-in-awk/4442.html\nWhich is supposed to be something like this:\ngawk 'BEGIN {c=1} { array[c] = sprintf (\"%s %s\", $2, $1); c++ } \nEND { asort(array); for (x=1;x&lt;c;x++) { print array[x] } }'\n\nNote that I used 'gawk'. If you want built-in sorting, use gawk.\nThat example takes a 'space-separated' input of key value pairs and sorts them based on the second value (of course it prints them out in value/key format, but you see what I'm doing there.)\nIn order to do that to an array extant in gawk, you'd use something similar.\nIf using awk or mawk, you'll have to use one of the many sort functions available in man pages to accomplish the sort.\n\nFrom the gawk manpage:\n All arrays in AWK are associative, i.e. indexed by string values.\n The special operator in may be used in an if or while statement to see if an array has an index consisting of a\n particular value.\n if (val in array)\n print array[val]\n If the array has multiple subscripts, use (i, j) in array.\n\n" ]
[ 3 ]
[]
[]
[ "arrays", "awk", "hash", "sorting" ]
stackoverflow_0000106484_arrays_awk_hash_sorting.txt
Q: How do I get CTRL-LEFTCLICK navigation on Ruby to work in IntelliJ 7? In IntelliJ when editing Java files CTRL+LEFTCLICK on an identifier takes me to where that identifier is defined. For some reason it doesn't work when editing Ruby code. Any ideas? A: This is because the program doesn't know where to find that identifier... it's more of a program use question than a programming problem. A: No, this is not the case. When I hover over the identifier, in 95% of cases IntelliJ is able to work out precisely what the identifier is (local variable, member variable, class etc) and show me the fully qualified class name etc in a tool tip. Even the holding CTRL and hovering over the identifier shows the blue underline: it's just that nothing happens when I click on it. I would expect to be taken to where the identifier is defined. So either I have a configuration issue or there is a bug in IntelliJ - just don't know whicj.
How do I get CTRL-LEFTCLICK navigation on Ruby to work in IntelliJ 7?
In IntelliJ when editing Java files CTRL+LEFTCLICK on an identifier takes me to where that identifier is defined. For some reason it doesn't work when editing Ruby code. Any ideas?
[ "This is because the program doesn't know where to find that identifier... it's more of a program use question than a programming problem.\n", "No, this is not the case. When I hover over the identifier, in 95% of cases IntelliJ is able to work out precisely what the identifier is (local variable, member variable, class etc) and show me the fully qualified class name etc in a tool tip. \nEven the holding CTRL and hovering over the identifier shows the blue underline: it's just that nothing happens when I click on it. I would expect to be taken to where the identifier is defined. So either I have a configuration issue or there is a bug in IntelliJ - just don't know whicj.\n" ]
[ 1, 0 ]
[]
[]
[ "intellij_idea", "ruby" ]
stackoverflow_0000092768_intellij_idea_ruby.txt
Q: Is there any way to enable the mousewheel (for scrolling) in Java apps? Ideally I'd like a way to enable the mouse wheel for scrolling in old compiled java runtime apps, but java code to explicitly utilise it for an individual app would suffice. A: You shouldn't have to recompile against 1.5 or 1.6 to get mousewheel, unless you wrote custom components. The mousewheel behaviors were added to the swing classes, so just running old java apps against the new JRE should have mousewheel support without having to do anything (at least in scrollable/JScrollPane based stuff) A: Mousewheel scrolling is supported in current Swing applications. You could try compiling your application using JDK 1.4, 1.5 or 1.6. Depending on the complexity and environment moving to a new version may or may not be a viable option. This tutorial shows how to write your own mousewheel listener if you want something different to the normal behaviour. A: Take a look Pushing Pixels blog: http://www.pushing-pixels.org/index.php?s=mouse+wheel A: Without access to the source code, you can't do it. If you do have access to the source, then do what RichH said.
Is there any way to enable the mousewheel (for scrolling) in Java apps?
Ideally I'd like a way to enable the mouse wheel for scrolling in old compiled java runtime apps, but java code to explicitly utilise it for an individual app would suffice.
[ "You shouldn't have to recompile against 1.5 or 1.6 to get mousewheel, unless you wrote custom components. The mousewheel behaviors were added to the swing classes, so just running old java apps against the new JRE should have mousewheel support without having to do anything (at least in scrollable/JScrollPane based stuff)\n", "Mousewheel scrolling is supported in current Swing applications. You could try compiling your application using JDK 1.4, 1.5 or 1.6. Depending on the complexity and environment moving to a new version may or may not be a viable option.\nThis tutorial shows how to write your own mousewheel listener if you want something different to the normal behaviour.\n", "Take a look Pushing Pixels blog: http://www.pushing-pixels.org/index.php?s=mouse+wheel\n", "Without access to the source code, you can't do it. If you do have access to the source, then do what RichH said.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "java", "mouse", "mousewheel", "scroll" ]
stackoverflow_0000105698_java_mouse_mousewheel_scroll.txt
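For custom components — where, as the answers above point out, the newer JRE's built-in wheel support does not apply — wiring the wheel up by hand is only a few lines. The sketch below is illustrative: the class and field names are made up here, and it simply nudges a JScrollBar by the wheel's unit count.

import java.awt.event.MouseWheelEvent;
import java.awt.event.MouseWheelListener;
import javax.swing.JScrollBar;

// Minimal sketch: forward mouse-wheel events to a scroll bar belonging to a
// component that predates the built-in wheel support (names are illustrative).
public class WheelScrollSupport implements MouseWheelListener {
    private final JScrollBar verticalBar;

    public WheelScrollSupport(JScrollBar verticalBar) {
        this.verticalBar = verticalBar;
    }

    public void mouseWheelMoved(MouseWheelEvent e) {
        // getUnitsToScroll() is positive when the wheel rolls toward the user
        int step = e.getUnitsToScroll() * verticalBar.getUnitIncrement(1);
        verticalBar.setValue(verticalBar.getValue() + step);
    }
}

Typical wiring would be something like legacyComponent.addMouseWheelListener(new WheelScrollSupport(scrollBar)). For stock JScrollPane content, as the answers note, simply running on a 1.4+ JRE is enough.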
Q: Exceptions: Is this a good practice? This is written in PHP, but it's really language agnostic. try { try { $issue = new DM_Issue($core->db->escape_string($_GET['issue'])); } catch(DM_Exception $e) { throw new Error_Page($tpl, ERR_NOT_FOUND, $e->getMessage()); } } catch(Error_Page $e) { die($e); } Is nested try, catch blocks a good practice to follow? It seems a little bulky just for an error page - however my Issue Datamanager throws an Exception if an error occurs and I consider that to be a good way of error detecting. The Error_Page exception is simply an error page compiler. I might just be pedantic, but do you think this is a good way to report errors and if so can you suggest a better way to write this? Thanks A: You're using Exceptions for page logic, and I personally think that's not a good thing. Exceptions should be used to signal when bad or unexpected things happen, not to control the output of an error page. If you want to generate an error page based on Exceptions, consider using set_exception_handler. Any uncaught exceptions are run through whatever callback method you specify. Keep in mind that this doesn't stop the "fatalness" of an Exception. After an exception is passed through your callback, execution will stop like normal after any uncaught exception. A: I think you'd be better off not nesting. If you expect multiple exception types, have multiple catches. try{ Something(); } catch( SpecificException se ) {blah();} catch( AnotherException ae ) {blah();} A: The ideal is for exceptions to be caught at the level which can handle them. Not before (waste of time), and not after (you lose context). So, if $tpl and ERR_NOT_FOUND are information which is only "known" close to the new DM_Issue call, for example because there are other places where you create a DM_Issue and would want ERR_SOMETHING_ELSE, or because the value of $tpl varies, then you're catching the first exception at the right place. How to get from that place to dying is another question. An alternative would be to die right there. But if you do that then there's no opportunity for intervening code to do anything (such as clearing something up in some way or modifying the error page) after the error but before exit. It's also good to have explicit control flow. So I think you're good. I'm assuming that your example isn't a complete application - if it is then it's probably needlessly verbose, and you could just die in the DM_Exception catch clause. But for a real app I approve of the principle of not just dying in the middle of nowhere. A: Depending on your needs this could be fine, but I am generally pretty hesitant to catch an exception, wrap the message in a new exception, and rethrow it because you lose the stack trace (and potentially other) information from the original exception in the wrapping exception. If you're sure that you don't need that information when examining the wrapping exception then it's probably alright. A: I'm not sure about PHP but in e.g. C# you can have more than one catch-Block so there is no need for nested try/catch-combinations. Generally I believe that error handling with try/catch/finally is always common-sense, also for showing "only" an error-page. It's a clean way to handle errors and avoid strange behavior on crashing. A: I wouldn't throw an exception on issue not found - it's a valid state of an application, and you don't need a stack trace just to display a 404.
What you need to catch is unexpected failures, like sql errors - that's when exception handling comes in handy. I would change your code to look more like this: try { $issue = DM_Issue::fetch($core->db->escape_string($_GET['issue'])); } catch (SQLException $e) { log_error('SQL Error: DM_Issue::fetch()', $e->get_message()); } catch (Exception $e) { log_error('Exception: DM_Issue::fetch()', $e->get_message()); } if(!$issue) { display_error_page($tpl, ERR_NOT_FOUND); } else { // ... do stuff with $issue object. } A: Exceptions should be used only if there is a potentially site-breaking event - such as a database query not executing properly or something is misconfigured. A good example is that a cache or log directory is not writable by the Apache process. The idea here is that exceptions are for you, the developer, to halt code that can break the entire site so you can fix them before deployment. They are also sanity checks to make sure that if the environment changes (i.e. somebody alters the permissions of the cache folder or changes the database scheme) the site is halted before it can damage anything. So, no; nested catch handlers are not a good idea. In my pages, my index.php file wraps its code in a try...catch block - and if something bad happens it checks to see if it's in production or not; and either emails me and displays a generic error page, or shows the error right on the screen. Remember: PHP is not C#. C# is (with the exception (hehe, no pun intended :p) of ASP.net) for applications that contain state - whereas PHP is a stateless scripting language.
Exceptions: Is this a good practice?
This is written in PHP, but it's really language agnostic. try { try { $issue = new DM_Issue($core->db->escape_string($_GET['issue'])); } catch(DM_Exception $e) { throw new Error_Page($tpl, ERR_NOT_FOUND, $e->getMessage()); } } catch(Error_Page $e) { die($e); } Is nested try, catch blocks a good practice to follow? It seems a little bulky just for an error page - however my Issue Datamanager throws an Exception if an error occurs and I consider that to be a good way of error detecting. The Error_Page exception is simply an error page compiler. I might just be pedantic, but do you think this is a good way to report errors and if so can you suggest a better way to write this? Thanks
[ "You're using Exceptions for page logic, and I personally think that's not a good thing. Exceptions should be used to signal when bad or unexpected things happen, not to control the output of an error page. If you want to generate an error page based on Exceptions, consider using set_exception_handler. Any uncaught exceptions are run through whatever callback method you specify. Keep in mind that this doesn't stop the \"fatalness\" of an Exception. After an exception is passed through your callback, execution will stop like normal after any uncaught exception.\n", "I think you'd be better off not nesting. If you expect multiple exception types, have multiple catches.\ntry{\n Something();\n}\ncatch( SpecificException se )\n{blah();}\ncatch( AnotherException ae )\n{blah();}\n\n", "The ideal is for exceptions to be caught at the level which can handle them. Not before (waste of time), and not after (you lose context).\nSo, if $tpl and ERR_NOT_FOUND are information which is only \"known\" close to the new DM_Issue call, for example because there are other places where you create a DM_Issue and would want ERR_SOMETHING_ELSE, or because the value of $tpl varies, then you're catching the first exception at the right place.\nHow to get from that place to dying is another question. An alternative would be to die right there. But if you do that then there's no opportunity for intervening code to do anything (such as clearing something up in some way or modifying the error page) after the error but before exit. It's also good to have explicit control flow. So I think you're good.\nI'm assuming that your example isn't a complete application - if it is then it's probably needlessly verbose, and you could just die in the DM_Exception catch clause. But for a real app I approve of the principle of not just dying in the middle of nowhere.\n", "Depending on your needs this could be fine, but I am generally pretty hesitant to catch an exception, wrap the message in a new exception, and rethrow it because you loose the stack trace (and potentially other) information from the original exception in the wrapping exception. If you're sure that you don't need that information when examining the wrapping exception then it's probably alright.\n", "I'm not sure about PHP but in e.g. C# you can have more then one catch-Block so there is no need for nested try/catch-combinations.\nGenerally I believe that errorhandling with try/catch/finally is always common-sense, also for showing \"only\" a error-page. It's a clean way to handle errors and avoid strange behavior on crashing.\n", "I wouldn't throw an exception on issue not found - it's a valid state of an application, and you don't need a stack trace just to display a 404. \nWhat you need to catch is unexpected failures, like sql errors - that's when exception handling comes in handy. I would change your code to look more like this:\ntry {\n $issue = DM_Issue::fetch($core->db->escape_string($_GET['issue']));\n}\ncatch (SQLException $e) {\n log_error('SQL Error: DM_Issue::fetch()', $e->get_message());\n}\ncatch (Exception $e) {\n log_error('Exception: DM_Issue::fetch()', $e->get_message());\n}\n\nif(!$issue) {\n display_error_page($tpl, ERR_NOT_FOUND);\n}\nelse\n{\n // ... do stuff with $issue object.\n}\n\n", "Exceptions should be used only if there is a potentially site-breaking event - such as a database query not executing properly or something is misconfigured. 
A good example is that a cache or log directory is not writable by the Apache process.\nThe idea here is that exceptions are for you, the developer, to halt code that can break the entire site so you can fix them before deployment. They are also sanity checks to make sure that if the environment changes (i.e. somebody alters the permissions of the cache folder or changes the database scheme) the site is halted before it can damage anything.\nSo, no; nested catch handlers are not a good idea. In my pages, my index.php file wraps its code in a try...catch block - and if something bad happens it checks to see if it's in production or not; and either emails me and displays a generic error page, or shows the error right on the screen.\nRemember: PHP is not C#. C# is (with the exception (hehe, no pun intended :p) of ASP.net) for applications that contain state - whereas PHP is a stateless scripting language.\n" ]
[ 9, 2, 2, 0, 0, 0, 0 ]
[]
[]
[ "error_handling", "exception", "exception_handling", "language_agnostic", "php" ]
stackoverflow_0000103583_error_handling_exception_exception_handling_language_agnostic_php.txt
Q: Need refactoring ideas for Arrow Anti-Pattern I have inherited a monster. It is masquerading as a .NET 1.1 application that processes text files that conform to Healthcare Claim Payment (ANSI 835) standards, but it's a monster. The information being processed relates to healthcare claims, EOBs, and reimbursements. These files consist of records that have an identifier in the first few positions and data fields formatted according to the specs for that type of record. Some record ids are Control Segment ids, which delimit groups of records relating to a particular type of transaction. To process a file, my little monster reads the first record, determines the kind of transaction that is about to take place, then begins to process other records based on what kind of transaction it is currently processing. To do this, it uses a nested if. Since there are a number of record types, there are a number of decisions that need to be made. Each decision involves some processing and 2-3 other decisions that need to be made based on previous decisions. That means the nested if has a lot of nests. That's where my problem lies. This one nested if is 715 lines long. Yes, that's right. Seven-Hundred-And-Fif-Teen Lines. I'm no code analysis expert, so I downloaded a couple of freeware analysis tools and came up with a McCabe Cyclomatic Complexity rating of 49. They tell me that's a pretty high number. High as in pollen count in the Atlanta area where 100 is the standard for high and the news says "Today's pollen count is 1,523". This is one of the finest examples of the Arrow Anti-Pattern I have ever been privileged to see. At its highest, the indentation goes 15 tabs deep. My question is, what methods would you suggest to refactor or restructure such a thing? I have spent some time searching for ideas, but nothing has given me a good foothold. For example, substituting a guard condition for a level is one method. I have only one of those. One nest down, fourteen to go. Perhaps there is a design pattern that could be helpful. Would Chain of Command be a way to approach this? Keep in mind that it must stay in .NET 1.1. Thanks for any and all ideas. A: I just had some legacy code at work this week that was similar (although not as dire) to what you are describing. There is no one thing that will get you out of this. The state machine might be the final form your code takes, but that's not going to help you get there, nor should you decide on such a solution before untangling the mess you already have. First step I would take is to write a test for the existing code. This test isn't to show that the code is correct but to make sure you have not broken something when you start refactoring. Get a big wad of data to process, feed it to the monster, and get the output. That's your litmus test. If you can do this with a code coverage tool you will see what your test does not cover. If you can, construct some artificial records that will also exercise this code, and repeat. Once you feel you have done what you can with this task, the output data becomes your expected result for your test. Refactoring should not change the behavior of the code. Remember that. This is why you have known input and known output data sets to validate you are not going to break things. This is your safety net. Now Refactor!
A couple things I did that i found useful: Invert if statements A huge problem I had was just reading the code when I couldn't find the corresponding else statement, I noticed that a lot of the blocks looked like this if (someCondition) { 100+ lines of code { ... } } else { simple statement here } By inverting the if I could see the simple case and then move onto the more complex block knowing what the other one already did. not a huge change, but helped me in understanding. Extract Method I used this a lot.Take some complex multi line block, grok it and shove it aside in it's own method. this allowed me to more easily see where there was code duplication. Now, hopefully, you haven't broken your code (test still passes right?), and you have more readable and better understood procedural code. Look it's already improved! But that test you wrote earlier isn't really good enough... it only tells you that you a duplicating the functionality (bugs and all) of the original code, and thats only the line you had coverage on as I'm sure you would find blocks of code that you can't figure out how to hit or just cannot ever hit (I've seen both in my work). Now the big changes where all the big name patterns come into play is when you start looking at how you can refactor this in a proper OO fashion. There is more than one way to skin this cat, and it will involve multiple patterns. Not knowing details about the format of these files you're parsing I can only toss around some helpful suggestions that may or may not be the best solutions. Refactoring to Patterns is a great book to assist in explainging patterns that are helpful in these situations. You're trying to eat an elephant, and there's no other way to do it but one bite at a time. Good luck. A: A state machine seems like the logical place to start, and using WF if you can swing it (sounds like you can't). You can still implement one without WF, you just have to do it yourself. However, thinking of it like a state machine from the start will probably give you a better implementation then creating a procedural monster that checks internal state on every action. Diagram out your states, what causes a transition. The actual code to process a record should be factored out, and called when the state executes (if that particular state requires it). So State1's execute calls your "read a record", then based on that record transitions to another state. The next state may read multiple records and call record processing instructions, then transition back to State1. A: One thing I do in these cases is to use the 'Composed Method' pattern. See Jeremy Miller's Blog Post on this subject. The basic idea is to use the refactoring tools in your IDE to extract small meaningful methods. Once you've done that, you may be able to further refactor and extract meaningful classes. A: I would start with uninhibited use of Extract Method. If you don't have it in your current Visual Studio IDE, you can either get a 3rd-party addin, or load your project in a newer VS. (It'll try to upgrade your project, but you will carefully ignore those changes instead of checking them in.) You said that you have code indented 15 levels. Start about 1/2-way out, and Extract Method. If you can come up with a good name, use it, but if you can't, extract anyway. Split in half again. You're not going for the ideal structure here; you're trying to break the code in to pieces that will fit in your brain. My brain is not very big, so I'd keep breaking & breaking until it doesn't hurt any more. 
As you go, look for any new long methods that seem to be different than the rest; make these in to new classes. Just use a simple class that has only one method for now. Heck, making the method static is fine. Not because you think they're good classes, but because you are so desperate for some organization. Check in often as you go, so you can checkpoint your work, understand the history later, be ready to do some "real work" without needing to merge, and save your teammates the hassle of hard merging. Eventually you'll need to go back and make sure the method names are good, that the set of methods you've created make sense, clean up the new classes, etc. If you have a highly reliable Extract Method tool, you can get away without good automated tests. (I'd trust VS in this, for example.) Otherwise, make sure you're not breaking things, or you'll end up worse than you started: with a program that doesn't work at all. A pairing partner would be helpful here. A: Judging by the description, a state machine might be the best way to deal with it. Have an enum variable to store the current state, and implement the processing as a loop over the records, with a switch or if statements to select the action to take based on the current state and the input data. You can also easily dispatch the work to separate functions based on the state using function pointers, too, if it's getting too bulky. A: There was a pretty good blog post about it at Coding Horror. I've only come across this anti-pattern once, and I pretty much just followed his steps. A: Sometimes I combine the state pattern with a stack. It works well for hierarchical structures; a parent element knows what state to push onto the stack to handle a child element, but a child doesn't have to know anything about its parent. In other words, the child doesn't know what the next state is, it simply signals that it is "complete" and gets popped off the stack. This helps to decouple the states from each other by keeping dependencies uni-directional. It works great for processing XML with a SAX parser (the content handler just pushes and pops states to change its behavior as elements are entered and exited). EDI should lend itself to this approach too.
Need refactoring ideas for Arrow Anti-Pattern
I have inherited a monster. It is masquerading as a .NET 1.1 application that processes text files that conform to Healthcare Claim Payment (ANSI 835) standards, but it's a monster. The information being processed relates to healthcare claims, EOBs, and reimbursements. These files consist of records that have an identifier in the first few positions and data fields formatted according to the specs for that type of record. Some record ids are Control Segment ids, which delimit groups of records relating to a particular type of transaction. To process a file, my little monster reads the first record, determines the kind of transaction that is about to take place, then begins to process other records based on what kind of transaction it is currently processing. To do this, it uses a nested if. Since there are a number of record types, there are a number of decisions that need to be made. Each decision involves some processing and 2-3 other decisions that need to be made based on previous decisions. That means the nested if has a lot of nests. That's where my problem lies. This one nested if is 715 lines long. Yes, that's right. Seven-Hundred-And-Fif-Teen Lines. I'm no code analysis expert, so I downloaded a couple of freeware analysis tools and came up with a McCabe Cyclomatic Complexity rating of 49. They tell me that's a pretty high number. High as in pollen count in the Atlanta area where 100 is the standard for high and the news says "Today's pollen count is 1,523". This is one of the finest examples of the Arrow Anti-Pattern I have ever been privileged to see. At its highest, the indentation goes 15 tabs deep. My question is, what methods would you suggest to refactor or restructure such a thing? I have spent some time searching for ideas, but nothing has given me a good foothold. For example, substituting a guard condition for a level is one method. I have only one of those. One nest down, fourteen to go. Perhaps there is a design pattern that could be helpful. Would Chain of Command be a way to approach this? Keep in mind that it must stay in .NET 1.1. Thanks for any and all ideas.
[ "I just had some legacy code at work this week that was similar (although not as dire) as what you are describing.\nThere is no one thing that will get you out of this. The state machine might be the final form your code takes, but thats not going to help you get there, nor should you decide on such a solution before untangling the mess you already have.\nFirst step I would take is to write a test for the existing code. This test isn't to show that the code is correct but to make sure you have not broken something when you start refactoring. Get a big wad of data to process, feed it to the monster, and get the output. That's your litmus test. if you can do this with a code coverage tool you will see what you test does not cover. If you can, construct some artificial records that will also exercise this code, and repeat. Once you feel you have done what you can with this task, the output data becomes your expected result for your test. \nRefactoring should not change the behavior of the code. Remember that. This is why you have known input and known output data sets to validate you are not going to break things. This is your safety net.\nNow Refactor!\nA couple things I did that i found useful:\nInvert if statements \nA huge problem I had was just reading the code when I couldn't find the corresponding else statement, I noticed that a lot of the blocks looked like this\nif (someCondition)\n{\n 100+ lines of code\n {\n ...\n }\n}\nelse\n{\n simple statement here\n}\n\nBy inverting the if I could see the simple case and then move onto the more complex block knowing what the other one already did. not a huge change, but helped me in understanding.\nExtract Method\nI used this a lot.Take some complex multi line block, grok it and shove it aside in it's own method. this allowed me to more easily see where there was code duplication.\nNow, hopefully, you haven't broken your code (test still passes right?), and you have more readable and better understood procedural code. Look it's already improved! But that test you wrote earlier isn't really good enough... it only tells you that you a duplicating the functionality (bugs and all) of the original code, and thats only the line you had coverage on as I'm sure you would find blocks of code that you can't figure out how to hit or just cannot ever hit (I've seen both in my work).\nNow the big changes where all the big name patterns come into play is when you start looking at how you can refactor this in a proper OO fashion. There is more than one way to skin this cat, and it will involve multiple patterns. Not knowing details about the format of these files you're parsing I can only toss around some helpful suggestions that may or may not be the best solutions.\nRefactoring to Patterns is a great book to assist in explainging patterns that are helpful in these situations.\nYou're trying to eat an elephant, and there's no other way to do it but one bite at a time. Good luck.\n", "A state machine seems like the logical place to start, and using WF if you can swing it (sounds like you can't).\nYou can still implement one without WF, you just have to do it yourself. However, thinking of it like a state machine from the start will probably give you a better implementation then creating a procedural monster that checks internal state on every action.\nDiagram out your states, what causes a transition. 
The actual code to process a record should be factored out, and called when the state executes (if that particular state requires it).\nSo State1's execute calls your \"read a record\", then based on that record transitions to another state.\nThe next state may read multiple records and call record processing instructions, then transition back to State1.\n", "One thing I do in these cases is to use the 'Composed Method' pattern. See Jeremy Miller's Blog Post on this subject. The basic idea is to use the refactoring tools in your IDE to extract small meaningful methods. Once you've done that, you may be able to further refactor and extract meaningful classes. \n", "I would start with uninhibited use of Extract Method. If you don't have it in your current Visual Studio IDE, you can either get a 3rd-party addin, or load your project in a newer VS. (It'll try to upgrade your project, but you will carefully ignore those changes instead of checking them in.)\nYou said that you have code indented 15 levels. Start about 1/2-way out, and Extract Method. If you can come up with a good name, use it, but if you can't, extract anyway. Split in half again. You're not going for the ideal structure here; you're trying to break the code in to pieces that will fit in your brain. My brain is not very big, so I'd keep breaking & breaking until it doesn't hurt any more.\nAs you go, look for any new long methods that seem to be different than the rest; make these in to new classes. Just use a simple class that has only one method for now. Heck, making the method static is fine. Not because you think they're good classes, but because you are so desperate for some organization.\nCheck in often as you go, so you can checkpoint your work, understand the history later, be ready to do some \"real work\" without needing to merge, and save your teammates the hassle of hard merging.\nEventually you'll need to go back and make sure the method names are good, that the set of methods you've created make sense, clean up the new classes, etc.\nIf you have a highly reliable Extract Method tool, you can get away without good automated tests. (I'd trust VS in this, for example.) Otherwise, make sure you're not breaking things, or you'll end up worse than you started: with a program that doesn't work at all.\nA pairing partner would be helpful here.\n", "Judging by the description, a state machine might be the best way to deal with it. Have an enum variable to store the current state, and implement the processing as a loop over the records, with a switch or if statements to select the action to take based on the current state and the input data. You can also easily dispatch the work to separate functions based on the state using function pointers, too, if it's getting too bulky.\n", "There was a pretty good blog post about it at Coding Horror. I've only come across this anti-pattern once, and I pretty much just followed his steps.\n", "Sometimes I combine the state pattern with a stack. \nIt works well for hierarchical structures; a parent element knows what state to push onto the stack to handle a child element, but a child doesn't have to know anything about its parent. In other words, the child doesn't know what the next state is, it simply signals that it is \"complete\" and gets popped off the stack. 
This helps to decouple the states from each other by keeping dependencies uni-directional.\nIt works great for processing XML with a SAX parser (the content handler just pushes and pops states to change its behavior as elements are entered and exited). EDI should lend itself to this approach too.\n" ]
[ 20, 2, 2, 2, 1, 1, 1 ]
[]
[]
[ "anti_patterns", "if_statement", "nested", "refactoring" ]
stackoverflow_0000105602_anti_patterns_if_statement_nested_refactoring.txt
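For illustration of the state-machine suggestion that several answers make, a rough C#/.NET 1.1-style sketch follows. The state names, record handling and handler methods are invented placeholders; they are not taken from the actual 835 processor being refactored.

using System.Collections;

// Hypothetical skeleton: one enum value per transaction phase, one small handler per state.
public enum ParseState { ExpectHeader, InClaim, InServiceLine, Done }

public class ClaimFileParser
{
    private ParseState state = ParseState.ExpectHeader;

    public void Process(IEnumerable records)
    {
        foreach (string record in records)
        {
            switch (state)
            {
                case ParseState.ExpectHeader:
                    state = HandleHeader(record);      // each handler does its work and
                    break;                             // returns the next state
                case ParseState.InClaim:
                    state = HandleClaim(record);
                    break;
                case ParseState.InServiceLine:
                    state = HandleServiceLine(record);
                    break;
            }
            if (state == ParseState.Done) break;
        }
    }

    // Extracted-method handlers keep each decision small and independently testable.
    private ParseState HandleHeader(string record)      { /* ... */ return ParseState.InClaim; }
    private ParseState HandleClaim(string record)       { /* ... */ return ParseState.InServiceLine; }
    private ParseState HandleServiceLine(string record) { /* ... */ return ParseState.InClaim; }
}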
Q: Why do I need to SEM_PRIORITY_Q when using a VxWorks inversion safe mutex? In VxWorks, I am creating a mutex with the SEM_INVERSION_SAFE option, to protect against the priority inversion problem. The manual says that I must also use the SEM_PRIORITY_Q option. Why is that? A: When creating a mutex semaphore in VxWorks, you have two options to deal with multiple tasks queued (waiting) for the semaphore: FIFO or Highest priority task first. When you use the SEM_INVERSION_SAFE option, the task holding the mutex will be bumped up to the same priority as the highest priority task waiting for the semaphore. If you were to use a FIFO queue for the semaphore, the kernel would have to traverse the queue of tasks waiting for the mutex to find the one with the highest priority. This operation is not deterministic, as the time to traverse the queue changes as the number of tasks queued changes. When you use a SEM_PRIORITY_Q option, the kernel simply has to look at the task at the head of the queue, as it is the highest priority. This is a constant time operation.
Why do I need to SEM_PRIORITY_Q when using a VxWorks inversion safe mutex?
In VxWorks, I am creating a mutex with the SEM_INVERSION_SAFE option, to protect against the priority inversion problem. The manual says that I must also use the SEM_PRIORITY_Q option. Why is that?
[ "When creating a mutex semaphore in VxWorks, you have two options to deal with multiple tasks queued (waiting) for the semaphore: FIFO or Highest priority task first.\nWhen you use the SEM_INVERSION_SAFE option, the task holding the mutex will be bumped up to the same priority as the highest priority task waiting for the semaphore.\nIf you were to use a FIFO queue for the semaphore, the kernel would have to traverse the queue of tasks waiting for the mutex to find the one with the highest priority. This operation is not deterministic, as the time to traverse the queue changes as the number of tasks queued changes.\nWhen you use a SEM_PRIORITY_Q option, the kernel simply has to look at the task at the head of the queue, as it is the highest priority. This is a constant time operation.\n" ]
[ 4 ]
[]
[]
[ "mutex", "semaphore", "vxworks" ]
stackoverflow_0000106540_mutex_semaphore_vxworks.txt
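For reference, a minimal C sketch of creating the mutex exactly as the answer describes, combining the two options (error handling omitted):

#include <vxWorks.h>
#include <semLib.h>

SEM_ID dataMutex;

void init_mutex(void)
{
    /* SEM_INVERSION_SAFE must be combined with SEM_Q_PRIORITY, so the
       highest-priority waiter is always at the head of the queue. */
    dataMutex = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
}

void touch_shared_data(void)
{
    semTake(dataMutex, WAIT_FOREVER);   /* holder may be priority-boosted here */
    /* ... access the shared resource ... */
    semGive(dataMutex);                 /* boosted priority is restored on release */
}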
Q: What are the semantics of a const member function? I understand that the function is not allowed to change the state of the object, but I thought I read somewhere that the compiler was allowed to assume that if the function was called with the same arguments, it would return the same value and thus could reuse a cached value if it was available. e.g. class object { int get_value(int n) const { ... } ... object x; int a = x.get_value(1); ... int b = x.get_value(1); then the compiler could optimize the second call away and either use the value in a register or simply do b = a; Is this true? A: const is about program semantics and not about implementation details. You should mark a member function const when it does not change the visible state of the object, and should be callable on an object that is itself const. Within a const member function on a class X, the type of this is X const *: pointer to constant X object. Thus all member variables are effectively const within that member function (except mutable ones). If you have a const object, you can only call const member functions on it. You can use mutable to indicate that a member variable may change even within a const member function. This is typically used to identify variables used for caching results, or for variables that don't affect the actual observable state such as mutexes (you still need to lock the mutex in the const member functions) or use counters. class X { int data; mutable boost::mutex m; public: void set_data(int i) { boost::lock_guard<boost::mutex> lk(m); data=i; } int get_data() const // we want to be able to get the data on a const object { boost::lock_guard<boost::mutex> lk(m); // this requires m to be non-const return data; } }; If you hold the data by pointer rather than directly (including smart pointers such as std::auto_ptr or boost::shared_ptr) then the pointer becomes const in a const member function, but not the pointed-to data, so you can modify the pointed-to data. As for caching: in general the compiler cannot do this because the state might change between calls (especially in my multi-threaded example with the mutex). However, if the definition is inline then the compiler can pull the code into the calling function and optimize what it can see there. This might result in the function effectively only being called once. The next version of the C++ Standard (C++0x) will have a new keyword constexpr. Functions tagged constexpr return a constant value, so the results can be cached. There are limits on what you can do in such a function (in order that the compiler can verify this fact). A: No. A const method is a method that doesn't change the state of the object (i.e. its fields), but you can't assume that given the same input, return value of a const method is determined. In other words, const keyword does NOT imply that the function is one-to-one. For instance a method that returns the current time is a const method but its return value changes between calls. A: The keyword mutable on member variables allows for const functions to alter the state of the object at hand. And no, it doesn't cache data (at least not all calls) since the following code is a valid const function that changes over time: int something() const { return m_pSomeObject->NextValue(); } Note that the pointer can be const, though the object pointed to is not const, therefore the call to NextValue on SomeObject may or may not alter it's own internal state. 
This causes the function something to return different values each time it's called. However, I can't answer how the compiler works with const methods. I have heard that it can optimize certain things, though I'd have to look it up to be certain. A: The const keyword on a member function marks the this parameter as constant. The function can still mute global data (so can't be cached), but not object data (allowing for calls on const objects). A: In this context, a const member function means that this is treated as a const pointer also. In practical terms, it means you aren't allowed to modify the state of this inside a const member function. For no-side-effect functions (i.e., what you're trying to achieve), GCC has a "function attribute" called pure (you use it by saying __attribute__((pure))): http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html A: I doubt it, the function could still call a global function that altered the state of the world and not violate const. A: On top of the fact that the member function can modify global data, it is possible for the member function to modify explicitly declared mutable members of the object in question. A: Corey is correct, but bear in mind that any member variables that are marked as mutable can be modified in const member functions. It also means that these functions can be called from other const functions, or via other const references. Edit: Damn, was beaten by 9 seconds.... 9!!! :) A: const methods are also allowed to modify static locals. For example, the following is perfectly legal (and repeated calls to bar() will return increasing values - not a cached 0): class Foo { public: int bar() const { static int x = 0; return x++; } };
What are the semantics of a const member function?
I understand that the function is not allowed to change the state of the object, but I thought I read somewhere that the compiler was allowed to assume that if the function was called with the same arguments, it would return the same value and thus could reuse a cached value if it was available. e.g. class object { int get_value(int n) const { ... } ... object x; int a = x.get_value(1); ... int b = x.get_value(1); then the compiler could optimize the second call away and either use the value in a register or simply do b = a; Is this true?
[ "const is about program semantics and not about implementation details. You should mark a member function const when it does not change the visible state of the object, and should be callable on an object that is itself const. Within a const member function on a class X, the type of this is X const *: pointer to constant X object. Thus all member variables are effectively const within that member function (except mutable ones). If you have a const object, you can only call const member functions on it.\nYou can use mutable to indicate that a member variable may change even within a const member function. This is typically used to identify variables used for caching results, or for variables that don't affect the actual observable state such as mutexes (you still need to lock the mutex in the const member functions) or use counters.\nclass X\n{\n int data;\n mutable boost::mutex m;\npublic:\n void set_data(int i)\n {\n boost::lock_guard<boost::mutex> lk(m);\n data=i;\n }\n int get_data() const // we want to be able to get the data on a const object\n {\n boost::lock_guard<boost::mutex> lk(m); // this requires m to be non-const\n return data;\n }\n};\n\nIf you hold the data by pointer rather than directly (including smart pointers such as std::auto_ptr or boost::shared_ptr) then the pointer becomes const in a const member function, but not the pointed-to data, so you can modify the pointed-to data.\nAs for caching: in general the compiler cannot do this because the state might change between calls (especially in my multi-threaded example with the mutex). However, if the definition is inline then the compiler can pull the code into the calling function and optimize what it can see there. This might result in the function effectively only being called once.\nThe next version of the C++ Standard (C++0x) will have a new keyword constexpr. Functions tagged constexpr return a constant value, so the results can be cached. There are limits on what you can do in such a function (in order that the compiler can verify this fact).\n", "No. \nA const method is a method that doesn't change the state of the object (i.e. its fields), but you can't assume that given the same input, return value of a const method is determined. In other words, const keyword does NOT imply that the function is one-to-one. For instance a method that returns the current time is a const method but its return value changes between calls.\n", "The keyword mutable on member variables allows for const functions to alter the state of the object at hand. \nAnd no, it doesn't cache data (at least not all calls) since the following code is a valid const function that changes over time:\nint something() const { return m_pSomeObject->NextValue(); }\n\nNote that the pointer can be const, though the object pointed to is not const, therefore the call to NextValue on SomeObject may or may not alter it's own internal state. This causes the function something to return different values each time it's called.\nHowever, I can't answer how the compiler works with const methods. I have heard that it can optimize certain things, though I'd have to look it up to be certain.\n", "The const keyword on a member function marks the this parameter as constant. The function can still mute global data (so can't be cached), but not object data (allowing for calls on const objects).\n", "In this context, a const member function means that this is treated as a const pointer also. 
In practical terms, it means you aren't allowed to modify the state of this inside a const member function.\nFor no-side-effect functions (i.e., what you're trying to achieve), GCC has a \"function attribute\" called pure (you use it by saying __attribute__((pure))): http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html\n", "I doubt it, the function could still call a global function that altered the state of the world and not violate const.\n", "On top of the fact that the member function can modify global data, it is possible for the member function to modify explicitly declared mutable members of the object in question.\n", "Corey is correct, but bear in mind that any member variables that are marked as mutable can be modified in const member functions.\nIt also means that these functions can be called from other const functions, or via other const references.\n\nEdit: Damn, was beaten by 9 seconds.... 9!!! :)\n", "const methods are also allowed to modify static locals. For example, the following is perfectly legal (and repeated calls to bar() will return increasing values - not a cached 0):\nclass Foo\n{\npublic:\n int bar() const\n {\n static int x = 0;\n return x++;\n }\n};\n\n" ]
[ 26, 3, 3, 2, 2, 0, 0, 0, 0 ]
[]
[]
[ "c++", "constants", "methods" ]
stackoverflow_0000098705_c++_constants_methods.txt
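To tie the answers back to the caching idea in the question: the compiler will not cache const calls for you, but you can implement memoization yourself with mutable. A C++ sketch follows (not the questioner's actual class, and not thread-safe as written):

#include <map>

class object
{
    mutable std::map<int, int> cache;                   // may change even in const calls

    int compute_value(int n) const { return n * n; }   // stand-in for the real work

public:
    int get_value(int n) const
    {
        std::map<int, int>::const_iterator it = cache.find(n);
        if (it != cache.end())
            return it->second;          // reuse the previously computed result
        int v = compute_value(n);
        cache[n] = v;                   // legal because cache is mutable
        return v;
    }
};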
Q: WPF - find actual top and left of an image after rotating it I am using WPF and I have an image of an 8.5" * 11" piece of paper on a Canvas. I am then rotating the image using a RotateTransform, with the axis being in the middle of the page (that is, RotateTransformOrigin="0.5,0.5"). How can I find the actual location on the canvas of the corners of the image? A: http://au.answers.yahoo.com/question/index?qid=20080607033505AAF75UC (this is the geometry way) A: _image.TranslatePoint(new Point(0, 0), _canvas); Will this do?
WPF - find actual top and left of an image after rotating it
I am using WPF and I have an image of an 8.5" * 11" piece of paper on a Canvas. I am then rotating the image using a RotateTransform, with the axis being in the middle of the page (that is, RotateTransformOrigin="0.5,0.5"). How can I find the actual location on the canvas of the corners of the image?
[ "http://au.answers.yahoo.com/question/index?qid=20080607033505AAF75UC\n(this is the geometry way)\n", "_image.TranslatePoint(new Point(0, 0), _canvas);\n\nWill this do?\n" ]
[ 1, 0 ]
[]
[]
[ "rotatetransform", "wpf" ]
stackoverflow_0000101949_rotatetransform_wpf.txt
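A short C# sketch of the TransformToVisual/TranslatePoint idea from the answers: it maps each image corner into the Canvas's coordinate space after the RotateTransform has been applied. The image and canvas parameters are placeholders, and the call must happen after layout has run (e.g. from a Loaded handler).

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

// Hypothetical helper: returns the rotated image's corners in canvas coordinates.
static Point[] GetImageCornersOnCanvas(Image image, Canvas canvas)
{
    GeneralTransform toCanvas = image.TransformToVisual(canvas);
    return new Point[]
    {
        toCanvas.Transform(new Point(0, 0)),                                  // top-left
        toCanvas.Transform(new Point(image.ActualWidth, 0)),                  // top-right
        toCanvas.Transform(new Point(image.ActualWidth, image.ActualHeight)), // bottom-right
        toCanvas.Transform(new Point(0, image.ActualHeight)),                 // bottom-left
    };
}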
Q: How do you bind in xaml to a dynamic xpath? I have a list box that displays items based on an XPath query. This XPath query changes depending on the user's selection elsewhere in the GUI. The XPath always refers to the same document. At the moment, I use some C# code behind to change the binding of the control to a new XPath expression. I'd like instead to bind in XAML to an XPath, then change the value of that XPath as required. How would I do that? A: I think that you're trying to over complicate the problem. But have you thought about allocating the XPath to a dynamic resource: <.... ={Binding XPath={DynamicResource:res resource-name}} ... /> The best place to read about all-binding is Beatriz's blog: http://www.beacosta.com/blog/
How do you bind in xaml to a dynamic xpath?
I have a list box that displays items based on an XPath query. This XPath query changes depending on the user's selection elsewhere in the GUI. The XPath always refers to the same document. At the moment, I use some C# code behind to change the binding of the control to a new XPath expression. I'd like instead to bind in XAML to an XPath, then change the value of that XPath as required. How would I do that?
[ "I think that you're trying to over complicate the problem. But have you thought about allocating the XPath to a dynamic resource:\n<.... ={Binding XPath={DynamicResource:res resource-name}} ... />\n\nThe best place to read about all-binding is Beatriz's blog: http://www.beacosta.com/blog/ \n" ]
[ 2 ]
[]
[]
[ ".net", "code_behind", "wpf", "xaml", "xpath" ]
stackoverflow_0000100500_.net_code_behind_wpf_xaml_xpath.txt
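For comparison, the code-behind variant the asker currently uses typically boils down to rebuilding the binding with a new XPath over the same XmlDataProvider. A hedged C# sketch follows; the resource key "MyXmlData" and the method name are assumptions, not part of the original question.

using System.Windows.Controls;
using System.Windows.Data;

// Hypothetical helper: re-point the ListBox at a different XPath.
void ApplyXPath(ListBox listBox, string xpath)
{
    XmlDataProvider source = (XmlDataProvider)listBox.FindResource("MyXmlData"); // assumed key
    Binding binding = new Binding();
    binding.Source = source;
    binding.XPath = xpath;
    listBox.SetBinding(ListBox.ItemsSourceProperty, binding);
}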
Q: How to create an AxHost solely in code [C#] I'm using a COM Wrapper to interact with Windows Media Player. It is using an AxHost to somehow wrap the player, for me it's all just magic under the hood^^ The AxHost.AttachInterfaces looks like this protected override void AttachInterfaces() { try { //Get the IOleObject for Windows Media Player. IOleObject oleObject = this.GetOcx() as IOleObject; //Set the Client Site for the WMP control. oleObject.SetClientSite(this as IOleClientSite); Player = this.GetOcx() as WMPLib.WindowsMediaPlayer; ... Everything is working fine as long as I host this AxHost in a Windows Forms control. But I can't hook up the events in a constructor. This for example doesn't work: public WMPMediaRating() { var remote = new WMPRemote.RemotedWindowsMediaPlayer(); _WMP = remote.Player; _WMP.MediaChange += new _WMPOCXEvents_MediaChangeEventHandler(_WMP_MediaChange); } remote.Player is always null and the program crashes with a NullReferenceException. The code in AttachInterfaces() is somehow only executed after the Form has been drawn, or after everything else is done. I tried calling AttachInterfaces() by hand, but that didn't work either because GetOcx() returns nothing. So how can I instantiate my AxHost-inherited control without Windows Forms, to use it for example in a console application? A: FYI: nobody stops you from using a hidden window in your console application. You'll not be able to host the media player in a non-windows application - it requires hosting. If you want to play some music you can use the Media Graphs to create a graph that renders (plays) your music file - it'll not require any extra hosting.
How to create an AxHost solely in code [C#]
I'm using a COM Wrapper to interact with Windows Media Player. It is using an AxHost to somehow wrap the player, for me it's all just magic under the hood^^ The AxHost.AttachInterfaces looks like this protected override void AttachInterfaces() { try { //Get the IOleObject for Windows Media Player. IOleObject oleObject = this.GetOcx() as IOleObject; //Set the Client Site for the WMP control. oleObject.SetClientSite(this as IOleClientSite); Player = this.GetOcx() as WMPLib.WindowsMediaPlayer; ... Everything is working fine as long as I host this AxHost in a Windows Forms control. But I can't hook up the events in a constructor. This for example doesn't work: public WMPMediaRating() { var remote = new WMPRemote.RemotedWindowsMediaPlayer(); _WMP = remote.Player; _WMP.MediaChange += new _WMPOCXEvents_MediaChangeEventHandler(_WMP_MediaChange); } remote.Player is always null and the program crashes with a NullReferenceException. The code in AttachInterfaces() is somehow only executed after the Form has been drawn, or after everything else is done. I tried calling AttachInterfaces() by hand, but that didn't work either because GetOcx() returns nothing. So how can I instantiate my AxHost-inherited control without Windows Forms, to use it for example in a console application?
[ "FYI: nobody stops you from using a hidden window in your console application.\nYou'll not be able to host the media player in a non-windows application - it requires hosting. If you want to play some music you can use the Media Graphs to create a graph that renders (plays) your music file - it'll not require any extra hosting.\n" ]
[ 1 ]
[]
[]
[ "activex", "c#", "ocx" ]
stackoverflow_0000106081_activex_c#_ocx.txt
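A heavily hedged C# sketch of the hidden-window idea from the answer. It reuses the RemotedWindowsMediaPlayer class from the question; the handle-forcing trick and threading details are assumptions and may need adjustment to work reliably.

using System;
using System.Windows.Forms;

class Program
{
    [STAThread]                     // ActiveX hosting generally wants an STA thread
    static void Main()
    {
        Form host = new Form();     // hidden host window, never shown to the user
        var wmp = new WMPRemote.RemotedWindowsMediaPlayer();   // class from the question
        host.Controls.Add(wmp);

        // Touching Handle forces window/OCX creation, which is when
        // AttachInterfaces runs and Player should stop being null.
        IntPtr force = wmp.Handle;

        // wmp.Player should be usable from here; hook MediaChange etc.

        Application.Run();          // message pump so COM events get delivered
    }
}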
Q: What is the shortcut to open a file within your solution in Visual Studio 2008? What is the shortcut to open a file within your solution in Visual Studio 2008 (+ Resharper)? A: Ctrl + T (ReSharper, Goto, type) will open a class file for you. Looks like Ctrl + Shift + T opens files. A: Depending on your keymap, Ctrl + Shift + N will open any file in the solution, or Ctrl + N will open any type. A: If the standard toolbar is visible the following will open any file in the solution (resharper is not necessary). Ctrl + D places you in the Find textbox. >of f will provide a dropdown with all files that start with f with path information after the filename to distinguish name collisions. Complete the filename, or arrow down to the correct one and hit enter to open it in the editor. A: It depends on the key mapping that you have set. With default keymapping: Do Ctrl + T to open a type and Ctrl + Shift + T to open a file. With IntelliJ like mapping : Do Ctrl + N to open a type and Ctrl + Shift + N to open a file. Visit the following links for all your key mapping. ReSharper 4 Default Keymap: Visual Studio scheme http://www.jetbrains.com/resharper/docs/ReSharper40DefaultKeymap.pdf ReSharper 4 Default Keymap: ReSharper 2.x / IDEA scheme http://www.jetbrains.com/resharper/docs/ReSharper40DefaultKeymap2.pdf A: I attended a presentation recently where Kirk Jackson showed how to add aliases to the command window in Visual Studio. Bear with me, it gets better. So it went like this: Open Command Window and type alias fo File.FileOpen Now in your editor window hit Ctrl + / to put the focus into the Find box on the toolbar If you use the prefix > this is command window (sneaky huh?) so type: fo and intellisense kicks in and shows you the names of the folders and files in the solution. The alias is persistent between Visual Studio sessions. Not exactly a keyboard shortcut but using this technique you can access any command in Visual Studio from the keyboard. You should also check out Kirk's list of essential VS tips and tricks
What is the shortcut to open a file within your solution in Visual Studio 2008?
What is the shortcut to open a file within your solution in Visual Studio 2008 (+ Resharper)?
[ "Ctrl + T (ReSharper, Goto, type) will open a class file for you.\nLooks like Ctrl + Shift + T opens files.\n", "Depending on your keymap, Ctrl + Shift + N will open any file in the solution, or Ctrl + N will open any type.\n", "If the standard toolbar is visible the following will open any file in the solution (resharper is not necessary).\nCtrl + D places you in the Find textbox. >of f will provide a dropdown with all files that start with f with path information after the filename to distinguish name collisions. Complete the filename, or arrow down to the correct one and hit enter to open it in the editor.\n", "It depends on the key mapping that you have set. \nWith default keymapping: Do Ctrl + T to open a type and Ctrl + Shift + T to open a file.\nWith IntelliJ like mapping : Do Ctrl + N to open a type and Ctrl + Shift + N to open a file.\nVisit the following links for all your key mapping.\nReSharper 4 Default Keymap: Visual Studio scheme\nhttp://www.jetbrains.com/resharper/docs/ReSharper40DefaultKeymap.pdf\nReSharper 4 Default Keymap: ReSharper 2.x / IDEA scheme\nhttp://www.jetbrains.com/resharper/docs/ReSharper40DefaultKeymap2.pdf\n", "I attended a presentation recently where Kirk Jackson showed how to add aliases to the command window in Visual Studio. Bear with me, it gets better.\nSo it went like this: \n\nOpen Command Window and type \n\nalias fo File.FileOpen\n\nNow in your editor window hit Ctrl + / to put the focus into the Find box on the toolbar\nIf you use the prefix > this is command window (sneaky huh?) so type:\n\nfo \n\n\nand intellisense kicks in and shows you the names of the folders and files in the solution.\nThe alias is persistent between Visual Studio sessions. \nNot exactly a keyboard shortcut but using this technique you can access any command in Visual Studio from the keyboard.\nYou should also check out Kirk's list of essential VS tips and tricks\n" ]
[ 12, 2, 2, 0, 0 ]
[]
[]
[ "keyboard_shortcuts", "resharper", "visual_studio" ]
stackoverflow_0000052108_keyboard_shortcuts_resharper_visual_studio.txt
Q: Algorithm to blend gradient filled corners in image I need to put an alpha blended gradient border around an image. My problem is in blending the corners so they are smooth where the horizontal and vertical gradients meet. I believe there is a standard algorithm that solves this problem. I think I even encountered it in school many years ago. But I have been unsuccessful in finding any reference to one in several web searches. (I have implemented a radial fill pattern in the corner, but the transition is still not smooth enough.) My questions: If there is a standard algorithm for this problem, what is the name of it, and even better, how is it implemented? Forgoing any standard algorithm, what's the best way to determine the desired pixel value to produce a smooth gradient in the corners? (Make a smooth transition from the vertical gradient to the horizontal gradient.) EDIT: So imagine I have an image I will insert on top of a larger image. The larger image is solid black and the smaller image is solid white. Before I insert it, I want to blend the smaller image into the larger one by setting the alpha value on the smaller image to create a transparent "border" around it so it "fades" into the larger image. Done correctly, I should have a smooth gradient from black to white, and I do everywhere except the corners and the inside edge. At the edge of the gradient border near the center of the image, the value would be 255 (not transparent). As the border approaches the outside edge, the alpha value approaches 0. In the corners of the image where the vert & horiz borders meet, you end up with what amounts to a diagonal line. I want to eliminate that line and have a smooth transition. What I need is an algorithm that determines the alpha value (0 - 255) for each pixel that overlaps in the corner of an image as the horizontal and vertical edges meet. A: Presumably you're multiplying the two gradients where they overlap, right? Dunno about a standard algorithm. But if you use a sigmoid shaped gradient instead of a linear one, that should eliminate the visible edge where the two overlap. A simple sigmoid function is smoothstep(t) = t*t*(3 - 2*t) where 0 <= t <= 1 A: If you don't need it to be resizable, then you can just use a simple alpha map. However, I once used a simple Gaussian fade, with the mean at the location where I wanted the last fully-opaque pixels to be. If that makes sense.
Algorithm to blend gradient filled corners in image
I need to put an alpha blended gradient border around an image. My problem is in blending the corners so they are smooth where the horizontal and vertical gradients meet. I believe there is a standard algorithm that solves this problem. I think I even encountered it in school many years ago. But I have been unsuccessful in finding any reference to one in several web searches. (I have implemented a radial fill pattern in the corner, but the transition is still not smooth enough.) My questions: If there is a standard algorithm for this problem, what is the name of it, and even better, how is it implemented? Forgoing any standard algorithm, what's the best way to determine the desired pixel value to produce a smooth gradient in the corners? (Make a smooth transition from the vertical gradient to the horizontal gradient.) EDIT: So imagine I have an image I will insert on top of a larger image. The larger image is solid black and the smaller image is solid white. Before I insert it, I want to blend the smaller image into the larger one by setting the alpha value on the smaller image to create a transparent "border" around it so it "fades" into the larger image. Done correctly, I should have a smooth gradient from black to white, and I do everywhere except the corners and the inside edge. At the edge of the gradient border near the center of the image, the value would be 255 (not transparent). As the border approaches the outside edge, the alpha value approaches 0. In the corners of the image where the vert & horiz borders meet, you end up with what amounts to a diagonal line. I want to eliminate that line and have a smooth transition. What I need is an algorithm that determines the alpha value (0 - 255) for each pixel that overlaps in the corner of an image as the horizontal and vertical edges meet.
[ "Presumably you're multiplying the two gradients where they overlap, right?\nDunno about a standard algorithm. But if you use a sigmoid shaped gradient instead of a linear one, that should eliminate the visible edge where the two overlap.\nA simple sigmoid function is smoothstep(t) = t*t*(3 - 2*t) where 0 <= t <= 1\n", "If you don't need it to be resizable, then you can just use a simple alpha map.\nHowever, I once used a simple Gaussian fade, with the mean at the location where I wanted the last fully-opaque pixels to be. If that makes sense.\n" ]
[ 2, 0 ]
[]
[]
[ "algorithm", "gradient", "graphics", "image", "image_processing" ]
stackoverflow_0000106609_algorithm_gradient_graphics_image_image_processing.txt
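To make the multiply-the-gradients suggestion concrete, a small Python sketch follows. It computes each border pixel's alpha as the product of two smoothstepped edge ramps, which is what removes the hard diagonal where the horizontal and vertical fades meet. The image and border sizes are made-up examples.

def smoothstep(t):
    """Sigmoid-shaped ramp from 0 to 1 with zero slope at both ends."""
    t = max(0.0, min(1.0, t))
    return t * t * (3 - 2 * t)

def border_alpha(x, y, width, height, border):
    """Alpha (0-255) for pixel (x, y) of a width x height image with a
    border-pixel fade on every edge."""
    # Distance to the nearest vertical / horizontal edge, as a 0..1 ramp.
    horiz = smoothstep(min(x, width - 1 - x) / float(border))
    vert = smoothstep(min(y, height - 1 - y) / float(border))
    # Multiplying (not taking the minimum of) the two ramps keeps the
    # corners smooth instead of creasing along the diagonal.
    return int(round(255 * horiz * vert))

if __name__ == "__main__":
    w, h, b = 200, 100, 20          # example image and border sizes
    alpha = [[border_alpha(x, y, w, h, b) for x in range(w)] for y in range(h)]
    print(alpha[0][0])              # corner: 0 (fully transparent)
    print(alpha[h // 2][w // 2])    # centre: 255 (fully opaque)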
Q: Are there any "nice to program" GUI toolkits for Python? I've played around with GTK, TK, wxPython, Cocoa, curses and others. They are are fairly horrible to use.. GTK/TK/wx/curses all seem to basically be direct-ports of the appropriate C libraries, and Cocoa basically mandates using both PyObjC and Interface Builder, both of which I dislike.. The Shoes GUI library for Ruby is great.. It's very sensibly designed, and very "rubyish", and borrows some nice-to-use things from web development (like using hex colours codes, or :color => rgb(128,0,0)) As the title says: are there any nice, "Pythonic" GUI toolkits? A: Have you looked at Qt/PyQt? Although PyQt is a direct port from the C++ library, I find it much more pythonic and nice to program with compared to the others you listed. It also has very good documentation. Dabo has a nice ui library implemented on top of wxPython. It's a framework intended mostly for database-centric applications, but the ui library can be used separately. There are/were several other attempts to create a very pythonic gui as a layer on top of PyGtk or wxPython, such as wax and PyGui, which seem to be "stuck" at various degrees of being complete. Also, an exhaustive list of Python GUI toolkits can be found here. A: Please check out Dabo, our framework for desktop applications. http://dabodev.com We have wrapped the wxPython toolkit for the UI classes, and replaced their ugly C++ style functions with simple properties. You mentioned assigning color: in Dabo, you would do it very simply, using your choice of: obj.BackColor = "red" obj.BackColor = (255, 0, 0) obj.BackColor = "FF0000" obj.BackColor = "#FF0000" Dabo understands all of these, and handles the differences for you automatically. I am one of the authors of Dabo, and would be happy to answer any other questions that you may have. --- Ed Leafe A: Seconding PyQt. Coupled with the book Rapid GUI Programming with Python and Qt, it's really easy to learn. A: I've used Glade with some success, though I didn't manage to wrap my head around creating anything really complex. It has a nice GUI builder and stores the forms as xml files that are loaded dynamically. Kind of like XAML afiak. A: I use pyGtk. I think wxPython is nice but it's too limited, and PyQt is, well, Qt. =)
Are there any "nice to program" GUI toolkits for Python?
I've played around with GTK, TK, wxPython, Cocoa, curses and others. They are fairly horrible to use.. GTK/TK/wx/curses all seem to basically be direct-ports of the appropriate C libraries, and Cocoa basically mandates using both PyObjC and Interface Builder, both of which I dislike.. The Shoes GUI library for Ruby is great.. It's very sensibly designed, and very "rubyish", and borrows some nice-to-use things from web development (like using hex colour codes, or :color => rgb(128,0,0)) As the title says: are there any nice, "Pythonic" GUI toolkits?
[ "Have you looked at Qt/PyQt? Although PyQt is a direct port from the C++ library, I find it much more pythonic and nice to program with compared to the others you listed. It also has very good documentation.\nDabo has a nice ui library implemented on top of wxPython. It's a framework intended mostly for database-centric applications, but the ui library can be used separately. \nThere are/were several other attempts to create a very pythonic gui as a layer on top of PyGtk or wxPython, such as wax and PyGui, which seem to be \"stuck\" at various degrees of being complete.\nAlso, an exhaustive list of Python GUI toolkits can be found here.\n", "Please check out Dabo, our framework for desktop applications. http://dabodev.com\nWe have wrapped the wxPython toolkit for the UI classes, and replaced their ugly C++ style functions with simple properties. You mentioned assigning color: in Dabo, you would do it very simply, using your choice of:\nobj.BackColor = \"red\"\nobj.BackColor = (255, 0, 0)\nobj.BackColor = \"FF0000\"\nobj.BackColor = \"#FF0000\"\n\nDabo understands all of these, and handles the differences for you automatically.\nI am one of the authors of Dabo, and would be happy to answer any other questions that you may have.\n--- Ed Leafe\n", "Seconding PyQt. Coupled with the book Rapid GUI Programming with Python and Qt, it's really easy to learn.\n", "I've used Glade with some success, though I didn't manage to wrap my head around creating anything really complex. It has a nice GUI builder and stores the forms as xml files that are loaded dynamically. Kind of like XAML afiak.\n", "I use pyGtk. I think wxPython is nice but it's too limited, and PyQt is, well, Qt. =)\n" ]
[ 14, 14, 2, 1, 1 ]
[]
[]
[ "python", "user_interface" ]
stackoverflow_0000035922_python_user_interface.txt
Q: Web-page template where content takes full height of view-port if has 1 line minus footer I am looking for a CSS-based web page template where the main content div occupies the full height of the view port (minus header and footer heights) when its content has few lines. The footer should be at the bottom of the viewport, rather than right below content, where it's more in the middle of the viewport. Content area needs to expand vertically to be joined with the top of footer. If the content requires more space than the viewport, then footer can be at the bottom of the web page (instead of the bottom of view-port) as standard web design. A link to a specific link or sample code appreciated. Don't mention a template site and tell me to do a search there. Must work at least in IE 6 and FF. If JavaScript is required, it's OK as long as if browser doesn't support JS, it defaults to putting the footer at the bottom of the content area without breaking the layout. Sketch for case #1: -------------- <----- header area | | -------------| | small content| | | view-port | | | | -------------| | footer area | | -------------- <----- all other cases: -------------- <----- header area | | -------------| | big content | | | view-port | | | | | | | | | <---- | -------------| footer area | -------------- A: Example here: http://www.rossdmartin.com/aitp/index.htm More in-depth resources: http://www.themaninblue.com/experiment/footerStickAlt/ http://ryanfait.com/sticky-footer/ A: Look for "Footer Stick Alt"... there was a long blog write up on how to make this work. Done by Cameron Adams a.k.a. "The Man in Blue".
Web-page template where content takes full height of view-port if has 1 line minus footer
I am looking for a CSS-based web page template where the main content div occupies the full height of the view port (minus header and footer heights) when its content has few lines. The footer should be at the bottom of the viewport, rather than right below content, where it's more in the middle of the viewport. Content area needs to expand vertically to be joined with the top of footer. If the content requires more space than the viewport, then footer can be at the bottom of the web page (instead of the bottom of view-port) as standard web design. A link to a specific link or sample code appreciated. Don't mention a template site and tell me to do a search there. Must work at least in IE 6 and FF. If JavaScript is required, it's OK as long as if browser doesn't support JS, it defaults to putting the footer at the bottom of the content area without breaking the layout. Sketch for case #1: -------------- <----- header area | | -------------| | small content| | | view-port | | | | -------------| | footer area | | -------------- <----- all other cases: -------------- <----- header area | | -------------| | big content | | | view-port | | | | | | | | | <---- | -------------| footer area | --------------
[ "Example here:\nhttp://www.rossdmartin.com/aitp/index.htm\nMore in-depth resources:\n\nhttp://www.themaninblue.com/experiment/footerStickAlt/\nhttp://ryanfait.com/sticky-footer/\n\n", "Look for \"Footer Stick Alt\"... there was a long blog write up on how to make this work.\nDone by Cameron Adams a.k.a. \"The Man in Blue\".\n" ]
[ 3, 2 ]
[]
[]
[ "css", "html" ]
stackoverflow_0000106646_css_html.txt
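A condensed sketch of the footerStickAlt / sticky-footer technique the answers link to: a min-height wrapper with a negative bottom margin equal to the footer height, plus a push element so content never slides under the footer. The 60px footer height is an arbitrary example.

<html>
<head>
<style type="text/css">
  html, body { height: 100%; margin: 0; }
  /* IE6 treats height as min-height, hence the double declaration */
  #wrapper { min-height: 100%; height: auto !important; height: 100%;
             margin-bottom: -60px; }   /* pull the footer up into the viewport */
  #push, #footer { height: 60px; }     /* both must equal the footer height */
</style>
</head>
<body>
  <div id="wrapper">
    <div id="header">header area</div>
    <div id="content">content</div>
    <div id="push"></div> <!-- reserves room so content clears the footer -->
  </div>
  <div id="footer">footer area</div>
</body>
</html>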
Q: Why unicode() uses str() on my object only with no encoding given? I start by creating a string variable with some non-ascii utf-8 encoded data on it: >>> text = 'á' >>> text '\xc3\xa1' >>> text.decode('utf-8') u'\xe1' Using unicode() on it raises errors... >>> unicode(text) Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128) ...but if I know the encoding I can use it as second parameter: >>> unicode(text, 'utf-8') u'\xe1' >>> unicode(text, 'utf-8') == text.decode('utf-8') True Now if I have a class that returns this text in the __str__() method: >>> class ReturnsEncoded(object): ... def __str__(self): ... return text ... >>> r = ReturnsEncoded() >>> str(r) '\xc3\xa1' unicode(r) seems to use str() on it, since it raises the same error as unicode(text) above: >>> unicode(r) Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128) Until now everything is as planned! But as no one would ever expect, unicode(r, 'utf-8') won't even try: >>> unicode(r, 'utf-8') Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: coercing to Unicode: need string or buffer, ReturnsEncoded found Why? Why this inconsistent behavior? Is it a bug? is it intended? Very awkward. A: The behaviour does seem confusing, but intensional. I reproduce here the entirety of the unicode documentation from the Python Built-In Functions documentation (for version 2.5.2, as I write this): unicode([object[, encoding [, errors]]]) Return the Unicode string version of object using one of the following modes: If encoding and/or errors are given, unicode() will decode the object which can either be an 8-bit string or a character buffer using the codec for encoding. The encoding parameter is a string giving the name of an encoding; if the encoding is not known, LookupError is raised. Error handling is done according to errors; this specifies the treatment of characters which are invalid in the input encoding. If errors is 'strict' (the default), a ValueError is raised on errors, while a value of 'ignore' causes errors to be silently ignored, and a value of 'replace' causes the official Unicode replacement character, U+FFFD, to be used to replace input characters which cannot be decoded. See also the codecs module. If no optional parameters are given, unicode() will mimic the behaviour of str() except that it returns Unicode strings instead of 8-bit strings. More precisely, if object is a Unicode string or subclass it will return that Unicode string without any additional decoding applied. For objects which provide a __unicode__() method, it will call this method without arguments to create a Unicode string. For all other objects, the 8-bit string version or representation is requested and then converted to a Unicode string using the codec for the default encoding in 'strict' mode. New in version 2.0. Changed in version 2.2: Support for __unicode__() added. So, when you call unicode(r, 'utf-8'), it requires an 8-bit string or a character buffer as the first argument, so it coerces your object using the __str__() method, and attempts to decode that using the utf-8 codec. 
Without the utf-8, the unicode() function looks for a __unicode__() method on your object, and not finding it, calls the __str__() method, as you suggested, attempting to use the default codec to convert to unicode. A: unicode does not guess the encoding of your text. If your object can print itself as unicode, define the __unicode__() method that returns a Unicode string. The secret is that unicode(r) is not actually calling __str__() itself. Instead, it's looking for a __unicode__() method. The default implementation of __unicode__() will call __str__() and then attempt to decode it using the ascii charset. When you pass the encoding, unicode() expects the first object to be something that can be decoded -- that is, an instance of basestring. Behavior is weird because it tries to decode as ascii if I don't pass 'utf-8'. But if I pass 'utf-8' it gives a different error... That's because when you specify "utf-8", it treats the first parameter as a string-like object to be decoded. Without it, it treats the parameter as an object to be coerced to unicode. I do not understand the confusion. If you know that the object's text attribute will always be UTF-8 encoded, just define __unicode__() and then everything will work fine.
Why unicode() uses str() on my object only with no encoding given?
I start by creating a string variable with some non-ascii utf-8 encoded data on it: >>> text = 'á' >>> text '\xc3\xa1' >>> text.decode('utf-8') u'\xe1' Using unicode() on it raises errors... >>> unicode(text) Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128) ...but if I know the encoding I can use it as second parameter: >>> unicode(text, 'utf-8') u'\xe1' >>> unicode(text, 'utf-8') == text.decode('utf-8') True Now if I have a class that returns this text in the __str__() method: >>> class ReturnsEncoded(object): ... def __str__(self): ... return text ... >>> r = ReturnsEncoded() >>> str(r) '\xc3\xa1' unicode(r) seems to use str() on it, since it raises the same error as unicode(text) above: >>> unicode(r) Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128) Until now everything is as planned! But as no one would ever expect, unicode(r, 'utf-8') won't even try: >>> unicode(r, 'utf-8') Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: coercing to Unicode: need string or buffer, ReturnsEncoded found Why? Why this inconsistent behavior? Is it a bug? is it intended? Very awkward.
[ "The behaviour does seem confusing, but intensional. I reproduce here the entirety of the unicode documentation from the Python Built-In Functions documentation (for version 2.5.2, as I write this):\n\nunicode([object[, encoding [, errors]]])\nReturn the Unicode string version of object using one of the following modes:\nIf encoding and/or errors are given, unicode() will decode the\n object which can either be an 8-bit string or a character buffer\n using the codec for encoding. The encoding parameter is a string\n giving the name of an encoding; if the encoding is not known,\n LookupError is raised. Error handling is done according to\n errors; this specifies the treatment of characters which are\n invalid in the input encoding. If errors is 'strict' (the\n default), a ValueError is raised on errors, while a value of\n 'ignore' causes errors to be silently ignored, and a value of\n 'replace' causes the official Unicode replacement character,\n U+FFFD, to be used to replace input characters which cannot be\n decoded. See also the codecs module.\nIf no optional parameters are given, unicode() will mimic the\n behaviour of str() except that it returns Unicode strings\n instead of 8-bit strings. More precisely, if object is a Unicode\n string or subclass it will return that Unicode string without\n any additional decoding applied.\nFor objects which provide a __unicode__() method, it will call\n this method without arguments to create a Unicode string. For\n all other objects, the 8-bit string version or representation is\n requested and then converted to a Unicode string using the codec\n for the default encoding in 'strict' mode.\nNew in version 2.0. Changed in version 2.2: Support for __unicode__() added. \n\nSo, when you call unicode(r, 'utf-8'), it requires an 8-bit string or a character buffer as the first argument, so it coerces your object using the __str__() method, and attempts to decode that using the utf-8 codec. Without the utf-8, the unicode() function looks for a for a __unicode__() method on your object, and not finding it, calls the __str__() method, as you suggested, attempting to use the default codec to convert to unicode.\n", "unicode does not guess the encoding of your text. If your object can print itself as unicode, define the __unicode__() method that returns a Unicode string.\n\nThe secret is that unicode(r) is not actually calling __str__() itself. Instead, it's looking for a __unicode__() method. The default implementation of __unicode__() will call __str__() and then attempt to decode it using the ascii charset. When you pass the encoding, unicode() expects the first object to be something that can be decoded -- that is, an instance of basestring.\n\n\nBehavior is weird because it tries to decode as ascii if I don't pass 'utf-8'. But if I pass 'utf-8' it gives a different error...\n\nThat's because when you specify \"utf-8\", it treats the first parameter as a string-like object to be decoded. Without it, it treats the parameter as an object to be coerced to unicode.\nI do not understand the confusion. If you know that the object's text attribute will always be UTF-8 encoded, just define __unicode__() and then everything will work fine.\n" ]
[ 8, 5 ]
[]
[]
[ "encoding", "python", "unicode" ]
stackoverflow_0000106630_encoding_python_unicode.txt
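A minimal illustration (added here, not part of the original thread) of the fix the answers above point to: give the class a __unicode__() method so that unicode(obj) no longer falls back to __str__() plus the ascii codec. Python 2 only; the names mirror the question's example.

# Hypothetical Python 2 sketch of the __unicode__() fix discussed above.
text = '\xc3\xa1'  # UTF-8 bytes for 'a' with an acute accent

class ReturnsEncoded(object):
    def __str__(self):
        return text                  # 8-bit UTF-8 byte string, as before

    def __unicode__(self):
        return text.decode('utf-8')  # unicode(obj) now calls this instead

r = ReturnsEncoded()
print repr(str(r))      # '\xc3\xa1'
print repr(unicode(r))  # u'\xe1' -- no UnicodeDecodeError anymore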
Q: ssh-agent with passwords without spawning too many processes I use ssh-agent with password-protected keys on Linux. Every time I log into a certain machine, I do this: eval `ssh-agent` && ssh-add This works well enough, but every time I log in and do this, I create another ssh-agent. Once in a while, I will do a killall ssh-agent to reap them. Is there a simple way to reuse the same ssh-agent process across different sessions? A: have a look at Keychain. It was written b people in a similar situation to yourself. Keychain A: How much control do you have over this machine? One answer would be to run ssh-agent as a daemon process. Other options are explained on this web page, basically testing to see if the agent is around and then running it if it's not. To reproduce one of the ideas here: SSH_ENV="$HOME/.ssh/environment" function start_agent { echo "Initialising new SSH agent..." /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}" echo succeeded chmod 600 "${SSH_ENV}" . "${SSH_ENV}" > /dev/null /usr/bin/ssh-add; } # Source SSH settings, if applicable if [ -f "${SSH_ENV}" ]; then . "${SSH_ENV}" > /dev/null #ps ${SSH_AGENT_PID} doesn’t work under cywgin ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || { start_agent; } else start_agent; fi A: You can do: ssh-agent $SHELL This will cause ssh-agent to exit when the shell exits. They still won't be shared across sessions, but at least they will go away when you do. A: Depending on which shell you use, you can set different profiles for login shells and mere regular new shells. In general you want to start ssh-agent for login shells, but not for every subshell. In bash these files would be .bashrc and .bash_login, for example. Most desktop linuxes these days run ssh-agent for you. You just add your key with ssh-add, and then forward the keys over to remote ssh sessions by running ssh -A
ssh-agent with passwords without spawning too many processes
I use ssh-agent with password-protected keys on Linux. Every time I log into a certain machine, I do this: eval `ssh-agent` && ssh-add This works well enough, but every time I log in and do this, I create another ssh-agent. Once in a while, I will do a killall ssh-agent to reap them. Is there a simple way to reuse the same ssh-agent process across different sessions?
[ "have a look at Keychain. It was written b people in a similar situation to yourself.\nKeychain\n", "How much control do you have over this machine? One answer would be to run ssh-agent as a daemon process. Other options are explained on this web page, basically testing to see if the agent is around and then running it if it's not.\nTo reproduce one of the ideas here:\nSSH_ENV=\"$HOME/.ssh/environment\"\n\nfunction start_agent {\n echo \"Initialising new SSH agent...\"\n /usr/bin/ssh-agent | sed 's/^echo/#echo/' > \"${SSH_ENV}\"\n echo succeeded\n chmod 600 \"${SSH_ENV}\"\n . \"${SSH_ENV}\" > /dev/null\n /usr/bin/ssh-add;\n}\n\n# Source SSH settings, if applicable\n\nif [ -f \"${SSH_ENV}\" ]; then\n . \"${SSH_ENV}\" > /dev/null\n #ps ${SSH_AGENT_PID} doesn’t work under cywgin\n ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {\n start_agent;\n }\nelse\n start_agent;\nfi \n\n", "You can do:\nssh-agent $SHELL\n\nThis will cause ssh-agent to exit when the shell exits. They still won't be shared across sessions, but at least they will go away when you do.\n", "Depending on which shell you use, you can set different profiles for login shells and mere regular new shells. In general you want to start ssh-agent for login shells, but not for every subshell. In bash these files would be .bashrc and .bash_login, for example.\nMost desktop linuxes these days run ssh-agent for you. You just add your key with ssh-add, and then forward the keys over to remote ssh sessions by running \n\nssh -A\n\n" ]
[ 5, 3, 2, 0 ]
[]
[]
[ "linux", "ssh" ]
stackoverflow_0000102382_linux_ssh.txt
Q: Fetch top X users, plus a specific user (if they're not in the top X) I have a list of ranked users, and would like to select the top 50. I also want to make sure one particular user is in this result set, even if they aren't in the top 50. Is there a sensible way to do this in a single mysql query? Or should I just check the results for the particular user and fetch him separately, if necessary? Thanks! A: If I understand correctly, you could do: select * from users order by max(rank) desc limit 0, 49 union select * from users where user = x This way you get 49 top users plus your particular user. A: Regardless if a single, fancy SQL query could be made, the most maintainable code would probably be two queries: select user from users where id = "fred"; select user from users where id != "fred" order by rank limit 49; Of course "fred" (or whomever) would usually be replaced by a placeholder but the specifics depend on the environment. A: declare @topUsers table( userId int primary key, username varchar(25) ) insert into @topUsers select top 50 userId, userName from Users order by rank desc insert into @topUsers select userID, userName from Users where userID = 1234 --userID of special user select * from @topUsers A: The simplest solution depends on your requirements, and what your database supports. If you don't mind the possibility of having duplicate results, then a simple union (as Mariano Conti demonstrated) is fine. Otherwise, you could do something like select distinct <columnlist> from (select * from users order by max(rank) desc limit 0, 49 union select * from users where user = x) if you database supports it.
Fetch top X users, plus a specific user (if they're not in the top X)
I have a list of ranked users, and would like to select the top 50. I also want to make sure one particular user is in this result set, even if they aren't in the top 50. Is there a sensible way to do this in a single mysql query? Or should I just check the results for the particular user and fetch him separately, if necessary? Thanks!
[ "If I understand correctly, you could do:\nselect * from users order by max(rank) desc limit 0, 49 \nunion \nselect * from users where user = x\n\nThis way you get 49 top users plus your particular user.\n", "Regardless if a single, fancy SQL query could be made, the most maintainable code would probably be two queries:\nselect user from users where id = \"fred\"; \nselect user from users where id != \"fred\" order by rank limit 49;\n\nOf course \"fred\" (or whomever) would usually be replaced by a placeholder but the specifics depend on the environment. \n", "declare @topUsers table(\n userId int primary key,\n username varchar(25)\n)\ninsert into @topUsers\nselect top 50 \n userId, \n userName\nfrom Users\norder by rank desc\n\ninsert into @topUsers\nselect\n userID,\n userName\nfrom Users\nwhere userID = 1234 --userID of special user\n\nselect * from @topUsers\n\n", "The simplest solution depends on your requirements, and what your database supports.\nIf you don't mind the possibility of having duplicate results, then a simple union (as Mariano Conti demonstrated) is fine.\nOtherwise, you could do something like \nselect distinct <columnlist>\nfrom (select * from users order by max(rank) desc limit 0, 49 \n union \n select * from users where user = x)\n\nif you database supports it.\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "database", "mysql", "sql" ]
stackoverflow_0000106400_database_mysql_sql.txt
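A small illustrative sketch (my addition, not from the answers above) of the two-query approach, driven from Python through a DB-API connection such as MySQLdb; the table and column names (users, id, name, rank) are assumptions carried over from the answers, and the special user is fetched first so the remaining slots go to the top-ranked others.

# Hypothetical sketch of the "special user first, then the rest" approach.
def top_users_plus(conn, special_id, limit=50):
    cur = conn.cursor()
    # Always include the one user we care about.
    cur.execute("SELECT id, name FROM users WHERE id = %s", (special_id,))
    rows = list(cur.fetchall())
    # Fill the remaining slots with the highest-ranked other users.
    cur.execute(
        "SELECT id, name FROM users WHERE id != %s"
        " ORDER BY rank DESC LIMIT %s",
        (special_id, limit - len(rows)))
    rows.extend(cur.fetchall())
    return rows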
Q: Can Spring Parse and Inject Properties Files? I already know how to: Load properties files into my Spring configuration using: <context:property-placeholder location="aaa/bbb/ccc/stuff.properties"/> Build properties objects on the fly using: <props><prop key="abc">some value</prop></props> But what I can't do, and would be really useful, is to have Spring load a properties file and then build the matching properties object. I could then inject this into a bean in the normal way. I've searched for this elsewhere without success. Any ideas? A: Take a look at util:properties <util:properties id="myProperties" location="classpath:com/foo/my.properties"/> Then, to inject the Properties into your Spring-managed Bean, it's as simple as this: @Resource(name = "myProperties") private Properties myProperties;
Can Spring Parse and Inject Properties Files?
I already know how to: Load properties files into my Spring configuration using: <context:property-placeholder location="aaa/bbb/ccc/stuff.properties"/> Build properties objects on the fly using: <props><prop key="abc">some value</prop></props> But what I can't do, and would be really useful, is to have Spring load a properties file and then build the matching properties object. I could then inject this into a bean in the normal way. I've searched for this elsewhere without success. Any ideas?
[ "Take a look at util:properties\n<util:properties id=\"myProperties\" location=\"classpath:com/foo/my.properties\"/>\n\nThen, to inject the Properties into your Spring-managed Bean, it's as simple as this:\n@Resource(name = \"myProperties\")\nprivate Properties myProperties;\n\n" ]
[ 17 ]
[]
[]
[ "configuration_files", "java", "spring" ]
stackoverflow_0000106402_configuration_files_java_spring.txt
Q: How do I call a .NET assembly from C/C++? Suppose I am writing an application in C++ and C#. I want to write the low level parts in C++ and write the high level logic in C#. How can I load a .NET assembly from my C++ program and start calling methods and accessing the properties of my C# classes? A: You should really look into C++/CLI. It makes tasks like this nearly trivial. Otherwise, you'll have to generate COM wrappers around the C# code and have your C++ app call the COM wrappers. A: [Guid("123565C4-C5FA-4512-A560-1D47F9FDFA20")] public interface IConfig { [DispId(1)] string Destination{ get; } [DispId(2)] void Unserialize(); [DispId(3)] void Serialize(); } [ComVisible(true)] [Guid("12AC8095-BD27-4de8-A30B-991940666927")] [ClassInterface(ClassInterfaceType.None)] public sealed class Config : IConfig { public Config() { } public string Destination { get { return ""; } } public void Serialize() { } public void Unserialize() { } } After that, you need to regasm your assembly. Regasm will add the necessary registry entries to allow your .NET component to be see as a COM Component. After, you can call your .NET Component in C++ in the same way as any other COM component. A: I would definitely investigate C++/CLI for this and avoid COM and all the registration hassles that tends to produce. What is the motivation for using C++? If it is simply style then you might find you can write everything in C++/CLI. If it is performance then calling back and forth between managed C++ and unmanaged code is relatively straight forward. But it is never going to be transparent. You can't pass a managed pointer to unmanaged code first without pinning it so that the garbage collector won't move it, and of course unmanaged code won't know about your managed types. But managed (C++) code can know about your unmanaged types. One other thing to note is that C++/CLI assemblies that include unmanaged code will be architecture specific. You will need separates builds for x86 and x64 (and IA64). A: If you can have both managed and unmanaged code in your process, you can create a C++ class with virtual functions. Implement the class with mixed mode C++/CLI. Inject the implementation to your C++ code, so that the (high-level) implementation can be called from your (low-level) C++ code. A: You can wrap the .NET component in a COM component - which is quite easy with the .NET tools - and call it via COM. A: If the low level parts in in C++ then typically you call that from the C# code passing in the values that are needed. This should work in the standard way that you're probably accustomed to. You'll need to read up on marshalling for example. You could look at this blog to get some concrete details. A: Create your .NET assembly as normal, but be sure to mark the class with the ClassInterface(ClassInterfaceType.AutoDual) and be sure an assembly info SetAssemblyAtribute to ComVisible( true ). Then, create the COM wrapper with REGASM: regasm mydll.dll /tlb:mydll.tbl /codebase f:_code\ClassLibraryForCom be sure to use the /codebase directive -- it is necessary if you aren't going to give the assembly a strong name. rp A: Since C# can import C++ standard exports, it might be easier to load up your C++ dll inside of a C# application instead of using COM from C++. See documentation for System.Runtime.InteropServices.DllImport. 
Also, here is a complete list of the types of Interop that you can do between managed and unmanaged code: http://blogs.msdn.com/deeptanshuv/archive/2005/06/26/432870.aspx In a nutshell: (a) Using COM-Interop (b) Using imports/pinvoke (explicit method calls) (c) IJW and MC++ apps : MC++ & IJW apps can freely call back and forth to each other. (d) Hosting. This is rare, but the CLR can be hosted by an unmanaged app which means that the runtime invokes a bunch of hosting callbacks. A: I found this link to embedding Mono: http://www.mono-project.com/Embedding_Mono It provides what seems to be a pretty straightforward interface for interacting with assemblies. This could be an attractive option, especially if you want to be cross-platform
How do I call a .NET assembly from C/C++?
Suppose I am writing an application in C++ and C#. I want to write the low level parts in C++ and write the high level logic in C#. How can I load a .NET assembly from my C++ program and start calling methods and accessing the properties of my C# classes?
[ "You should really look into C++/CLI. It makes tasks like this nearly trivial. \nOtherwise, you'll have to generate COM wrappers around the C# code and have your C++ app call the COM wrappers.\n", "[Guid(\"123565C4-C5FA-4512-A560-1D47F9FDFA20\")]\npublic interface IConfig\n{\n [DispId(1)]\n string Destination{ get; }\n\n [DispId(2)]\n void Unserialize();\n\n [DispId(3)]\n void Serialize();\n}\n\n[ComVisible(true)]\n[Guid(\"12AC8095-BD27-4de8-A30B-991940666927\")]\n[ClassInterface(ClassInterfaceType.None)]\npublic sealed class Config : IConfig\n{\n public Config()\n {\n }\n\n public string Destination\n {\n get { return \"\"; }\n }\n\n public void Serialize()\n {\n }\n\n public void Unserialize()\n {\n }\n}\n\nAfter that, you need to regasm your assembly. Regasm will add the necessary registry entries to allow your .NET component to be see as a COM Component. After, you can call your .NET Component in C++ in the same way as any other COM component.\n", "I would definitely investigate C++/CLI for this and avoid COM and all the registration hassles that tends to produce.\nWhat is the motivation for using C++? If it is simply style then you might find you can write everything in C++/CLI. If it is performance then calling back and forth between managed C++ and unmanaged code is relatively straight forward. But it is never going to be transparent. You can't pass a managed pointer to unmanaged code first without pinning it so that the garbage collector won't move it, and of course unmanaged code won't know about your managed types. But managed (C++) code can know about your unmanaged types.\nOne other thing to note is that C++/CLI assemblies that include unmanaged code will be architecture specific. You will need separates builds for x86 and x64 (and IA64).\n", "If you can have both managed and unmanaged code in your process, you can create a C++ class with virtual functions. Implement the class with mixed mode C++/CLI. Inject the implementation to your C++ code, so that the (high-level) implementation can be called from your (low-level) C++ code.\n", "You can wrap the .NET component in a COM component - which is quite easy with the .NET tools - and call it via COM.\n", "If the low level parts in in C++ then typically you call that from the C# code passing in the values that are needed. This should work in the standard way that you're probably accustomed to. You'll need to read up on marshalling for example.\nYou could look at this blog to get some concrete details.\n", "Create your .NET assembly as normal, but be sure to mark the class with the ClassInterface(ClassInterfaceType.AutoDual) and be sure an assembly info SetAssemblyAtribute to ComVisible( true ).\nThen, create the COM wrapper with REGASM:\nregasm mydll.dll /tlb:mydll.tbl /codebase f:_code\\ClassLibraryForCom\nbe sure to use the /codebase directive -- it is necessary if you aren't going to give the assembly a strong name. \nrp\n", "Since C# can import C++ standard exports, it might be easier to load up your C++ dll inside of a C# application instead of using COM from C++.\nSee documentation for System.Runtime.InteropServices.DllImport.\nAlso, here is a complete list of the types of Interop that you can do between managed and unmanaged code:\nhttp://blogs.msdn.com/deeptanshuv/archive/2005/06/26/432870.aspx\nIn a nutshell:\n(a) Using COM-Interop\n(b) Using imports/pinvoke (explicit method calls)\n(c) IJW and MC++ apps : MC++ & IJW apps can freely call back and forth to each other.\n(d) Hosting. 
This is rare, but the CLR can be hosted by an unmanaged app which means that the runtime invokes a bunch of hosting callbacks.\n", "I found this link to embedding Mono:\nhttp://www.mono-project.com/Embedding_Mono\nIt provides what seems to be a pretty straightforward interface for interacting with assemblies. This could be an attractive option, especially if you want to be cross-platform\n" ]
[ 13, 12, 4, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ ".net", "c#", "c++" ]
stackoverflow_0000106033_.net_c#_c++.txt
Q: Annotating YouTube videos programmatically I want to be able to display a normal YouTube video with overlaid annotations, consisting of coloured rectangles for each frame. The only requirement is that this should be done programmatically. YouTube has annotations now, but require you to use their front end to create them by hand. I want to be able to generate them. What's the best way of doing this? Some ideas: Build your own Flash player (ew?) Somehow draw over the YouTube Flash player. Will this work? Reverse engineer & hijack YouTube's annotation system. Either messing with the local files or redirecting its attempt to download the annotations. (using Greasemonkey? Firefox plugin?) Idea that doesn't count: download the video A: YouTube provides an ActionScript API. Using this, you could load the videos into Flash using their API and then have your Flash app create the annotations on a layer above the video. Or, alternatively, if you want to stay away from creating something in Flash, using YouTube's JavaScript API you could draw HTML DIVs over the YouTube player on your web page. Just remember when you embed the player to have WMODE="transparent" in the params list. So using the example from YouTube: <script type="text/javascript"> var params = { allowScriptAccess: "always" }; var atts = { id: "myytplayer", wmode: "transparent" }; swfobject.embedSWF("http://www.youtube.com/v/VIDEO_ID&enablejsapi=1&playerapiid=ytplayer", "ytapiplayer", "425", "356", "8", null, null, params, atts); </script> And then you should be able to draw your annotations over the YouTube movie using CSS/DHTML. A: Joe Berkovitz has written a sample application called ReviewTube which "Allows users to create time-based subtitles for any YouTube video, a la closed captioning. These captions become publicly accessible, and visitors to the site can browse the set of videos with captions. Think of it as a “subtitle graffiti wall” for YouTube!" The app is the example used to demonstrate the MVCS framework/approach for building Flex applications. http://www.joeberkovitz.com/blog/reviewtube/ Not sure if this will help with the colored rectangles and whatnot, but it's a decent place to start. A: The player itself has a Javascript API that might be useful for syncing the video if you choose to make your own annotation-thingamajig.
Annotating YouTube videos programmatically
I want to be able to display a normal YouTube video with overlaid annotations, consisting of coloured rectangles for each frame. The only requirement is that this should be done programmatically. YouTube has annotations now, but requires you to use their front end to create them by hand. I want to be able to generate them. What's the best way of doing this? Some ideas: Build your own Flash player (ew?) Somehow draw over the YouTube Flash player. Will this work? Reverse engineer & hijack YouTube's annotation system. Either messing with the local files or redirecting its attempt to download the annotations. (using Greasemonkey? Firefox plugin?) Idea that doesn't count: download the video
[ "YouTube provides an ActionScript API.\nUsing this, you could load the videos into Flash using their API and then have your Flash app create the annotations on a layer above the video. \nOr, alternatively, if you want to stay away from creating something in Flash, using YouTube's JavaScript API you could draw HTML DIVs over the YouTube player on your web page. Just remember when you embed the player to have WMODE=\"transparent\" in the params list. \nSo using the example from YouTube:\n <script type=\"text/javascript\">\n\n var params = { allowScriptAccess: \"always\" };\n var atts = { id: \"myytplayer\", wmode: \"transparent\" };\n swfobject.embedSWF(\"http://www.youtube.com/v/VIDEO_ID&enablejsapi=1&playerapiid=ytplayer\", \n \"ytapiplayer\", \"425\", \"356\", \"8\", null, null, params, atts);\n\n </script>\n\nAnd then you should be able to draw your annotations over the YouTube movie using CSS/DHTML. \n", "Joe Berkovitz has written a sample application called ReviewTube which \"Allows users to create time-based subtitles for any YouTube video, a la closed captioning. These captions become publicly accessible, and visitors to the site can browse the set of videos with captions. Think of it as a “subtitle graffiti wall” for YouTube!\"\nThe app is the example used to demonstrate the MVCS framework/approach for building Flex applications.\nhttp://www.joeberkovitz.com/blog/reviewtube/\nNot sure if this will help with the colored rectangles and whatnot, but it's a decent place to start.\n", "The player itself has a Javascript API that might be useful for syncing the video if you choose to make your own annotation-thingamajig.\n" ]
[ 18, 8, 5 ]
[]
[]
[ "reverse_engineering", "youtube" ]
stackoverflow_0000000175_reverse_engineering_youtube.txt
Q: How can I unit test Flex applications from within the IDE or a build script? I'm currently working on an application with a frontend written in Adobe Flex 3. I'm aware of FlexUnit but what I'd really like is a unit test runner for Ant/NAnt and a runner that integrates with the Flex Builder IDE (AKA Eclipse). Does one exist? Also, are there any other resources on how to do Flex development "the right way" besides the Cairngorm microarchitecture example? A: The dpUint testing framework has a test runner built with AIR which can be integrated with a build script. There is also my FlexUnit automation kit which does more or less the same for FlexUnit. It has an Ant macro that makes it possible to run the tests as a part of an Ant script, for example: <target name="run-tests" depends="compile-tests"> <flexunit swf="${build.home}/tests.swf" failonerror="true"/> </target> A: On my project we're using Maven to build both our Flex RIA and the Java-based back end. In order to build and test the Flex app we use the flex-mojos maven plugins. They do a great job for us and I would highly recommend using Maven over Ant. That being said, if you're already using Ant it can be a little tricky to transition over to Maven. So if you're in that position I would recommend using the flexunit tasks available here: Ant Task Both of these libraries do basically the same thing, they launch a generated flexunit test runner mxml application in a window and open a socket connection back to the build process using a JUnit test runner. Amazingly enough it works pretty well. The only problem is that you can't run it headless so if you want to run the build from a CI server you have to make sure that process has the ability to launch new windows otherwise it won't work. A: About how to develop Flex applications the right way, I wouldn't look too much at the Cairngorm framework. It does claim to show "best practice" and so on, but I would say that the opposite is true. It's based around the use of global variables, and other things you should try to avoid. I've outlined some of the problems on my blog. I would suggest that you look at the Mate framework instead, which has good documentation and good examples to get you going. It uses Flex to its full potential, doesn't rely on global variables as Cairngorm and PureMVC, and it makes it possible to write much more decoupled code. A: An alternative to FlexUnit is the AsUnit testing tools. There are versions for actionscript 2 and 3. It also has good integration with Project Sprouts, which is a build tool for Flex and Flash similar to ant, however it uses ruby rake tasks and includes excellent dependency management along the lines of maven. No IDE integration that I know of however.
How can I unit test Flex applications from within the IDE or a build script?
I'm currently working on an application with a frontend written in Adobe Flex 3. I'm aware of FlexUnit but what I'd really like is a unit test runner for Ant/NAnt and a runner that integrates with the Flex Builder IDE (AKA Eclipse). Does one exist? Also, are there any other resources on how to do Flex development "the right way" besides the Cairngorm microarchitecture example?
[ "The dpUint testing framework has a test runner built with AIR which can be integrated with a build script.\nThere is also my FlexUnit automation kit which does more or less the same for FlexUnit. It has an Ant macro that makes it possible to run the tests as a part of an Ant script, for example:\n<target name=\"run-tests\" depends=\"compile-tests\">\n <flexunit swf=\"${build.home}/tests.swf\" failonerror=\"true\"/>\n</target>\n\n", "On my project we're using Maven to build both our Flex RIA and the Java-based back end. In order to build and test the Flex app we use the flex-mojos maven plugins. They do a great job for us and I would highly recommend using Maven over Ant.\nThat being said, if you're already using Ant it can be a little tricky to transition over to Maven. So if you're in that position I would recommend using the flexunit tasks available here: Ant Task\nBoth of these libraries do basically the same thing, they launch a generated flexunit test runner mxml application in a window and open a socket connection back to the build process using a JUnit test runner. Amazingly enough it works pretty well. The only problem is that you can't run it headless so if you want to run the build from a CI server you have to make sure that process has the ability to launch new windows otherwise it won't work.\n", "About how to develop Flex applications the right way, I wouldn't look too much at the Cairngorm framework. It does claim to show \"best practice\" and so on, but I would say that the opposite is true. It's based around the use of global variables, and other things you should try to avoid. I've outlined some of the problems on my blog.\nI would suggest that you look at the Mate framework instead, which has good documentation and good examples to get you going. It uses Flex to its full potential, doesn't rely on global variables as Cairngorm and PureMVC, and it makes it possible to write much more decoupled code.\n", "An alternative to FlexUnit is the AsUnit testing tools. There are versions for actionscript 2 and 3. It also has good integration with Project Sprouts, which is a build tool for Flex and Flash similar to ant, however it uses ruby rake tasks and includes excellent dependency management along the lines of maven.\nNo IDE integration that I know of however.\n" ]
[ 5, 3, 2, 0 ]
[]
[]
[ "apache_flex", "build_automation", "cairngorm", "eclipse", "unit_testing" ]
stackoverflow_0000002222_apache_flex_build_automation_cairngorm_eclipse_unit_testing.txt
Q: How do I make WISA act like LAMP (Protecting .mp3s on IIS) I have created a few small flash widgets that stream .mp3 audio from an Apache/php host. The mp3 file cannot be directly accessed and does not save it self to the browsers cache. To do this I set the mp3 file permission on the host to "owner: read/write" (numeric value 600). This makes it so that only my .php file can read the .mp3. Then I make a request to my php file from my ActionScript and it streams the mp3 to my widget. (If the client/user looks in the browsers cache the mp3 file is not found as desired) This is the php code that streams the file: <?php ob_start(); header("Expires: Mon, 20 Dec 1977 00:00:00 GMT"); header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT"); header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); header("Pragma: no-cache"); header("Content-Type: audio/mpeg"); @readfile($_GET["file"]); ob_end_flush(); ?> Does anyone know how to reproduce this behavior using IIS/ASP.Net 1.) Make it so a file is only accessible to a file on the server. 2.) Stream that file using an .ASPX or .ASHX? A: You're not really protecting the MP3s, just obfuscating them. Anyone can still save them, especially if they just fire up an HTTP debugger like Fiddler to figure out what HTTP calls are being made. The fact that you've set them to not cache and to go through a PHP script doesn't help much. To get this same effect using ASP.NET, you'd write an HTTPHandler (probably just off an .ashx), set up all the headers the same way using context.Response.Headers, then load the .mp3 file using System.IO.FileStream and send that to context.Response.OutputStream. Look up System.Web.HTTPHandler, System.IO.FileStream, and System.Web.HTTPResponse on MSDN for more info.
How do I make WISA act like LAMP (Protecting .mp3s on IIS)
I have created a few small flash widgets that stream .mp3 audio from an Apache/php host. The mp3 file cannot be directly accessed and does not save itself to the browser's cache. To do this I set the mp3 file permission on the host to "owner: read/write" (numeric value 600). This makes it so that only my .php file can read the .mp3. Then I make a request to my php file from my ActionScript and it streams the mp3 to my widget. (If the client/user looks in the browser's cache the mp3 file is not found as desired) This is the php code that streams the file: <?php ob_start(); header("Expires: Mon, 20 Dec 1977 00:00:00 GMT"); header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT"); header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); header("Pragma: no-cache"); header("Content-Type: audio/mpeg"); @readfile($_GET["file"]); ob_end_flush(); ?> Does anyone know how to reproduce this behavior using IIS/ASP.Net? 1.) Make it so a file is only accessible to a file on the server. 2.) Stream that file using an .ASPX or .ASHX?
[ "You're not really protecting the MP3s, just obfuscating them. Anyone can still save them, especially if they just fire up an HTTP debugger like Fiddler to figure out what HTTP calls are being made. The fact that you've set them to not cache and to go through a PHP script doesn't help much.\nTo get this same effect using ASP.NET, you'd write an HTTPHandler (probably just off an .ashx), set up all the headers the same way using context.Response.Headers, then load the .mp3 file using System.IO.FileStream and send that to context.Response.OutputStream. Look up System.Web.HTTPHandler, System.IO.FileStream, and System.Web.HTTPResponse on MSDN for more info.\n" ]
[ 2 ]
[]
[]
[ "apache", "asp.net", "iis", "no_cache", "php" ]
stackoverflow_0000106871_apache_asp.net_iis_no_cache_php.txt
Q: What tools do you use to monitor a web service? From basic things likes page views per second to more advanced stuff like cpu or memory usage. Any ideas? A: I think someone has asked the same type of question before here? Though I'm not too sure how helpful it is. For CPU usage, etc, I would try RRDTool, or maybe something like Cacti. A: Web service or web site? Since you mention page views: I believe you mean web site. Google Analytics will probably give you everything you need to track usage statistics and best of all is free under most circumstances. You might also want to monitor site up-time and have something to send email alerts if the site is down for some reason. We've used Nagios in the past and it works just fine. A: I've been using monit (http://www.tildeslash.com/monit/) for years. It monitors CPU and memory usage as well as downtime for apache/mysql/etc... you can also configure it to notify you of outages and automatically restart services in real time. I also use munin for reporting: http://munin.projects.linpro.no/ If you want reports on pageviews and whatnot, AWStats is the best I've used. A: I use Nagios for general machine monitoring on Linux and I pretty much rely on Google Analytics for website reporting - I know that's not for everyone since some folks have privacy concerns about giving all their site data to Google. Both are free and easy to install (Nagios is generally available through apt-get and Analytics is a pretty easy install on a site). Nagios, however, can be a bear to configure. A: I cast my vote for monit as well. The nice thing about is that it understands apache-status info and can notify/take actions when say 80% of max num of apache workers are in "busy" state. but you need something else for hardware and general monitoring, something SNMP-aware, like zennos or zabbix A: Munin and Cacti provide very nice interfaces and pre-built scripts for rrdtool. They can also monitor multiple servers and send out warnings and alerts through naigos.
What tools do you use to monitor a web service?
From basic things like page views per second to more advanced stuff like cpu or memory usage. Any ideas?
[ "I think someone has asked the same type of question before here? Though I'm not too sure how helpful it is.\nFor CPU usage, etc, I would try RRDTool, or maybe something like Cacti.\n", "Web service or web site? Since you mention page views: I believe you mean web site.\nGoogle Analytics will probably give you everything you need to track usage statistics and best of all is free under most circumstances.\nYou might also want to monitor site up-time and have something to send email alerts if the site is down for some reason. We've used Nagios in the past and it works just fine.\n", "I've been using monit (http://www.tildeslash.com/monit/) for years. It monitors CPU and memory usage as well as downtime for apache/mysql/etc... you can also configure it to notify you of outages and automatically restart services in real time.\nI also use munin for reporting: http://munin.projects.linpro.no/\nIf you want reports on pageviews and whatnot, AWStats is the best I've used.\n", "I use Nagios for general machine monitoring on Linux and I pretty much rely on Google Analytics for website reporting - I know that's not for everyone since some folks have privacy concerns about giving all their site data to Google. \nBoth are free and easy to install (Nagios is generally available through apt-get and Analytics is a pretty easy install on a site). \nNagios, however, can be a bear to configure.\n", "I cast my vote for monit as well. The nice thing about is that it understands apache-status info and can notify/take actions when say 80% of max num of apache workers are in \"busy\" state. \nbut you need something else for hardware and general monitoring, something SNMP-aware, like zennos or zabbix \n", "Munin and Cacti provide very nice interfaces and pre-built scripts for rrdtool. They can also monitor multiple servers and send out warnings and alerts through naigos.\n" ]
[ 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "monitoring", "production" ]
stackoverflow_0000106358_monitoring_production.txt
Q: In Delphi, is TDataSet thread safe? I'd like to be able to open a TDataSet asynchronously in its own thread so that the main VCL thread can continue until that's done, and then have the main VCL thread read from that TDataSet afterwards. I've done some experimenting and have gotten into some very weird situations, so I'm wondering if anyone has done this before. I've seen some sample apps where a TDataSet is created in a separate thread, it's opened and then data is read from it, but that's all done in the separate thread. I'm wondering if it's safe to read from the TDataSet from the main VCL thread after the other thread opens the data source. I'm doing Win32 programming in Delphi 7, using TmySQLQuery from DAC for MySQL as my TDataSet descendant. A: Provided you only want to use the dataset in its own thread, you can just use synchronize to communicate with the main thread for any VCL/UI update, like with any other component. Or, better, you can implement communication between the mainthread and worker threads with your own messaging system. check Hallvard's solution for threading here: http://hallvards.blogspot.com/2008/03/tdm6-knitting-your-own-threads.html or this other one: http://dn.codegear.com/article/22411 for some explanation on synchronize and its inefficiencies: http://www.eonclash.com/Tutorials/Multithreading/MartinHarvey1.1/Ch3.html A: I have seen it done with other implementations of TDataSet, namely in the Asta components. These would contact the server, return immediately, and then fire an event once the data had been loaded. However, I believe it depends very much on the component. For example, those same Asta components could not be opened in a synchronous manner from anything other than the main VCL thread. So in short, I don't believe it is a limitation of TDataSet per se, but rather something that is implementation specific, and I don't have access to the components you've mentioned. A: One thing to keep in mind about using the same TDataSet between multiple threads is you can only read the current record at any given time. So if you are reading the record in one thread and then the other thread calls Next then you are in trouble. A: Also remember the thread will most likely need its own database connection. I believe what is needed here is a multi-threaded "holding" object to load the data from the thread into (write only) which is then read only from the main VCL thread. Before reading use some sort of syncronization method to insure that your not reading the same moment your writing, or writing the same moment your reading, or load everything into a memory file and write a sync method to tell the main app where in the file to stop reading. I have taken the last approach a few times, depdending on the number of expected records (and the size of the dataset) I have even taken this to a physical disk file on the local system. It works quite well. A: I've done multithreaded data access, and it's not straightforward: 1) You need to create a session per thread. 2) Everything done to that TDataSet instance must be done in context of the thread where it was created. That's not easy if you wanted to place e.g. a db grid on top of it. 3) If you want to let e.g. main thread play with your data, the straight-forward solution is to move it into a separate container of some kind,e.g. a Memory dataset. 4) You need some kind of signaling mechanism to notify main thread once your data retrieval is complete. ...and exception handling isn't straightforward, either... 
But: Once you've succeeded, the application will be really elegant ! A: Most TDatasets are not thread safe. One that I know is thread safe is kbmMemtable. It also has the ability to clone a dataset so that the problem of moving the record pointer (as explained by Jim McKeeth) does occur. They're one of the best datasets you can get (bought or free).
In Delphi, is TDataSet thread safe?
I'd like to be able to open a TDataSet asynchronously in its own thread so that the main VCL thread can continue until that's done, and then have the main VCL thread read from that TDataSet afterwards. I've done some experimenting and have gotten into some very weird situations, so I'm wondering if anyone has done this before. I've seen some sample apps where a TDataSet is created in a separate thread, it's opened and then data is read from it, but that's all done in the separate thread. I'm wondering if it's safe to read from the TDataSet from the main VCL thread after the other thread opens the data source. I'm doing Win32 programming in Delphi 7, using TmySQLQuery from DAC for MySQL as my TDataSet descendant.
[ "Provided you only want to use the dataset in its own thread, you can just use synchronize to communicate with the main thread for any VCL/UI update, like with any other component.\nOr, better, you can implement communication between the mainthread and worker threads with your own messaging system. \ncheck Hallvard's solution for threading here:\nhttp://hallvards.blogspot.com/2008/03/tdm6-knitting-your-own-threads.html \nor this other one:\nhttp://dn.codegear.com/article/22411 \nfor some explanation on synchronize and its inefficiencies:\nhttp://www.eonclash.com/Tutorials/Multithreading/MartinHarvey1.1/Ch3.html\n", "I have seen it done with other implementations of TDataSet, namely in the Asta components. These would contact the server, return immediately, and then fire an event once the data had been loaded.\nHowever, I believe it depends very much on the component. For example, those same Asta components could not be opened in a synchronous manner from anything other than the main VCL thread.\nSo in short, I don't believe it is a limitation of TDataSet per se, but rather something that is implementation specific, and I don't have access to the components you've mentioned.\n", "One thing to keep in mind about using the same TDataSet between multiple threads is you can only read the current record at any given time. So if you are reading the record in one thread and then the other thread calls Next then you are in trouble.\n", "Also remember the thread will most likely need its own database connection. I believe what is needed here is a multi-threaded \"holding\" object to load the data from the thread into (write only) which is then read only from the main VCL thread. Before reading use some sort of syncronization method to insure that your not reading the same moment your writing, or writing the same moment your reading, or load everything into a memory file and write a sync method to tell the main app where in the file to stop reading.\nI have taken the last approach a few times, depdending on the number of expected records (and the size of the dataset) I have even taken this to a physical disk file on the local system. It works quite well.\n", "I've done multithreaded data access, and it's not straightforward:\n1) You need to create a session per thread. \n2) Everything done to that TDataSet instance must be done in context of the thread where it was created. That's not easy if you wanted to place e.g. a db grid on top of it.\n3) If you want to let e.g. main thread play with your data, the straight-forward solution is to move it into a separate container of some kind,e.g. a Memory dataset.\n4) You need some kind of signaling mechanism to notify main thread once your data retrieval is complete.\n...and exception handling isn't straightforward, either...\nBut: Once you've succeeded, the application will be really elegant !\n", "Most TDatasets are not thread safe. One that I know is thread safe is kbmMemtable. It also has the ability to clone a dataset so that the problem of moving the record pointer (as explained by Jim McKeeth) does occur. They're one of the best datasets you can get (bought or free).\n" ]
[ 5, 4, 3, 2, 1, 0 ]
[]
[]
[ "dataset", "delphi", "multithreading" ]
stackoverflow_0000078475_dataset_delphi_multithreading.txt
Q: Merge Modules for Crystal Reports 2008 - Needs a Keycode? I've not been able to find any information on this, but is a keycode required to be embedded in the CR2008 merge modules for a .NET distribution? They used to require this (which had to be done using ORCA), but I've not found any information on this for CR2008. A: I would email Crystal Reports (Business Objects). They have helped me in the past with KeyCode issues.
Merge Modules for Crystal Reports 2008 - Needs a Keycode?
I've not been able to find any information on this, but is a keycode required to be embedded in the CR2008 merge modules for a .NET distribution? They used to require this (which had to be done using ORCA), but I've not found any information on this for CR2008.
[ "I would email Crystal Reports (Business Objects). They have helped me in the past with KeyCode issues.\n" ]
[ 1 ]
[]
[]
[ ".net", "crystal_reports", "merge" ]
stackoverflow_0000101728_.net_crystal_reports_merge.txt
Q: Linux: What is the best way to estimate the code & static data size of program? I want to be able to get an estimate of how much code & static data is used by my C++ program? Is there a way to find this out by looking at the executable or object files? Or perhaps something I can do at runtime? Will objdump & readelf help? A: "size" is the traditional tool. "readelf" has a lot of options. $ size /bin/sh text data bss dec hex filename 712739 37524 21832 772095 bc7ff /bin/sh A: If you want to take the next step of identifying the functions and data structures to focus on for footprint reduction, the --size-sort argument to nm can show you: $ nm --size-sort /usr/bin/fld | tail -10 000000ae T FontLoadFontx 000000b0 T CodingByRegistry 000000b1 t ShmFont 000000ec t FontLoadw 000000ef T LoadFontFile 000000f6 T FontLoadDFontx 00000108 D fSRegs 00000170 T FontLoadMinix 000001e7 T main 00000508 T FontLoadBdf A: readelf will indeed help. You can use the -S option; that will show the sizes of all sections. .text is (the bulk of) your executable code. .data and .rodata is your static data. There are other sections too, some of which are used at runtime, others only at link time. A: size -A
Linux: What is the best way to estimate the code & static data size of program?
I want to be able to get an estimate of how much code & static data is used by my C++ program? Is there a way to find this out by looking at the executable or object files? Or perhaps something I can do at runtime? Will objdump & readelf help?
[ "\"size\" is the traditional tool. \"readelf\" has a lot of options.\n$ size /bin/sh\n text data bss dec hex filename\n 712739 37524 21832 772095 bc7ff /bin/sh\n\n", "If you want to take the next step of identifying the functions and data structures to focus on for footprint reduction, the --size-sort argument to nm can show you:\n$ nm --size-sort /usr/bin/fld | tail -10\n000000ae T FontLoadFontx\n000000b0 T CodingByRegistry\n000000b1 t ShmFont\n000000ec t FontLoadw\n000000ef T LoadFontFile\n000000f6 T FontLoadDFontx\n00000108 D fSRegs\n00000170 T FontLoadMinix\n000001e7 T main\n00000508 T FontLoadBdf\n\n", "readelf will indeed help. You can use the -S option; that will show the sizes of all sections. .text is (the bulk of) your executable code. .data and .rodata is your static data. There are other sections too, some of which are used at runtime, others only at link time.\n", "\nsize -A\n\n" ]
[ 5, 2, 1, 1 ]
[]
[]
[ "c++", "linux", "unix" ]
stackoverflow_0000035485_c++_linux_unix.txt
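A rough helper (my own sketch, not part of the answers above) that shells out to the GNU size tool mentioned in the first answer and pulls the text/data/bss numbers out of its default two-line Berkeley output; the parsing is an assumption tied to that exact output format.

# Hypothetical wrapper around the "size" output shown above.
import subprocess

def code_and_data(path):
    out = subprocess.check_output(['size', path]).decode()
    # Second line of the Berkeley layout: "text data bss dec hex filename"
    text, data, bss = out.splitlines()[1].split()[:3]
    return int(text), int(data), int(bss)

if __name__ == '__main__':
    text, data, bss = code_and_data('/bin/sh')
    print('code (text): %d bytes, static data (data + bss): %d bytes'
          % (text, data + bss))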
Q: How can I get Datetime to display in military time in oracle? I am running some queries to track down a problem with our backup logs and would like to display datetime fields in 24-hour military time. Is there a simple way to do this? I've tried googling and could find nothing. A: select to_char(sysdate,'DD/MM/YYYY HH24:MI:SS') from dual; Give the time in 24 hour format. More options are described here. A: If you want all queries in your session to show the full datetime, then do alter session set NLS_DATE_FORMAT='DD/MM/YYYY HH24:MI:SS' at the start of your session. A: Use a to_char(field,'YYYYMMDD HH24MISS'). A good list of date formats is available here A: It's not oracle that determines the display of the date, it's the tool you're using to run queries. What are you using to display results? Then we can point you to the correct settings hopefully.
How can I get Datetime to display in military time in oracle?
I am running some queries to track down a problem with our backup logs and would like to display datetime fields in 24-hour military time. Is there a simple way to do this? I've tried googling and could find nothing.
[ "select to_char(sysdate,'DD/MM/YYYY HH24:MI:SS') from dual;\n\nGive the time in 24 hour format.\nMore options are described here.\n", "If you want all queries in your session to show the full datetime, then do\nalter session set NLS_DATE_FORMAT='DD/MM/YYYY HH24:MI:SS'\n\nat the start of your session.\n", "Use a to_char(field,'YYYYMMDD HH24MISS').\nA good list of date formats is available here\n", "It's not oracle that determines the display of the date, it's the tool you're using to run queries. What are you using to display results? Then we can point you to the correct settings hopefully.\n" ]
[ 17, 3, 1, 0 ]
[]
[]
[ "oracle", "sql" ]
stackoverflow_0000102881_oracle_sql.txt
Q: Usefulness of SQL Server "with encryption" statement Recently a friend and I were talking about securing stored procedure code in a SQL server database. From distant memory, I'm pretty certain that "with encryption" is incredibly easily broken in all versions of SQL Server, however he said it has been greatly improved in SQL 2005. As a result I have not seriously considered it as a security option in any systems I have ever worked on. So in what scenarious could "with encryption" be used, and when should it be avoided at all costs? A: It can be used to hide your code from casual observers, but as you say: it's easily circumvented. It really can't be any other way, since the server needs to decrypt the code to execute it. It's DRM, basically, and fails for the same reason as all the other DRM does - you can't simultaneously hide the data, and allow it to be accessed. A: @Blorgbeard Good response, the MSDN documentation on "WITH ENCRYPTION" seems to agree with your point, now calling it "obfuscated" rather then encrypted. I've met a few developers who were completely unaware of this point however. Hopefully this question/response will inform others too. A: Yes, it's easily broken. I had a situation this past week where I had to decrypt several sprocs that a former developer had encrypted for a client of mine. After decrypting it, which took a moderate effort, I wouldn't rely on that for any means of protecting intellectual property, passwords, user ids. Anything really.
Usefulness of SQL Server "with encryption" statement
Recently a friend and I were talking about securing stored procedure code in a SQL server database. From distant memory, I'm pretty certain that "with encryption" is incredibly easily broken in all versions of SQL Server; however, he said it has been greatly improved in SQL 2005. As a result I have not seriously considered it as a security option in any systems I have ever worked on. So in what scenarios could "with encryption" be used, and when should it be avoided at all costs?
[ "It can be used to hide your code from casual observers, but as you say: it's easily circumvented.\nIt really can't be any other way, since the server needs to decrypt the code to execute it. It's DRM, basically, and fails for the same reason as all the other DRM does - you can't simultaneously hide the data, and allow it to be accessed.\n", "@Blorgbeard\nGood response, the MSDN documentation on \"WITH ENCRYPTION\" seems to agree with your point, now calling it \"obfuscated\" rather than encrypted.\nI've met a few developers who were completely unaware of this point however. Hopefully this question/response will inform others too.\n", "Yes, it's easily broken. I had a situation this past week where I had to decrypt several sprocs that a former developer had encrypted for a client of mine. After decrypting it, which took a moderate effort, I wouldn't rely on that for any means of protecting intellectual property, passwords, user ids. Anything really.\n" ]
[ 6, 2, 1 ]
[]
[]
[ "encryption", "sql_server" ]
stackoverflow_0000051124_encryption_sql_server.txt
Q: Testing running condition of a Windows app I have several applications that are part of a suite of tools that various developers at our studio use. these applications are mainly command line apps that open a DOS cmd shell. These apps in turn start up a GUI application that tracks output and status (via sockets) of these command line apps. The command line apps can be started with the user is logged in, when their workstation is locked (they fire off a batch file and then immediately lock their workstaion), and when they are logged out (via a scheduled task). The problems that I have are with the last two cases. If any of these apps fire off when the user is locked or logged out, these command will spawn the GUI windows which tracks the output/status. That's fine, but say the user has their workstation locked -- when they unlock their workstation, the GUI isn't visible. It's running the task list, but it's not visible. The next time these users run some of our command line apps, the GUI doesn't get launched (because it's already running), but because it's not visible on the desktop, users don't see any output. What I'm looking for is a way to tell from my command line apps if they are running behind a locked workstation or when a user is logged out (via scheduled task) -- basically are they running without a user's desktop visible. If I can tell that, then I can simply not start up our GUI and can prevent a lot of problem. These apps that I need to test are C/C++ Windows applications. I hope that this make sense. A: I found the programmatic answer that I was looking for. It has to do with stations. Apparently anything running on the desktop will run on a station with a particular name. Anything that isn't on the desktop (i.e. a process started by the task manager when logged off or on a locked workstation) will get started with a different station name. Example code: HWINSTA dHandle = GetProcessWindowStation(); if ( GetUserObjectInformation(dHandle, UOI_NAME, nameBuffer, bufferLen, &lenNeeded) ) { if ( stricmp(nameBuffer, "winsta0") ) { // when we get here, we are not running on the real desktop return false; } } If you get inside the 'if' statement, then your process is not on the desktop, but running "somewhere else". I looked at the namebuffer value when not running from the desktop and the names don't mean much, but they are not WinSta0. Link to the docs here. A: You might be able to use SENS (System Event Notification Services). I've never used it myself, but I'm almost positive it will do what you want: give you notification for events like logon, logoff, screen saver, etc. I know that's pretty vague, but hopefully it will get you started. A quick google search turned up this, among others: http://discoveringdotnet.alexeyev.org/2008/02/sens-events.html A: I have successfully used this approach to detect whether the desktop is locked on Windows: bool isDesktopLocked = false; HDESK inputDesktop = OpenInputDesktop(0, FALSE, DESKTOP_CREATEMENU | DESKTOP_CREATEWINDOW | DESKTOP_ENUMERATE | DESKTOP_SWITCHDESKTOP | DESKTOP_WRITEOBJECTS | DESKTOP_READOBJECTS | DESKTOP_WRITE); if (NULL == inputDesktop) { isDesktopLocked = true; } else { CloseDesktop(inputDesktop); }
Testing running condition of a Windows app
I have several applications that are part of a suite of tools that various developers at our studio use. These applications are mainly command line apps that open a DOS cmd shell. These apps in turn start up a GUI application that tracks output and status (via sockets) of these command line apps. The command line apps can be started when the user is logged in, when their workstation is locked (they fire off a batch file and then immediately lock their workstation), and when they are logged out (via a scheduled task). The problems that I have are with the last two cases. If any of these apps fire off when the user is locked or logged out, these commands will spawn the GUI window which tracks the output/status. That's fine, but say the user has their workstation locked -- when they unlock their workstation, the GUI isn't visible. It's running in the task list, but it's not visible. The next time these users run some of our command line apps, the GUI doesn't get launched (because it's already running), but because it's not visible on the desktop, users don't see any output. What I'm looking for is a way to tell from my command line apps if they are running behind a locked workstation or when a user is logged out (via scheduled task) -- basically are they running without a user's desktop visible. If I can tell that, then I can simply not start up our GUI and can prevent a lot of problems. These apps that I need to test are C/C++ Windows applications. I hope that this makes sense.
[ "I found the programmatic answer that I was looking for. It has to do with stations. Apparently anything running on the desktop will run on a station with a particular name. Anything that isn't on the desktop (i.e. a process started by the task manager when logged off or on a locked workstation) will get started with a different station name. Example code:\nHWINSTA dHandle = GetProcessWindowStation();\nif ( GetUserObjectInformation(dHandle, UOI_NAME, nameBuffer, bufferLen, &lenNeeded) ) {\n if ( stricmp(nameBuffer, \"winsta0\") ) {\n // when we get here, we are not running on the real desktop\n return false;\n }\n}\n\nIf you get inside the 'if' statement, then your process is not on the desktop, but running \"somewhere else\". I looked at the namebuffer value when not running from the desktop and the names don't mean much, but they are not WinSta0.\nLink to the docs here.\n", "You might be able to use SENS (System Event Notification Services). I've never used it myself, but I'm almost positive it will do what you want: give you notification for events like logon, logoff, screen saver, etc.\nI know that's pretty vague, but hopefully it will get you started. A quick google search turned up this, among others: http://discoveringdotnet.alexeyev.org/2008/02/sens-events.html\n", "I have successfully used this approach to detect whether the desktop is locked on Windows:\nbool isDesktopLocked = false;\nHDESK inputDesktop = OpenInputDesktop(0, FALSE,\n DESKTOP_CREATEMENU | DESKTOP_CREATEWINDOW |\n DESKTOP_ENUMERATE | DESKTOP_SWITCHDESKTOP |\n DESKTOP_WRITEOBJECTS | DESKTOP_READOBJECTS |\n DESKTOP_WRITE);\n\nif (NULL == inputDesktop)\n{\n isDesktopLocked = true;\n}\nelse\n{\n CloseDesktop(inputDesktop);\n}\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "c++", "command_line", "windows" ]
stackoverflow_0000087689_c++_command_line_windows.txt
Q: How do you maintain separate webservices for dev/stage/production We want to maintain 3 webservices for the different steps of deployment, but how do we define in our application which service to use? Do we just maintain 3 web references and ifdef the uses of them somehow? A: Don't maintain the differences in code, but rather through a configuration file. That way they're all running the same code, just with different configuration values (ie. port to bind to, hostname to answer to, etc.) A: As others have mentioned you'll want to stash this information in a config file. In fact, I'd suggest using a different configuration file for each environment. This will address the inevitable problem of having multiple settings for each environment, e.g. you might have separate settings for the web service URL and web service port or have some extra settings to deal with https/security. All that said, make sure you address these potential issues: If the web service does anything particularly essential to the application you might want to marry the application to web services in each environment (i.e. have a version of your application in each environment). Certainly, any changes to the interface are easier when you do it this way. Make sure it's obvious to someone which version of the web service you are speaking with. A: My recommendation would be to keep this information in configuration files for the application. Even better would be to inject the appropriate values for a given environment into the configuration during the build process, assuming your build process has some kind of macro-replacement functionality. This way you can create a targeted build for a given environment and not have to change the configuration every time you do a build for a different environment. A: When I last worked on a project with a web server, we dealt with this problem as follows: msbuild /t:deploy would build & deploy to a test environment that was partially shared by the team, and partially dev-specific. The default value for $(SERVER) was $(USERNAME). msbuild /t:deploy /p:server=test would deploy to the shared test environment, which non-devs could look at. msbuild /t:deploy /p:server=live would deploy to live server. I think I added an extra handshake, like an error unless you had /p:secret=foo, just to make sure you didn't do this by accident. A: All the stuff that can change from dev to test to prod must be configurable. If you can afford to build the process that updates these variable things during the installation of your product -- do it. (Baking the customizations into the build seems like an inferior idea -- you end up with a bunch of different incompatible builds for the same version of the source code) A: FYI, this was addressed here yesterday: How do you maintain java webapps in different staging environments? A: Put the service address and port into your application's configuration. It's probably a good idea to do the same thing in the service's config, at least for the port, so that your dev service listens on the right port. This way you don't have to modify your code just to change the server/port you're hitting. Using config rather than code for switching between dev, stage, and production is very valuable for testing. When you deploy to production you want to make sure you're deploying the same exact code that was tested, not something slightly different. All you should change between dev and production is the config. 
A: Instead of using web references, generate the proxy classes from the web service WSDL's using wsdl.exe. The generated classes will have a Url property that can be set depending on the step of deployment (dev, qa, production, etc.).
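A minimal C# sketch of the configuration-driven approach the answers describe (the "ServiceUrl" key and the ReportService proxy class name are made-up examples, not taken from the question):
// app.config / web.config -- keep one copy per environment (dev/stage/prod);
// only the value differs between them:
// <appSettings>
//   <add key="ServiceUrl" value="http://dev-server/ReportService.asmx" />
// </appSettings>

using System.Configuration;

public static class ServiceFactory
{
    // Returns a proxy pointed at whichever environment the config names.
    public static ReportService Create()
    {
        ReportService proxy = new ReportService(); // class generated by wsdl.exe or "Add Web Reference"
        proxy.Url = ConfigurationManager.AppSettings["ServiceUrl"];
        return proxy;
    }
}
With this arrangement the compiled code never changes between environments; only the deployed configuration file does.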
How do you maintain separate webservices for dev/stage/production
We want to maintain 3 webservices for the different steps of deployment, but how do we define in our application which service to use? Do we just maintain 3 web references and ifdef the uses of them somehow?
[ "Don't maintain the differences in code, but rather through a configuration file. That way they're all running the same code, just with different configuration values (ie. port to bind to, hostname to answer to, etc.)\n", "As others have mentioned you'll want to stash this information in a config file. In fact, I'd suggest using a different configuration file for each environment. This will address the inevitable problem of having multiple settings for each environment, e.g. you might have separate settings for the web service URL and web service port or have some extra settings to deal with https/security.\nAll that said, make sure you address these potential issues:\nIf the web service does anything particularly essential to the application you might want to marry the application to web services in each environment (i.e. have a version of your application in each environment). Certainly, any changes to the interface are easier when you do it this way.\nMake sure it's obvious to someone which version of the web service you are speaking with.\n", "My recommendation would be to keep this information in configuration files for the application. Even better would be to inject the appropriate values for a given environment into the configuration during the build process, assuming your build process has some kind of macro-replacement functionality. This way you can create a targeted build for a given environment and not have to change the configuration every time you do a build for a different environment.\n", "When I last worked on a project with a web server, we dealt with this problem as follows:\n\nmsbuild /t:deploy would build & deploy to a test environment that was partially shared by the team, and partially dev-specific. The default value for $(SERVER) was $(USERNAME).\nmsbuild /t:deploy /p:server=test would deploy to the shared test environment, which non-devs could look at.\nmsbuild /t:deploy /p:server=live would deploy to live server. I think I added an extra handshake, like an error unless you had /p:secret=foo, just to make sure you didn't do this by accident.\n\n", "All the stuff that can change from dev to test to prod must be configurable. If you can afford to build the process that updates these variable things during the installation of your product -- do it. (Baking the customizations into the build seems like an inferior idea -- you end up with a bunch of different incompatible builds for the same version of the source code)\n", "FYI, this was addressed here yesterday:\nHow do you maintain java webapps in different staging environments?\n", "Put the service address and port into your application's configuration. It's probably a good idea to do the same thing in the service's config, at least for the port, so that your dev service listens on the right port. This way you don't have to modify your code just to change the server/port you're hitting.\nUsing config rather than code for switching between dev, stage, and production is very valuable for testing. When you deploy to production you want to make sure you're deploying the same exact code that was tested, not something slightly different. All you should change between dev and production is the config.\n", "Instead of using web references, generate the proxy classes from the web service WSDL's using wsdl.exe. The generated classes will have a Url property that can be set depending on the step of deployment (dev, qa, production, etc.).\n" ]
[ 10, 4, 3, 2, 1, 1, 0, 0 ]
[]
[]
[ ".net", "asp.net", "c#", "web_services" ]
stackoverflow_0000105479_.net_asp.net_c#_web_services.txt
Q: DB2 Transport Component is not registered correctly I'm trying to test the DB2 adapter for BizTalk 2006 (not R2). While trying to configure an instance in an application, I get an error stating: DB2 Transport Component is not registered correctly The environment is 2 BizTalk servers sharing a messagebox. The DB2 adapter works fine on the first server. It is the second server I am having problems with. I've exported the .msi files from the first server, then installed them onto the second server and imported them into BizTalk. All of the other adapters that I'm using work fine on both servers. Google searches don't bring up a whole lot regarding troubleshooting the BizTalk DB2 adapter. Further troubleshooting has shown that MS BizTalk Adapters for Host Systems is installed on both machines. However, it was only configured on the machine that is giving me the issue. I've unconfigured it, but that still has not helped. I've double-checked that the version numbers of the .dll's for the DB2 adapter are the same on both servers, and made sure that they are installed in the GAC. None of this has helped. Has anyone run into an issue like this before, or can anyone point me in the direction of where to look for BizTalk DB2 adapter troubleshooting guidance? A: When the "registered" word appears, I think about the registration of COM components, not the installation of .NET assemblies. The underlying driver the DB2 adapter uses is the Microsoft ODBC Driver for DB2. You may want to check if your ODBC DSN control panel shows up that particular driver for you to configure a DSN. I'd recommend a reinstallation for the Adapter pack for Host Systems.
DB2 Transport Component is not registered correctly
I'm trying to test the DB2 adapter for BizTalk 2006 (not R2). While trying to configure an instance in an application, I get an error stating: DB2 Transport Component is not registered correctly The environment is 2 BizTalk servers sharing a messagebox. The DB2 adapter works fine on the first server. It is the second server I am having problems with. I've exported the .msi files from the first server, then installed them onto the second server and imported them into BizTalk. All of the other adapters that I'm using work fine on both servers. Google searches don't bring up a whole lot regarding troubleshooting the BizTalk DB2 adapter. Further troubleshooting has shown that MS BizTalk Adapters for Host Systems is installed on both machines. However, it was only configured on the machine that is giving me the issue. I've unconfigured it, but that still has not helped. I've double-checked that the version numbers of the .dll's for the DB2 adapter are the same on both servers, and made sure that they are installed in the GAC. None of this has helped. Has anyone run into an issue like this before, or can anyone point me in the direction of where to look for BizTalk DB2 adapter troubleshooting guidance?
[ "When the \"registered\" word appears, I think about the registration of COM components, not the installation of .NET assemblies. The underlying driver the DB2 adapter uses is the Microsoft ODBC Driver for DB2. You may want to check if your ODBC DSN control panel shows up that particular driver for you to configure a DSN.\nI'd recommend a reinstallation for the Adapter pack for Host Systems.\n" ]
[ 0 ]
[]
[]
[ "biztalk", "biztalk_2006", "db2" ]
stackoverflow_0000086863_biztalk_biztalk_2006_db2.txt
Q: How do I use the softkeys with a CDialog based application in windows mobile 6 via MFC? How do I use the softkeys with a CDialog based application in windows mobile 6 via MFC? I have a CDialog based Windows Mobile 6 (touchscreen) Professional app that I am working on. The default behavior of a CDialog based app in WM6 Professional is to not use any softkeys by default... I want to map the softkeys to "Cancel" and "OK" functionality that sends IDOK and IDCANCEL to my Main Dialog class. I have been trying to work with CCommandBar with no luck, and SHCreateMenuBar was not working out for me either. Does anyone have a sample of how to get this to work? A: What's "not working" with the CCommandBar for you? You should be able to add a CCommandBar member to your dialog class, then in the dialog's OnInitDialog you call Create and InsertMenuBar on the command bar - something like this: m_cmdBar.Create(this); m_cmdBar.InsertMenuBar(IDR_MENU_RESRC_ID); Your menu resource might look something like this: IDR_MENU_RESRC_ID MENU DISCARDABLE BEGIN MENUITEM "OK", IDOK MENUITEM "Cancel", IDCANCEL END A: Thank you so much... I was going crazy with this... your code worked exactly as expected... At first I used it and had the same results, the softkey area would be blank except for the SIP input button. After an hour or so of debugging I tried putting those 2 lines of code at the END of my OnInitDialog() and it worked :) My problem ended up being that in my OnInitDialog() I am creating some child dialogs. When I put the CCommandBar.InsertMenuBar() call before I create the child dialogs I do not get my "OK" or "Cancel" soft keys; when I put that line after the creation of the child dialogs the softkeys show as expected and work great. Thanks again
How do I use the softkeys with a CDialog based application in windows mobile 6 via MFC?
How do I use the softkeys with a CDialog based application in windows mobile 6 via MFC? I have a CDialog based Windows Mobile 6 (touchscreen) Professional app that I am working on. The default behavior of a CDialog based app in WM6 Professional is to not use any softkeys by default... I want to map the softkeys to "Cancel" and "OK" functionality that sends IDOK and IDCANCEL to my Main Dialog class. I have been trying to work with CCommandBar with no luck, and SHCreateMenuBar was not working out for me either. Does anyone have a sample of how to get this to work?
[ "What's \"not working\" with the CCommandBar for you? You should be able to add a CCommandBar member to your dialog class, then in teh DIalog's InitDialog you call Create and InsertMenuBar on the command bar - something like this:\nm_cmdBar.Create(this);\nm_cmdBar.InsertMenuBar(IDR_MENU_RESRC_ID);\n\nYour menu resource might look something like this:\nIDR_MENU_RESRC_ID MENU DISCARDABLE\nBEGIN\nMENUITEM \"OK\", IDOK\nMENUITEM \"Cancel\", IDCANCEL\nEND\n\n", "thank you so much... I was going crazy with this...\nyour code worked exactly as expected... \nAt first I used it and had the same results, the softkey area would be blank except for the SIP input button.\nAfter an hour or so of debugging I tried putting those 2 lines of code at the END of my OnInitDIalog() and it worked :)\nMy problem ende dup being that in my OnIitDialog() I am creating some child dialogs. when I put the CCommandBar.InsertMenuBar() before I create child dialogs I do not get my \"ok\" or \"Cancel\" soft keys, when I put that line after the creation of child dialogs the softkeys show as expected and work great.\nThanks again\n" ]
[ 2, 0 ]
[]
[]
[ "mfc", "windows_mobile" ]
stackoverflow_0000105731_mfc_windows_mobile.txt
Q: How to Naturally/Numerically Sort a DataView? I am wondering how to naturally sort a DataView... I really need help on this. I found articles out there that can do lists with IComparable, but I need to sort the numbers in my dataview. They are currently alpha sorted because they are numbers with 'commas' in them. Please help me out. I would like to find something instead of spending the time to create it myself. P.S. Expression and SortDirection work, but they of course alpha sort. Please help. A: I often like to add a "SortOrder" column to results that I want to sort in a way other than is provided by the data. I usually use an integer and just add it when I am getting the data. I don't show this column and only use it for the purposes of establishing the order. I'm not sure if this is what you are looking for, but it is quick and easy and gives you a great deal of control. A: See these related questions: How to Naturally Sort a DataView with something like IComparable How do I sort an ASP.NET DataGrid by the length of a field?
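A rough C# sketch of the "SortOrder" column idea from the first answer (the "Amount" and "SortKey" column names are placeholders): add a numeric key column, fill it by parsing the comma-formatted text, and sort the DataView on that key instead of the string column.
using System.Data;
using System.Globalization;

static DataView BuildNumericSortedView(DataTable dataTable)
{
    // "Amount" is assumed to hold strings like "1,234,567".
    dataTable.Columns.Add("SortKey", typeof(decimal));

    foreach (DataRow row in dataTable.Rows)
    {
        // NumberStyles.Number tolerates the thousands separators.
        row["SortKey"] = decimal.Parse(row["Amount"].ToString(),
                                       NumberStyles.Number,
                                       CultureInfo.InvariantCulture);
    }

    DataView view = new DataView(dataTable);
    view.Sort = "SortKey ASC"; // numeric order rather than alphabetical
    return view;
}
Bind the returned view to the grid and simply never display the SortKey column.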
How to Naturally/Numerically Sort a DataView?
I am wondering how to naturally sort a DataView... I really need help on this. I found articles out there that can do lists with IComparable, but I need to sort the numbers in my dataview. They are currently alpha sorted because they are numbers with 'commas' in them. Please help me out. I would like to find something instead of spending the time to create it myself. P.S. Expression and SortDirection work, but they of course alpha sort. Please help.
[ "I often like to add a \"SortOrder\" column to results that I want to sort in a way other than is provided by the data. I usually use an integer and just add it when I am getting the data.\nI don't show this column and only use it for the purposes of establishing the order.\nI'm not sure if this is what you are looking for, but it is quick and easy and gives you a great deal of control.\n", "See these related questions:\n\nHow to Naturally Sort a DataView with something like IComparable\nHow do I sort an ASP.NET DataGrid by the length of a field?\n\n" ]
[ 1, 0 ]
[]
[]
[ ".net", "asp.net", "icomparable" ]
stackoverflow_0000098770_.net_asp.net_icomparable.txt
Q: Custom titlebars/chrome in a WinForms app I'm almost certain I know the answer to this question, but I'm hoping there's something I've overlooked. Certain applications seem to have the Vista Aero look and feel to their caption bars and buttons even when running on Windows XP. (Google Chrome and Windows Live Photo Gallery come to mind as examples.) I know that one way to accomplish this from WinForms would be to create a borderless form and draw the caption bar/buttons yourself, then overriding WndProc to make sure moving, resizing, and button clicks do what they're supposed to do (I'm not clear on the specifics but could probably pull it off given a day to read documentation.) I'm curious if there's a different, easier way that I'm overlooking. Perhaps some API calls or window styles I've overlooked? I believe Google has answered it for me by using the roll-your-own-window approach with Chrome. I will leave the question open for another day in case someone has new information, but I believe I have answered the question myself. A: Here's an article with full code sample on how to use your own custom "chrome" for an application: http://geekswithblogs.net/kobush/articles/CustomBorderForms3.aspx This looks like some really good stuff. There are a total of 3 articles in it's series, and it runs great, and on Vista too! A: Google Chrome is not using the Vista SDK to achieve this on XP. If you peek into src\chrome\browser\views\frame there are several files to define the browser frame depending on the capabilities of the system. On XP, it looks like OpaqueFrame is used; line 19 has this to say: // OpaqueFrame // // OpaqueFrame is a CustomFrameWindow subclass that in conjunction with // OpaqueNonClientView provides the window frame on Windows XP and on Windows // Vista when DWM desktop compositing is disabled. The window title and // borders are provided with bitmaps. It looks like it's using the resources in src\chrome\app\theme to draw the frame buttons. So it looks like my hopes that there's some kind of cheap way to enable Vista theming on XP are dashed. The only way to do it is to manually draw the non-client area of your window. I believe something like this is probably the right track, since it lets Windows handle the non-client stuff like moving and resizing the window. Unless someone can find a method to magically enable the Vista theming on XP, this is the answer to the question but I obviously cannot mark my own post as the answer. A: Owen, I'm using Chrome on XP and I don't see "Vista Aero" glass theme on the Chrome window. I see it as solid blue. If it's custom theming of controls and windows title bars you want, that can be accomplished. There's an excellent, free UI toolkit for WinForms that does exactly that: KryptonToolkit A: Nope, I am afraid, there is no other easy way of doing this. You are on the right track. You will need to create a custom Winform and then proceed as illustrated in this example. A: Google Chrome uses the Windows Vista SDK to get the glass look on XP. You can download it here: http://www.microsoft.com/downloads/details.aspx?FamilyID=4377f86d-c913-4b5c-b87e-ef72e5b4e065&displaylang=en Using this, you need to enabled delay loading of the following DLL's to get the Glass Effect in XP: uxtheme.dll dwmapi.dl A: @Jonathan Holland: Is this something that can be done from .NET? Yes, using DllImport. Here is a good blog post
Custom titlebars/chrome in a WinForms app
I'm almost certain I know the answer to this question, but I'm hoping there's something I've overlooked. Certain applications seem to have the Vista Aero look and feel to their caption bars and buttons even when running on Windows XP. (Google Chrome and Windows Live Photo Gallery come to mind as examples.) I know that one way to accomplish this from WinForms would be to create a borderless form and draw the caption bar/buttons yourself, then overriding WndProc to make sure moving, resizing, and button clicks do what they're supposed to do (I'm not clear on the specifics but could probably pull it off given a day to read documentation.) I'm curious if there's a different, easier way that I'm overlooking. Perhaps some API calls or window styles I've overlooked? I believe Google has answered it for me by using the roll-your-own-window approach with Chrome. I will leave the question open for another day in case someone has new information, but I believe I have answered the question myself.
[ "Here's an article with full code sample on how to use your own custom \"chrome\" for an application:\nhttp://geekswithblogs.net/kobush/articles/CustomBorderForms3.aspx\nThis looks like some really good stuff. There are a total of 3 articles in it's series, and it runs great, and on Vista too!\n", "Google Chrome is not using the Vista SDK to achieve this on XP. If you peek into src\\chrome\\browser\\views\\frame there are several files to define the browser frame depending on the capabilities of the system. On XP, it looks like OpaqueFrame is used; line 19 has this to say:\n// OpaqueFrame\n//\n// OpaqueFrame is a CustomFrameWindow subclass that in conjunction with\n// OpaqueNonClientView provides the window frame on Windows XP and on Windows\n// Vista when DWM desktop compositing is disabled. The window title and\n// borders are provided with bitmaps.\nIt looks like it's using the resources in src\\chrome\\app\\theme to draw the frame buttons.\nSo it looks like my hopes that there's some kind of cheap way to enable Vista theming on XP are dashed. The only way to do it is to manually draw the non-client area of your window. I believe something like this is probably the right track, since it lets Windows handle the non-client stuff like moving and resizing the window.\nUnless someone can find a method to magically enable the Vista theming on XP, this is the answer to the question but I obviously cannot mark my own post as the answer.\n", "Owen, I'm using Chrome on XP and I don't see \"Vista Aero\" glass theme on the Chrome window. I see it as solid blue.\nIf it's custom theming of controls and windows title bars you want, that can be accomplished. There's an excellent, free UI toolkit for WinForms that does exactly that: KryptonToolkit\n", "Nope, I am afraid, there is no other easy way of doing this. \nYou are on the right track. You will need to create a custom Winform and then proceed as illustrated in this example.\n", "Google Chrome uses the Windows Vista SDK to get the glass look on XP. You can download it here:\nhttp://www.microsoft.com/downloads/details.aspx?FamilyID=4377f86d-c913-4b5c-b87e-ef72e5b4e065&displaylang=en\nUsing this, you need to enabled delay loading of the following DLL's to get the Glass Effect in XP:\n\nuxtheme.dll \ndwmapi.dl\n\n", "\n@Jonathan Holland: Is this something that can be done from .NET?\n\nYes, using DllImport. Here is a good blog post\n" ]
[ 11, 5, 4, 0, 0, 0 ]
[]
[]
[ ".net", "user_interface", "windows_xp", "winforms" ]
stackoverflow_0000042460_.net_user_interface_windows_xp_winforms.txt
Q: Possible to monitor processes another process launches via WMI? I have a setup executable that I need to install. When I run it, it launches a msi to do the actual install and then dies immediately. The side effect of this is it will return control back to any console you call it from before the install finishes. Depending on what machine I run it on, it can take from three to ten minutes so having the calling script sleep is undesirable. I would launch the msi directly but it complains about missing components. I have a WSH script that uses WMI to start a process and then watch until it's pid is no longer running. Is there some way to determine the pid of the MSI the initial executable is executing, and then watch for that pid to end using WMI? Is the launching process information even associated with a process? A: Would doing a WMI lookup of processes that have the initial setup as the parent process do the trick? For example, if I launch an MSI from a command prompt with process id 4000, I can execute the following command line to find information about msiexec process: c:\>wmic PROCESS WHERE ParentProcessId=4000 GET CommandLine, ProcessId CommandLine ProcessId "C:\Windows\System32\msiexec.exe" /i "C:\blahblahblah.msi" 2752 That may be one way to find the information you need. Here is a demo of looking up that information in vbs: Set objWMIService = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2") Set colProcesses = objWMIService.ExecQuery("select * from Win32_Process where ParentProcessId = 4000") For Each objProcess in colProcesses Wscript.Echo "Process ID: " & objProcess.ProcessId Next I hope this helps. A: If you're using a .NET language (you can do it in Win32, but waaaay easier in .NET) you can enumerate all the Processes in the system (after your initial call to Setup.exe completes) and find all the processes which parent's PID equal to the PID of the Setup.exe - and then monitor all those processes. When they will complete - setup is complete. Make sure that they don't spawn any more child processes as well. A: This should do it. $p1 = [diagnostics.process]::start($pathToExecutable) # this way we know the PID of the initial exe $p2 = get-wmiobject win32_process -filter "ParentProcessId = $($p1.Id)" # using Jim Olsen's tip (get-process -id $p2.ProcessId).WaitForExit() # voila--no messy sleeping Unfortunately, the .NET object doesn't have a ParentProcessId property, and the WMI object doesn't have the WaitForExit() method, so we have to go back and forth. Props to Jeffrey Snover (always) for this article.
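For completeness, here is an unpolished VBScript/WSH sketch that ties the answers above together: it looks up the msiexec.exe child of a known setup PID and then polls WMI until that child exits. The parent PID of 4000 is just an example value, and some installers hand the real work to a different msiexec process, so treat this only as a starting point.
parentPid = 4000  ' PID of the setup.exe you launched

Set wmi = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2")

' Find the msiexec.exe the setup program spawned.
msiPid = 0
Set children = wmi.ExecQuery( _
    "SELECT ProcessId FROM Win32_Process " & _
    "WHERE ParentProcessId = " & parentPid & " AND Name = 'msiexec.exe'")
For Each child In children
    msiPid = child.ProcessId
Next

' Poll until that PID is gone.
If msiPid <> 0 Then
    Do
        Set stillRunning = wmi.ExecQuery( _
            "SELECT ProcessId FROM Win32_Process WHERE ProcessId = " & msiPid)
        If stillRunning.Count = 0 Then Exit Do
        WScript.Sleep 2000
    Loop
End If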
Possible to monitor processes another process launches via WMI?
I have a setup executable that I need to install. When I run it, it launches a msi to do the actual install and then dies immediately. The side effect of this is it will return control back to any console you call it from before the install finishes. Depending on what machine I run it on, it can take from three to ten minutes so having the calling script sleep is undesirable. I would launch the msi directly but it complains about missing components. I have a WSH script that uses WMI to start a process and then watch until it's pid is no longer running. Is there some way to determine the pid of the MSI the initial executable is executing, and then watch for that pid to end using WMI? Is the launching process information even associated with a process?
[ "Would doing a WMI lookup of processes that have the initial setup as the parent process do the trick? For example, if I launch an MSI from a command prompt with process id 4000, I can execute the following command line to find information about msiexec process:\nc:\\>wmic PROCESS WHERE ParentProcessId=4000 GET CommandLine, ProcessId \nCommandLine ProcessId\n\"C:\\Windows\\System32\\msiexec.exe\" /i \"C:\\blahblahblah.msi\" 2752\n\nThat may be one way to find the information you need. Here is a demo of looking up that information in vbs:\nSet objWMIService = GetObject(\"winmgmts:{impersonationLevel=impersonate}!\\\\.\\root\\cimv2\")\nSet colProcesses = objWMIService.ExecQuery(\"select * from Win32_Process where ParentProcessId = 4000\")\nFor Each objProcess in colProcesses\n Wscript.Echo \"Process ID: \" & objProcess.ProcessId\nNext\n\nI hope this helps.\n", "If you're using a .NET language (you can do it in Win32, but waaaay easier in .NET) you can enumerate all the Processes in the system (after your initial call to Setup.exe completes) and find all the processes which parent's PID equal to the PID of the Setup.exe - and then monitor all those processes. When they will complete - setup is complete. Make sure that they don't spawn any more child processes as well.\n", "This should do it.\n$p1 = [diagnostics.process]::start($pathToExecutable) # this way we know the PID of the initial exe\n$p2 = get-wmiobject win32_process -filter \"ParentProcessId = $($p1.Id)\" # using Jim Olsen's tip\n(get-process -id $p2.ProcessId).WaitForExit() # voila--no messy sleeping\n\nUnfortunately, the .NET object doesn't have a ParentProcessId property, and the WMI object doesn't have the WaitForExit() method, so we have to go back and forth.\nProps to Jeffrey Snover (always) for this article.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "scripting", "windows", "wmi", "wsh" ]
stackoverflow_0000106476_scripting_windows_wmi_wsh.txt
Q: Gsoap Error in C++ I am using gsoap to create a soap server in C++. Messages are routed through a bus written in Java. Both the server and the bus are multithreaded. Everything works well sending one message at a time through the system. If I start 3 clients each sending messages as fast as possible everything is fine for about 3500 messages. Then I begin receiving periodic "Only one socket connection allowed at a time." errors from the gsoap code. Typical about 3950 of 4000 messages make it through OK. With all 50 failures happening in the last 500 sends. Why would these errors occur after many sends, but not at the beginning of the sending? The rate of send does not increase. What is it talking about? I cannot find any explanation of the error, and its meaning is not clear to me. Anyone successfully multithreaded a gsoap app? Here is my server code. long WINAPI threadGO(soap *x); int main(int argc, char* argv[]) { HANDLE thread1; int m, s; /* master and slave sockets */ struct soap *soap = soap_new(); if (argc < 2) soap_serve(soap); /* serve as CGI application */ else { m = soap_bind(soap, NULL, atoi(argv[1]), 100); if (m < 0) { soap_print_fault(soap, stderr); exit(-1); } fprintf(stderr, "Socket connection successful: master socket = %d\n", m); for (;;) { s = soap_accept(soap); thread1 = CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)threadGO,soap_copy(soap),0,NULL); } } soap_done(soap); free(soap); return 0; } long WINAPI threadGO(soap *x) { soap_serve(x); soap_end(x); return 0 ; } A: I believe you've got a resource leak in threadGO. After copying the soap struct with soap_copy(), I believe it needs to be freed by calling all of: soap_destroy(x); soap_end(x); soap_free(x); Specifically, the missing call to soap_done() (which is called from soap_free()) calls soap_closesock(), which closes the socket.
Gsoap Error in C++
I am using gsoap to create a soap server in C++. Messages are routed through a bus written in Java. Both the server and the bus are multithreaded. Everything works well sending one message at a time through the system. If I start 3 clients each sending messages as fast as possible everything is fine for about 3500 messages. Then I begin receiving periodic "Only one socket connection allowed at a time." errors from the gsoap code. Typical about 3950 of 4000 messages make it through OK. With all 50 failures happening in the last 500 sends. Why would these errors occur after many sends, but not at the beginning of the sending? The rate of send does not increase. What is it talking about? I cannot find any explanation of the error, and its meaning is not clear to me. Anyone successfully multithreaded a gsoap app? Here is my server code. long WINAPI threadGO(soap *x); int main(int argc, char* argv[]) { HANDLE thread1; int m, s; /* master and slave sockets */ struct soap *soap = soap_new(); if (argc < 2) soap_serve(soap); /* serve as CGI application */ else { m = soap_bind(soap, NULL, atoi(argv[1]), 100); if (m < 0) { soap_print_fault(soap, stderr); exit(-1); } fprintf(stderr, "Socket connection successful: master socket = %d\n", m); for (;;) { s = soap_accept(soap); thread1 = CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)threadGO,soap_copy(soap),0,NULL); } } soap_done(soap); free(soap); return 0; } long WINAPI threadGO(soap *x) { soap_serve(x); soap_end(x); return 0 ; }
[ "I believe you've got a resource leak in threadGO.\nAfter copying the soap struct with soap_copy(), I believe it needs to be freed by calling all of:\nsoap_destroy(x);\nsoap_end(x);\nsoap_free(x);\n\nSpecifically, the missing call to soap_done() (which is called from soap_free()) calls soap_closesock(), which closes the socket.\n" ]
[ 1 ]
[]
[]
[ "c++", "gsoap", "java", "soap" ]
stackoverflow_0000105326_c++_gsoap_java_soap.txt
Q: Consistent outdent of first letter with CSS? I'm trying to implement an outdent of the first letter of the first paragraph of the body text. Where I'm stuck is in getting consistent spacing between the first letter and the rest of the paragraph. For example, there is a huge difference in spacing between a "W" and an "I" Anyone have any ideas about how to mitigate the differences? I'd prefer a pure CSS solution, but will resort to JavaScript if need be. PS: I don't necessarily need compatibility in IE or Opera A: Apply this to p.outdent:first-letter: margin-left: -800px; padding-right: 460px; float: right; This will position the first letter on the right edge of the paragraph, then shove it left by more or less the width of the paragraph, then move both the letter and all the padding into the float's large negative margin so the paragraph fits in the margin and doesn't try to wrap around. A: I tried using a fixed-width font like 'courier new' and since the characters are more or less the same width it made it a lot less noticeable. Edit - this font is decent but might only work on Windows p.outdent:first-letter { font-family: ms mincho; font-size: 8em; line-height: 1; font-weight: normal; float: left; margin: -0.1em 0 0 -.55em; letter-spacing: 0.05em; }
Consistent outdent of first letter with CSS?
I'm trying to implement an outdent of the first letter of the first paragraph of the body text. Where I'm stuck is in getting consistent spacing between the first letter and the rest of the paragraph. For example, there is a huge difference in spacing between a "W" and an "I" Anyone have any ideas about how to mitigate the differences? I'd prefer a pure CSS solution, but will resort to JavaScript if need be. PS: I don't necessarily need compatibility in IE or Opera
[ "Apply this to p.outdent:first-letter:\nmargin-left: -800px;\npadding-right: 460px;\nfloat: right;\n\nThis will position the first letter on the right edge of the paragraph, then shove it left it by more or less the width of the paragraph, then move both the letter and all the padding into the float's large negative margin so the paragraph fits in the margin and doesn't try to wrap around.\n", "I tried using a fix-width font like 'courier new' and since the characters are more or less the same width it made it a lot less noticeable.\nEdit - this font is decent but might only work for windows\np.outdent:first-letter {\n font-family: ms mincho;\n font-size: 8em;\n line-height: 1;\n font-weight: normal;\n float: left;\n margin: -0.1em 0 0 -.55em;\n letter-spacing: 0.05em;\n}\n\n" ]
[ 5, 0 ]
[]
[]
[ "css", "firefox", "safari", "typography" ]
stackoverflow_0000107054_css_firefox_safari_typography.txt
Q: What's the preferred way to connect to a postgresql database from PHP? I've been using PHP & MySQL for ages and am about to start using PostgreSQL instead. What's the preferred method? Is it via the PDO objects or is there something better? A: PDO objects are the new hotness. I'd recommend that as long as you can ensure that your target platform will always be running PHP 5.2+. There are many other database abstraction layers that support PostgreSQL that are compatible with older versions of PHP; I'd recommend ADODB. You should really be using PDO or a different abstraction layer even for your MySQL work; that way you won't have this problem again! A: Using Zend Db: require_once 'Zend/Db.php'; $DB_ADAPTER = 'Pdo_Pgsql'; $DB_CONFIG = array( 'username' => 'app_db_user', 'password' => 'xxxxxxxxx', 'host' => 'localhost', 'port' => 5432, 'dbname' => 'mydb' ); $db = Zend_Db::factory($DB_ADAPTER, $DB_CONFIG); A: I, personally, use PDO for all my database work when I have the choice. Prepared statements make my life easy, and it is seamless between database systems - handy if you have to work with one you're not used to. If you want to roll your own abstraction, or go with the procedural model, here's the Postgre functions: http://ca.php.net/manual/en/ref.pgsql.php A: There are also the pg_whatever functions, but don't use them. They use older, unmaintained database drivers. PDO is the way to go. A: I would also suggest creating an inherited PDO class or a wrapper class if you decide not to use PDO. This would provide you with a lot more flexibility in the future. ie. Calculating query execution time. A: Depending on the scale of your application, you might wish to consider the number of connections going to the backend. The consensus seem to be that PHP persistent connections and PostgreSQL don't work well together, so something like pgpool-|| should be used as an intermediary.
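Since the answers recommend PDO but only show a Zend_Db example, here is a bare-bones PDO sketch for PostgreSQL (the credentials, table, and column names are placeholders, reusing the values from the Zend example):
<?php
$dsn = 'pgsql:host=localhost;port=5432;dbname=mydb';

try {
    $db = new PDO($dsn, 'app_db_user', 'xxxxxxxxx', array(
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, // throw on SQL errors
    ));

    // Prepared statements keep the query safe from SQL injection.
    $stmt = $db->prepare('SELECT id, name FROM users WHERE id = :id');
    $stmt->execute(array(':id' => 42));
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
} catch (PDOException $e) {
    die('Connection or query failed: ' . $e->getMessage());
}
?>
The same code runs against MySQL or SQLite by swapping the DSN string, which is a large part of PDO's appeal here.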
What's the preferred way to connect to a postgresql database from PHP?
I've been using PHP & MySQL for ages and am about to start using PostgreSQL instead. What's the preferred method? Is it via the PDO objects or is there something better?
[ "PDO objects are the new hotness. I'd recommend that as long as you can ensure that your target platform will always be running PHP 5.2+.\nThere are many other database abstraction layers that support PostgreSQL that are compatible with older versions of PHP; I'd recommend ADODB.\nYou should really be using PDO or a different abstraction layer even for your MySQL work; that way you won't have this problem again!\n", "Using Zend Db:\nrequire_once 'Zend/Db.php';\n$DB_ADAPTER = 'Pdo_Pgsql';\n$DB_CONFIG = array(\n 'username' => 'app_db_user',\n 'password' => 'xxxxxxxxx',\n 'host' => 'localhost',\n 'port' => 5432,\n 'dbname' => 'mydb'\n);\n$db = Zend_Db::factory($DB_ADAPTER, $DB_CONFIG);\n\n", "I, personally, use PDO for all my database work when I have the choice. Prepared statements make my life easy, and it is seamless between database systems - handy if you have to work with one you're not used to.\nIf you want to roll your own abstraction, or go with the procedural model, here's the Postgre functions: http://ca.php.net/manual/en/ref.pgsql.php\n", "There are also the pg_whatever functions, but don't use them. \nThey use older, unmaintained database drivers. PDO is the way to go.\n", "I would also suggest creating an inherited PDO class or a wrapper class if you decide not to use PDO. This would provide you with a lot more flexibility in the future. ie. Calculating query execution time.\n", "Depending on the scale of your application, you might wish to consider the number of connections going to the backend. The consensus seem to be that PHP persistent connections and PostgreSQL don't work well together, so something like pgpool-|| should be used as an intermediary.\n" ]
[ 5, 2, 1, 1, 0, 0 ]
[]
[]
[ "php", "postgresql" ]
stackoverflow_0000010532_php_postgresql.txt
Q: How to refer to the path to an assembly in the GAC within registry entries added by a Windows Installer package? I have a .NET assembly that contains classes to be registered as ServicedComponent through EnterpriseServices (COM+) and invoked through COM RPC by a third-party application. Therefore, I need to both add it to the GAC and add a registry entry under HKEY_CLASSES_ROOT\CLSID\{clsid}\CodeBase with the path to the assembly DLL within the GAC folder. (I can't rely on regsvcs to do it, because this is a 32-bit assembly --- it relies on 32-bit third-party components --- and the third-party application I referred to before cannot see classes in Wow6432Node) So the question is: Are paths to assemblies to be created in the GAC, or at least the path to the GAC folder itself, available in Windows Installer as properties that can be used in values of registry keys etc.? A: If you have a component per file, which you should anyway, the KeyPath of the component points to the location where the file gets installed (in this case the GAC). You can use the component key as a token in the value field of the entry in the Registry table in your MSI. Assuming you have an assembly with a File key in the File table of "assmb.dll" and its corresponding component, also "assmb.dll". You can set the value field in the Registry table to register your assembly to [$assmb.dll], and it will get resolved to the install location of the assembly. If this directory is the GAC, it will be resolved to the location of the GAC. You can find more information about Formatted fields in an MSI here.
How to refer to the path to an assembly in the GAC within registry entries added by a Windows Installer package?
I have a .NET assembly that contains classes to be registered as ServicedComponent through EnterpriseServices (COM+) and invoked through COM RPC by a third-party application. Therefore, I need to both add it to the GAC and add a registry entry under HKEY_CLASSES_ROOT\CLSID\{clsid}\CodeBase with the path to the assembly DLL within the GAC folder. (I can't rely on regsvcs to do it, because this is a 32-bit assembly --- it relies on 32-bit third-party components --- and the third-party application I referred to before cannot see classes in Wow6432Node) So the question is: Are paths to assemblies to be created in the GAC, or at least the path to the GAC folder itself, available in Windows Installer as properties that can be used in values of registry keys etc.?
[ "If you have a component per file, which you should anyway, the KeyPath of the component points to the location where the file gets installed (in this case the GAC). You can use the component key as a token in the value field of the entry in the Registry table in your MSI.\nAssuming you have an assembly with a File key in the File table of \"assmb.dll\" and its corresponding component, also \"assmb.dll\". You can set the value field in the Registry table to register your assembly to [$assmb.dll], and it will get resolved to the install location of the assembly. If this directory is the GAC, it will be resolved to the location of the GAC.\nYou can find more information about Formatted fields in an MSI here.\n" ]
[ 2 ]
[]
[]
[ "64_bit", "c#", "com", "com+", "windows_installer" ]
stackoverflow_0000106414_64_bit_c#_com_com+_windows_installer.txt
Q: An algorithm to generate a game map from individual images I am designing a game to be played in the browser. Game is a space theme and I need to generate a map of the "Galaxy". The basic idea of the map is here: game map http://www.oglehq.com/map.png The map is a grid, with each grid sector can contain a planet/system and each of these has links to a number of adjacent grids. To generate the maps I figured that I would have a collection of images representing the grid elements. So in the case of the sample above, each of the squares is a separate graphic. To create a new map I would "weave" the images together. The map element images would have the planets and their links already on them, and I, therefore, need to stitch the map together in such a way that each image is positioned with its appropriate counterparts => so the image in the bottom corner must have images to the left and diagonal left that link up with it correctly. How would you go about creating the code to know where to place the images? Is there a better way than using images? At the moment performance and/or load should not be a consideration (if I need to generate maps to have preconfigured rather than do it in real-time, I don't mind). If it makes a difference I will be using HTML, CSS, and JavaScript and backed by a Ruby on Rails app. A: There are two very nice browser-based vector / javascript-manipulable graphics packages which, together, are virtually universal: SVG and VML. They generally produce high-quality vector-based images with low bandwidth. SVG is supported by firefox, opera, safari, and chrome - technically only part of the specification is supported, but for practical purposes you should be able to do what you need. w3schools has a good reference for learning/using svg. VML is Microsoft's answer to SVG, and (surprise) is natively supported by IE, although SVG is not. Msdn has the best reference for vml. Although it's more work, you could write two similar/somewhat integrated code bases for these two technologies. The real benefit is that users won't have to install anything to play your game online - it'll just work, for 99.9% of all users. By the way, you say that you're asking for an algorithm, and I'm offering technologies (if that's the right term for SVG/VML). If you could clarify the input/output specification and perhaps what part presents the challenge (e.g. which naive implementation won't work, and why), that would clarify the question and maybe provide more focused answers. Addendum The canvas tag is becoming more widely supported, with the notable exception of IE. This might be a cleaner way to embed graphic elements in html. Useful canvas stuff: Opera's canvas tutorial | Mozilla's canvas tutorial | canvas-in-IE partial implementation A: Hmm. If each box can only link to its 8 neighbours, then you only have 2^8 = 256 tile types. Fewer if you limit the number of possible links from any one tile. You can encode which links are present in an image with an 8 char filename: 11000010.jpeg Or save some bytes and convert that to decimal or hex 196.jpg Then the code. There's lots of ways you could choose to represent the map internally. One way is to have an object for each planet. A planet object knows its own position in the grid, and the positions of its linked planets. Hence it has enough information to choose the appropriate file. Or have a 2D array. To work out which image to show for each array item, look at the 8 neighbouring array items. 
If you do this, you can avoid coding for boundaries by making the array two bigger in both axes, and having an empty 'border' around the edges. This saves you checking whether a neighbouring array item is off the array. A: There are two ways to represent your map. One way is to represent it is a grid of squares, where each square can have a planet/system in it or not. You can then specify that if there is a neighbor one square away in any of the eight directions (NW, N, NE, W, E, SW, S, SE) then there is a connection to that neighbor. Note however in your sample map the center system is not connected to the system north/east of it, so perhaps this is not the representation you want. But it can be used to build the other representation The second way is to represent each square as having eight bits, defining whether or not there is a connection to a neighbor along each of the same eight directions. Presumably if there is even one connection, then the square has a system inside it, otherwise if there are no connections it is blank. So in your example 3x3 grid, the data would be: Tile Connections nw n ne w e sw s se nw 0 0 0 0 0 0 0 0 n 0 0 0 0 1 0 1 0 ne 0 0 0 1 0 0 0 0 w 0 0 0 0 0 0 0 0 center 0 1 0 0 0 0 1 1 e 0 0 0 0 0 0 0 0 se 0 0 0 0 0 0 0 0 s 0 1 0 0 1 0 0 0 sw 1 0 0 1 0 0 0 0 You could represent these connections as an array of eight boolean values, or much more compactly as an eight bit integer. Its then easy to use the eight boolean values (or the eight bit integer) to form the filename of the bitmap to load for that grid square. For example, your center tile using this scheme could be called "Bitmap01000011.png" (just using the boolean values), or alternatively "Bitmap43.png" (using the hexidecimal value of the eight bit integer representing that binary pattern for a shorter filename). Since you have 256 possible combinations, you will need 256 bitmaps. You could also reduce the data to four booleans/bits per tile, since a "north" connection for instance implies that the tile to the north has a "south" connection, but that makes selecting the bitmaps a bit harder, but you can work it out if you want. Alternatively you could layer between zero (empty) and nine (fully connected + system circle) bitmaps together in each square. You would just need to use transparent .png's so that you could combine them together. The downside is that the browser might be slow to draw each square (especially the fully connected ones). The advantage would be less data for you to create, and less data to load from your website. You would represent the map itself as a table, and add your bitmaps as image links to each cell as needed. The pseudo-code to map would be: draw_map(connection_map): For each grid_square in connection_map connection_data = connection_map[grid_square] filenames = bitmap_filenames_from(connection_data) insert_image_references_into_table(grid_square,filenames) # For each square having one of 256 bitmaps: bitmap_filenames_from(connection_data): filename="Bitmap" for each bit in connection_data: filename += bit ? "1" : 0 return [filename,] # For each square having zero through nine bitmaps: bitmap_filename_from(connection_data): # Special case - square is empty if 1 not in connection_data: return [] filenames=[] for i in 0..7: if connection_data[i]: filenames.append("Bitmap"+i) filenames.append("BitmapSystem"); return filenames A: I would recommend using a graphics library to draw the map. If you do you won't have the above problem and you will end up with much cleaner/simpler code. 
Some options are SVG, Canvas, and flash/flex. A: Personally I would just render the links in game, and have the cell graphics only provide a background. This gives you more flexibility, allows you to more easily increase the number of ways cells can link to each other, and generally more scalable. Otherwise you will need to account for every possible way a cell might be linked, and this is rather a lot even if you take into account rotational and mirror symmetries. A: Oh, and you could also just have a small number of tile png files with transparency on them, and overlap these using css-positioned div's to form a picture similar to your example, if that suffices. Last time I checked, older versions of IE did not have great support for transparency in image files, though. Can anyone edit this to provide better info on transparency support?
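To make the tile-selection idea concrete on the HTML/JavaScript side the question mentions, here is a rough sketch; the data layout, the tile_XXXXXXXX.png naming scheme, and the "map" element ID are assumptions for illustration, following the 8-bit connection encoding suggested in the answers.
// connections[r][c] is an array of 8 booleans (NW, N, NE, W, E, SW, S, SE)
// saying which neighbours that cell links to; an all-false cell is empty space.
function tileFilename(cell) {
  var bits = "", any = false;
  for (var i = 0; i < 8; i++) {
    bits += cell[i] ? "1" : "0";
    if (cell[i]) any = true;
  }
  return any ? "tile_" + bits + ".png" : "empty.png"; // 256 possible tiles, e.g. tile_01000011.png
}

function renderMap(connections) {
  var html = "";
  for (var r = 0; r < connections.length; r++) {
    html += "<div class='map-row'>";
    for (var c = 0; c < connections[r].length; c++) {
      html += "<img src='/images/" + tileFilename(connections[r][c]) + "' alt=''>";
    }
    html += "</div>";
  }
  document.getElementById("map").innerHTML = html;
}
The same connection data could just as easily be produced server-side in the Rails app and rendered as a plain table of images.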
An algorithm to generate a game map from individual images
I am designing a game to be played in the browser. Game is a space theme and I need to generate a map of the "Galaxy". The basic idea of the map is here: game map http://www.oglehq.com/map.png The map is a grid, with each grid sector can contain a planet/system and each of these has links to a number of adjacent grids. To generate the maps I figured that I would have a collection of images representing the grid elements. So in the case of the sample above, each of the squares is a separate graphic. To create a new map I would "weave" the images together. The map element images would have the planets and their links already on them, and I, therefore, need to stitch the map together in such a way that each image is positioned with its appropriate counterparts => so the image in the bottom corner must have images to the left and diagonal left that link up with it correctly. How would you go about creating the code to know where to place the images? Is there a better way than using images? At the moment performance and/or load should not be a consideration (if I need to generate maps to have preconfigured rather than do it in real-time, I don't mind). If it makes a difference I will be using HTML, CSS, and JavaScript and backed by a Ruby on Rails app.
[ "There are two very nice browser-based vector / javascript-manipulable graphics packages which, together, are virtually universal: SVG and VML. They generally produce high-quality vector-based images with low bandwidth.\nSVG is supported by firefox, opera, safari, and chrome - technically only part of the specification is supported, but for practical purposes you should be able to do what you need. w3schools has a good reference for learning/using svg.\nVML is Microsoft's answer to SVG, and (surprise) is natively supported by IE, although SVG is not. Msdn has the best reference for vml.\nAlthough it's more work, you could write two similar/somewhat integrated code bases for these two technologies. The real benefit is that users won't have to install anything to play your game online - it'll just work, for 99.9% of all users.\nBy the way, you say that you're asking for an algorithm, and I'm offering technologies (if that's the right term for SVG/VML). If you could clarify the input/output specification and perhaps what part presents the challenge (e.g. which naive implementation won't work, and why), that would clarify the question and maybe provide more focused answers.\nAddendum The canvas tag is becoming more widely supported, with the notable exception of IE. This might be a cleaner way to embed graphic elements in html.\nUseful canvas stuff: Opera's canvas tutorial | Mozilla's canvas tutorial | canvas-in-IE partial implementation\n", "Hmm. If each box can only link to its 8 neighbours, then you only have 2^8 = 256 tile types. Fewer if you limit the number of possible links from any one tile.\nYou can encode which links are present in an image with an 8 char filename:\n11000010.jpeg\n\nOr save some bytes and convert that to decimal or hex\n196.jpg\n\nThen the code. There's lots of ways you could choose to represent the map internally. One way is to have an object for each planet. A planet object knows its own position in the grid, and the positions of its linked planets. Hence it has enough information to choose the appropriate file.\nOr have a 2D array. To work out which image to show for each array item, look at the 8 neighbouring array items. If you do this, you can avoid coding for boundaries by making the array two bigger in both axes, and having an empty 'border' around the edges. This saves you checking whether a neighbouring array item is off the array.\n", "There are two ways to represent your map.\nOne way is to represent it is a grid of squares, where each square can have a planet/system in it or not. You can then specify that if there is a neighbor one square away in any of the eight directions (NW, N, NE, W, E, SW, S, SE) then there is a connection to that neighbor. Note however in your sample map the center system is not connected to the system north/east of it, so perhaps this is not the representation you want. But it can be used to build the other representation\nThe second way is to represent each square as having eight bits, defining whether or not there is a connection to a neighbor along each of the same eight directions. 
Presumably if there is even one connection, then the square has a system inside it, otherwise if there are no connections it is blank.\nSo in your example 3x3 grid, the data would be:\n Tile Connections\n nw n ne w e sw s se\n nw 0 0 0 0 0 0 0 0 \n n 0 0 0 0 1 0 1 0\n ne 0 0 0 1 0 0 0 0\n w 0 0 0 0 0 0 0 0\n center 0 1 0 0 0 0 1 1\n e 0 0 0 0 0 0 0 0\n se 0 0 0 0 0 0 0 0\n s 0 1 0 0 1 0 0 0\n sw 1 0 0 1 0 0 0 0\n\nYou could represent these connections as an array of eight boolean values, or much more compactly as an eight bit integer.\nIts then easy to use the eight boolean values (or the eight bit integer) to form the filename of the bitmap to load for that grid square. For example, your center tile using this scheme could be called \"Bitmap01000011.png\" (just using the boolean values), or alternatively \"Bitmap43.png\" (using the hexidecimal value of the eight bit integer representing that binary pattern for a shorter filename).\nSince you have 256 possible combinations, you will need 256 bitmaps.\nYou could also reduce the data to four booleans/bits per tile, since a \"north\" connection for instance implies that the tile to the north has a \"south\" connection, but that makes selecting the bitmaps a bit harder, but you can work it out if you want.\nAlternatively you could layer between zero (empty) and nine (fully connected + system circle) bitmaps together in each square. You would just need to use transparent .png's so that you could combine them together. The downside is that the browser might be slow to draw each square (especially the fully connected ones). The advantage would be less data for you to create, and less data to load from your website.\nYou would represent the map itself as a table, and add your bitmaps as image links to each cell as needed.\nThe pseudo-code to map would be:\ndraw_map(connection_map):\n For each grid_square in connection_map\n connection_data = connection_map[grid_square]\n filenames = bitmap_filenames_from(connection_data)\n insert_image_references_into_table(grid_square,filenames)\n\n# For each square having one of 256 bitmaps:\nbitmap_filenames_from(connection_data):\n filename=\"Bitmap\"\n for each bit in connection_data:\n filename += bit ? \"1\" : 0\n return [filename,]\n\n# For each square having zero through nine bitmaps:\nbitmap_filename_from(connection_data):\n # Special case - square is empty\n if 1 not in connection_data:\n return []\n filenames=[]\n for i in 0..7:\n if connection_data[i]:\n filenames.append(\"Bitmap\"+i)\n filenames.append(\"BitmapSystem\");\n return filenames\n\n", "I would recommend using a graphics library to draw the map. If you do you won't have the above problem and you will end up with much cleaner/simpler code. Some options are SVG, Canvas, and flash/flex.\n", "Personally I would just render the links in game, and have the cell graphics only provide a background. This gives you more flexibility, allows you to more easily increase the number of ways cells can link to each other, and generally more scalable.\nOtherwise you will need to account for every possible way a cell might be linked, and this is rather a lot even if you take into account rotational and mirror symmetries. \n", "Oh, and you could also just have a small number of tile png files with transparency on them, and overlap these using css-positioned div's to form a picture similar to your example, if that suffices.\nLast time I checked, older versions of IE did not have great support for transparency in image files, though. 
Can anyone edit this to provide better info on transparency support?\n" ]
[ 3, 2, 1, 0, 0, 0 ]
[ "As long as links have a maximum length that's not too long, then you don't have too many different possible images for each cell. You need to come up with an ordering on the kinds of image cells. For example, an integer where each bit indicates the presense or absence of an image component.\nBit 0 : Has planet\nBit 1 : Has line from planet going north\nBit 2 : Has line from planet going northwest\n...\nBit 8 : Has line from planet going northeast\n\nOk, now create 512 images. Many languages have libraries that let you edit and write images to disk. If you like Ruby, try this: http://raa.ruby-lang.org/project/ruby-gd\nI don't know how you plan to store your data structure describing the graph of planets and links. An adjacency matrix might make it easy to generate the map, although it's not the smallest representation by far. Then it's pretty straightforward to spit out html like (for a 2x2 grid):\n<table border=\"0\" cellspace=\"0\" cellpadding=\"0\">\n<tr>\n<td><img src=\"cell_X.gif\"></td>\n<td><img src=\"cell_X.gif\"></td>\n</tr>\n<tr>\n<td><img src=\"cell_X.gif\"></td>\n<td><img src=\"cell_X.gif\"></td>\n</tr>\n</table>\n\nOf course, replace each X with the appropriate number corresponding to the combination of bits describing the appearance of the cell. If you're using an adjacency matrix, putting the bits together is pretty simple--just look at the cells around the \"current\" cell.\n" ]
[ -1 ]
[ "algorithm" ]
stackoverflow_0000107073_algorithm.txt
Q: Can you debug a .NET app with ONLY the source code of one file? I want to debug an application in Visual Studio but I ONLY have the source code for 1 class. I only need to step through a single function in that file, but I don't understand what I need to do it. I think the steps are normally something like this: Open a file in VS Load in the "symbols" (.PDB file) Attach to the running process I know how to do #1 and #3, but I don't how to do #2 without the .PDB file. Is it possible to generate the .PDB file for this to make it work? Thanks! A: You need *.pdb files (step 2 from your post) These files contain mapping between source code and compiled assembly. So your step are correct. If your source file has differences with original file, set check mark "Allow the source code to be different from the original version" in BP's properties dialog. Breakpoints and Tracepoints in Visual Studio If you don't have PDB files you can try to decompile your project using Reflector.FileDisassembler or FileGenerator For Reflector. They you can recompile these files to get PDBs Also take a look at Deblector - debugging addin for Reflector. A: You need the symbol file (.PDB) file that belongs to the application you are trying to debug. MSDN: PDB Files The Visual Studio debugger uses the path to the PDB in the EXE or DLL file to find the project.pdb file. If the debugger cannot find the PDB file at that location, or if the path is invalid, for example, if the project was moved to another computer, the debugger searches the path containing the EXE followed by the symbol paths specified in the Options dialog box. This path is generally the Debugging folder in the Symbols node. The debugger will not load a PDB that does not match the binary being debugged. A: The symbol file is the .pdb file. If you place that next to the exectuable, that will load the symbols, and point to the source file. A: In your case 'Symbols' means a pdb file for the assembly you want to debug. The debugger doesn't require that you have all the source, just that you have the matching pdb. The pdb is generated during the build of the assembly, and no you unfortunately cannot create one after the fact. If you don't have the pdb you will need to debug at a lower level then the source code. If you built the assembly on your machine then the symbols will be found when you attach. In that case just set a breakpoint on the source and do whatever is necessary to make that code run, and you'll hit the breakpoint. If you did not build it you need to find the pdb for the assembly. The modules window found under Debug/Windows/Modules can often help by telling you the assemblies loaded in the process along with version info, and timestamps. You'll need that information in cases where there might be multiple versions of an assembly (such as keep many nightly builds, or the last 20 or so versions from continuous integration builds). hope that helps.
Can you debug a .NET app with ONLY the source code of one file?
I want to debug an application in Visual Studio but I ONLY have the source code for 1 class. I only need to step through a single function in that file, but I don't understand what I need in order to do it. I think the steps are normally something like this: Open a file in VS Load in the "symbols" (.PDB file) Attach to the running process I know how to do #1 and #3, but I don't know how to do #2 without the .PDB file. Is it possible to generate the .PDB file for this to make it work? Thanks!
[ "You need *.pdb files (step 2 from your post) These files contain mapping between source code and compiled assembly. So your step are correct. If your source file has differences with original file, set check mark \"Allow the source code to be different from the original version\" in BP's properties dialog.\nBreakpoints and Tracepoints in Visual Studio\nIf you don't have PDB files you can try to decompile your project using Reflector.FileDisassembler or FileGenerator For Reflector. They you can recompile these files to get PDBs\nAlso take a look at Deblector - debugging addin for Reflector.\n", "You need the symbol file (.PDB) file that belongs to the application you are trying to debug.\nMSDN: PDB Files\n\nThe Visual Studio debugger uses the path to the PDB in the EXE or DLL file to find the project.pdb file. If the debugger cannot find the PDB file at that location, or if the path is invalid, for example, if the project was moved to another computer, the debugger searches the path containing the EXE followed by the symbol paths specified in the Options dialog box. This path is generally the Debugging folder in the Symbols node. The debugger will not load a PDB that does not match the binary being debugged. \n\n", "The symbol file is the .pdb file. If you place that next to the exectuable, that will load the symbols, and point to the source file.\n", "In your case 'Symbols' means a pdb file for the assembly you want to debug. The debugger doesn't require that you have all the source, just that you have the matching pdb. The pdb is generated during the build of the assembly, and no you unfortunately cannot create one after the fact. If you don't have the pdb you will need to debug at a lower level then the source code.\nIf you built the assembly on your machine then the symbols will be found when you attach. In that case just set a breakpoint on the source and do whatever is necessary to make that code run, and you'll hit the breakpoint.\nIf you did not build it you need to find the pdb for the assembly. The modules window found under Debug/Windows/Modules can often help by telling you the assemblies loaded in the process along with version info, and timestamps. \nYou'll need that information in cases where there might be multiple versions of an assembly (such as keep many nightly builds, or the last 20 or so versions from continuous integration builds). \nhope that helps.\n" ]
[ 7, 1, 0, 0 ]
[]
[]
[ ".net", "debugging", "symbols", "visual_studio" ]
stackoverflow_0000107179_.net_debugging_symbols_visual_studio.txt
Q: Is tagging organizationally superior to discrete subforums? I am interested in choosing a good structure for an online message board-type application. I will use SO as an example, as I think it's an example that we are all familiar with, but my question is more general; it is about how to achieve the right balance between organization and flexibility in online message boards. The questions page is a load of random stuff. It moves quickly (some might say, too quickly) and contains a huge number of questions that I'm not interested in. The idea, I imagine, is that we can use tags to find questions that we're interested in. However, I'm not sure that this works: you can't use tags negatively. I'm not interested in PHP or perl or web development. I want to exclude such posts. But with the tags, I can't. Although discrete subforums are in a sense less flexible, as they generally force you to pick a category even if a question might fit into two (if SO had, say, areas for "Web Development", "Games development", "Computer Science", "Systems Programming", "Databases", etc. then sure, some people might want to post about developing of web-based games, for example) is it worth sacrificing some of that flexibility in order to make it easier to find the content that you are interested in, and hide the content that you are not interested in? Is there any way with a pure tagging system to achieve the greater ease of use that subforums provide? A: The real problem with subforums comes when you guess wrong about which topics have enough interest to get their own subforums. While some topics end up with their own vibrant subcommunities others end up as empty ghettos, with little activity or feeling of community. Topics that might flourish as occasional subjects in a larger forum end up fragmented among many subforums, none of which has the critical mass of people necessary to have an active, vibrant community. A: Though I think that tagging is supperior to grouping, people tend to think hierarchically. In general it depends on the target group for the forum. Maybe you can go with a mixture: use tagging and later use tag groups to order to posts. Delicious uses this, for example, and I find it rather helpful. A: If you're worried about the divide between specific forums and open tag-based systems, like Stack Overflow, consider making a query system that allows you to do a bit more complex queries than just the AND operator, like here on Stack Overflow. I cannot make a query here that will give me all questions in .NET, SQL or C#, combined, and that is the biggest irritation I have with the tags. With such a query system, you can create virtual forums at least. Other than that, I don't really have a good opinion. I like both, and I haven't yet decided which one is best. A: The idea, I imagine, is that we can use tags to find questions that we're interested in. However, I'm not sure that this works: you can't use tags negatively. I'm not interested in PHP or perl or web development. I want to exclude such posts. But with the tags, I can't. While it's currently the case that you can't use tags to hide content, it shouldn't be impossible. Using SO as an example again, there's no reason that a system similar to the ignore function on a forum couldn't be made for the tag system. By adding a right-click context menu or a small "X" link somewhere in the tag display, tags could be marked as ignored. 
This would also allow the current tag feature to function; Seeing everything (minus your ignore list), or clicking a tag to see only questions with that tag. Ignored tags could be managed in your profile if you should later develop an interest in PHP or INTERCAL that you lacked before. The real question is that of performance. In my head it's as simple as replacing a SELECT [stuff] WHERE Tag = 'buffer-overflow' with SELECT [stuff] WHERE Tag NOT IN ('php','offtopic','funny-hat-friday') but I've not put together any DB backed sites that get absolutely pounded on by thousands people.
Is tagging organizationally superior to discrete subforums?
I am interested in choosing a good structure for an online message board-type application. I will use SO as an example, as I think it's an example that we are all familiar with, but my question is more general; it is about how to achieve the right balance between organization and flexibility in online message boards. The questions page is a load of random stuff. It moves quickly (some might say, too quickly) and contains a huge number of questions that I'm not interested in. The idea, I imagine, is that we can use tags to find questions that we're interested in. However, I'm not sure that this works: you can't use tags negatively. I'm not interested in PHP or perl or web development. I want to exclude such posts. But with the tags, I can't. Although discrete subforums are in a sense less flexible, as they generally force you to pick a category even if a question might fit into two (if SO had, say, areas for "Web Development", "Games development", "Computer Science", "Systems Programming", "Databases", etc. then sure, some people might want to post about developing of web-based games, for example) is it worth sacrificing some of that flexibility in order to make it easier to find the content that you are interested in, and hide the content that you are not interested in? Is there any way with a pure tagging system to achieve the greater ease of use that subforums provide?
[ "The real problem with subforums comes when you guess wrong about which topics have enough interest to get their own subforums. While some topics end up with their own vibrant subcommunities others end up as empty ghettos, with little activity or feeling of community. Topics that might flourish as occasional subjects in a larger forum end up fragmented among many subforums, none of which has the critical mass of people necessary to have an active, vibrant community.\n", "Though I think that tagging is supperior to grouping, people tend to think hierarchically.\nIn general it depends on the target group for the forum. \nMaybe you can go with a mixture: use tagging and later use tag groups to order to posts. Delicious uses this, for example, and I find it rather helpful.\n", "If you're worried about the divide between specific forums and open tag-based systems, like Stack Overflow, consider making a query system that allows you to do a bit more complex queries than just the AND operator, like here on Stack Overflow.\nI cannot make a query here that will give me all questions in .NET, SQL or C#, combined, and that is the biggest irritation I have with the tags. With such a query system, you can create virtual forums at least.\nOther than that, I don't really have a good opinion. I like both, and I haven't yet decided which one is best.\n", "\nThe idea, I imagine, is that we can use tags to find questions that we're interested in. However, I'm not sure that this works: you can't use tags negatively. I'm not interested in PHP or perl or web development. I want to exclude such posts. But with the tags, I can't.\n\nWhile it's currently the case that you can't use tags to hide content, it shouldn't be impossible. Using SO as an example again, there's no reason that a system similar to the ignore function on a forum couldn't be made for the tag system. By adding a right-click context menu or a small \"X\" link somewhere in the tag display, tags could be marked as ignored. This would also allow the current tag feature to function; Seeing everything (minus your ignore list), or clicking a tag to see only questions with that tag.\nIgnored tags could be managed in your profile if you should later develop an interest in PHP or INTERCAL that you lacked before.\nThe real question is that of performance. In my head it's as simple as replacing a SELECT [stuff] WHERE Tag = 'buffer-overflow' with SELECT [stuff] WHERE Tag NOT IN ('php','offtopic','funny-hat-friday') but I've not put together any DB backed sites that get absolutely pounded on by thousands people.\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "tags" ]
stackoverflow_0000048365_tags.txt
Q: How to load a specific version of an assembly To complete some testing I need to load the 64 bit version of an assembly even though I am running a 32 bit version of Windows. Is this possible? A: I'm not sure why you would want to do this, but I suppose you could. If you don't do anything to tell it otherwise, the CLR will load the version of the assembly that is specific to the CPU you are using. That's usually what you want. But I have had an occasion where I needed to load the neutral IL version of an assembly. I used the Load method to specify the version. I haven't tried it (and others here suggest it won't work for an executable assembly), but I suppose you can do the same to specify you want to load the 64 bit version. (You'll have to specify if you want the AMD64 or IA64 version.) A: From CLR Via C# (Jeff Richter): "If your assembly files contain only type-safe managed code, you are writing code that should work on both 32-bit and 64-bit versions of Windows. No source code changes are required for your code to run on either version of Windows. In fact, the resulting EXE/DLL file produced by the compiler will run on 32-bit Windows as well as the x64 and IA64 versions of 64-bit Windows! In other words, the one file will run on any machine that has a version of the .NET Framework installed on it." " The C# compiler offers a /platform command-line switch. This switch allows you to specify whether the resulting assembly can run on x86 machines running 32-bit Windows versions only, x64 machines running 64-bit Windows only, or Intel Itanium machines running 64-bit Windows only. If you don't specify a platform, the default is anycpu, which indicates that the resulting assembly can run on any version of Windows.
How to load a specific version of an assembly
To complete some testing I need to load the 64 bit version of an assembly even though I am running a 32 bit version of Windows. Is this possible?
[ "I'm not sure why you would want to do this, but I suppose you could. If you don't do anything to tell it otherwise, the CLR will load the version of the assembly that is specific to the CPU you are using. That's usually what you want. But I have had an occasion where I needed to load the neutral IL version of an assembly. I used the Load method to specify the version. I haven't tried it (and others here suggest it won't work for an executable assembly), but I suppose you can do the same to specify you want to load the 64 bit version. (You'll have to specify if you want the AMD64 or IA64 version.)\n", "From CLR Via C# (Jeff Richter):\n\"If your assembly files contain only type-safe managed code,\nyou are writing code that should work on both 32-bit and 64-bit versions of Windows. No\nsource code changes are required for your code to run on either version of Windows. \nIn fact,\nthe resulting EXE/DLL file produced by the compiler will run on 32-bit Windows as well as\nthe x64 and IA64 versions of 64-bit Windows! In other words, the one file will run on any\nmachine that has a version of the .NET Framework installed on it.\"\n\" The C# compiler offers a /platform command-line switch. This switch allows you to specify\nwhether the resulting assembly can run on x86 machines running 32-bit Windows versions\nonly, x64 machines running 64-bit Windows only, or Intel Itanium machines running 64-bit\nWindows only. If you don't specify a platform, the default is anycpu, which indicates that the\nresulting assembly can run on any version of Windows.\n" ]
[ 4, 1 ]
[ "32 bit Windows cannot run 64 bit executables without a VM/emulator\n32 bit Windows can compile for execution on 64 bit Windows\n", "No, you cannot run assemblies that are compiled for 64-bit on a system running the 32-bit version of Windows.\n" ]
[ -1, -1 ]
[ ".net", "64_bit", "assemblies", "c#" ]
stackoverflow_0000107216_.net_64_bit_assemblies_c#.txt
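A note on the assembly-architecture question above: as the answers point out, a 32-bit Windows process cannot execute a 64-bit image, but it can still inspect one. The C# sketch below (with a made-up file path) uses reflection-only loading, which reads metadata without running any code, so the target architecture of the file does not have to match the current process. This is only an illustrative sketch of that idea, not part of the original answers.

    using System;
    using System.Reflection;

    class InspectAssembly
    {
        static void Main()
        {
            // Hypothetical path to a 64-bit build of the assembly under test.
            string path = @"C:\builds\x64\MyLibrary.dll";

            // Reflection-only loading reads metadata only; no code is executed,
            // so the image's bitness need not match the current (32-bit) process.
            Assembly asm = Assembly.ReflectionOnlyLoadFrom(path);

            AssemblyName name = asm.GetName();
            Console.WriteLine("Name:         " + name.Name);
            Console.WriteLine("Version:      " + name.Version);
            Console.WriteLine("Architecture: " + name.ProcessorArchitecture); // e.g. Amd64, X86, MSIL
        }
    }

Actually executing the 64-bit code still requires a 64-bit process on a 64-bit OS, as the answers above state.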
Q: Shared File Database Suggestion I would like to build and deploy a database application for Windows based systems, but need to live within the following constraints: Cannot run as a server (i.e., have open ports); Must be able to share database files with other instances of the program (running on other machines); Must not require a DBA for maintenance; No additional cost for run-time license. In addition, the following are nice to have "features": Zero-install (e.g., no registry entries, no need to put files in \Windows\..., etc.); "Reasonable" performance (yes, that's vague); "Reasonable" file size limitations (at least 1GB per table/file--just in case). I've seen this question   Embedded Database for .net that can run off a network but it doesn't quite answer it for me. I have seen the VistaDB site, but while it looks promising, I have no personal experience with it. I have also looked at SQLite, and while it seems good enough for Goggle, I (again) have no personal experience with it. I would love to use a Java based solution because it's cross-platform (even though my main target is Windows, I'd like to be flexible) and WebStart is a really nice way to distribute software, but the most commonly used DBs (Derby and hsqldb) won't support shared access. I know that I'm not the only one who's trying/tried to do this, so I'm hoping I could get some advice. A: I'd go with SQLite. There are SQLite bindings for everything, and it's very widely used as a embedded database for a large number of applications. A: I use SQLite at work and one thing that you should keep in mind is that its file based and uses a file lock for managing concurrent connections. It is not a great solution when you have multiple users trying to use the database at the same time. SQLite is however a great database for one user application, its fast, has a small foot print and has a thriving community built around it. A: If you've got VStudio sitting around, how about SQL Server 3.5 Compact edition? MSSQL running in-proc. http://www.microsoft.com/sql/editions/compact/downloads.mspx
Shared File Database Suggestion
I would like to build and deploy a database application for Windows based systems, but need to live within the following constraints: Cannot run as a server (i.e., have open ports); Must be able to share database files with other instances of the program (running on other machines); Must not require a DBA for maintenance; No additional cost for run-time license. In addition, the following are nice to have "features": Zero-install (e.g., no registry entries, no need to put files in \Windows\..., etc.); "Reasonable" performance (yes, that's vague); "Reasonable" file size limitations (at least 1GB per table/file--just in case). I've seen the question "Embedded Database for .net that can run off a network" but it doesn't quite answer it for me. I have seen the VistaDB site, but while it looks promising, I have no personal experience with it. I have also looked at SQLite, and while it seems good enough for Google, I (again) have no personal experience with it. I would love to use a Java based solution because it's cross-platform (even though my main target is Windows, I'd like to be flexible) and WebStart is a really nice way to distribute software, but the most commonly used DBs (Derby and hsqldb) won't support shared access. I know that I'm not the only one who's trying/tried to do this, so I'm hoping I could get some advice.
[ "I'd go with SQLite. There are SQLite bindings for everything, and it's very widely used as a embedded database for a large number of applications. \n", "I use SQLite at work and one thing that you should keep in mind is that its file based and uses a file lock for managing concurrent connections. It is not a great solution when you have multiple users trying to use the database at the same time. SQLite is however a great database for one user application, its fast, has a small foot print and has a thriving community built around it. \n", "If you've got VStudio sitting around, how about SQL Server 3.5 Compact edition? MSSQL running in-proc.\nhttp://www.microsoft.com/sql/editions/compact/downloads.mspx\n" ]
[ 1, 1, 0 ]
[]
[]
[ "database", "windows" ]
stackoverflow_0000106931_database_windows.txt
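To make the SQLite recommendation above concrete, here is a minimal C# sketch using the System.Data.SQLite ADO.NET provider (an assumption -- any SQLite binding would do); the UNC path and the Vendor table are made up for illustration. Keep in mind the caveat from the answers: SQLite locks the whole file, so many simultaneous writers over a file share will contend.

    using System;
    using System.Data.SQLite; // third-party ADO.NET provider for SQLite

    class SharedDbDemo
    {
        static void Main()
        {
            // Hypothetical database file sitting on a shared network folder.
            string connectionString = @"Data Source=\\fileserver\apps\vendors.db;Version=3;";

            using (var connection = new SQLiteConnection(connectionString))
            {
                connection.Open();

                using (var create = new SQLiteCommand(
                    "CREATE TABLE IF NOT EXISTS Vendor (Id INTEGER PRIMARY KEY, Name TEXT)",
                    connection))
                {
                    create.ExecuteNonQuery();
                }

                using (var count = new SQLiteCommand("SELECT COUNT(*) FROM Vendor", connection))
                {
                    Console.WriteLine("Vendors: " + count.ExecuteScalar());
                }
            }
        }
    }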
Q: How do you enable the network on a virtual machine running Vista x64? I'm running Server 2008 64bit with Hyper-V. I've created a virtual machine with Vista 64bit and installed it. I can't get the Vista virtual machine to see the network adapter. I've set-up an external network on the Virtual Network Manager (Hyper-V) and associated that with the virtual machine (Vista). I've also tried using a Legacy Network Adapter but that didn't work either although that time the Vista machine saw the network card but couldn't connect through it. This is (obviously) the first time I've tried to set-up a virtual machine. Any ideas? EDIT: I notice that this question has been voted down a couple of times. I know that it's not a programming question but I'm a developer setting up a virtual machine to test my C#/ASP.NET code on and thought that other developers may hit this problem as well when they're doing this... A: I don't know Hyper-V, but I know in VMWare you can create a network connection in Bridged mode (meaning the VM will get it's own IP address via DHCP if that's enabled) or host-only mode (meaning the VM can only communicate with the host). When Vista could see the card, could it communicate with the host machine (which would indicate a host-only connection was specified)? What kind of IP address did it have (I would guess Hyper-V has a built-in DHCP server like VMWare does?) -- that might give additional clues. Sorry I don't know Hyper-V better... A: Make sure you have the Hyper-V Tools installed on the Guest VM. You shouldn't need the legacy adapter. You also may want to make sure you have all of the latest updates which may have addressed your issue. Particularly, KB950050 http://support.microsoft.com/kb/950050 A: It turns out that Vista x64 running as a VM through Hyper-V doesn't support the virtual network connection/card and that you have to set it up as a legacy network card. When I eventually got the config settings correct for the legacy network and disable the virtual network it connected. Thanks for the help guys - much appreciated!
How do you enable the network on a virtual machine running Vista x64?
I'm running Server 2008 64bit with Hyper-V. I've created a virtual machine with Vista 64bit and installed it. I can't get the Vista virtual machine to see the network adapter. I've set-up an external network on the Virtual Network Manager (Hyper-V) and associated that with the virtual machine (Vista). I've also tried using a Legacy Network Adapter but that didn't work either although that time the Vista machine saw the network card but couldn't connect through it. This is (obviously) the first time I've tried to set-up a virtual machine. Any ideas? EDIT: I notice that this question has been voted down a couple of times. I know that it's not a programming question but I'm a developer setting up a virtual machine to test my C#/ASP.NET code on and thought that other developers may hit this problem as well when they're doing this...
[ "I don't know Hyper-V, but I know in VMWare you can create a network connection in Bridged mode (meaning the VM will get it's own IP address via DHCP if that's enabled) or host-only mode (meaning the VM can only communicate with the host). When Vista could see the card, could it communicate with the host machine (which would indicate a host-only connection was specified)? What kind of IP address did it have (I would guess Hyper-V has a built-in DHCP server like VMWare does?) -- that might give additional clues.\nSorry I don't know Hyper-V better...\n", "Make sure you have the Hyper-V Tools installed on the Guest VM. You shouldn't need the legacy adapter.\nYou also may want to make sure you have all of the latest updates which may have addressed your issue. Particularly, KB950050\nhttp://support.microsoft.com/kb/950050\n", "It turns out that Vista x64 running as a VM through Hyper-V doesn't support the virtual network connection/card and that you have to set it up as a legacy network card. When I eventually got the config settings correct for the legacy network and disable the virtual network it connected.\nThanks for the help guys - much appreciated!\n" ]
[ 1, 1, 1 ]
[]
[]
[ "hyper_v", "vista64", "windows_server_2008" ]
stackoverflow_0000102804_hyper_v_vista64_windows_server_2008.txt
Q: Alternative to 'truss -p' instruction I am looking for a command in Unix that returns the status of a process(active, dead, sleeping, waiting for another process, etc.) is there any available? A shell script maybe? A: in linux, something like ps -p somepid --no-headers -o state should work, alternately you can look for the info in proc with grep ^State: /proc/somepid/status A: Try pflags <pid>, which will give you per-thread status information. Example: root@weetbix # pflags $$ 3384: bash data model = _ILP32 flags = ORPHAN|MSACCT|MSFORK /1: flags = ASLEEP waitid(0x7,0x0,0xffbfefc0,0xf) sigmask = 0x00020000,0x00000000 Also check out the manpage for pflags to see other useful tools like pstack, pfiles, pargs etc. A: Playing with ps options doesn't give you what you need?
Alternative to 'truss -p' instruction
I am looking for a command in Unix that returns the status of a process (active, dead, sleeping, waiting for another process, etc.). Is there one available? A shell script, maybe?
[ "in linux, something like ps -p somepid --no-headers -o state should work, alternately you can look for the info in proc with grep ^State: /proc/somepid/status\n", "Try pflags <pid>, which will give you per-thread status information. Example:\n\nroot@weetbix # pflags $$\n3384: bash\n data model = _ILP32 flags = ORPHAN|MSACCT|MSFORK\n /1: flags = ASLEEP waitid(0x7,0x0,0xffbfefc0,0xf)\n sigmask = 0x00020000,0x00000000\n\nAlso check out the manpage for pflags to see other useful tools like pstack, pfiles, pargs etc.\n", "Playing with ps options doesn't give you what you need?\n" ]
[ 3, 3, 0 ]
[]
[]
[ "process", "solaris", "status", "truss", "unix" ]
stackoverflow_0000094999_process_solaris_status_truss_unix.txt
Q: Can a console app stay alive until it has finished its work? I've just read up on Thread.IsBackground and if I understand it correctly, when it's set to false the Thread is a foreground thread which means it should stay alive until it has finished working even though the app have been exited out. Now I tested this with a winform app and it works as expected but when used with a console app the process doesn't stay alive but exits right away. Does the Thread.IsBackground behave differently from a console app than a winform app? A: The Thread.IsBackground property only marks if the thread should block the process from exiting. It doesn't perform any magic to keep the thread alive until some sort of explicit exit. To quote the Thread.IsBackground Property MSDN (emphasis mine): A thread is either a background thread or a foreground thread. Background threads are identical to foreground threads, except that background threads do not prevent a process from terminating. Once all foreground threads belonging to a process have terminated, the common language runtime ends the process. Any remaining background threads are stopped and do not complete. In order to keep your console app alive you'll need to have some sort of loop which will spin until you ask it to stop via a flag or similar. Windows Forms applications have this built in because of their message pump (I believe). A: I believe with the winforms based app you get a seperate thread to handle the messaging, so if the "main" thread exits, you still have a thread going to keep the process alive. With a console app, once main exits, unless you started a foreground thread, the process also terminates. A: IMHO really you should be a lot more explicit about the expected semantics of your application and deliberately <thread>.Join.
Can a console app stay alive until it has finished its work?
I've just read up on Thread.IsBackground and if I understand it correctly, when it's set to false the Thread is a foreground thread, which means it should stay alive until it has finished working even though the app has been exited. Now I tested this with a winform app and it works as expected, but when used with a console app the process doesn't stay alive but exits right away. Does Thread.IsBackground behave differently in a console app than in a winform app?
[ "The Thread.IsBackground property only marks if the thread should block the process from exiting. It doesn't perform any magic to keep the thread alive until some sort of explicit exit.\nTo quote the Thread.IsBackground Property MSDN (emphasis mine):\n\nA thread is either a background thread or a foreground thread. Background threads are identical to foreground threads, except that background threads do not prevent a process from terminating. Once all foreground threads belonging to a process have terminated, the common language runtime ends the process. Any remaining background threads are stopped and do not complete.\n\nIn order to keep your console app alive you'll need to have some sort of loop which will spin until you ask it to stop via a flag or similar. Windows Forms applications have this built in because of their message pump (I believe).\n", "I believe with the winforms based app you get a seperate thread to handle the messaging, so if the \"main\" thread exits, you still have a thread going to keep the process alive. With a console app, once main exits, unless you started a foreground thread, the process also terminates.\n", "IMHO really you should be a lot more explicit about the expected semantics of your application and deliberately <thread>.Join.\n" ]
[ 2, 1, 0 ]
[]
[]
[ ".net", "multithreading" ]
stackoverflow_0000107154_.net_multithreading.txt
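As a small illustration of the rule quoted above ("the CLR ends the process once all foreground threads have terminated"), here is a hedged C# sketch: Main returns immediately, yet the console process stays alive until the foreground worker finishes. The five-second sleep just stands in for real work.

    using System;
    using System.Threading;

    class ForegroundThreadDemo
    {
        static void Main()
        {
            var worker = new Thread(() =>
            {
                Console.WriteLine("Worker: started, pretending to work for 5 seconds...");
                Thread.Sleep(5000);
                Console.WriteLine("Worker: finished.");
            });

            worker.IsBackground = false; // the default: a foreground thread
            worker.Start();

            Console.WriteLine("Main: returning now.");
            // Main exits here, but the process keeps running until the foreground
            // worker completes. Set IsBackground = true before Start() and the
            // process terminates immediately, killing the worker mid-sleep.
        }
    }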
Q: Using a USB controller as auxiliary keyboard for Visual Studio Our family no longer uses our Mixman DM2 USB controller for making music. This frees it up for me to use as an auxiliary keyboard with 31 "keys" (and a few "sliders"). I had the crazy idea to use these buttons to send keyboard shortcuts to Visual Studio. It just seems easier pressing one key than some of the finger-bending ctrl double-key combos. I tried a couple utilities like JoyToKey and XPadder but they only recognize game controllers and the DM2 falls into the more generic "USB Controller" category. Have you ever heard of such nonsense? Clarify Question: Are you aware of a utility to read inputs from a generic USB Controller and map them to keyboard key presses? -OR- Are you aware of a Visual Studio add-in that will read from a generic USB Controller? A: Haven't heard of it before, but I can't wait to see if you can make it useful! You can assign Visual Studio functions to "chords" of key combinations, right? So maybe you could play shave-and-a-haircut to start a build. A: Yeah, I was thinking about doing something similiar to that myself. I'm pretty sure you'd have to write your own driver for it though.
Using a USB controller as auxiliary keyboard for Visual Studio
Our family no longer uses our Mixman DM2 USB controller for making music. This frees it up for me to use as an auxiliary keyboard with 31 "keys" (and a few "sliders"). I had the crazy idea to use these buttons to send keyboard shortcuts to Visual Studio. It just seems easier pressing one key than some of the finger-bending ctrl double-key combos. I tried a couple utilities like JoyToKey and XPadder but they only recognize game controllers and the DM2 falls into the more generic "USB Controller" category. Have you ever heard of such nonsense? Clarify Question: Are you aware of a utility to read inputs from a generic USB Controller and map them to keyboard key presses? -OR- Are you aware of a Visual Studio add-in that will read from a generic USB Controller?
[ "Haven't heard of it before, but I can't wait to see if you can make it useful!\nYou can assign Visual Studio functions to \"chords\" of key combinations, right? So maybe you could play shave-and-a-haircut to start a build.\n", "Yeah, I was thinking about doing something similar to that myself. I'm pretty sure you'd have to write your own driver for it though.\n" ]
[ 1, 0 ]
[]
[]
[ "controller", "usb" ]
stackoverflow_0000107321_controller_usb.txt
Q: What's the best way to pass data between concurrent threads in .NET? I have two threads, one needs to poll a bunch of separate static resources looking for updates. The other one needs to get the data and store it in the database. How can thread 1 tell thread 2 that there is something to process? A: If the pieces of data are independant then treat the pieces of data as work items to be processed by a pool of threads. Use the thread pool and QueueUserWorkItem to post the data to the thread(s). You should get better scalability using a pool of symmetric threads and limiting the amount of synchronisation that has to occur between the producer and consumer(s). For example (from MSDN): TaskInfo ti = new TaskInfo("This report displays the number {0}.", 42); // Queue the task and data. if (ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadProc), ti)) { Console.WriteLine("Main thread does some work, then sleeps."); // If you comment out the Sleep, the main thread exits before // the ThreadPool task has a chance to run. ThreadPool uses // background threads, which do not keep the application // running. (This is a simple example of a race condition.) Thread.Sleep(1000); Console.WriteLine("Main thread exits."); } else { Console.WriteLine("Unable to queue ThreadPool request."); } // The thread procedure performs the independent task, in this case // formatting and printing a very simple report. // static void ThreadProc(Object stateInfo) { TaskInfo ti = (TaskInfo) stateInfo; Console.WriteLine(ti.Boilerplate, ti.Value); } A: I use Monitor.Wait / Pulse on a Queue of work items. A: Does the "store in the DB" thread always need to be running? It seems like perhaps the best option (if possible) would be to have the polling thread spin up another thread to do the save. Depending on the number of threads being created though, it could be that having the first polling thread use ThreadPool.QueueUserWorkItem() might be the more efficient route. For more efficiency, when saving to the database, I would use async I/O on the DB rather than the sync methods. Anytime you can get away from having to communicate directly between two threads, you should. Having to throw together some sync primitives, your code won't be as easy to debug and could introduce some very subtle race conditions that cause "once in a million execution" type bugs (which are far from fun to find/fix). If the second thread always needs to be executing, let us know why with some more information and we can come back with a more in-depth answer. Good luck! A: I personally would have thread 1 raise events which thread 2 can respond to. The threads can be wired up to the appropriate events by the controlling process which initiates both the threads.
What's the best way to pass data between concurrent threads in .NET?
I have two threads: one needs to poll a bunch of separate static resources looking for updates, and the other needs to take that data and store it in the database. How can thread 1 tell thread 2 that there is something to process?
[ "If the pieces of data are independant then treat the pieces of data as work items to be processed by a pool of threads. Use the thread pool and QueueUserWorkItem to post the data to the thread(s). You should get better scalability using a pool of symmetric threads and limiting the amount of synchronisation that has to occur between the producer and consumer(s).\nFor example (from MSDN):\n TaskInfo ti = new TaskInfo(\"This report displays the number {0}.\", 42);\n\n // Queue the task and data.\n if (ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadProc), ti)) { \n Console.WriteLine(\"Main thread does some work, then sleeps.\");\n\n // If you comment out the Sleep, the main thread exits before\n // the ThreadPool task has a chance to run. ThreadPool uses \n // background threads, which do not keep the application \n // running. (This is a simple example of a race condition.)\n Thread.Sleep(1000);\n\n Console.WriteLine(\"Main thread exits.\");\n }\n else {\n Console.WriteLine(\"Unable to queue ThreadPool request.\"); \n }\n\n\n// The thread procedure performs the independent task, in this case\n// formatting and printing a very simple report.\n//\nstatic void ThreadProc(Object stateInfo) {\n TaskInfo ti = (TaskInfo) stateInfo;\n Console.WriteLine(ti.Boilerplate, ti.Value); \n}\n\n", "I use Monitor.Wait / Pulse on a Queue of work items.\n", "Does the \"store in the DB\" thread always need to be running? It seems like perhaps the best option (if possible) would be to have the polling thread spin up another thread to do the save. Depending on the number of threads being created though, it could be that having the first polling thread use ThreadPool.QueueUserWorkItem() might be the more efficient route. \nFor more efficiency, when saving to the database, I would use async I/O on the DB rather than the sync methods. \nAnytime you can get away from having to communicate directly between two threads, you should. Having to throw together some sync primitives, your code won't be as easy to debug and could introduce some very subtle race conditions that cause \"once in a million execution\" type bugs (which are far from fun to find/fix). \nIf the second thread always needs to be executing, let us know why with some more information and we can come back with a more in-depth answer.\nGood luck!\n", "I personally would have thread 1 raise events which thread 2 can respond to. The threads can be wired up to the appropriate events by the controlling process which initiates both the threads.\n" ]
[ 7, 5, 0, 0 ]
[]
[]
[ ".net", "concurrency", "multithreading" ]
stackoverflow_0000106979_.net_concurrency_multithreading.txt
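The Monitor.Wait / Pulse suggestion above can be sketched as a tiny blocking work queue. The names (WorkQueue, the "resource" strings) are invented for illustration and the example is deliberately minimal: the polling thread calls Enqueue whenever it finds an update; the database thread blocks in Dequeue until there is something to store.

    using System;
    using System.Collections.Generic;
    using System.Threading;

    class WorkQueue
    {
        private readonly Queue<string> _items = new Queue<string>();
        private readonly object _gate = new object();

        // Called by the polling thread whenever it finds an update.
        public void Enqueue(string item)
        {
            lock (_gate)
            {
                _items.Enqueue(item);
                Monitor.Pulse(_gate); // wake the consumer if it is waiting
            }
        }

        // Called by the database thread; blocks until an item is available.
        public string Dequeue()
        {
            lock (_gate)
            {
                while (_items.Count == 0)
                    Monitor.Wait(_gate); // releases the lock while waiting
                return _items.Dequeue();
            }
        }
    }

    class Demo
    {
        static void Main()
        {
            var queue = new WorkQueue();

            var dbThread = new Thread(() =>
            {
                for (int i = 0; i < 3; i++)
                    Console.WriteLine("Storing in DB: " + queue.Dequeue());
            });
            dbThread.Start();

            foreach (var update in new[] { "resource A", "resource B", "resource C" })
                queue.Enqueue(update); // pretend the poller found these updates

            dbThread.Join();
        }
    }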
Q: How do I get the value of a JSObject property from C? In SpiderMonkey, how do I get the value of a property of a JSObject from within my C code? static JSBool JSD_getter(JSContext *cx, JSObject *obj, uintN argc, jsval *argv, jsval *rval) { jsval js_id; JS_GetProperty(cx, obj, "id", &js_id); // js_id has JavaScript type int c_id; JS_ValueToInt32(cx, js_id, &c_id); // Now, c_id contains the actual value } // of obj.id, as a C native type A: JS_GetProperty()
How do I get the value of a JSObject property from C?
In SpiderMonkey, how do I get the value of a property of a JSObject from within my C code? static JSBool JSD_getter(JSContext *cx, JSObject *obj, uintN argc, jsval *argv, jsval *rval) { jsval js_id; JS_GetProperty(cx, obj, "id", &js_id); // js_id has JavaScript type int c_id; JS_ValueToInt32(cx, js_id, &c_id); // Now, c_id contains the actual value } // of obj.id, as a C native type
[ "JS_GetProperty()\n" ]
[ 1 ]
[]
[]
[ "accessor", "javascript", "spidermonkey" ]
stackoverflow_0000107317_accessor_javascript_spidermonkey.txt
Q: I need populate a repeater with pseudo-related data Before I do this I figured I would ask if it was the best way. Each "Vendor" object has a "Bucket" object. In my repeater I need to display some properties from Vendor and some from Bucket, also some images populated by FileSystem that are linked to the vendor. I figured the best way to do this is bind the repeater with the vendor object, then on ItemDataBound I would populate the images and the buckets based on the vendor that is bound to that particular Items[e.Item.ItemIndex]. Is this the best way to go about this? A: That is how I usually go about it, bind on the main object and deal with the details in the ItemDataBound. A: If the Vendor object can only hold a single Bucket object it may be appropriate to bind it all in a single, top-level repeater. You can access the Bucket through simple data-binding all at the top level without overriding ItemDataBound. Because you're most likely binding the "Vendor", you have access to it's members in a databind if you want to do it this way: <%# DataBinder.Eval (Container.DataItem, "Bucket.Property" ) %> You want to do the ItemDataBound if you must "process something" during each iteration of the binding and need detailed access to each Vendor object for decision making. If the Vendor object can hold multiple Buckets, then the best way to get access to that is through ItemDataBound. On each iteration of the Vendor you could bind a new, nested repeater to display the bucket data, or perform whatever repeating/aggregation functionality you may need. Depending on how you want it to behave at your client, you could render the Vendors only. When the user clicks on the Vendor (or whatever), you could perform an AJAX call to the server which would retrieve the Bucket data and render it into your page dynamically. You may want to try that approach if there are a large number of vendors along with their buckets being rendered. This would help database performance, and page render time in contrast to building it all on the ASPX server side. (But this would need to be a lot of data, you should do it for usability/client reasons before trying to merit performance gains.)
I need populate a repeater with pseudo-related data
Before I do this I figured I would ask if it was the best way. Each "Vendor" object has a "Bucket" object. In my repeater I need to display some properties from Vendor and some from Bucket, also some images populated by FileSystem that are linked to the vendor. I figured the best way to do this is bind the repeater with the vendor object, then on ItemDataBound I would populate the images and the buckets based on the vendor that is bound to that particular Items[e.Item.ItemIndex]. Is this the best way to go about this?
[ "That is how I usually go about it, bind on the main object and deal with the details in the ItemDataBound.\n", "If the Vendor object can only hold a single Bucket object it may be appropriate to bind it all in a single, top-level repeater. You can access the Bucket through simple data-binding all at the top level without overriding ItemDataBound.\nBecause you're most likely binding the \"Vendor\", you have access to it's members in a databind if you want to do it this way:\n<%# DataBinder.Eval (Container.DataItem, \"Bucket.Property\" ) %>\n\nYou want to do the ItemDataBound if you must \"process something\" during each iteration of the binding and need detailed access to each Vendor object for decision making.\nIf the Vendor object can hold multiple Buckets, then the best way to get access to that is through ItemDataBound. On each iteration of the Vendor you could bind a new, nested repeater to display the bucket data, or perform whatever repeating/aggregation functionality you may need.\nDepending on how you want it to behave at your client, you could render the Vendors only. When the user clicks on the Vendor (or whatever), you could perform an AJAX call to the server which would retrieve the Bucket data and render it into your page dynamically. You may want to try that approach if there are a large number of vendors along with their buckets being rendered. This would help database performance, and page render time in contrast to building it all on the ASPX server side. (But this would need to be a lot of data, you should do it for usability/client reasons before trying to merit performance gains.)\n" ]
[ 2, 2 ]
[]
[]
[ ".net_3.5", "asp.net", "c#", "controls", "repeater" ]
stackoverflow_0000107329_.net_3.5_asp.net_c#_controls_repeater.txt
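For reference, the ItemDataBound approach described above usually looks something like the C# sketch below. The Vendor and Bucket types come from the question but their members are guessed; the control IDs, image path, and page class name are assumptions made up for the example.

    using System;
    using System.Web.UI.WebControls;

    // Stand-in domain types; the real ones come from the question's model.
    public class Bucket { public string Name { get; set; } }
    public class Vendor { public int Id { get; set; } public Bucket Bucket { get; set; } }

    public partial class VendorList : System.Web.UI.Page
    {
        protected void VendorRepeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            // Only data rows carry a DataItem; skip headers, footers and separators.
            if (e.Item.ItemType != ListItemType.Item &&
                e.Item.ItemType != ListItemType.AlternatingItem)
                return;

            var vendor = (Vendor)e.Item.DataItem; // the Vendor bound to this row

            var bucketLabel = (Label)e.Item.FindControl("BucketLabel");
            bucketLabel.Text = vendor.Bucket.Name; // pseudo-related Bucket data

            var logo = (Image)e.Item.FindControl("VendorImage");
            logo.ImageUrl = "~/images/vendors/" + vendor.Id + ".png"; // per-vendor image from the file system
        }
    }

    // In the .aspx markup (also illustrative):
    // <asp:Repeater ID="VendorRepeater" runat="server"
    //               OnItemDataBound="VendorRepeater_ItemDataBound"> ... </asp:Repeater>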
Q: How do I get the value of the jdbc.batch_size property at runtime for a Web application using Spring MVC and Hibernate? According to what I have found so far, I can use the following code: LocalSessionFactoryBean sessionFactory = (LocalSessionFactoryBean)super.getApplicationContext().getBean("&sessionFactory"); System.out.println(sessionFactory.getConfiguration().buildSettings().getJdbcBatchSize()); but then I get a Hibernate Exception: org.hibernate.HibernateException: No local DataSource found for configuration - dataSource property must be set on LocalSessionFactoryBean Can somebody shed some light? A: On the versions of Hibernate that I've checked, getConfiguration is not a public method of SessionFactory. In a few desperate cases, I've cast a Session or SessionFactory into its underlying implementation to get at some values that weren't publicly available. In this case that would be: ((SessionFactoryImplementor)sessionFactory).getSettings().getJdbcBatchSize() Of course, that's dangerous because it could break if they change the implementation. I usually only do this for optimizations that I can live without and then wrap the whole thing in a try/catch Throwable block just to make sure it won't hurt anything if it fails. A better idea might be to set the value yourself when you initialize Hibernate so you already know what it is from the beginning. A: Try the following (I can't test it since I don't use Spring): System.out.println(sessionFactory.getConfiguration().getProperty("hibernate.jdbc.batch_size"))
How do I get the value of the jdbc.batch_size property at runtime for a Web application using Spring MVC and Hibernate?
According to what I have found so far, I can use the following code: LocalSessionFactoryBean sessionFactory = (LocalSessionFactoryBean)super.getApplicationContext().getBean("&sessionFactory"); System.out.println(sessionFactory.getConfiguration().buildSettings().getJdbcBatchSize()); but then I get a Hibernate Exception: org.hibernate.HibernateException: No local DataSource found for configuration - dataSource property must be set on LocalSessionFactoryBean Can somebody shed some light?
[ "On the versions of Hibernate that I've checked, getConfiguration is not a public method of SessionFactory. In a few desperate cases, I've cast a Session or SessionFactory into its underlying implementation to get at some values that weren't publicly available. In this case that would be:\n((SessionFactoryImplementor)sessionFactory).getSettings().getJdbcBatchSize()\n\nOf course, that's dangerous because it could break if they change the implementation. I usually only do this for optimizations that I can live without and then wrap the whole thing in a try/catch Throwable block just to make sure it won't hurt anything if it fails. A better idea might be to set the value yourself when you initialize Hibernate so you already know what it is from the beginning.\n", "Try the following (I can't test it since I don't use Spring):\nSystem.out.println(sessionFactory.getConfiguration().getProperty(\"hibernate.jdbc.batch_size\"))\n\n" ]
[ 4, 3 ]
[]
[]
[ "hibernate", "java", "spring" ]
stackoverflow_0000105998_hibernate_java_spring.txt
Q: How would one code test and set behavior without a special hardware instruction? Most of the implementations I find require a hardware instruction to do this. However I strongly doubt this is required (if it is, I can't figure out why...) A: You don't need a test and set instruction to get mutual exclusion locking, if thats what you're asking. Dijkstra described the first mutual exclusion algorithm I am aware of, in 1965. The title of the paper was "Solution of a problem in concurrent programming control", search Google for a copy near you. The original algorithm required no special support from the hardware at all, but providing an atomic instruction in the CPU dramatically improves the performance. Test-and-set, atomic swap, and load-linked + store-conditional are all common primitives for CPUs to provide. All can be used to implement mutual exclusion, which can then be used to implement whatever locking semantics you want. A: If you'd like a cross-arch way to do so, and are using gcc, then you can use gcc's atomic builtins: http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html Calling these will result in a hardware specific machine instruction for the current build architecture. On those that do not support them, the compile will fail. (I think...)
How would one code test and set behavior without a special hardware instruction?
Most of the implementations I find require a hardware instruction to do this. However, I strongly doubt this is required (if it is, I can't figure out why...).
[ "You don't need a test and set instruction to get mutual exclusion locking, if thats what you're asking.\nDijkstra described the first mutual exclusion algorithm I am aware of, in 1965. The title of the paper was \"Solution of a problem in concurrent programming control\", search Google for a copy near you. The original algorithm required no special support from the hardware at all, but providing an atomic instruction in the CPU dramatically improves the performance. \nTest-and-set, atomic swap, and load-linked + store-conditional are all common primitives for CPUs to provide. All can be used to implement mutual exclusion, which can then be used to implement whatever locking semantics you want.\n", "If you'd like a cross-arch way to do so, and are using gcc, then you can use gcc's atomic builtins:\nhttp://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html\nCalling these will result in a hardware specific machine instruction for the current build architecture. On those that do not support them, the compile will fail. (I think...)\n" ]
[ 3, 0 ]
[]
[]
[ "primitive", "synchronization", "test_and_set" ]
stackoverflow_0000107184_primitive_synchronization_test_and_set.txt
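To make the "no special instruction needed" point above concrete, here is a hedged C# sketch of Peterson's classic two-thread mutual-exclusion algorithm. It uses only ordinary loads and stores; the explicit memory fences stand in for the sequential consistency the textbook version assumes. It is purely illustrative -- in real code you would use lock, Monitor, or Interlocked, which rely on the hardware primitives precisely because they are much faster.

    using System;
    using System.Threading;

    // Peterson's algorithm: mutual exclusion for exactly two threads (ids 0 and 1)
    // built from plain reads and writes -- no test-and-set or compare-and-swap.
    class PetersonLock
    {
        private readonly bool[] _flag = new bool[2];
        private int _turn;

        public void Enter(int id)
        {
            int other = 1 - id;
            _flag[id] = true;           // I want to enter
            _turn = other;              // but let the other thread go first
            Thread.MemoryBarrier();     // publish the writes before the reads below
            while (_flag[other] && _turn == other)
            {
                Thread.MemoryBarrier(); // force fresh reads on every spin
            }
        }

        public void Exit(int id)
        {
            Thread.MemoryBarrier();     // flush critical-section writes first
            _flag[id] = false;
        }
    }

    class Demo
    {
        static int _counter;

        static void Main()
        {
            var mutex = new PetersonLock();
            ThreadStart worker0 = delegate { for (int i = 0; i < 100000; i++) { mutex.Enter(0); _counter++; mutex.Exit(0); } };
            ThreadStart worker1 = delegate { for (int i = 0; i < 100000; i++) { mutex.Enter(1); _counter++; mutex.Exit(1); } };

            var t0 = new Thread(worker0);
            var t1 = new Thread(worker1);
            t0.Start(); t1.Start();
            t0.Join(); t1.Join();

            Console.WriteLine(_counter); // 200000 if mutual exclusion held
        }
    }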
Q: I want tell the VC++ Compiler to compile all code. Can it be done? I am using VS2005 VC++ for unmanaged C++. I have VSTS and am trying to use the code coverage tool to accomplish two things with regards to unit tests: See how much of my referenced code under test is getting executed See how many methods of my code under test (if any) are not unit tested at all Setting up the VSTS code coverage tool (see the link text) and accomplishing task #1 was straightforward. However #2 has been a surprising challenge for me. Here is my test code. class CodeCoverageTarget { public: std::string ThisMethodRuns() { return "Running"; } std::string ThisMethodDoesNotRun() { return "Not Running"; } }; #include <iostream> #include "CodeCoverageTarget.h" using namespace std; int main() { CodeCoverageTarget cct; cout<<cct.ThisMethodRuns()<<endl; } When both methods are defined within the class as above the compiler automatically eliminates the ThisMethodDoesNotRun() from the obj file. If I move it's definition outside the class then it is included in the obj file and the code coverage tool shows it has not been exercised at all. Under most circumstances I want the compiler to do this elimination for me but for the code coverage tool it defeats a significant portion of the value (e.g. finding untested methods). I have tried a number of things to tell the compiler to stop being smart for me and compile everything but I am stumped. It would be nice if the code coverage tool compensated for this (I suppose by scanning the source and matching it up with the linker output) but I didn't find anything to suggest it has a special mode to be turned on. Am I totally missing something simple here or is this not possible with the VC++ compiler + VSTS code coverage tool? Thanks in advance, KGB A: You could try adding a line of code to call the function only if some condition is true, and guarantee that that condition will never be true. Just make sure the compiler can't figure that out. For example, int main(int argc, char **argv) { if(argv == NULL) // C runtime says this won't happen someMethodWhichIsntReallyEverCalled(); } A: One way to ensure your functions are not discarded is to export them. You can do this by adding __declspec(dllexport) to your function declarations. It is best to wrap this in a C preprocessor macro so that you can turn it off, since it is compiler-specific and you might not want all of your builds to export symbols. Another way to export functions is to create a .DEF file. If inlining is the problem, you might also have success with __declspec(noinline). Is your code in a static library which is then compiled into a test EXE/DLL? The linker will automatically discard unreferenced object files that are in static libraries. Example: if the static library contains a.obj and b.obj and the EXE/DLL that you're linking it into references symbols from b.obj but not a.obj, then the contents of a.obj will not be linked into the executable or DLL. However, after re-reading your description it doesn't sound like that's what's happening here. A: Turn off inlining of functions. The easiest way to do this is to just compile in Debug mode. Edit: after seeing your clarification, I find my answer is in error. Perhaps if you moved the body of the function into another section of the .h file, using the "inline" keyword? A: Sorry I should have clarified that I am building debug mode with inlining and all optimization off. 
Besides, the code is getting removed before inlining even occurs since it's never referenced to even be considered for inlining. A: Another option is to switch between inline and non inline functions based on your build, using .inl files, like this: in foo.inl file: inline std::string Foo::ThisMethodDoesNotRun() { return "Not Running"; } in foo.h: #if !COVERAGE_BUILD #include "foo.inl" #endif in foo.cpp: #if COVERAGE_BUILD #define inline #include "foo.inl" #endif
I want to tell the VC++ Compiler to compile all code. Can it be done?
I am using VS2005 VC++ for unmanaged C++. I have VSTS and am trying to use the code coverage tool to accomplish two things with regards to unit tests: See how much of my referenced code under test is getting executed See how many methods of my code under test (if any) are not unit tested at all Setting up the VSTS code coverage tool (see the link text) and accomplishing task #1 was straightforward. However #2 has been a surprising challenge for me. Here is my test code. class CodeCoverageTarget { public: std::string ThisMethodRuns() { return "Running"; } std::string ThisMethodDoesNotRun() { return "Not Running"; } }; #include <iostream> #include "CodeCoverageTarget.h" using namespace std; int main() { CodeCoverageTarget cct; cout<<cct.ThisMethodRuns()<<endl; } When both methods are defined within the class as above the compiler automatically eliminates the ThisMethodDoesNotRun() from the obj file. If I move it's definition outside the class then it is included in the obj file and the code coverage tool shows it has not been exercised at all. Under most circumstances I want the compiler to do this elimination for me but for the code coverage tool it defeats a significant portion of the value (e.g. finding untested methods). I have tried a number of things to tell the compiler to stop being smart for me and compile everything but I am stumped. It would be nice if the code coverage tool compensated for this (I suppose by scanning the source and matching it up with the linker output) but I didn't find anything to suggest it has a special mode to be turned on. Am I totally missing something simple here or is this not possible with the VC++ compiler + VSTS code coverage tool? Thanks in advance, KGB
[ "You could try adding a line of code to call the function only if some condition is true, and guarantee that that condition will never be true. Just make sure the compiler can't figure that out. For example,\n\nint main(int argc, char **argv)\n{\n if(argv == NULL) // C runtime says this won't happen\n someMethodWhichIsntReallyEverCalled();\n}\n\n", "One way to ensure your functions are not discarded is to export them. You can do this by adding __declspec(dllexport) to your function declarations. It is best to wrap this in a C preprocessor macro so that you can turn it off, since it is compiler-specific and you might not want all of your builds to export symbols. Another way to export functions is to create a .DEF file.\nIf inlining is the problem, you might also have success with __declspec(noinline).\nIs your code in a static library which is then compiled into a test EXE/DLL? The linker will automatically discard unreferenced object files that are in static libraries. Example: if the static library contains a.obj and b.obj and the EXE/DLL that you're linking it into references symbols from b.obj but not a.obj, then the contents of a.obj will not be linked into the executable or DLL. However, after re-reading your description it doesn't sound like that's what's happening here.\n", "Turn off inlining of functions. The easiest way to do this is to just compile in Debug mode.\nEdit: after seeing your clarification, I find my answer is in error. Perhaps if you moved the body of the function into another section of the .h file, using the \"inline\" keyword?\n", "Sorry I should have clarified that I am building debug mode with inlining and all optimization off. Besides, the code is getting removed before inlining even occurs since it's never referenced to even be considered for inlining.\n", "Another option is to switch between inline and non inline functions based on your build, using .inl files, like this:\nin foo.inl file:\ninline std::string Foo::ThisMethodDoesNotRun()\n{\n return \"Not Running\";\n}\n\nin foo.h:\n#if !COVERAGE_BUILD\n#include \"foo.inl\"\n#endif\n\nin foo.cpp:\n#if COVERAGE_BUILD\n#define inline\n#include \"foo.inl\"\n#endif\n\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "c++", "code_coverage", "compiler_construction", "visual_studio_2005" ]
stackoverflow_0000104952_c++_code_coverage_compiler_construction_visual_studio_2005.txt
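To make the suggestions in the coverage entry above concrete: the methods vanish because, when defined inside the class body, they are implicitly inline, and an unreferenced inline function is never emitted at all. One workaround, in the spirit of the "never actually called" and export answers, is a coverage-only translation unit that odr-uses every method by taking its address. This is a minimal, untested sketch; the file name and the g_coverage_keep_alive array are illustrative, and COVERAGE_BUILD is the same hypothetical build flag used in the .inl answer.

// coverage_keepalive.cpp -- compiled only when COVERAGE_BUILD is defined
#include <string>
#include "CodeCoverageTarget.h"

#ifdef COVERAGE_BUILD
typedef std::string (CodeCoverageTarget::*TargetMethod)();

// Taking the address of each implicitly-inline member function odr-uses it,
// so the compiler must emit a body for the coverage tool to count, even though
// nothing in the real program ever calls it. External linkage keeps the linker
// from discarding the array (and, with it, the references).
TargetMethod g_coverage_keep_alive[] = {
    &CodeCoverageTarget::ThisMethodRuns,
    &CodeCoverageTarget::ThisMethodDoesNotRun,
};
#endif

Because the methods are only referenced and never executed, the coverage report still shows them as unexercised, which is exactly what task #2 needs.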
Q: Why is a different dll produced after a clean build, with no code changes? When I do a clean build of my C# project, the produced dll is different than the previously built one (which I saved separately). No code changes were made, just clean and rebuild. Diff shows some bytes in the DLL have changes -- few near the beginning and few near the end, but I can't figure out what these represent. Does anybody have insights on why this is happening and how to prevent it? This is using Visual Studio 2005 / WinForms. Update: Not using automatic version incrementing, or signing the assembly. If it's a timestamp of some sort, how do I prevent VS from writing it? Update: After looking in Ildasm/diff, it seems like the following items are different: Two bytes in PE header at the start of the file. <PrivateImplementationDetails>{guid} section Cryptic part of the string table near the end (wonder why, I did not change the strings) Parts of assembly info at the end of file. No idea how to eliminate any of these, if at all possible... A: My best guess would be the changed bytes you're seeing are the internally-used metadata columns that are automatically generated at build-time. Some of the Ecma-335 Partition II (CLI Specification Metadata Definition) columns that can change per-build, even if the source code doesn't change at all: Module.Mvid: A build-time-generated GUID. Always changes, every build. AssemblyRef.HashValue: Could change if you're referencing another assembly that has also been rebuilt since the old build. If this really, really bothers you, my best tip on finding out exactly what is changing would be to diff the actual metadata tables. The way to get these is to use the ildasm MetaInfo window: View > MetaInfo > Raw:Header,Schema,Rows // important, otherwise you get very basic info from the next step View > MetaInfo > Show! A: I think that would be the TimeDateStamp field in the IMAGE_FILE_HEADER header of the PE32 specifications.
Why is a different dll produced after a clean build, with no code changes?
When I do a clean build of my C# project, the produced dll is different than the previously built one (which I saved separately). No code changes were made, just clean and rebuild. Diff shows some bytes in the DLL have changes -- few near the beginning and few near the end, but I can't figure out what these represent. Does anybody have insights on why this is happening and how to prevent it? This is using Visual Studio 2005 / WinForms. Update: Not using automatic version incrementing, or signing the assembly. If it's a timestamp of some sort, how do I prevent VS from writing it? Update: After looking in Ildasm/diff, it seems like the following items are different: Two bytes in PE header at the start of the file. <PrivateImplementationDetails>{guid} section Cryptic part of the string table near the end (wonder why, I did not change the strings) Parts of assembly info at the end of file. No idea how to eliminate any of these, if at all possible...
[ "My best guess would be the changed bytes you're seeing are the internally-used metadata columns that are automatically generated at build-time.\nSome of the Ecma-335 Partition II (CLI Specification Metadata Definition) columns that can change per-build, even if the source code doesn't change at all:\n\nModule.Mvid: A build-time-generated GUID. Always changes, every build.\nAssemblyRef.HashValue: Could change if you're referencing another assembly that has also been rebuilt since the old build.\n\nIf this really, really bothers you, my best tip on finding out exactly what is changing would be to diff the actual metadata tables. The way to get these is to use the ildasm MetaInfo window:\nView > MetaInfo > Raw:Header,Schema,Rows // important, otherwise you get very basic info from the next step\n\nView > MetaInfo > Show!\n\n", "I think that would be the TimeDateStamp field in the IMAGE_FILE_HEADER header of the PE32 specifications.\n" ]
[ 14, 11 ]
[ "Could be that the build or revision numbers have changed. \n" ]
[ -1 ]
[ ".net", "build_process", "c#", "visual_studio" ]
stackoverflow_0000107196_.net_build_process_c#_visual_studio.txt
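The two usual suspects called out in the entry above are the module's Mvid GUID and the PE header's TimeDateStamp. The ildasm MetaInfo window covers the metadata side; for the header side, a quick way to confirm what changed is to dump the TimeDateStamp from each build and compare. The sketch below is an illustrative stand-alone C++ utility (the language is irrelevant, it only reads bytes from the finished DLL); the name pe_timestamp and the minimal error handling are assumptions, and it presumes a well-formed PE file.

// pe_timestamp.cpp -- print IMAGE_FILE_HEADER.TimeDateStamp of a PE file so two
// builds of the same DLL can be compared. Minimal sketch, not production code.
#include <cstdint>
#include <fstream>
#include <iostream>

static std::uint32_t read_u32(std::ifstream& f, std::streamoff offset)
{
    unsigned char b[4] = {0, 0, 0, 0};
    f.seekg(offset);
    f.read(reinterpret_cast<char*>(b), 4);
    return std::uint32_t(b[0]) | (std::uint32_t(b[1]) << 8)
         | (std::uint32_t(b[2]) << 16) | (std::uint32_t(b[3]) << 24);
}

int main(int argc, char* argv[])
{
    if (argc < 2) { std::cerr << "usage: pe_timestamp <file.dll>\n"; return 1; }
    std::ifstream f(argv[1], std::ios::binary);
    if (!f) { std::cerr << "cannot open " << argv[1] << "\n"; return 1; }

    // e_lfanew (offset of the "PE\0\0" signature) is the 4-byte value at offset 0x3C.
    std::uint32_t pe_offset = read_u32(f, 0x3C);
    // IMAGE_FILE_HEADER follows the signature: Machine(2), NumberOfSections(2), TimeDateStamp(4), ...
    std::uint32_t stamp = read_u32(f, pe_offset + 8);
    std::cout << argv[1] << ": TimeDateStamp = 0x" << std::hex << stamp << "\n";
    return 0;
}

Run it against the old and new DLL: if this field accounts for the header difference, the remaining churn is the metadata (Module.Mvid and friends) that the first answer says is regenerated on every build.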
Q: Is there a good general method for debugging C++ macros? In general, I occasionally have a chain of nested macros with a few preprocessor conditional elements in their definitions. These can be painful to debug since it's hard to directly see the actual code being executed. A while ago I vaguely remember finding a compiler (gcc) flag to expand them, but I had trouble getting this to work in practice. A: gcc -E will output the preprocessed source to stdout. A: For MSVC users, you can right-click on the file/project, view the settings and change the file properties to output preprocessed source (which typically in the obj directory). A: This might not be applicable in your situation, but macros really do hamper debugging and often are overused and avoidable. Can you replace them with inline functions or otherwise get rid of them all together? A: You should probably start moving away form Macros and start using inline and templates. Macros are an old tool, the right tool sometimes. As a last resort remember printf is your friend (and actually printf isn't that bad a friend when your doing multithreaded stuff) A: Debug the dissasembly with the symbols loaded. A: gcc -save-temps will write out a .i (or .ii file for C++) which is the output of the C preprocessor, before it gets handed to the compiler. This can often be enlightening. A: GCC and compatible compilers use the -E option to output the preprocessed source to standard out. gcc -E foo.cpp Sun Studio also supports this flag: CC -E foo.cpp But even better is -xdumpmacros. You can find more information in Suns' docs.
Is there a good general method for debugging C++ macros?
In general, I occasionally have a chain of nested macros with a few preprocessor conditional elements in their definitions. These can be painful to debug since it's hard to directly see the actual code being executed. A while ago I vaguely remember finding a compiler (gcc) flag to expand them, but I had trouble getting this to work in practice.
[ "gcc -E will output the preprocessed source to stdout.\n", "For MSVC users, you can right-click on the file/project, view the settings and change the file properties to output preprocessed source (which typically in the obj directory).\n", "This might not be applicable in your situation, but macros really do hamper debugging and often are overused and avoidable. \nCan you replace them with inline functions or otherwise get rid of them all together?\n", "You should probably start moving away form Macros and start using inline and templates.\nMacros are an old tool, the right tool sometimes. As a last resort remember printf is your friend (and actually printf isn't that bad a friend when your doing multithreaded stuff)\n", "Debug the dissasembly with the symbols loaded.\n", "gcc -save-temps\nwill write out a .i (or .ii file for C++) which is the output of the C preprocessor, before it gets handed to the compiler. This can often be enlightening.\n", "GCC and compatible compilers use the -E option to output the preprocessed source to standard out.\ngcc -E foo.cpp\n\nSun Studio also supports this flag:\nCC -E foo.cpp\n\nBut even better is -xdumpmacros. You can find more information in Suns' docs.\n" ]
[ 14, 8, 5, 2, 1, 1, 0 ]
[]
[]
[ "c++", "debugging", "gcc", "macros" ]
stackoverflow_0000106412_c++_debugging_gcc_macros.txt
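As a concrete illustration of the -E / -save-temps / MSVC advice in the entry above, take a small file with a nested, conditional macro; the names below (macro_demo.cpp, SQUARE, TRACE, VERBOSE) are made up for the example.

// macro_demo.cpp -- toy nested/conditional macros to practise expanding
#include <iostream>

#define SQUARE(x) ((x) * (x))

#ifdef VERBOSE
#  define TRACE(x) (std::cerr << #x " = " << (x) << '\n', (x))
#else
#  define TRACE(x) (x)
#endif

#define CHECKED_SQUARE(x) TRACE(SQUARE(x))

int main()
{
    return CHECKED_SQUARE(3) == 9 ? 0 : 1;
}

// To see what the preprocessor actually produced:
//   g++ -E macro_demo.cpp                  (expanded source on stdout)
//   g++ -E -DVERBOSE macro_demo.cpp        (same, with the other branch of the #ifdef)
//   g++ -save-temps -c macro_demo.cpp      (keeps macro_demo.ii next to the object file)
//   cl /P macro_demo.cpp                   (MSVC: writes macro_demo.i; /E prints to stdout)

Diffing the preprocessed output of two configurations is usually the fastest way to see which branch a deeply nested conditional macro really took.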
Q: Change pointer to an array to get a specific array element I understand the overall meaning of pointers and references(or at least I think i do), I also understand that when I use new I am dynamically allocating memory. My question is the following: If i were to use cout << &p, it would display the "virtual memory location" of p. Is there a way in which I could manipulate this "virtual memory location?" For example, the following code shows an array of ints. If I wanted to show the value of p[1] and I knew the "virtual memory location" of p, could I somehow do "&p + 1" and obtain the value of p[1] with cout << *p, which will now point to the second element in the array? int *p; p = new int[3]; p[0] = 13; p[1] = 54; p[2] = 42; A: Sure, you can manipulate the pointer to access the different elements in the array, but you will need to manipulate the content of the pointer (i.e. the address of what p is pointing to), rather than the address of the pointer itself. int *p = new int[3]; p[0] = 13; p[1] = 54; p[2] = 42; cout << *p << ' ' << *(p+1) << ' ' << *(p+2); Each addition (or subtraction) mean the subsequent (prior) element in the array. If p points to a 4 byte variable (e.g. int on typical 32-bits PCs) at address say 12345, p+1 will point to 12349, and not 12346. Note you want to change the value of what p contains before dereferencing it to access what it points to. A: Not quite. &p is the address of the pointer p. &p+1 will refer to an address which is one int* further along. What you want to do is p=p+1; /* or ++p or p++ */ Now when you do cout << *p; You will get 54. The difference is, p contains the address of the start of the array of ints, while &p is the address of p. To move one item along, you need to point further into the int array, not further along your stack, which is where p lives. If you only had &p then you would need to do the following: int **q = &p; /* q now points to p */ *q = *q+1; cout << *p; That will also output 54 if I am not mistaken. A: It's been a while (many years) since I worked with pointers but I know that if p is pointing at the beginning of the array (i.e. p[0]) and you incremented it (i.e. p++) then p will now be pointing at p[1]. I think that you have to de-reference p to get to the value. You dereference a pointer by putting a * in front of it. So *p = 33 with change p[0] to 33. I'm guessing that to get the second element you would use *(p+1) so the syntax you'd need would be: cout << *(p+1) or cout << *(++p) A: I like to do this: &p[1] To me it looks neater. A: Think of "pointer types" in C and C++ as laying down a very long, logical row of cells superimposed on the bytes in the memory space of the CPU, starting at byte 0. The width of each cell, in bytes, depends on the "type" of the pointer. Each pointer type lays downs a row with differing cell widths. A "int *" pointer lays down a row of 4-byte cells, since the storage width of an int is 4 bytes. A "double *" lays down a 8-byte per-cell row; a "struct foo *" pointer lays down a row with each cell the width of a single "struct foo", whatever that is. The "address" of any "thing" is the byte offset, starting at 0, of the cell in the row holding the "thing".Pointer arithmetic is based on cells in the row, not bytes. "*(p+10)" is a reference to the 10th cell past "p", where the cell size is determined by the type of p. If the type of "p" is "int", the address of "p+10" is 40 bytes past p; if p is a pointer to a struct 1000 bytes long, "p+10" is 10,000 bytes past p. 
(Note that the compiler gets to choose an optimal size for a struct that may be larger than what you'd think; this is due to "padding" and "alignment". The 1000 byte struct discussed might actually take 1024 bytes per cell, for example, so "p+10" would actually be 10,240 bytes past p.)
Change pointer to an array to get a specific array element
I understand the overall meaning of pointers and references(or at least I think i do), I also understand that when I use new I am dynamically allocating memory. My question is the following: If i were to use cout << &p, it would display the "virtual memory location" of p. Is there a way in which I could manipulate this "virtual memory location?" For example, the following code shows an array of ints. If I wanted to show the value of p[1] and I knew the "virtual memory location" of p, could I somehow do "&p + 1" and obtain the value of p[1] with cout << *p, which will now point to the second element in the array? int *p; p = new int[3]; p[0] = 13; p[1] = 54; p[2] = 42;
[ "Sure, you can manipulate the pointer to access the different elements in the array, but you will need to manipulate the content of the pointer (i.e. the address of what p is pointing to), rather than the address of the pointer itself.\nint *p = new int[3];\np[0] = 13;\np[1] = 54;\np[2] = 42;\n\ncout << *p << ' ' << *(p+1) << ' ' << *(p+2);\n\nEach addition (or subtraction) mean the subsequent (prior) element in the array. If p points to a 4 byte variable (e.g. int on typical 32-bits PCs) at address say 12345, p+1 will point to 12349, and not 12346. Note you want to change the value of what p contains before dereferencing it to access what it points to.\n", "Not quite. &p is the address of the pointer p. &p+1 will refer to an address which is one int* further along. What you want to do is \np=p+1; /* or ++p or p++ */\n\nNow when you do \ncout << *p;\n\nYou will get 54. The difference is, p contains the address of the start of the array of ints, while &p is the address of p. To move one item along, you need to point further into the int array, not further along your stack, which is where p lives.\nIf you only had &p then you would need to do the following:\nint **q = &p; /* q now points to p */\n*q = *q+1;\ncout << *p;\n\nThat will also output 54 if I am not mistaken.\n", "It's been a while (many years) since I worked with pointers but I know that if p is pointing at the beginning of the array (i.e. p[0]) and you incremented it (i.e. p++) then p will now be pointing at p[1].\nI think that you have to de-reference p to get to the value. You dereference a pointer by putting a * in front of it.\nSo *p = 33 with change p[0] to 33.\nI'm guessing that to get the second element you would use *(p+1) so the syntax you'd need would be:\ncout << *(p+1)\n\nor\ncout << *(++p)\n\n", "I like to do this:\n&p[1]\n\nTo me it looks neater.\n", "Think of \"pointer types\" in C and C++ as laying down a very long, logical row of cells superimposed on the bytes in the memory space of the CPU, starting at byte 0. The width of each cell, in bytes, depends on the \"type\" of the pointer. Each pointer type lays downs a row with differing cell widths. A \"int *\" pointer lays down a row of 4-byte cells, since the storage width of an int is 4 bytes. A \"double *\" lays down a 8-byte per-cell row; a \"struct foo *\" pointer lays down a row with each cell the width of a single \"struct foo\", whatever that is. The \"address\" of any \"thing\" is the byte offset, starting at 0, of the cell in the row holding the \"thing\".Pointer arithmetic is based on cells in the row, not bytes. \"*(p+10)\" is a reference to the 10th cell past \"p\", where the cell size is determined by the type of p. If the type of \"p\" is \"int\", the address of \"p+10\" is 40 bytes past p; if p is a pointer to a struct 1000 bytes long, \"p+10\" is 10,000 bytes past p. (Note that the compiler gets to choose an optimal size for a struct that may be larger than what you'd think; this is due to \"padding\" and \"alignment\". The 1000 byte struct discussed might actually take 1024 bytes per cell, for example, so \"p+10\" would actually be 10,240 bytes past p.)\n" ]
[ 8, 3, 2, 2, 1 ]
[]
[]
[ "c++", "pointers" ]
stackoverflow_0000107294_c++_pointers.txt
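Pulling the answers in the entry above together into one runnable snippet (same array as in the question; the comments show what each expression prints):

// pointer_demo.cpp -- minimal sketch contrasting the expressions discussed above
#include <iostream>

int main()
{
    int* p = new int[3];
    p[0] = 13;
    p[1] = 54;
    p[2] = 42;

    std::cout << *(p + 1) << '\n';   // 54: move one int along the array, then dereference
    std::cout << p[1]     << '\n';   // 54: exactly equivalent to *(p + 1)
    std::cout << *&p[1]   << '\n';   // 54: &p[1] is the address of the second element
    // By contrast, &p is the address of the pointer variable itself, so &p + 1
    // points one int* past p on the stack -- it has nothing to do with the array.

    ++p;                             // p now points at the second element
    std::cout << *p << '\n';         // 54
    --p;                             // restore p before delete[]
    delete[] p;
    return 0;
}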
Q: How can I get source and variable values in ruby tracebacks? Here's the last few frames of a typical Ruby on Rails traceback: And here are the last few frames of a typical Nevow traceback in Python: It's not just the web environment either, you can make similar comparisons between ipython and irb. How can I get more of these sorts of details in Ruby? A: AFAIK, once an exception has been caught it's too late to grab the context in which it was raised. If you trap the exception's new call, you could use evil.rb's Binding.of_caller to grab the calling scope, and do eval("local_variables.collect { |l| [l, eval(l)] }", Binding.of_caller) But that's quite a big hack. The right answer is probably to extend Ruby to allow some inspection of the call stack. I'm not sure if some of the new Ruby implementations will allow this, but I do remember a backlash against Binding.of_caller because it will make optimizations much harder. (To be honest, I don't understand this backlash: as long as the interpreter records enough information about the optimizations performed, Binding.of_caller should be able to work, although perhaps slowly.) Update Ok, I figured it out. Longish code follows: class Foo < Exception attr_reader :call_binding def initialize # Find the calling location expected_file, expected_line = caller(1).first.split(':')[0,2] expected_line = expected_line.to_i return_count = 5 # If we see more than 5 returns, stop tracing # Start tracing until we see our caller. set_trace_func(proc do |event, file, line, id, binding, kls| if file == expected_file && line == expected_line # Found it: Save the binding and stop tracing @call_binding = binding set_trace_func(nil) end if event == :return # Seen too many returns, give up. :-( set_trace_func(nil) if (return_count -= 1) <= 0 end end) end end class Hello def a x = 10 y = 20 raise Foo end end class World def b Hello.new.a end end begin World.new.b rescue Foo => e b = e.call_binding puts eval("local_variables.collect {|l| [l, eval(l)]}", b).inspect end
How can I get source and variable values in ruby tracebacks?
Here's the last few frames of a typical Ruby on Rails traceback: And here are the last few frames of a typical Nevow traceback in Python: It's not just the web environment either, you can make similar comparisons between ipython and irb. How can I get more of these sorts of details in Ruby?
[ "AFAIK, once an exception has been caught it's too late to grab the context in which it was raised. If you trap the exception's new call, you could use evil.rb's Binding.of_caller to grab the calling scope, and do\neval(\"local_variables.collect { |l| [l, eval(l)] }\", Binding.of_caller)\n\nBut that's quite a big hack. The right answer is probably to extend Ruby to allow some inspection of the call stack. I'm not sure if some of the new Ruby implementations will allow this, but I do remember a backlash against Binding.of_caller because it will make optimizations much harder.\n(To be honest, I don't understand this backlash: as long as the interpreter records enough information about the optimizations performed, Binding.of_caller should be able to work, although perhaps slowly.)\nUpdate\nOk, I figured it out. Longish code follows:\nclass Foo < Exception\n attr_reader :call_binding\n\n def initialize\n # Find the calling location\n expected_file, expected_line = caller(1).first.split(':')[0,2]\n expected_line = expected_line.to_i\n return_count = 5 # If we see more than 5 returns, stop tracing\n\n # Start tracing until we see our caller.\n set_trace_func(proc do |event, file, line, id, binding, kls|\n if file == expected_file && line == expected_line\n # Found it: Save the binding and stop tracing\n @call_binding = binding\n set_trace_func(nil)\n end\n\n if event == :return\n # Seen too many returns, give up. :-(\n set_trace_func(nil) if (return_count -= 1) <= 0\n end\n end)\n end\nend\n\nclass Hello\n def a\n x = 10\n y = 20\n raise Foo\n end\nend\nclass World\n def b\n Hello.new.a\n end\nend\n\nbegin World.new.b\nrescue Foo => e\n b = e.call_binding\n puts eval(\"local_variables.collect {|l| [l, eval(l)]}\", b).inspect\nend\n\n" ]
[ 7 ]
[]
[]
[ "debugging", "exception", "ruby", "traceback" ]
stackoverflow_0000106920_debugging_exception_ruby_traceback.txt
Q: Alternative to Excel's RefEdit control that can be used outside of VBA The RefEdit control that comes as part of VBA is a little buggy, but it's good for putting on a form when you want people to specify one or more ranges of cells (i.e. Excel.Range objects). The main problem is that you can only use the RefEdit control on a VBA UserForm (Microsoft states this, and my tests confirm it too). I'm making an Excel add-in using Delphi, and I'm looking for an alternative to the RefEdit control. Excel.Application.InputBox Type:=8 is one alternative way of selecting a range of cells, but it's not very user-friendly when you need people to select multiple ranges of cells on a single form. The best real alternative I have at the moment is to call a VBA form from my Delphi add-in, but that's far from ideal. So ideally I could do with a drop-in replacement for RefEdit - one that I can use on a Delphi form. If there is one, it's not easy to find (I've been searching pretty hard, and I've not been able to find a drop-in RefEdit replacement for Delphi, VB6, or .NET). Failing a drop-in replacement I might try cobbling together my own alternative, but I suspect it would be difficult if not impossible to make one that works as well as RefEdit. RefEdit lets you "select" cells without actually selecting them: it uses marching ants around the cells that you choose instead of highlighting them and changing the Excel.Application.Selection. I don't know of a way to do that by manipulating the Excel object model through VBA, Delphi, or whatever. Any tips, tricks, hacks, or, if I'm really lucky, pointers to drop-in RefEdit replacements would be most welcome. A: I came across this RefEdit control replacement when looking for workarounds to RefEdit's bugs. A third party control wasn't an option for me at the time but it might help you out. A: Not sure from your question: Have you tried to import RefEdit into Delphi? You can import it as an ActiveX control from RefEdit.dll, then drop a TRefEdit control in any Delphi form. and you have the very same RefEdit as in your VBA apps. Or is it what you tried and it does not work because RefEdit needs some VBA woodoo...?
Alternative to Excel's RefEdit control that can be used outside of VBA
The RefEdit control that comes as part of VBA is a little buggy, but it's good for putting on a form when you want people to specify one or more ranges of cells (i.e. Excel.Range objects). The main problem is that you can only use the RefEdit control on a VBA UserForm (Microsoft states this, and my tests confirm it too). I'm making an Excel add-in using Delphi, and I'm looking for an alternative to the RefEdit control. Excel.Application.InputBox Type:=8 is one alternative way of selecting a range of cells, but it's not very user-friendly when you need people to select multiple ranges of cells on a single form. The best real alternative I have at the moment is to call a VBA form from my Delphi add-in, but that's far from ideal. So ideally I could do with a drop-in replacement for RefEdit - one that I can use on a Delphi form. If there is one, it's not easy to find (I've been searching pretty hard, and I've not been able to find a drop-in RefEdit replacement for Delphi, VB6, or .NET). Failing a drop-in replacement I might try cobbling together my own alternative, but I suspect it would be difficult if not impossible to make one that works as well as RefEdit. RefEdit lets you "select" cells without actually selecting them: it uses marching ants around the cells that you choose instead of highlighting them and changing the Excel.Application.Selection. I don't know of a way to do that by manipulating the Excel object model through VBA, Delphi, or whatever. Any tips, tricks, hacks, or, if I'm really lucky, pointers to drop-in RefEdit replacements would be most welcome.
[ "I came across this RefEdit control replacement when looking for workarounds to RefEdit's bugs. A third party control wasn't an option for me at the time but it might help you out.\n", "Not sure from your question: Have you tried to import RefEdit into Delphi?\nYou can import it as an ActiveX control from RefEdit.dll, then drop a TRefEdit control in any Delphi form. and you have the very same RefEdit as in your VBA apps.\nOr is it what you tried and it does not work because RefEdit needs some VBA woodoo...?\n" ]
[ 1, 0 ]
[]
[]
[ "delphi", "excel" ]
stackoverflow_0000101673_delphi_excel.txt
Q: Secure a DLL file with a license file What is the best way to secure the use/loading of a DLL with a license file? A: A couple of things you might want to consider: Check sum the DLL. Using a cryptographic hash function, you can store this inside the license file or inside the DLL. This provides a verification method to determined if my original DLL file is unhacked, or if it is the license file for this DLL. A few simple byte swapping techniques can quickly take your hash function off the beaten track (and thus not easy to reproduce). Don't store you hash as a string, split it into unsigned shorts in different places. As Larry said, a MAC address is fairly common. There are lots of examples of how to get that on The Code Project, but be aware it's easy to fake these days. My suggestion, should be use private/public keys for license generation. In short, modes of attack will be binary (modify the instructions of your DLL file) so protect against this, or key generation so make each license user, machine, and even the install specific. A: You can check for a license inside of DllMain() and die if it's not found. A: It also depends on how your license algorithm works. I'd suggest you look into using something like a Diffie–Hellman key exchange (or even RSA) to generate some sort of public/private key that can be passed to your users, based on some information. (Depending on the application, I know of one case where I wrote the license code on contract for a company, they used a MAC address, and some other data, hashed it, and encrypted the hash, giving them the "key value", if the registration number was correct). This ensures that the key file can't be moved, (or given) to another machine, thus 'stealing' the software. If you want to dig deeper and avoid hackers, that's a whole 'nother topic....
Secure a DLL file with a license file
What is the best way to secure the use/loading of a DLL with a license file?
[ "A couple of things you might want to consider:\nCheck sum the DLL. Using a cryptographic hash function, you can store this inside the license file or inside the DLL. This provides a verification method to determined if my original DLL file is unhacked, or if it is the license file for this DLL. A few simple byte swapping techniques can quickly take your hash function off the beaten track (and thus not easy to reproduce).\nDon't store you hash as a string, split it into unsigned shorts in different places.\nAs Larry said, a MAC address is fairly common. There are lots of examples of how to get that on The Code Project, but be aware it's easy to fake these days.\nMy suggestion, should be use private/public keys for license generation.\nIn short, modes of attack will be binary (modify the instructions of your DLL file) so protect against this, or key generation so make each license user, machine, and even the install specific. \n", "You can check for a license inside of DllMain() and die if it's not found.\n", "It also depends on how your license algorithm works. I'd suggest you look into using something like a Diffie–Hellman key exchange (or even RSA) to generate some sort of public/private key that can be passed to your users, based on some information. \n(Depending on the application, I know of one case where I wrote the license code on contract for a company, they used a MAC address, and some other data, hashed it, and encrypted the hash, giving them the \"key value\", if the registration number was correct). This ensures that the key file can't be moved, (or given) to another machine, thus 'stealing' the software.\nIf you want to dig deeper and avoid hackers, that's a whole 'nother topic....\n" ]
[ 5, 4, 2 ]
[]
[]
[ "c++", "dll", "licensing" ]
stackoverflow_0000106347_c++_dll_licensing.txt
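To illustrate the "check inside DllMain() and die if the license is not found" answer in the entry above: a minimal, Windows-only sketch follows. The file name product.lic and the existence-only check are placeholders; a real implementation would validate a hashed or signed license as the first answer describes, and would keep the work done under the loader lock to a minimum.

// license_check.cpp -- minimal sketch: refuse to load the DLL when no license file
// is present. Illustrative only; a real check would verify a signature/hash and
// build an absolute path rather than relying on the current directory.
#include <windows.h>

static bool LicenseFileExists()
{
    // "product.lic" is a hypothetical name for this example.
    return GetFileAttributesA("product.lic") != INVALID_FILE_ATTRIBUTES;
}

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID /*lpvReserved*/)
{
    if (fdwReason == DLL_PROCESS_ATTACH)
    {
        DisableThreadLibraryCalls(hinstDLL);
        if (!LicenseFileExists())
            return FALSE;   // the LoadLibrary call in the host process now fails
    }
    return TRUE;
}

Returning FALSE from DLL_PROCESS_ATTACH makes the load fail in the caller, so the DLL never becomes usable without the license file; pairing this with the checksum and public/private key ideas above hardens it against simply deleting the check.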