Dataset schema:

  content             string, 86 to 88.9k chars
  title               string, 0 to 150 chars
  question            string, 1 to 35.8k chars
  answers             list
  answers_scores      list
  non_answers         list
  non_answers_scores  list
  tags                list
  name                string, 30 to 130 chars
Q: .Net - Detecting the Appearance Setting (Classic or XP?) I have some UI in VB 2005 that looks great in XP Style, but goes hideous in Classic Style. Any ideas about how to detect which mode the user is in and re-format the forms on the fly? Post Answer Edit: Thanks Daniel, looks like this will work. I'm using the first solution you posted with the GetCurrentThemeName() function. I'm doing the following: Function Declaration: Private Declare Unicode Function GetCurrentThemeName Lib "uxtheme" (ByVal stringThemeName As System.Text.StringBuilder, ByVal lengthThemeName As Integer, ByVal stringColorName As System.Text.StringBuilder, ByVal lengthColorName As Integer, ByVal stringSizeName As System.Text.StringBuilder, ByVal lengthSizeName As Integer) As Int32 Code Body: Dim stringThemeName As New System.Text.StringBuilder(260) Dim stringColorName As New System.Text.StringBuilder(260) Dim stringSizeName As New System.Text.StringBuilder(260) GetCurrentThemeName(stringThemeName, 260, stringColorName, 260, stringSizeName, 260) MsgBox(stringThemeName.ToString) The MessageBox comes up empty when I'm in the Windows Classic style/theme, and comes up with "C:\WINDOWS\resources\Themes\luna\luna.msstyles" if it's in the Windows XP style/theme. I'll have to do a little more checking to see what happens if the user sets a theme other than these two, but it shouldn't be a big issue. A: Try using a combination of GetCurrentThemeName (MSDN Page) and DwmIsCompositionEnabled I linked the first to PInvoke so you can just drop it in your code, and for the second one you can use the code provided in the MSDN comment: [DllImport("dwmapi.dll", PreserveSig = false)] public static extern bool DwmIsCompositionEnabled(); See what results you get out of those two functions; they should be enough to determine when you want to use a different theme! A: Personally, I use the following to see if the app is running under themed: if (Application.RenderWithVisualStyles) { // you're themed } A: There's the IsThemeActive WinAPI function.
.Net - Detecting the Appearance Setting (Classic or XP?)
I have some UI in VB 2005 that looks great in XP Style, but goes hideous in Classic Style. Any ideas about how to detect which mode the user is in and re-format the forms on the fly? Post Answer Edit: Thanks Daniel, looks like this will work. I'm using the first solution you posted with the GetCurrentThemeName() function. I'm doing the following: Function Declaration: Private Declare Unicode Function GetCurrentThemeName Lib "uxtheme" (ByVal stringThemeName As System.Text.StringBuilder, ByVal lengthThemeName As Integer, ByVal stringColorName As System.Text.StringBuilder, ByVal lengthColorName As Integer, ByVal stringSizeName As System.Text.StringBuilder, ByVal lengthSizeName As Integer) As Int32 Code Body: Dim stringThemeName As New System.Text.StringBuilder(260) Dim stringColorName As New System.Text.StringBuilder(260) Dim stringSizeName As New System.Text.StringBuilder(260) GetCurrentThemeName(stringThemeName, 260, stringColorName, 260, stringSizeName, 260) MsgBox(stringThemeName.ToString) The MessageBox comes up empty when I'm in the Windows Classic style/theme, and comes up with "C:\WINDOWS\resources\Themes\luna\luna.msstyles" if it's in the Windows XP style/theme. I'll have to do a little more checking to see what happens if the user sets a theme other than these two, but it shouldn't be a big issue.
[ "Try using a combination of GetCurrentThemeName (MSDN Page) and DwmIsCompositionEnabled\nI linked the first to PInvoke so you can just drop it in your code, and for the second one you can use the code provided in the MSDN comment:\n[DllImport(\"dwmapi.dll\", PreserveSig = false)]\npublic static extern bool DwmIsCompositionEnabled();\n\nSee what results you get out of those two functions; they should be enough to determine when you want to use a different theme!\n", "Personally, I use the following to see if the app is running under themed:\nif (Application.RenderWithVisualStyles)\n{\n // you're themed\n}\n\n", "There's the IsThemeActive WinAPI function.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "appearance", "vb.net", "windows_xp" ]
stackoverflow_0000034712_appearance_vb.net_windows_xp.txt
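For reference, the IsThemeActive function mentioned in the last answer can be declared in C# with a single P/Invoke; the uxtheme.dll export is real, but this small wrapper class is only a sketch:

    using System.Runtime.InteropServices;

    static class ThemeCheck
    {
        // Returns true when visual styles (e.g. XP's "Luna") are active,
        // false under the Windows Classic appearance.
        [DllImport("uxtheme.dll")]
        public static extern bool IsThemeActive();
    }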
Q: Pascal's Theorem for non-unique sets? Pascal's rule on counting the subsets of a set works great when the set contains unique entities. Is there a modification to this rule for when the set contains duplicate items? For instance, when I try to find the count of the combinations of the letters A,B,C,D, it's easy to see that it's 1 + 4 + 6 + 4 + 1 (from Pascal's Triangle) = 16, or 15 if I remove the "use none of the letters" entry. Now, what if the set of letters is A,B,B,B,C,C,D? Computing by hand, I can determine that the sum of subsets is: 1 + 4 + 8 + 11 + 11 + 8 + 4 + 1 = 48, but this doesn't conform to the Triangle I know. Question: How do you modify Pascal's Triangle to take into account duplicate entities in the set? A: A set only contains unique items. If there are duplicates, then it is no longer a set. A: Yes, if you don't want to consider sets, consider the idea of 'factors.' How many factors does: p1^a1.p2^a2....pn^an have if the pi's are distinct primes. If the ai's are all 1, then the number is 2^n. In general, the answer is (a1+1)(a2+1)...(an+1) as David Nehme notes. Oh, and note that your answer by hand was wrong, it should be 48, or 47 if you don't want to count the empty set. A: It looks like you want to know how many sub-multi-sets have, say, 3 elements. The math for this gets very tricky, very quickly. The idea is that you want to add together all of the combinations of ways to get there. So you have C(3,4) = 4 ways of doing it with no duplicated elements. B can be repeated twice in C(1,3) = 3 ways. B can be repeated 3 times in 1 way. And C can be repeated twice in C(1,3) = 3 ways. For 11 total. (The 10 you got by hand was wrong. Sorry.) In general trying to do that logic is too hard. The simpler way to keep track of it is to write out a polynomial whose coefficients have the terms you want which you multiply out. For Pascal's triangle this is easy, the polynomial is (1+x)^n. (You can use repeated squaring to calculate this more efficiently.) In your case if an element is repeated twice you would have a (1+x+x^2) factor. 3 times would be (1+x+x^2+x^3). So your specific problem would be solved as follows: (1 + x) (1 + x + x^2 + x^3) (1 + x + x^2) (1 + x) = (1 + 2x + 2x^2 + 2x^3 + x^4)(1 + 2x + 2x^2 + x^3) = 1 + 2x + 2x^2 + x^3 + 2x + 4x^2 + 4x^3 + 2x^4 + 2x^2 + 4x^3 + 4x^4 + 2x^5 + 2x^3 + 4x^4 + 4x^5 + 2x^6 + x^4 + 2x^5 + 2x^6 + x^7 = 1 + 4x + 8x^2 + 11x^3 + 11x^4 + 8x^5 + 4x^6 + x^7 If you want to produce those numbers in code, I would use the polynomial trick to organize your thinking and code. (You'd be working with arrays of coefficients.) A: You don't need to modify Pascal's Triangle at all. Study C(k,n) and you'll find out -- you basically need to divide the original results to account for the permutation of equivalent letters. E.g., A B1 B2 C1 D1 == A B2 B1 C1 D1, therefore you need to divide C(5,5) by C(2,2). A: Without duplicates (in a set as earlier posters have noted), each element is either in or out of the subset. So you have 2^n subsets. With duplicates, (in a "multi-set") you have to take into account the number of times each element is in the "sub-multi-set". If m_1,m_2...m_n represent the number of times each element repeats, then the number of sub-bags is (1+m_1) * (1+m_2) * ... (1+m_n). A: Even though mathematical sets do contain unique items, you can run into the problem of duplicate items in 'sets' in the real world of programming. See this thread on Lisp unions for an example.
Pascal's Theorem for non-unique sets?
Pascal's rule on counting the subsets of a set works great when the set contains unique entities. Is there a modification to this rule for when the set contains duplicate items? For instance, when I try to find the count of the combinations of the letters A,B,C,D, it's easy to see that it's 1 + 4 + 6 + 4 + 1 (from Pascal's Triangle) = 16, or 15 if I remove the "use none of the letters" entry. Now, what if the set of letters is A,B,B,B,C,C,D? Computing by hand, I can determine that the sum of subsets is: 1 + 4 + 8 + 11 + 11 + 8 + 4 + 1 = 48, but this doesn't conform to the Triangle I know. Question: How do you modify Pascal's Triangle to take into account duplicate entities in the set?
[ "A set only contains unique items. If there are duplicates, then it is no longer a set.\n", "Yes, if you don't want to consider sets, consider the idea of 'factors.' How many factors does:\np1^a1.p2^a2....pn^an\n\nhave if p1's are distinct primes. If the ai's are all 1, then the number is 2^n. In general, the answer is (a1+1)(a2+1)...(an+1) as David Nehme notes.\nOh, and note that your answer by hand was wrong, it should be 48, or 47 if you don't want to count the empty set.\n", "It looks like you want to know how many sub-multi-sets have, say, 3 elements. The math for this gets very tricky, very quickly. The idea is that you want to add together all of the combinations of ways to get there. So you have C(3,4) = 4 ways of doing it with no duplicated elements. B can be repeated twice in C(1,3) = 3 ways. B can be repeated 3 times in 1 way. And C can be repeated twice in C(1,3) = 3 ways. For 11 total. (Your 10 you got by hand was wrong. Sorry.)\nIn general trying to do that logic is too hard. The simpler way to keep track of it is to write out a polynomial whose coefficients have the terms you want which you multiply out. For Pascal's triangle this is easy, the polynomial is (1+x)^n. (You can use repeated squaring to calculate this more efficiently.) In your case if an element is repeated twice you would have a (1+x+x^2) factor. 3 times would be (1+x+x^2+x^3). So your specific problem would be solved as follows:\n(1 + x) (1 + x + x^2 + x^3) (1 + x + x^2) (1 + x)\n = (1 + 2x + 2x^2 + 2x^3 + x^4)(1 + 2x + 2x^2 + x^3)\n = 1 + 2x + 2x^2 + x^3 +\n 2x + 4x^2 + 4x^3 + 2x^4 +\n 2x^2 + 4x^3 + 4x^4 + 2x^5 +\n 2x^3 + 4x^4 + 4x^5 + 2x^6 +\n x^4 + 2x^5 + 2x^6 + x^7\n = 1 + 4x + 8x^2 + 11x^3 + 11x^4 + 8x^5 + 4x^6 + x^7\n\nIf you want to produce those numbers in code, I would use the polynomial trick to organize your thinking and code. (You'd be working with arrays of coefficients.)\n", "You don't need to modify Pascal's Triangle at all. Study C(k,n) and you'll find out -- you basically need to divide the original results to account for the permutation of equivalent letters.\nE.g., A B1 B2 C1 D1 == A B2 B1 C1 D1, therefore you need to divide C(5,5) by C(2,2).\n", "Without duplicates (in a set as earlier posters have noted), each element is either in or out of the subset. So you have 2^n subsets. With duplicates, (in a \"multi-set\") you have to take into account the number the number of times each element is in the \"sub-multi-set\". If it m_1,m_2...m_n represent the number of times each element repeats, then the number of sub-bags is (1+m_1) * (1+m_2) * ... (1+m_n).\n", "Even though mathematical sets do contain unique items, you can run into the problem of duplicate items in 'sets' in the real world of programming. See this thread on Lisp unions for an example.\n" ]
[ 4, 4, 4, 2, 1, 0 ]
[]
[]
[ "math", "pascals_triangle" ]
stackoverflow_0000103633_math_pascals_triangle.txt
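As a footnote to the polynomial answer, here is one way the coefficient-array multiplication could look in C#. This is an illustrative sketch, not code from the thread, and the method name is invented:

    // Multiply one (1 + x + ... + x^m) factor per distinct element,
    // where m is that element's multiplicity. The returned array holds
    // the coefficients: result[k] = number of sub-multisets of size k.
    static long[] SubMultisetCounts(int[] multiplicities)
    {
        long[] poly = { 1 }; // start with the constant polynomial 1
        foreach (int m in multiplicities)
        {
            long[] next = new long[poly.Length + m];
            for (int i = 0; i < poly.Length; i++)
                for (int j = 0; j <= m; j++)
                    next[i + j] += poly[i]; // multiply by (1 + x + ... + x^m)
            poly = next;
        }
        return poly;
    }

    // For A,B,B,B,C,C,D pass new int[] { 1, 3, 2, 1 }:
    // the result is 1, 4, 8, 11, 11, 8, 4, 1, which sums to 48.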
Q: SQL Server 2005 - clicking on job->Properties yields "New Job" window Recently, I've started having a problem with my SQL Server 2005 client running on Windows XP where right-clicking on any job and selecting Properties instead brings me to the New Job window. Also, if I select "View History", I get the history for all jobs, instead of the one I right-clicked on. This happened to me once before, and I found that I hadn't installed a service pack for SQL 2005. Once I installed it, the problem went away, and I haven't seen it in about a year. I haven't run any updates on it since, and I'm not sure what could have caused this. As a possibly related note, I've tried installing XP Service Pack 3 on my machine twice, and it just hung on my machine (I started running it on Friday before leaving for the weekend, and it hadn't gone more than 5-10% when I got back on Monday). I'm not sure if that fact is related at all, but I thought it possible that the XP update somehow overwrote something that SQL 2005 used before hanging. Any ideas on what could cause this? I've included the current version info that shows up in SQL 2005. Microsoft SQL Server Management Studio - 9.00.1399.00 Microsoft Analysis Services Client Tools - 2005.090.1399.00 Microsoft Data Access Components (MDAC) - 2000.085.1117.00 (xpsp_sp2_rtm.040803-2158) Microsoft MSXML - 2.6 3.0 4.0 5.0 6.0 Microsoft Internet Explorer - 7.0.5730.13 Microsoft .NET Framework - 2.0.50727.1433 Operating System - 5.1.2600 Update: I reinstalled SQL 2005 service pack 2 on my machine and it fixed the problem. I'll have to see if the problem was caused when I tried installing XP SP3. A: I would suggest the following path: Make sure that you have current backups for the server Try to get a clean install of the XP service pack Try reinstalling the client tools on the machine If that fails, try to install (or reinstall) SP2 for SQL Server
SQL Server 2005 - clicking on job->Properties yields "New Job" window
Recently, I've started having a problem with my SQL Server 2005 client running on Windows XP where right-clicking on any job and selecting Properties instead brings me to the New Job window. Also, if I select "View History", I get the history for all jobs, instead of the one I right-clicked on. This happened to me once before, and I found that I hadn't installed a service pack for SQL 2005. Once I installed it, the problem went away, and I haven't seen it in about a year. I haven't run any updates on it since, and I'm not sure what could have caused this. As a possibly related note, I've tried installing XP Service Pack 3 on my machine twice, and it just hung on my machine (I started running it on Friday before leaving for the weekend, and it hadn't gone more than 5-10% when I got back on Monday). I'm not sure if that fact is related at all, but I thought it possible that the XP update somehow overwrote something that SQL 2005 used before hanging. Any ideas on what could cause this? I've included the current version info that shows up in SQL 2005. Microsoft SQL Server Management Studio - 9.00.1399.00 Microsoft Analysis Services Client Tools - 2005.090.1399.00 Microsoft Data Access Components (MDAC) - 2000.085.1117.00 (xpsp_sp2_rtm.040803-2158) Microsoft MSXML - 2.6 3.0 4.0 5.0 6.0 Microsoft Internet Explorer - 7.0.5730.13 Microsoft .NET Framework - 2.0.50727.1433 Operating System - 5.1.2600 Update: I reinstalled SQL 2005 service pack 2 on my machine and it fixed the problem. I'll have to see if the problem was caused when I tried installing XP SP3.
[ "I would suggest the following path:\n\nMake sure that you have current backups for the server\nTry to get a clean install of the XP service pack\nTry reinstalling the client tools on the machine\nIf that fails, try to install (or reinstall) SP2 for SQL Server\n\n" ]
[ 0 ]
[]
[]
[ "sql_server_2005" ]
stackoverflow_0000104057_sql_server_2005.txt
Q: How to read the value of a text input in a Flash SWF from a Flex App? I have a Flex application, which loads a SWF from CS3. The loaded SWF contains a text input called "myText". I can see this in the SWFLoader.content with no problems, but I don't know what type I should be treating it as in my Flex App. I thought the flex docs covered this but I can only find how to interact with another Flex SWF. The Flex debugger tells me it is of type fl.controls.TextInput, which makes sense. But FlexBuilder doesn't seem to know this class. While Flash and Flex both use AS3, Flex has a whole new library of GUI classes. I thought it also had all the Flash classes, but I can't get it to know of ANY fl.*** packages. A: The fl.* hierarchy of classes is Flash CS3-only. It's the Flash Components 3 library (I believe it's called, I might be wrong). However, you don't need the class to work with the object. As long as you can get a reference to it in your code, which you seem to have, you can assign the reference to an untyped variable and work with it anyway: var textInput : * = getTheTextInput(); // insert your own method here textInput.text = "Lorem ipsum dolor sit amet"; textInput.setSelection(4, 15); There is no need to know the type of an object in order to interact with it. Of course you lose type checking at compile time, but that's really not much of an issue, you just have to be extra careful. If you really, really want to reference the object as its real type, the class in question is located in Adobe Flash CS3/Configuration/Component Source/ActionScript 3.0/User Interface/fl/controls/TextInput.as ...if you have Flash CS3 installed, because it only ships with that application. A: Flex and Flash SWFs are essentially the same, just built using different tools. I'm not sure if they share the same component libraries, but based on the package names I'm guessing they at least mostly do. If it's a normal Text Input then I would guess it's an instance of mx.controls.TextInput. A: Keep in mind that if you do as Theo said and reference it with the correct type it will compile that class in both swfs, even if you're not using it in the first one. Unfortunately the fl.* classes don't implement any interfaces so you can't type them to the interface instead of the implementation. If you could, only the interface would get compiled, which is much smaller than the implementation. For this one it won't be a big deal, it's probably going to add only a couple kb, but in the long run it adds up. Just a heads up ;)
How to read the value of a text input in a Flash SWF from a Flex App?
I have a Flex application, which loads a SWF from CS3. The loaded SWF contains a text input called "myText". I can see this in the SWFLoader.content with no problems, but I don't know what type I should be treating it as in my Flex App. I thought the flex docs covered this but I can only find how to interact with another Flex SWF. The Flex debugger tells me it is of type fl.controls.TextInput, which makes sense. But FlexBuilder doesn't seem to know this class. While Flash and Flex both use AS3, Flex has a whole new library of GUI classes. I thought it also had all the Flash classes, but I can't get it to know of ANY fl.*** packages.
[ "The fl.* hierarchy of classes is Flash CS3-only. It's the Flash Components 3 library (I believe it's called, I might be wrong). However, you don't need the class to work with the object. As long as you can get a reference to it in your code, which you seem to have, you can assign the reference to an untyped variable and work with it anyway:\nvar textInput : * = getTheTextInput(); // insert your own method here\n\ntextInput.text = \"Lorem ipsum dolor sit amet\";\n\ntextInput.setSelection(4, 15);\n\nThere is no need to know the type of an object in order to interact with it. Of course you lose type checking at compile time, but that's really not much of an issue, you just have to be extra careful.\nIf you really, really want to reference the object as its real type, the class in question is located in \nAdobe Flash CS3/Configuration/Component Source/ActionScript 3.0/User Interface/fl/controls/TextInput.as\n\n...if you have Flash CS3 installed, because it only ships with that application.\n", "Flex and Flash SWFs are essentially the same, just built using different tools. I'm not sure if they share the same component libraries, but based on the package names I'm guessing they at least mostly do.\nIf it's a normal Text Input then I would guess it's an instance of mx.controls.TextInput.\n", "Keep in mind that if you do as Theo said and reference it with the correct type it will compile that class in both swfs, even if you're not using it in the first one. Unfortunately the fl.* classes don't implement any interfaces so you can't type them to the interface instead of the implementation. If you could, only the interface would get compiled, which is much smaller than the implementation. For this one it won't be a big deal, it's probably going to add only a couple kb, but in the long run it adds up. Just a heads up ;)\n" ]
[ 2, 0, 0 ]
[]
[]
[ "actionscript_3", "apache_flex", "cs3", "flash" ]
stackoverflow_0000096440_actionscript_3_apache_flex_cs3_flash.txt
Q: What's the best way to determine the name of your machine in a .NET app? I need to get the name of the machine my .NET app is running on. What is the best way to do this? A: Whilst others have already said that the System.Environment.MachineName returns you the name of the machine, beware... That property is only returning the NetBIOS name (and only if your application has EnvironmentPermissionAccess.Read permissions). It is possible for your machine name to exceed the length defined in: MAX_COMPUTERNAME_LENGTH In these cases, System.Environment.MachineName will not return you the correct name! Also note, there are several names your machine could go by and in Win32 there is a method GetComputerNameEx that is capable of getting the name matching each of these different name types: ComputerNameDnsDomain ComputerNameDnsFullyQualified ComputerNameDnsHostname ComputerNameNetBIOS ComputerNamePhysicalDnsDomain ComputerNamePhysicalDnsFullyQualified ComputerNamePhysicalDnsHostname ComputerNamePhysicalNetBIOS If you require this information, you're likely to need to go to Win32 through p/invoke, such as: class Class1 { enum COMPUTER_NAME_FORMAT { ComputerNameNetBIOS, ComputerNameDnsHostname, ComputerNameDnsDomain, ComputerNameDnsFullyQualified, ComputerNamePhysicalNetBIOS, ComputerNamePhysicalDnsHostname, ComputerNamePhysicalDnsDomain, ComputerNamePhysicalDnsFullyQualified } [DllImport("kernel32.dll", SetLastError=true, CharSet=CharSet.Auto)] static extern bool GetComputerNameEx(COMPUTER_NAME_FORMAT NameType, [Out] StringBuilder lpBuffer, ref uint lpnSize); [STAThread] static void Main(string[] args) { bool success; StringBuilder name = new StringBuilder(260); uint size = 260; success = GetComputerNameEx(COMPUTER_NAME_FORMAT.ComputerNameDnsDomain, name, ref size); Console.WriteLine(name.ToString()); } } A: System.Environment.MachineName A: Try Environment.MachineName. Ray actually has the best answer, although you will need to use some interop code.
What's the best way to determine the name of your machine in a .NET app?
I need to get the name of the machine my .NET app is running on. What is the best way to do this?
[ "Whilst others have already said that the System.Environment.MachineName returns you the name of the machine, beware...\nThat property is only returning the NetBIOS name (and only if your application has EnvironmentPermissionAccess.Read permissions). It is possible for your machine name to exceed the length defined in:\nMAX_COMPUTERNAME_LENGTH \n\nIn these cases, System.Environment.MachineName will not return you the correct name! \nAlso note, there are several names your machine could go by and in Win32 there is a method GetComputerNameEx that is capable of getting the name matching each of these different name types:\n\nComputerNameDnsDomain\nComputerNameDnsFullyQualified\nComputerNameDnsHostname\nComputerNameNetBIOS\nComputerNamePhysicalDnsDomain\nComputerNamePhysicalDnsFullyQualified\nComputerNamePhysicalDnsHostname\nComputerNamePhysicalNetBIOS\n\nIf you require this information, you're likely to need to go to Win32 through p/invoke, such as:\nclass Class1\n{\n enum COMPUTER_NAME_FORMAT\n {\n ComputerNameNetBIOS,\n ComputerNameDnsHostname,\n ComputerNameDnsDomain,\n ComputerNameDnsFullyQualified,\n ComputerNamePhysicalNetBIOS,\n ComputerNamePhysicalDnsHostname,\n ComputerNamePhysicalDnsDomain,\n ComputerNamePhysicalDnsFullyQualified\n }\n\n [DllImport(\"kernel32.dll\", SetLastError=true, CharSet=CharSet.Auto)]\n static extern bool GetComputerNameEx(COMPUTER_NAME_FORMAT NameType,\n [Out] StringBuilder lpBuffer, ref uint lpnSize);\n\n [STAThread]\n static void Main(string[] args)\n {\n bool success;\n StringBuilder name = new StringBuilder(260);\n uint size = 260;\n success = GetComputerNameEx(COMPUTER_NAME_FORMAT.ComputerNameDnsDomain,\n name, ref size);\n Console.WriteLine(name.ToString());\n }\n}\n\n", "System.Environment.MachineName\n", "Try Environment.MachineName. Ray actually has the best answer, although you will need to use some interop code.\n" ]
[ 8, 6, 1 ]
[]
[]
[ ".net" ]
stackoverflow_0000103460_.net.txt
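A small sketch contrasting the two managed options discussed above; both calls are standard .NET APIs, though they can return different strings for the same machine:

    using System;
    using System.Net;

    class NameDemo
    {
        static void Main()
        {
            // NetBIOS-style name, subject to the caveats in the first answer
            Console.WriteLine("Environment.MachineName: " + Environment.MachineName);

            // DNS host name of the local machine
            Console.WriteLine("Dns.GetHostName(): " + Dns.GetHostName());
        }
    }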
Q: Datatypes for physics I'm currently designing a program that will involve some physics (nothing too fancy, a few balls crashing into each other) What's the most exact datatype I can use to represent position (without a feeling of discrete jumps) in C#? Also, what's the smallest amount of time I can get between t and t+1? One tick? EDIT: Clarifying: What is the smallest unit of time in C#? [TimeSpan].Tick? A: In .Net a decimal will be the most precise datatype that you could use for position. I would just write a class for the position: public class Position { decimal x; decimal y; decimal z; } As for time, your processor can't give you anything smaller than one tick. Sounds like a fun project! Good luck! A: The Decimal data type although precise might not be the optimum choice depending on what you want to do. Generally Direct3D and GPUs use 32-bit floats, and vectors of 3 (total 96 bits) to represent a position in x,y,z. This will usually give more than enough precision unless you need to mix both huge scale (planets) and microscopic level (basketballs) in the same "world". Reasons for not using Decimals could be size (4 x larger), speed (orders of magnitude slower) and no trigonometric functions available (AFAIK). On Windows, the QueryPerformanceCounter API function will give you the highest resolution clock, and QueryPerformanceFrequency the frequency of the counter. I believe the Stopwatch described in other comments wraps this in a .net class. A: Unless you're doing rocket-science, a decimal is WAAAY overkill. And although it might give you more precise positions, it will not necessarily give you more precise (eg) velocities, since it is a fixed-point datatype and therefore is limited to a much smaller range than a float or double. Use floats, but leave the door open to move up to doubles in case precision turns out to be a problem. A: I would use a Vector datatype. Just like in Physics, when you want to model an object's movement, you use vectors. Use a Vector2 or Vector3 class out of the XNA framework or roll your own Vector3 struct to represent the position. Vector2 is for 2D and Vector3 is 3D. TimeSpan struct or the Stopwatch class will be your best options for calculating change in time. If I had to recommend, I would use Stopwatch. A: I'm not sure I understand your last question, could you please clarify? Edit: I might still not understand, but you can use any type you want (for example, doubles) to represent time (if what you actually want is to represent the discretization of time for your physics problem, in which case the tick is irrelevant). For most physics problems, doubles would be sufficient. The tick is the best precision you can achieve when measuring time with your machine. A: I think you should be able to get away with the Decimal data type with no problem. It has the most precision available. However, the double data type should be just fine. Yes, a tick is the smallest I'm aware of (using the System.Diagnostics.Stopwatch class). A: For a simulation you're probably better off using a decimal/double (same type as position) for a dimensionless time, then converting it from/to something meaningful on input/output. Otherwise you'll be performing a ton of cast operations when you move things around. You'll get arbitrary precision this way, too, because you can choose the timescale to be as large/small as you want. A: Hey Juan, I'd recommend that you use the Vector3 class as suggested by several others since it's easy to use and above all - supports all operations you need (like addition, multiplication, matrix multiply etc...) without the need to implement it yourself. If you have any doubt about how to proceed - inherit it and at a later stage you can always change the inner implementation, or disconnect from Vector3. Also, don't use anything less accurate than float - all processors these days run fast enough to be more accurate than integers (unless it's meant for mobile, but even there...) Using less than float you would lose precision very fast and end up with jumpy rotations and translations, especially if you plan to use more than a single matrix/quaternion multiplication.
Datatypes for physics
I'm currently designing a program that will involve some physics (nothing too fancy, a few balls crashing into each other) What's the most exact datatype I can use to represent position (without a feeling of discrete jumps) in C#? Also, what's the smallest amount of time I can get between t and t+1? One tick? EDIT: Clarifying: What is the smallest unit of time in C#? [TimeSpan].Tick?
[ "In .Net a decimal will be the most precise datatype that you could use for position. I would just write a class for the position:\npublic class Position\n{\n decimal x;\n decimal y;\n decimal z;\n}\n\nAs for time, your processor can't give you anything smaller than one tick.\nSounds like an fun project! Good luck!\n", "The Decimal data type although precise might not be the optimum choice depending on what you want to do. Generally Direct3D and GPUs use 32-bit floats, and vectors of 3 (total 96 bits) to represent a position in x,y,z.\nThis will usually give more than enough precision unless you need to mix both huge scale (planets) and microscopic level (basketballs) in the same \"world\".\nReasons for not using Decimals could be size (4 x larger), speed (orders of magnitude slower) and no trigonometric functions available (AFAIK).\nOn Windows, the QueryPerformanceCounter API function will give you the highest resolution clock, and QueryPerformanceFrequency the frequency of the counter. I believe the Stopwatch described in other comments wraps this in a .net class.\n", "Unless you're doing rocket-science, a decimal is WAAAY overkill. And although it might give you more precise positions, it will not necessarily give you more precise (eg) velocities, since it is a fixed-point datatype and therefore is limited to a much smaller range than a float or double.\nUse floats, but leave the door open to move up to doubles in case precision turns out to be a problem.\n", "I would use a Vector datatype. Just like in Physics, when you want to model an objects movement, you use vectors. Use a Vector2 or Vector3 class out of the XNA framework or roll your own Vector3 struct to represent the position. Vector2 is for 2D and Vector3 is 3D.\nTimeSpan struct or the Stopwatch class will be your best options for calculating change in time. If I had to recommend, I would use Stopwatch.\n", "I'm not sure I understand your last question, could you please clarify?\nEdit:\nI might still not understand, but you can use any type you want (for example, doubles) to represent time (if what you actually want is to represent the discretization of time for your physics problem, in which case the tick is irrelevant). For most physics problems, doubles would be sufficient.\nThe tick is the best precision you can achieve when measuring time with your machine.\n", "I think you should be able to get away with the Decimal data type with no problem. It has the most precision available. However, the double data type should be just fine.\nYes, a tick is the smallest I'm aware of (using the System.Diagnostics.Stopwatch class).\n", "For a simulation you're probably better off using a decimal/double (same type as position) for a dimensionless time, then converting it from/to something meaningful on input/output. Otherwise you'll be performing a ton of cast operations when you move things around. You'll get arbitrary precision this way, too, because you can choose the timescale to be as large/small as you want.\n", "Hey Juan, I'd recommend that you use the Vector3 class as suggested by several others since it's easy to use and above all - supports all operations you need (like addition, multiplication, matrix multiply etc...) 
without the need to implement it yourself.\nIf you have any doubt about how to proceed - inherit it and at later stage you can always change the inner implementation, or disconnect from vector3.\nAlso, don't use anything less accurate than float - all processors these days runs fast enough to be more accurate than integers (unless it's meant for mobile, but even there...)\nUsing less than float you would lose precision very fast and end up with jumpy rotations and translations, especially if you plan to use more than a single matrix/quaternion multiplication.\n" ]
[ 9, 9, 5, 3, 0, 0, 0, 0 ]
[]
[]
[ "c#", "physics", "types" ]
stackoverflow_0000028896_c#_physics_types.txt
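Since several answers suggest a Vector3-style type, a bare-bones float-based version might look like the following. This is only an illustrative stand-in; XNA's own Vector3 is more complete:

    public struct Vector3
    {
        public float X, Y, Z;

        public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }

        public static Vector3 operator +(Vector3 a, Vector3 b)
        {
            return new Vector3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
        }

        public static Vector3 operator *(Vector3 v, float s)
        {
            return new Vector3(v.X * s, v.Y * s, v.Z * s);
        }
    }

    // A basic Euler integration step then reads naturally:
    // position = position + velocity * dt;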
Q: How Do VB.NET Optional Parameters work 'Under the hood'? Are they CLS-Compliant? Let's say we have the following method declaration: Public Function MyMethod(ByVal param1 As Integer, _ Optional ByVal param2 As Integer = 0, _ Optional ByVal param3 As Integer = 1) As Integer Return param1 + param2 + param3 End Function How does VB.NET make the optional parameters work within the confines of the CLR? Are optional parameters CLS-Compliant? A: Interestingly, this is the decompiled C# code, obtained via reflector. public int MyMethod(int param1, [Optional, DefaultParameterValue(0)] int param2, [Optional, DefaultParameterValue(1)] int param3) { return ((param1 + param2) + param3); } Notice the Optional and DefaultParameterValue attributes. Try putting them in C# methods. You will find that you are still required to pass values to the method. In VB code however, it's turned into Default! That being said, I personally have never used Default even in VB code. It feels like a hack. Method overloading does the trick for me. Default does help though, when dealing with the Excel Interop, which is a pain in the ass to use straight out of the box in C#. A: Contrary to popular belief, optional parameters do appear to be CLS-compliant. (However, my primary check for this was to mark the assembly, class and method all with the CLSCompliant attribute, set to True.) So what does this look like in MSIL? .method public static int32 MyMethod(int32 param1, [opt] int32 param2, [opt] int32 param3) cil managed { .custom instance void [mscorlib]System.CLSCompliantAttribute::.ctor(bool) = ( 01 00 01 00 00 ) .param [2] = int32(0x00000000) .param [3] = int32(0x00000001) // Code size 11 (0xb) .maxstack 2 .locals init ([0] int32 MyMethod) IL_0000: nop IL_0001: ldarg.0 IL_0002: ldarg.1 IL_0003: add.ovf IL_0004: ldarg.2 IL_0005: add.ovf IL_0006: stloc.0 IL_0007: br.s IL_0009 IL_0009: ldloc.0 IL_000a: ret } // end of method Module1::MyMethod Note the [opt] markings on the parameters -- MSIL supports this natively, without any hacks. (Unlike MSIL's support for VB's Static keyword, which is another topic altogether.) So, why aren't these in C#? I can't answer that, other than my speculation that it might be a presumed lack of demand. My own preference has always been to specify the parameters, even if they were optional -- to me, the code looks cleaner and is easier to read. (If there are omitted parameters, I often look first for an overload that matches the visible signature -- it's only after I fail to find one that I realize that optional parameters are involved.)
How Do VB.NET Optional Parameters work 'Under the hood'? Are they CLS-Compliant?
Let's say we have the following method declaration: Public Function MyMethod(ByVal param1 As Integer, _ Optional ByVal param2 As Integer = 0, _ Optional ByVal param3 As Integer = 1) As Integer Return param1 + param2 + param3 End Function How does VB.NET make the optional parameters work within the confines of the CLR? Are optional parameters CLS-Compliant?
[ "Interestingly, this is the decompiled C# code, obtained via reflector.\npublic int MyMethod(int param1, \n [Optional, DefaultParameterValue(0)] int param2, \n [Optional, DefaultParameterValue(1)] int param3)\n{\n return ((param1 + param2) + param3);\n}\n\nNotice the Optional and DefaultParameterValue attributes. Try putting them in C# methods. You will find that you are still required to pass values to the method. In VB code however, its turned into Default! That being said, I personally have never use Default even in VB code. It feels like a hack. Method overloading does the trick for me.\nDefault does help though, when dealing with the Excel Interop, which is a pain in the ass to use straight out of the box in C#.\n", "Contrary to popular belief, optional parameters do appear to be CLS-compliant. (However, my primary check for this was to mark the assembly, class and method all with the CLSCompliant attribute, set to True.)\nSo what does this look like in MSIL? \n.method public static int32 MyMethod(int32 param1,\n [opt] int32 param2,\n [opt] int32 param3) cil managed\n{\n .custom instance void [mscorlib]System.CLSCompliantAttribute::.ctor(bool) = ( 01 00 01 00 00 ) \n .param [2] = int32(0x00000000)\n .param [3] = int32(0x00000001)\n // Code size 11 (0xb)\n .maxstack 2\n .locals init ([0] int32 MyMethod)\n IL_0000: nop\n IL_0001: ldarg.0\n IL_0002: ldarg.1\n IL_0003: add.ovf\n IL_0004: ldarg.2\n IL_0005: add.ovf\n IL_0006: stloc.0\n IL_0007: br.s IL_0009\n IL_0009: ldloc.0\n IL_000a: ret\n} // end of method Module1::MyMethod\n\nNote the [opt] markings on the parameters -- MSIL supports this natively, without any hacks. (Unlike MSIL's support for VB's Static keyword, which is another topic altogether.)\nSo, why aren't these in C#? I can't answer that, other than my speculation that it might be a presumed lack of demand. My own preference has always been to specify the parameters, even if they were optional -- to me, the code looks cleaner and is easier to read. (If there are omitted parameters, I often look first for an overload that matches the visible signature -- it's only after I fail to find one that I realize that optional parameters are involved.)\n" ]
[ 7, 6 ]
[]
[]
[ ".net", "cil", "clr", "vb.net" ]
stackoverflow_0000104068_.net_cil_clr_vb.net.txt
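For completeness, the method-overloading alternative the first answer prefers would look like this on the C# side, a sketch that mirrors the VB defaults (0 and 1) from the question:

    public int MyMethod(int param1)
    {
        return MyMethod(param1, 0, 1); // supply both defaults
    }

    public int MyMethod(int param1, int param2)
    {
        return MyMethod(param1, param2, 1); // default param3
    }

    public int MyMethod(int param1, int param2, int param3)
    {
        return param1 + param2 + param3;
    }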
Q: Rhino mocks ordered replay, throw exception problem I'm trying to implement some retry logic if there is an exception in my code. I've written the code and now I'm trying to get Rhino Mocks to simulate the scenario. The gist of the code is the following: class Program { static void Main(string[] args) { MockRepository repo = new MockRepository(); IA provider = repo.CreateMock<IA>(); using (repo.Record()) { SetupResult.For(provider.Execute(23)) .IgnoreArguments() .Throw(new ApplicationException("Dummy exception")); SetupResult.For(provider.Execute(23)) .IgnoreArguments() .Return("result"); } repo.ReplayAll(); B retryLogic = new B { Provider = provider }; retryLogic.RetryTestFunction(); repo.VerifyAll(); } } public interface IA { string Execute(int val); } public class B { public IA Provider { get; set; } public void RetryTestFunction() { string result = null; //simplified retry logic try { result = Provider.Execute(23); } catch (Exception e) { result = Provider.Execute(23); } } } What seems to happen is that the exception gets thrown every time instead of just once. What should I change the setup to be? A: You need to use Expect.Call instead of SetupResult: using (repo.Record()) { Expect.Call(provider.Execute(23)) .IgnoreArguments() .Throw(new ApplicationException("Dummy exception")); Expect.Call(provider.Execute(23)) .IgnoreArguments() .Return("result"); } The Rhino.Mocks wiki says, Using SetupResult.For() completely bypasses the expectations model in Rhino Mocks
Rhino mocks ordered replay, throw exception problem
I'm trying to implement some retry logic if there is an exception in my code. I've written the code and now I'm trying to get Rhino Mocks to simulate the scenario. The gist of the code is the following: class Program { static void Main(string[] args) { MockRepository repo = new MockRepository(); IA provider = repo.CreateMock<IA>(); using (repo.Record()) { SetupResult.For(provider.Execute(23)) .IgnoreArguments() .Throw(new ApplicationException("Dummy exception")); SetupResult.For(provider.Execute(23)) .IgnoreArguments() .Return("result"); } repo.ReplayAll(); B retryLogic = new B { Provider = provider }; retryLogic.RetryTestFunction(); repo.VerifyAll(); } } public interface IA { string Execute(int val); } public class B { public IA Provider { get; set; } public void RetryTestFunction() { string result = null; //simplified retry logic try { result = Provider.Execute(23); } catch (Exception e) { result = Provider.Execute(23); } } } What seems to happen is that the exception gets thrown every time instead of just once. What should I change the setup to be?
[ "You need to use Expect.Call instead of SetupResult:\n using (repo.Record())\n {\n Expect.Call(provider.Execute(23))\n .IgnoreArguments()\n .Throw(new ApplicationException(\"Dummy exception\"));\n\n Expect.Call(provider.Execute(23))\n .IgnoreArguments()\n .Return(\"result\");\n }\n\nThe Rhino.Mocks wiki says,\n\nUsing SetupResult.For() completely bypasses the expectations model in Rhino Mocks\n\n" ]
[ 2 ]
[]
[]
[ "exception", "mocking", "rhino_mocks" ]
stackoverflow_0000103557_exception_mocking_rhino_mocks.txt
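As an aside, the question's simplified retry logic retries exactly once and swallows the first failure. A more general shape, purely illustrative and not from the thread, still fits the two recorded expectations (throw once, then return a result):

    public string ExecuteWithRetry(int value, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return Provider.Execute(value);
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw; // out of attempts, let the exception surface
            }
        }
    }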
Q: Flash should open window in new tab, but instead it opens a new pop up on Mac Using target="_blank" in navigateToURL, Firefox on Windows opens the link in a new tab, but Firefox on Mac opens a popup window. How can I make the window open in a new tab in Firefox on the Mac as well? A: Check your Firefox preferences >> Tabs >> New windows should be opened in (a new window | a new tab). Do you have different settings for your Firefox on your Windows and on your Mac? A: That is a browser preference, not an actionscript property. A: This is most likely a bug in the browser and/or plug-in. My suggestion would be to try telling JavaScript to open the window, using ExternalInterface. This may be more likely to trigger a pop-up blocker though.
Flash should open window in new tab, but instead it opens a new pop up on Mac
Using target="_blank" in navigateToURL, Firefox on Windows opens the link in a new tab, but Firefox on Mac opens a popup window. How can I make the window open in a new tab in Firefox on the Mac as well?
[ "Check your Firefox preferences >> Tabs >> New windows should be opened in (a new window | a new tab). Do you have different settings for your Firefox on your Windows and on your Mac?\n", "That is a browser preference, not an actionscript property. \n", "This is most likely a bug in the browser and/or plug-in. My suggestion would be to try telling JavaScript to open the window, using ExternalInterface. This may be more likely to trigger a pop-up blocker though.\n" ]
[ 4, 2, 0 ]
[]
[]
[ "actionscript_3", "cross_platform", "firefox", "flash", "macos" ]
stackoverflow_0000100622_actionscript_3_cross_platform_firefox_flash_macos.txt
Q: How do I embed an image in a .NET HTML Mail Message? I have an HTML Mail template, with a placeholder for the image. I am getting the image I need to send out of a database and saving it into a photo directory. I need to embed the image in the HTML Message. I have explored using an AlternateView: AlternateView htmlView = AlternateView.CreateAlternateViewFromString("<HTML> <img src=cid:VisitorImage> </HTML>"); LinkedResource VisitorImage = new LinkedResource(p_ImagePath); VisitorImage.ContentId= "VisitorImage"; htmlView.LinkedResources.Add(VisitorImage); A: Try this: LinkedResource objLinkedRes = new LinkedResource( Server.MapPath(".") + "\\fuzzydev-logo.jpg", "image/jpeg"); objLinkedRes.ContentId = "fuzzydev-logo"; AlternateView objHTMLAltView = AlternateView.CreateAlternateViewFromString( "<img src='cid:fuzzydev-logo' />", new System.Net.Mime.ContentType("text/html")); objHTMLAltView.LinkedResources.Add(objLinkedRes); objMailMessage.AlternateViews.Add(objHTMLAltView);
How do I embed an image in a .NET HTML Mail Message?
I have an HTML Mail template, with a placeholder for the image. I am getting the image I need to send out of a database and saving it into a photo directory. I need to embed the image in the HTML Message. I have explored using an AlternateView: AlternateView htmlView = AlternateView.CreateAlternateViewFromString("<HTML> <img src=cid:VisitorImage> </HTML>"); LinkedResource VisitorImage = new LinkedResource(p_ImagePath); VisitorImage.ContentId= "VisitorImage"; htmlView.LinkedResources.Add(VisitorImage);
[ "Try this:\nLinkedResource objLinkedRes = new LinkedResource(\n Server.MapPath(\".\") + \"\\\\fuzzydev-logo.jpg\", \n \"image/jpeg\");\nobjLinkedRes.ContentId = \"fuzzydev-logo\"; \nAlternateView objHTLMAltView = AlternateView.CreateAlternateViewFromString(\n \"<img src='cid:fuzzydev-logo' />\", \n new System.Net.Mime.ContentType(\"text/html\"));\nobjHTLMAltView.LinkedResources.Add(objLinkedRes);\nobjMailMessage.AlternateViews.Add(objHTLMAltView);\n\n" ]
[ 22 ]
[]
[]
[ ".net", "c#", "html" ]
stackoverflow_0000104177_.net_c#_html.txt
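To round the accepted snippet out, a hedged end-to-end sketch follows; the addresses, host name, and file path are placeholders rather than values from the question:

    using System.Net.Mail;

    MailMessage message = new MailMessage("from@example.com", "to@example.com");
    message.Subject = "Visitor photo";

    // Attach the image as a linked resource keyed by its Content-ID
    LinkedResource visitorImage = new LinkedResource(@"C:\photos\visitor.jpg", "image/jpeg");
    visitorImage.ContentId = "VisitorImage";

    // The HTML body references the image via cid:
    AlternateView htmlView = AlternateView.CreateAlternateViewFromString(
        "<html><body><img src=\"cid:VisitorImage\" /></body></html>",
        null, "text/html");
    htmlView.LinkedResources.Add(visitorImage);
    message.AlternateViews.Add(htmlView);

    new SmtpClient("smtp.example.com").Send(message);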
Q: How do I display an image with ltk? I have written code to read a windows bitmap and would now like to display it with ltk. How can I construct an appropriate object? Is there such functionality in ltk? If not how can I do it directly interfacing to tk? A: It has been a while since I used LTK for anything, but the simplest way to display an image with LTK is as follows: (defpackage #:ltk-image-example (:use #:cl #:ltk)) (in-package #:ltk-image-example) (defun image-example () (with-ltk () (let ((image (make-image))) (image-load image "testimage.gif") (let ((canvas (make-instance 'canvas))) (create-image canvas 0 0 :image image) (configure canvas :width 800) (configure canvas :height 640) (pack canvas))))) Unfortunately what you can do with the image by default is fairly limited, and you can only use gif or ppm images - but the ppm file format is very simple, you could easily create a ppm image from your bitmap. However you say you want to manipulate the displayed image, and looking at the code that defines the image object: (defclass photo-image(tkobject) ((data :accessor data :initform nil :initarg :data) ) ) (defmethod widget-path ((photo photo-image)) (name photo)) (defmethod initialize-instance :after ((p photo-image) &key width height format grayscale data) (check-type data (or null string)) (setf (name p) (create-name)) (format-wish "image create photo ~A~@[ -width ~a~]~@[ -height ~a~]~@[ -format \"~a\"~]~@[ -grayscale~*~]~@[ -data ~s~]" (name p) width height format grayscale data)) (defun make-image () (let* ((name (create-name)) (i (make-instance 'photo-image :name name))) ;(create i) i)) (defgeneric image-load (p filename)) (defmethod image-load((p photo-image) filename) ;(format t "loading file ~a~&" filename) (send-wish (format nil "~A read {~A} -shrink" (name p) filename)) p) It looks like the actual data for the image is stored by the Tcl/Tk interpreter and not accessible from within lisp. If you wanted to access it you would probably need to write your own functions using format-wish and send-wish. Of course you could simply render each pixel individually on a canvas object, but I don't think you would get very good performance doing that, the canvas widget gets a bit slow once you are trying to display more than a few thousand different things on it. So to summarize - if you don't care about doing anything in real time, you could save your bitmap as a .ppm image every time you wanted to display it and then simply load it using the code above - that would be the easiest. Otherwise you could try to access the data from tk itself (after loading it once as a ppm image), finally if none of that works you could switch to another toolkit. Most of the decent lisp GUI toolkits are for linux, so you may be out of luck if you are using windows. A: Tk does not natively support windows bitmap files. However, the "Img" extension does and is freely available on just about every platform. You do not need to read the data in, you can create the image straight from the file on disk. In plain tcl/tk your code might look something like this: package require Img set image [image create photo -file /path/to/image.bmp] label .l -image $image pack .l a little more information can be found at http://wiki.tcl.tk/6165
How do I display an image with ltk?
I have written code to read a windows bitmap and would now like to display it with ltk. How can I construct an appropriate object? Is there such functionality in ltk? If not how can I do it directly interfacing to tk?
[ "It has been a while since I used LTK for anything, but the simplest way to display an image with LTK is as follows:\n(defpackage #:ltk-image-example\n (:use #:cl #:ltk))\n\n(in-package #:ltk-image-example)\n\n(defun image-example ()\n (with-ltk ()\n (let ((image (make-image)))\n (image-load image \"testimage.gif\")\n (let ((canvas (make-instance 'canvas)))\n (create-image canvas 0 0 :image image)\n (configure canvas :width 800)\n (configure canvas :height 640)\n (pack canvas)))))\n\nUnfortunately what you can do with the image by default is fairly limited, and you can only use gif or ppm images - but the ppm file format is very simple, you could easily create a ppm image from your bitmap. However you say you want to manipulate the displayed image, and looking at the code that defines the image object:\n(defclass photo-image(tkobject)\n ((data :accessor data :initform nil :initarg :data)\n )\n )\n\n(defmethod widget-path ((photo photo-image))\n (name photo))\n\n(defmethod initialize-instance :after ((p photo-image)\n &key width height format grayscale data)\n (check-type data (or null string))\n (setf (name p) (create-name))\n (format-wish \"image create photo ~A~@[ -width ~a~]~@[ -height ~a~]~@[ -format \\\"~a\\\"~]~@[ -grayscale~*~]~@[ -data ~s~]\"\n (name p) width height format grayscale data))\n\n(defun make-image ()\n (let* ((name (create-name))\n (i (make-instance 'photo-image :name name)))\n ;(create i)\n i))\n\n(defgeneric image-load (p filename))\n(defmethod image-load((p photo-image) filename)\n ;(format t \"loading file ~a~&\" filename)\n (send-wish (format nil \"~A read {~A} -shrink\" (name p) filename))\n p)\n\nIt looks like the the actual data for the image is stored by the Tcl/Tk interpreter and not accessible from within lisp. If you wanted to access it you would probably need to write your own functions using format-wish and send-wish.\nOf course you could simply render each pixel individually on a canvas object, but I don't think you would get very good performance doing that, the canvas widget gets a bit slow once you are trying to display more than a few thousand different things on it. So to summarize - if you don't care about doing anything in real time, you could save your bitmap as a .ppm image every time you wanted to display it and then simply load it using the code above - that would be the easiest. Otherwise you could try to access the data from tk itself (after loading it once as a ppm image), finally if none of that works you could switch to another toolkit. Most of the decent lisp GUI toolkits are for linux, so you may be out of luck if you are using windows.\n", "Tk does not natively support windows bitmap files. However, the \"Img\" extension does and is freely available on just about every platform. You do not need to read the data in, you can create the image straight from the file on disk. In plain tcl/tk your code might look something like this:\npackage require Img\nset image [image create photo -file /path/to/image.bmp]\nlabel .l -image $image\npack .l\n\na little more information can be found at http://wiki.tcl.tk/6165\n" ]
[ 4, 2 ]
[]
[]
[ "lisp", "ltk", "sbcl", "tcl", "tk_toolkit" ]
stackoverflow_0000077934_lisp_ltk_sbcl_tcl_tk_toolkit.txt
Q: How is AJAX implemented, and how does it help web dev? From http://en.wikipedia.org/wiki/AJAX, I get a fairly good grasp of what AJAX is. However, it looks like in order to learn it, I'd have to delve into multiple technologies at the same time to get any benefit out of it. So two questions: What are resources that can help me understand/use AJAX? What sort of website would benefit from AJAX? A: If you aren't interested in the nitty gritty, you could use a higher-level library like JQuery or Prototype to create the underlying Javascript for you. The main benefit is a vastly more responsive user interface for web-based applications. A: There are many libraries out there that can help you get benefit out of AJAX without learning about implementing callbacks, etc. Are you using .NET? Look at http://ajax.asp.net. If you're not, then take a look at tools like qcodo for PHP, and learn about prototype.js, jquery, etc. As far as websites that would benefit: Every web application ever. :) Anything you interact with by exchanging information, not just by clicking a link and reading an article. A: Every website can benefit from AJAX, but in my opinion the biggest benefit to AJAX comes in data entry sections - forms basically. I have done entire sites where the front end - the part the user sees had almost no AJAX functionality in it. All the AJAX stuff was in the administration control panel for assisting in (correct!) data entry. There is nothing worse than submitting a form and getting back an error, using AJAX you can pretty much prevent this for everything but file uploads. A: I find it easiest to just stay away from all the frameworks and other helpers and just do basic Javascript. This not only lets you understand what's going on under the covers, it also lets you do it in the simplest way possible. There's really not much to it. Use the JS XML DOM objects to create an XML document client side. Send it to the server with XMLHttpRequest, and then process the result, again using the JS XML DOM objects. Start with something simple. Just try sending one piece of information to the server, and getting a small piece of information back. A: The Mozilla documentation is good. Sites that benefit from it the most are ones that behave almost like a desktop application and need high interactivity. You can usually improve usability on almost any site by using it, however. A: Ajax should be thought of as a means to alter some content on a page without reloading the entire page. So when do you need to do this? Really only when you have some user interactions or form information that you want to keep intact while you change some content on the page.
How is AJAX implemented, and how does it help web dev?
From http://en.wikipedia.org/wiki/AJAX, I get a fairly good grasp of what AJAX is. However, it looks like in order to learn it, I'd have to delve into multiple technologies at the same time to get any benefit out of it. So two questions: What are resources that can help me understand/use AJAX? What sort of website would benefit from AJAX?
[ "If you aren't interested in the nitty gritty, you could use a higher-level library like JQuery or Prototype to create the underlying Javascript for you. The main benefit is a vastly more responsive user interface for web-based applications.\n", "There are many libraries out there that can help you get benefit out of AJAX without learning about implementing callbacks, etc.\nAre you using .NET? Look at http://ajax.asp.net. If you're not, then take a look at tools like qcodo for PHP, and learn about prototype.js, jquery, etc.\nAs far as websites that would benefit: Every web application ever. :) Anything you interact with by exchanging information, not just by clicking a link and reading an article.\n", "Every website can benefit from AJAX, but in my opinion the biggest benefit to AJAX comes in data entry sections - forms basically. I have done entire sites where the front end - the part the user sees had almost no AJAX functionality in it. All the AJAX stuff was in the administration control panel for assisting in (correct!) data entry. \nThere is nothing worse than submitting a form and getting back an error, using AJAX you can pretty much prevent this for everything but file uploads.\n", "I find it easiest to just stay away from all the frameworks and other helpers and just do basic Javascript. This not only lets you understand what's going on under the covers, it also lets you do it in the simplest way possible. There's really not much to it. User the JS XML DOM objects to create an xml document client side. Sent it to the server with XMLHTTPRequest, and then process the result, again using the JS XML DOM objects. Start with something simple. Just try sending one piece of information to the server, and getting a small piece of information back. \n", "The Mozilla documentation is good. Sites that benefit from it the most are ones that behave almost like a desktop application and need high interactivity. You can usually improve usability on almost any site by using it, however.\n", "Ajax should be thought of as a means to alter some content on a page without reloading the entire page. \nSo when do you need to do this? Really only when you have some user interactions or form information that you want to keep intact while you change some content on the page. \n" ]
[ 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "ajax", "resources" ]
stackoverflow_0000102929_ajax_resources.txt
Q: Displaying Window on Logon Screen Using C# in Windows XP I am trying to create a service with C# that launches a process that can be displayed on the Windows XP Logon screen. I found some code that is doing this in C++. The C++ code is for a service that creates another process with STARTUPINFO.lpDesktop set to "WinSta0\WinLogon". The created process is then displayed on the Windows Logon Screen. I can't seem to find a way to specify the 'desktop' of a new process in C# using the System.Diagnostics.Process class. Does anyone know how to do this with C#? A: The solution was to call the Win32 API function CreateProcess from kernel32.dll from the C# code. This site was very helpful in getting the correct function signature for C#: http://www.pinvoke.net/default.aspx/kernel32/CreateProcess.html
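For reference, here is a condensed, hedged C# sketch of the P/Invoke route the answer describes; the executable path is hypothetical, and the declarations follow the pinvoke.net signatures:

using System;
using System.Runtime.InteropServices;

static class LogonDesktopLauncher
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct STARTUPINFO
    {
        public int cb;
        public string lpReserved;
        public string lpDesktop;   // the field System.Diagnostics.Process cannot set
        public string lpTitle;
        public int dwX, dwY, dwXSize, dwYSize;
        public int dwXCountChars, dwYCountChars, dwFillAttribute, dwFlags;
        public short wShowWindow, cbReserved2;
        public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct PROCESS_INFORMATION
    {
        public IntPtr hProcess, hThread;
        public int dwProcessId, dwThreadId;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool CreateProcess(
        string lpApplicationName, string lpCommandLine,
        IntPtr lpProcessAttributes, IntPtr lpThreadAttributes,
        bool bInheritHandles, uint dwCreationFlags, IntPtr lpEnvironment,
        string lpCurrentDirectory, ref STARTUPINFO lpStartupInfo,
        out PROCESS_INFORMATION lpProcessInformation);

    public static void Launch()
    {
        STARTUPINFO si = new STARTUPINFO();
        si.cb = Marshal.SizeOf(typeof(STARTUPINFO));
        si.lpDesktop = @"WinSta0\WinLogon"; // target the logon desktop

        PROCESS_INFORMATION pi;
        if (!CreateProcess(@"C:\MyApp\MyApp.exe", null, IntPtr.Zero, IntPtr.Zero,
                false, 0, IntPtr.Zero, null, ref si, out pi))
        {
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
        }
    }
}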
Displaying Window on Logon Screen Using C# in Windows XP
I am trying to create a service with C# that launches a process that can be displayed on the Windows XP Logon screen. I found some code that is doing this in C++. The C++ code is for a service that creates another process with STARTUPINFO.lpDesktop set to "WinSta0\WinLogon". The created process is then displayed on the Windows Logon Screen. I can't seem to find a way to specify the 'desktop' of a new process in C# using the System.Diagnostics.Process class. Does anyone know how to do this with C#?
[ "The solution was to call the C++ Win32 API function CreateProcess from kernel32.dll from the C# code. This site was very helpful in getting the correct function signature for C#:\nhttp://www.pinvoke.net/default.aspx/kernel32/CreateProcess.html\n" ]
[ 3 ]
[ "I think you'll have to write it in C++, compile that to a DLL and then call the DLL from your managed code.\n" ]
[ -1 ]
[ "c#", "desktop", "service", "windows_xp", "winlogon" ]
stackoverflow_0000103427_c#_desktop_service_windows_xp_winlogon.txt
Q: How to convert an XML file into a .Net class? Can someone please remind me how to create a .Net class from an XML file? I would prefer the batch commands, or a way to integrate it into the shell. Thanks! A: The below batch will create a .Net Class from XML in the current directory. So... XML -> XSD -> VB (Feel free to substitute CS for Language) Create a Convert2Class.Bat in the %UserProfile%\SendTo directory. Then copy/save the below: @Echo off Set XsdExePath="C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\XSD.exe" Set Language=VB %~d1 CD %~d1%~p1 %XsdExePath% "%~n1.xml" /nologo %XsdExePath% "%~n1.xsd" /nologo /c /language:%Language% Works on my machine - Good Luck!! A: You might be able to use the xsd.exe tool to generate a class, otherwise you probably have to implement a custom solution against your XML XML Schema Definition Tool XML Serialization
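Once xsd.exe has generated the class, the original XML can be loaded back through XmlSerializer. A hedged VB sketch, where MyRoot and MyFile.xml stand in for whatever type and file you actually generated:

Imports System.IO
Imports System.Xml.Serialization

Module LoadXmlExample
    Sub Main()
        ' MyRoot is a placeholder for the root class xsd.exe produced.
        Dim serializer As New XmlSerializer(GetType(MyRoot))
        Using reader As New StreamReader("MyFile.xml")
            Dim data As MyRoot = CType(serializer.Deserialize(reader), MyRoot)
        End Using
    End Sub
End Module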
How to convert an XML file into a .Net class?
Can someone please remind me how to create a .Net class from an XML file? I would prefer the batch commands, or a way to integrate it into the shell. Thanks!
[ "The below batch will create a .Net Class from XML in the current directory.\nSo... XML -> XSD -> VB\n(Feel free to substitute CS for Language)\nCreate a Convert2Class.Bat in the %UserProfile%\\SendTo directory.\nThen copy/save the below:\n@Echo off\nSet XsdExePath=\"C:\\Program Files\\Microsoft Visual Studio 8\\SDK\\v2.0\\Bin\\XSD.exe\"\nSet Language=VB\n%~d1\nCD %~d1%~p1 \n%XsdExePath% \"%~n1.xml\" /nologo\n%XsdExePath% \"%~n1.xsd\" /nologo /c /language:%Language%\n\nWorks on my machine - Good Luck!!\n", "You might be able to use the xsd.exe tool to generate a class, otherwise you probably have to implement a custom solution against your XML\nXML Schema Definition Tool\nXML Serialization\n" ]
[ 2, 0 ]
[]
[]
[ ".net", "xml" ]
stackoverflow_0000103157_.net_xml.txt
Q: Why can't I open this table in SQL Server Management Studio? I created a couple of tables procedurally via C# named something like [MyTableOneCustom0] and [MyTableTwoCustom0]. When I try to return all of the values from these tables via "Open Table" in MSSQL Server Management Studio, I receive the following error: Error Source: Microsoft.VisualStudio.DataTools Error Message: Exception has been thrown by the target of an invocation. However, I can still bring up all of the data via a SELECT * statement. Does anyone know what is causing this? A: Based on a similar post located at Egg Head Cafe, it looks like the Management Studio will throw an exception if there are too many columns included explicitly in the query. Select * returns them implicitly, so there doesn't seem to be an issue. I have over 800 columns in this table, so I'm sure this is the problem. A: I hesitate to ask, but normally you would not want 800 or more columns in a database table, so why did you do this? Given how databases store information you are possibly creating many problems for yourself with a design like that in terms of data retrieval and storage. How many bytes of data would a full row have? You know there is a limit to the number of bytes of data that can be stored in a row. You could be setting yourself up for issues entering data when a row exceeds those limits. It might be best to break into separate tables even if there is a one-to-one relationship. Read in BOL about data pages and how data is stored to understand why this concerns me.
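To illustrate the second answer's suggestion, here is a hypothetical split of one overly wide table into 1:1 partners so neither side approaches the row-size limit BOL warns about; all names and column types are stand-ins:

CREATE TABLE MyTableOneCustom0_Core (
    RecordID INT NOT NULL PRIMARY KEY,
    Col001   VARCHAR(50) NULL,
    Col002   VARCHAR(50) NULL
);

CREATE TABLE MyTableOneCustom0_Extended (
    RecordID INT NOT NULL PRIMARY KEY
        REFERENCES MyTableOneCustom0_Core (RecordID),
    Col401   VARCHAR(50) NULL,
    Col402   VARCHAR(50) NULL
);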
Why can't I open this table in SQL Server Management Studio?
I created a couple of tables procedurally via C# named something like [MyTableOneCustom0] and [MyTableTwoCustom0]. When I try to return all of the values from these tables via "Open Table" in MSSQL Server Management Studio, I receive the following error: Error Source: Microsoft.VisualStudio.DataTools Error Message: Exception has been thrown by the target of an invocation. However, I can still bring up all of the data via a SELECT * statement. Does anyone know what is causing this?
[ "Based on a similar post loacated at at Egg Head Cafe, it looks like the Management Studio will thrown an exception if there are too many columns included explicitly in the query. Select * returns them implicitly, so there doesn't seem to be an issue.\nI have over 800 columns in this table, so I'm sure this is the problem.\n", "I hesitate to ask, but normally you would not want 800 or columns in a database, so why did you do this? Given how databases store information you are possibly creating many problems for yourself with a design like that in terms of data retrieval and storage. How many bytes of data woudl a full row have? You know there is a limit to the number of bytes of data that can be stored in a row. You could be setting yourself up for issues entering data when a row exceeds those limits. It might be best to break into separate tables even if there is a one-to-one relationship. Read in BOL about data pages and how data is stored to understand why this concerns me.\n" ]
[ 1, 0 ]
[]
[]
[ "c#", "sql", "sql_server" ]
stackoverflow_0000103092_c#_sql_sql_server.txt
Q: What is the best way to obtain a list of site resources when writing a Maven2 site plugin? When creating a plugin that executes in the default life-cycle, it's easy to obtain a reference to the project and its resources, but I'm getting a null instead of a MavenProject object when creating plugins that execute in the site life-cycle. Any hints, tips or suggestions? A: It turns out the problem I was having was related to my declaration of the Project parameter being passed into my Mojo. Since there's only one instance of a MavenProject within a Maven build, you can't specify an expression (and there's really no Java String that can be cast to a MavenProject object) for the parameter and the default value has to be "${project}". So to access the MavenProject from within a Maven Plugin Mojo, for any phase, use the following parameter declaration: /** * Project instance, used to add new source directory to the build. * * @parameter expression="export.project" default-value="${project}" * @required * @readonly */ private MavenProject project;
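With the project injected as shown in the answer, the resource directories can then be read off the MavenProject. A hedged Java sketch (getResources() lists the build resources; Maven 2's pre-generics API forces the cast):

for (Object o : project.getResources()) {
    org.apache.maven.model.Resource resource = (org.apache.maven.model.Resource) o;
    getLog().info("Resource directory: " + resource.getDirectory());
}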
What is the best way to obtain a list of site resources when writing a Maven2 site plugin?
When creating a plugin that executes in the default life-cycle, it's easy to obtain a reference to the project and its resources, but I'm getting a null instead of a MavenProject object when creating plugins that execute in the site life-cycle. Any hints, tips or suggestions?
[ "It turns out the problem I was having was related to my declaration of the Project parameter being passed into my Mojo. Since there's only one instance of a MavenProject within a Maven build, you can't specify an expression (and there's really no Java String that can be cast to a MavenProject object) for the parameter and the default value has to be \"${project}\".\nSo to access the MavenProject from within a Maven Plugin Mojo, for any phase, use the following parameter declaration:\n/**\n * Project instance, used to add new source directory to the build.\n * \n * @parameter expression=\"export.project\" default-value=\"${project}\"\n * @required\n * @readonly\n */\nprivate MavenProject project;\n\n" ]
[ 1 ]
[]
[]
[ "java", "maven_2", "maven_plugin" ]
stackoverflow_0000099012_java_maven_2_maven_plugin.txt
Q: Does several levels of base classes slow down a class/struct in c++? Does having several levels of base classes slow down a class? A derives B derives C derives D derives F derives G, ... Does multiple inheritance slow down a class? A: Non-virtual function calls have absolutely no performance hit at run-time, in accordance with the C++ mantra that you shouldn't pay for what you don't use. In a virtual function call, you generally pay for an extra pointer lookup, no matter how many levels of inheritance, or number of base classes you have. Of course this is all implementation defined. Edit: As noted elsewhere, in some multiple inheritance scenarios, an adjustment to the 'this' pointer is required before making the call. Raymond Chen describes how this works for COM objects. Basically, calling a virtual function on an object that inherits from multiple bases can require an extra subtraction and a jmp instruction on top of the extra pointer lookup required for a virtual call. A: [Deep inheritance hierarchies] greatly increase the maintenance burden by adding unnecessary complexity, forcing users to learn the interfaces of many classes even when all they want to do is use a specific derived class. It can also have an impact on memory use and program performance by adding unnecessary vtables and indirection to classes that do not really need them. If you find yourself frequently creating deep inheritance hierarchies, you should review your design style to see if you've picked up this bad habit. Deep hierarchies are rarely needed and almost never good. And if you don't believe that but think that "OO just isn't OO without lots of inheritance," then a good counter-example to consider is the [C++] standard library itself. -- Herb Sutter A: Does multiple inheritance slow down a class? As mentioned several times, a deeply nested single inheritance hierarchy should impose no additional overhead for a virtual call (above the overhead imposed for any virtual call). However, when multiple inheritance is involved, there is sometimes a very slight additional overhead when calling the virtual function through a base class pointer. In this case some implementations have the virtual function go through a small thunk that adjusts the 'this' pointer, since (static_cast<Base*>(this) == this) is not necessarily true depending on the object layout. Note that all of this is very, very implementation dependent. See Lippman's "Inside the C++ Object Model" Chapter 4.2 - Virtual Member Functions/Virtual Functions under MI A: There is no speed difference between virtual calls at different levels since they all get flattened out into the vtable (pointing to the most derived versions of the overridden methods). So, calling ((A*)inst)->Method() when inst is an instance of B is the same overhead as when inst is an instance of D. Now, a virtual call is more expensive than a non-virtual call, but this is because of the pointer dereference and not a function of how deep the class hierarchy actually is. A: Virtual calls themselves are more time consuming than normal calls because they have to look up the address of the actual function to call from the vtable. Additionally, compiler optimizations like inlining might be hard to perform due to the lookup requirement. Situations where inlining is not possible can themselves lead to quite a high overhead due to stack push/pop and jump operations. Here is a proper study which says the overhead can be as high as 50%: http://www.cs.ucsb.edu/~urs/oocsb/papers/oopsla96.pdf Here is another resource that looks at a side effect of having a large library of virtual classes: http://keycorner.org/pub/text/doc/kde-slow.txt The dispatching of the virtual calls with multiple inheritance is compiler specific, so the implementation will also have an effect in this case. Regarding your specific question of having a large number of base classes, usually the memory layout of a class object would have the vtbl ptrs for all the other constituent classes within it. Check this page for a sample vtable layout - http://www.codesourcery.com/public/cxx-abi/cxx-vtable-ex.html So a call to a method implemented by a class deeper into the hierarchy would still only be a single indirection and not multiple indirections as you seem to think. The call does not have to navigate from class to class to finally find the exact function to call. However if you are using composition instead of inheritance, each pointer call would be a virtual call and that overhead would be present, and if within that virtual call that class uses more composition, more virtual calls would be made. That kind of design would be slower depending on the amount of calls you made. A: Almost all answers point toward whether or not virtual methods would be slower in the OP's example, but I think the OP is simply asking if having several levels of inheritance in and of itself is slow. The answer is of course no since this all happens at compile-time in C++. I suspect the question is driven from experience with script languages where such inheritance graphs can be dynamic, and in that case, it potentially could be slower. A: If there are no virtual functions, then it shouldn't. If there are then there is a performance impact in calling the virtual functions as these are called via function pointers or other indirect methods (depends on the situation). However, I do not think that the impact is related to the depth of the inheritance hierarchy. Brian, to be clear and answer your comment. If there are no virtual functions anywhere in your inheritance tree, then there is no performance impact. A: Having non-trivial constructors in a deep inheritance tree can slow down object creation, when every creation of a child results in function calls to all the parent constructors all the way up to the base. A: Yes, if you're referencing it like this: // F is-a E, // E is-a D and so on A* aObject = new F(); aObject->CallAVirtual(); Then you're working with a pointer to an A type object. Given you're calling a function that is virtual, it has to look up the function table (vtable) to get the correct pointers. There is some overhead to that, yes. A: Calling a virtual function is slightly slower than calling a nonvirtual function. However, I don't think it matters how deep your inheritance tree is. But this is not a difference that you should normally be worried about. A: While I'm not completely sure, I think that unless you're using virtual methods, the compiler should be able to optimize it well enough that inheritance shouldn't matter too much. However, if you're calling up to functions in the base class which call functions in its base class, and so on, a lot, it could impact performance. In essence, it highly depends on how you've structured your inheritance tree. A: As pointed out by Corey Ross the vtable is known at compile time for any leaf derived class, and so the cost of the virtual call really should be the same irrespective of the structure of the hierarchy. This, however, cannot be said for dynamic_cast. If you consider how you might implement dynamic_cast, a basic approach will be an O(n) search through your hierarchy! In the case of a multiple inheritance hierarchy, you are also paying a small cost to convert between different classes in the hierarchy: struct A { int i; }; struct B { int j; }; struct C : public A, public B { int k ; }; // Let's assume that the layout of C is: { [ int i ] [ int j ] [int k ] } void foo (C * c) { A * a = c; // Probably has zero cost B * b = c; // Compiler needed to add sizeof(A) to 'c' c = static_cast<C*>(b); // Compiler needed to subtract sizeof(A) from 'b' }
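A small self-contained C++ sketch of the point several answers make: the dispatch cost is one vtable indirection no matter how deep the chain is. The A-to-D chain here is a shortened stand-in for the question's hierarchy:

#include <iostream>

struct A { virtual ~A() {} virtual void f() { std::cout << "A\n"; } };
struct B : A { void f() { std::cout << "B\n"; } };
struct C : B { void f() { std::cout << "C\n"; } };
struct D : C { void f() { std::cout << "D\n"; } };

int main() {
    A* p = new D;
    p->f();    // one vtable lookup; prints "D", same cost as a one-level hierarchy
    delete p;  // safe because ~A() is virtual
    return 0;
}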
Does several levels of base classes slow down a class/struct in c++?
Does having several levels of base classes slow down a class? A derives B derives C derives D derives F derives G, ... Does multiple inheritance slow down a class?
[ "Non-virtual function-calls have absolutely no performance hit at run-time, in accordance with the c++ mantra that you shouldn't pay for what you don't use.\nIn a virtual function call, you generally pay for an extra pointer lookup, no matter how many levels of inheritance, or number of base classes you have.\nOf course this is all implementation defined.\nEdit: As noted elsewhere, in some multiple inheritance scenarios, an adjustment to the 'this' pointer is required before making the call. Raymond Chen describes how this works for COM objects. Basically, calling a virtual function on an object that inherits from multiple bases can require an extra subtraction and a jmp instruction on top of the extra pointer lookup required for a virtual call.\n", "[Deep inheritance hierarchies] greatly increases the maintenance burden by adding unnecessary complexity, forcing users to learn the interfaces of many classes even when all they want to do is use a specific derived class. It can also have an impact on memory use and program performance by adding unnecessary vtables and indirection to classes that do not really need them. If you find yourself frequently creating deep inheritance hierarchies, you should review your design style to see if you've picked up this bad habit. Deep hierarchies are rarely needed and almost never good. And if you don't believe that but think that \"OO just isn't OO without lots of inheritance,\" then a good counter-example to consider is the [C++] standard library itself. -- Herb Sutter \n", "\nDoes multiple inheritance slow down\na class?\n\nAs mentioned several times, a deeply nested single inheritance hierarchy should impose no additional overhead for a virtual call (above the overhead imposed for any virtual call).\nHowever, when multiple inheritance is involved, there is sometimes a very slight additional overhead when calling the virtual function through a base class pointer. In this case some implementations have the virtual function go through a small thunk that adjusts the 'this' pointer since \n(static_cast<Base*>( this) == this)\n\nIs not necessarily true depending on the object layout. \nNote that all of this is very, very implementation dependent.\nSee Lippman's \"Inside the C++ Object Model\" Chapter 4.2 - Virtual Member Functions/Virtual Functions under MI\n", "There is no speed difference between virtual calls at different levels since they all get flattened out into the vtable (pointing to the most derived versions of the overridden methods). So, calling ((A*)inst)->Method() when inst is an instance of B is the same overhead as when inst is an instance of D.\nNow, a virtual call is more expensive than a non-virtual call, but this is because of the pointer dereference and not a function of how deep the class hierarchy actually is. \n", "Virtual calls themselves are more time consuming than normal calls because it has to lookup the address of the actual function to call from the vtable \nAdditionally compiler optimizations like inlining might be hard to perform due to the lookup requirement. 
Situations where inlining is not possible itself can lead to quite a high overhead due to stack pop and push and jump operations\nHere is a proper study which says the overhead can be as high as 50% http://www.cs.ucsb.edu/~urs/oocsb/papers/oopsla96.pdf\nHere is another resource that looks at a side effect of having a large library of virtual classes http://keycorner.org/pub/text/doc/kde-slow.txt\nThe dispatching of the virtual calls with multiple inheritances is compiler specific, so the implementation will also have an effect in this case. \nRegarding your specific question of having a large no of base classes, usually the memory layout of a class object would have the vtbl ptrs for all the other constituent classes within it. \nCheck this page for a sample vtable layout - http://www.codesourcery.com/public/cxx-abi/cxx-vtable-ex.html\nSo a call to a method implemented by a class deeper into the heierarchy would still only be a single indirection and not multiple indirections as you seem to think. The call does not have to navigate from class to class to finally find the exact function to call. \nHowever if you are using composition instead of inheritance each pointer call would be a virtual call and that overhead would be present and if within that virtual call if that class uses more compositio,n more virtual calls would be made. That kindof a design would be slower depending on the amount of calls you made. \n", "Almost all answers point toward whether or not virtual methods would be slower in the OP's example, but I think the OP is simply asking if having several level of inheritance in and of itself is slow. The answer is of course no since this all happens at compile-time in C++. I suspect the question is driven from experience with script languages where such inheritance graphs can be dynamic, and in that case, it potentially could be slower.\n", "If there are no virtual functions, then it shouldn't. If there are then there is a performance impact in calling the virtual functions as these are called via function pointers or other indirect methods (depends on the situation). However, I do not think that the impact is related to the depth of the inheritance hierarchy.\nBrian, to be clear and answer your comment. If there are no virtual functions anywhere in your inheritance tree, then there is no performance impact.\n", "Having non-trivial constructors in a deep inheritance tree can slow down object creation, when every creation of a child results in function calls to all the parent constructors all the way up to the base.\n", "Yes, if you're referencing it like this:\n// F is-a E,\n// E is-a D and so on\n\nA* aObject = new F(); \naObject->CallAVirtual();\n\nThen you're working with a pointer to an A type object. Given you're calling a function that is virtual, it has to look up the function table (vtable) to get the correct pointers. There is some overhead to that, yes.\n", "Calling a virtual function is slightly slower than calling a nonvirtual function. 
However, I don't think it matters how deep your inheritance tree is.\nBut this is not a difference that you should normally be worried about.\n", "While I'm not completely sure, I think that unless you're using virtual methods, the compiler should be able to optimize it well enough that inheritance shouldn't matter too much.\nHowever, if you're calling up to functions in the base class which call functions in its base class, and so on, a lot, it could impact performance.\nIn essence, it highly depends on how you've structured your inheritance tree.\n", "As pointed out by Corey Ross the vtable is known at compile time for any leaf derived class, and so the cost of the virtual call really should be the same irrespective of the structure of the hierarchy.\nThis, however, cannot be said for dynamic_cast. If you consider how you might implement dynamic_cast, a basic approach will be have an O(n) search through your hierarchy!\nIn the case of a multiple inheritance hierarchy, you are also paying a small cost to convert between different classes in the hierarchy:\nsturct A { int i; };\nstruct B { int j; };\n\nstruct C : public A, public B { int k ; };\n\n// Let's assume that the layout of C is: { [ int i ] [ int j ] [int k ] }\n\nvoid foo (C * c) {\n A * a = c; // Probably has zero cost\n B * b = c; // Compiler needed to add sizeof(A) to 'c'\n c = static_cast<B*> (b); // Compiler needed to take sizeof(A)' from 'b'\n}\n\n" ]
[ 29, 7, 4, 2, 2, 2, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "c++", "oop" ]
stackoverflow_0000099510_c++_oop.txt
Q: How do I remove the .NET Compact Framework from my Windows Mobile 5 device? I'm trying to get a number of third party applications to work on my Windows Mobile 5 smartphone. I've installed the latest version (3.5) of the Microsoft.NET Compact Framework, but whenever I run the apps I get an error message which states: "This application [Application Name] requires a newer version of the Microsoft .NET Compact Framework than the version installed on this device." Given I've supposedly successfully installed the latest version, this doesn't make sense, leading me to believe I need to remove the .NET Compact Framework and start again. (I've tried reinstalling it, but as far as I can tell there's no automated way of removing it on the device, or from my PC.) Does anyone have any suggestions as to what I need to do? Thanks! A: It's probably better to not uninstall, and if it's on the device in ROM you can't uninstall it anyway. There are a couple options available to you. The different CF versions coexist fine, so you can install the older version and leave 3.5 on it. The CF can be set for compatibility mode. That means you can tell just a single app compiled against an old version to use the 3.5 runtimes in compatibility mode, or you can set that device-wide so all older CF apps will run against the 3.5 EE in compatibility mode. For online resources discussing configuration files and compatibility mode, see these links: MSDN Article on Configuration File Settings MSDN Article on Configuring Runtime Versions David Kline's blog on Compatibility Mode The CF 3.5 Power Toys (includes an app for setting configurations) Note: I forgot to mention in the original response that using option #2 (running against CF 3.5) will very likely improve the performance of the app as well, since it will be running with the newest CLR. A: Have you tried using Microsoft ActiveSync to uninstall it? A: If you have installed it yourself you should be able to remove it by going to Settings -> System -> Remove Programs (could be a slightly different path on the smartphone OS, I'm used to Pocket PCs).
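The MSDN links in the first answer describe a per-application configuration file for selecting the runtime. A hedged sketch of the idea; treat the version string as a placeholder and substitute the CF 3.5 build actually installed on your device:

<!-- MyApp.exe.config, deployed next to MyApp.exe on the device -->
<configuration>
  <startup>
    <supportedRuntime version="v3.5.7283.0" />
  </startup>
</configuration>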
How do I remove the .NET Compact Framework from my Windows Mobile 5 device?
I'm trying to get a number of third party applications to work on my Windows Mobile 5 smartphone. I've installed the latest version (3.5) of the Microsoft.NET Compact Framework, but whenever I run the apps I get an error message which states: "This application [Application Name] requires a newer version of the Microsoft .NET Compact Framework than the version installed on this device." Given I've supposedly successfully installed the latest version, this doesn't make sense, leading me to believe I need to remove the .NET Compact Framework and start again. (I've tried reinstalling it, but as far as I can tell there's no automated way of removing it on the device, or from my PC.) Does anyone have any suggestions as to what I need to do? Thanks!
[ "It's probably better to not uninstall, and if it's on the device in ROM you can't uninstall it anyway.\nThere are a couple options available to you.\n\nThe different CF versions coexist fine, so you can install the older version and leave 3.5 on it.\nThe CF can be set for compatibility mode. That means you can tell just a single app compiled against an old version use the 3.5 runtimes in compatibility mode or you can set that device-wide so all older CF apps will run agains the 3.5 EE in compatibility mode.\n\nFor online resources discussing configuration files and compatibility mode, see these links:\n\nMSDN Article on Configuration File Settings\nMSDN Article on Configuring Runtime Versions\nDavid Kline's blog on Compatibility Mode\nThe CF 3.5 Power Toys (includes an app for setting configurations)\n\nNote: I forgot to mention in the original response that using option #2 (running against CF 3.5) will very likely improve the performance of the app as well, since it will be running with the newest CLR.\n", "Have you tried using Microsoft ActiveSync to uninstall it?\n", "If you have installed it yourself you should be able to remove it by going to Settings -> System -> Remove Programs (could be a slightly different path on the smartphone OS, I'm used to Pocket PCs).\n" ]
[ 3, 0, 0 ]
[]
[]
[ ".net", "compact_framework", "windows_mobile_5.0" ]
stackoverflow_0000096750_.net_compact_framework_windows_mobile_5.0.txt
Q: Is there a way to "align" columns in a data repeater control? Is there a way to "align" columns in a data repeater control? I.E currently it looks like this: user1 - colA colB colC colD colE user2 - colD colE I want it to look like: user1 -colA -colB -colC -colD -colE user1 -colD -colE I need to columns for each record to align properly when additional records might not have data for a given column. The requirements call for a repeater and not a grid control. Any ideas? A: If you have access to how many columns are mising in the repeat, then just the following as the table tag. I you don't have access to this, can you post the source for your data repeater and what DataSource you're going against? <td colspan='<%# MissingCount(Contatiner.DataItem) %>'> A: I would suggest that instead of using <td> to define the columns, that you use CSS instead. .collink { width: 20px; float: left; height: 20px; } AND <td style="padding :0px 0px 0px 0px;"> <div class="collink"> <asp:LinkButton ID="lnkEdit" runat="server" ... /> </div> </td> This approach lets the content grow without actually affecting the table structure. A: <tr class="RadGridItem"> <td width="100"> <asp:Label ID="lblFullName" runat="server" Text ='<%# DataBinder.Eval(Container.DataItem, "FullName") %>' ToolTip='<%# "Current Grade: " + DataBinder.Eval(Container.DataItem,"CurrentGrade") + "%" + " Percent Complete: " + DataBinder.Eval(Container.DataItem,"PercentComplete") + "%" %>' /> </td> <asp:Repeater ID="rptAssessments" runat="server" DataSource='<%# DataBinder.Eval(Container.DataItem, "EnrollmentAssessments") %>'> <ItemTemplate> <td style="padding :0px 0px 0px 0px; width:20px; height: 20px;"> <asp:LinkButton ID="lnkEdit" runat="server" OnClick="AssessmentClick" style=' <%# "color:" + this.GetAssessmentColor(Container.DataItem) %>' ToolTip='<%# DataBinder.Eval(Container.DataItem, "AssessmentName") + Environment.NewLine + DataBinder.Eval(Container.DataItem, "EnrollmentAssessmentStateName") + "(" + DataBinder.Eval(Container.DataItem, "PercentGradeDisplay") + "%) " + GetPointsPossible(Container.DataItem) + " pts possible" %>' CommandArgument='<%# DataBinder.Eval(Container.DataItem, "EnrollmentAssessmentID") %>' Text='<%# this.GetAssessmentDisplay(Container.DataItem) %>' /> </td> </ItemTemplate> </asp:Repeater> </tr> </ItemTemplate> This is the code. The number of columns will be dynamic based on the criteria used to generate the list. Thanks.
Is there a way to "align" columns in a data repeater control?
Is there a way to "align" columns in a data repeater control? I.E currently it looks like this: user1 - colA colB colC colD colE user2 - colD colE I want it to look like: user1 -colA -colB -colC -colD -colE user1 -colD -colE I need to columns for each record to align properly when additional records might not have data for a given column. The requirements call for a repeater and not a grid control. Any ideas?
[ "If you have access to how many columns are mising in the repeat, then just the following as the table tag. I you don't have access to this, can you post the source for your data repeater and what DataSource you're going against?\n<td colspan='<%# MissingCount(Contatiner.DataItem) %>'>\n\n", "I would suggest that instead of using <td> to define the columns, that you use CSS instead.\n.collink {\n width: 20px; \n float: left; \n height: 20px;\n}\n\nAND\n<td style=\"padding :0px 0px 0px 0px;\">\n <div class=\"collink\">\n <asp:LinkButton ID=\"lnkEdit\" runat=\"server\" ... />\n </div>\n</td>\n\nThis approach lets the content grow without actually affecting the table structure.\n", "\n <tr class=\"RadGridItem\">\n <td width=\"100\">\n <asp:Label ID=\"lblFullName\" runat=\"server\" \n Text ='<%# DataBinder.Eval(Container.DataItem, \"FullName\") %>'\n ToolTip='<%# \"Current Grade: \" + DataBinder.Eval(Container.DataItem,\"CurrentGrade\") + \"%\" +\n \" Percent Complete: \" + DataBinder.Eval(Container.DataItem,\"PercentComplete\") + \"%\" %>' />\n </td>\n <asp:Repeater ID=\"rptAssessments\" runat=\"server\" DataSource='<%# DataBinder.Eval(Container.DataItem, \"EnrollmentAssessments\") %>'>\n <ItemTemplate>\n <td style=\"padding :0px 0px 0px 0px; width:20px; height: 20px;\">\n <asp:LinkButton ID=\"lnkEdit\" runat=\"server\"\n OnClick=\"AssessmentClick\" \n style=' <%# \"color:\" + this.GetAssessmentColor(Container.DataItem) %>'\n ToolTip='<%# DataBinder.Eval(Container.DataItem, \"AssessmentName\") + Environment.NewLine + \n DataBinder.Eval(Container.DataItem, \"EnrollmentAssessmentStateName\") + \"(\" + \n DataBinder.Eval(Container.DataItem, \"PercentGradeDisplay\") + \"%) \" + \n GetPointsPossible(Container.DataItem) + \" pts possible\" %>'\n CommandArgument='<%# DataBinder.Eval(Container.DataItem, \"EnrollmentAssessmentID\") %>'\n Text='<%# this.GetAssessmentDisplay(Container.DataItem) %>' />\n </td>\n </ItemTemplate>\n </asp:Repeater>\n </tr>\n</ItemTemplate>\n\nThis is the code. The number of columns will be dynamic based on the criteria used to generate the list.\nThanks.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "asp.net", "datarepeater" ]
stackoverflow_0000102198_asp.net_datarepeater.txt
Q: Duplicate complex MXML binding in ActionScript MXML lets you do some really quite powerful data binding such as: <mx:Button id="myBtn" label="Buy an {itemName}" visible="{itemName!=null}"/> I've found that the BindingUtils class can bind values to simple properties, but neither of the bindings above do this. Is it possible to do the same in AS3 code, or is Flex silently generating many lines of code from my MXML? Can anyone duplicate the above in pure AS3, starting from: var myBtn:Button = new Button(); myBtn.id="myBtn"; ??? A: The way to do it is to use bindSetter. That is also how it is done behind the scenes when the MXML in your example is transformed to ActionScript before being compiled. // assuming the itemName property is defined on this: BindingUtils.bindSetter(itemNameChanged, this, ["itemName"]); // ... private function itemNameChanged( newValue : String ) : void { myBtn.label = newValue; myBtn.visible = newValue != null; } ...except that the code generated by the MXML to ActionScript conversion is longer as it has to be more general. In this example it would likely have generated two functions, one for each binding expression. A: You can also view the auto-generated code that Flex makes when it compiles your MXML file, by adding a -keep argument to your compiler settings. You can find your settings by selecting your project's properties and looking at the "Flex Compiler" option, then under "Additional compiler arguments:" add "-keep" to what is already there. Once done Flex will create a "generated" directory in your source folder and inside you'll find all the temporary .as files that were used during compilation. A: I believe Flex generates a small anonymous function to deal with this. You could do similar using a ChangeWatcher. You could probably even make a new anonymous function in the ChangeWatcher call.
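The last answer mentions ChangeWatcher without showing it; a hedged ActionScript sketch of that route, assuming itemName is a [Bindable] property on the enclosing component:

import mx.binding.utils.ChangeWatcher;

ChangeWatcher.watch(this, "itemName", function(event:Object):void {
    myBtn.label = "Buy an " + itemName;
    myBtn.visible = (itemName != null);
});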
Duplicate complex MXML binding in ActionScript
MXML lets you do some really quite powerful data binding such as: <mx:Button id="myBtn" label="Buy an {itemName}" visible="{itemName!=null}"/> I've found that the BindingUtils class can bind values to simple properties, but neither of the bindings above do this. Is it possible to do the same in AS3 code, or is Flex silently generating many lines of code from my MXML? Can anyone duplicate the above in pure AS3, starting from: var myBtn:Button = new Button(); myBtn.id="myBtn"; ???
[ "The way to do it is to use bindSetter. That is also how it is done behind the scenes when the MXML in your example is transformed to ActionScript before being compiled.\n// assuming the itemName property is defined on this:\nBindingUtils.bindSetter(itemNameChanged, this, [\"itemName\"]);\n\n// ...\n\nprivate function itemNameChanged( newValue : String ) : void {\n myBtn.label = newValue;\n myBtn.visible = newValue != null;\n}\n\n...except that the code generated by the MXML to ActionScript conversion is longer as it has to be more general. In this example it would likely have generated two functions, one for each binding expression.\n", "You can also view the auto-generated code that flex makes when it compiles your mxml file, by adding a -keep argument to your compiler settings. You can find your settings by selecting your projects properties and looking at the \"Flex Compiler\" option, then under \"Additional compiler arguments:\" add \"-keep\" to what is already there.\nOnce done Flex will create a \"generated\" directory in your source folder and inside you'll find all teh temporary as files that were used during compilation.\n", "I believe flex generates a small anonymous function to deal with this.\nYou could do similar using a ChangeWatcher. You could probably even make a new anonymous function in the changewatcher call.\n" ]
[ 2, 2, 0 ]
[]
[]
[ "apache_flex", "data_binding", "mxml" ]
stackoverflow_0000102185_apache_flex_data_binding_mxml.txt
Q: PHP: GET-data automatically being declared as variables Take this code: <?php if (isset($_POST['action']) && !empty($_POST['action'])) { $action = $_POST['action']; } if ($action) { echo $action; } else { echo 'No variable'; } ?> And then access the file with ?action=test Is there any way of preventing $action from automatically being declared by the GET? Other than of course adding && !isset($_GET['action']) Why would I want the variable to be declared for me? A: Check your php.ini for the register_globals setting. It is probably on; you want it off. Why would I want the variable to be declared for me? You don't. It's a horrible security risk. It makes the Environment, GET, POST, Cookie and Server variables global (PHP manual). These are a handful of reserved variables in PHP. A: Looks like register_globals in your php.ini is the culprit. You should turn this off. It's also a huge security risk to have it on. If you're on shared hosting and can't modify php.ini, you can use ini_set() to turn register_globals off. A: Set register_globals to off, if I'm understanding your question. See http://us2.php.net/manual/en/language.variables.predefined.php A: If you don't have access to the php.ini, an ini_set('register_globals', false) call in the PHP script won't work (variables are already declared). An .htaccess with: php_flag register_globals Off can sometimes help. A: You can test whether all variables are declared properly by turning the PHP log-level in PHP.INI to error_reporting = E_ALL Your code snippet should now generate a NOTICE. A: At some point in PHP's history they made the controversial decision to turn off register_globals by default as it was a huge security hazard. It gives anyone the potential to inject variables into your code, with unthinkable consequences! This "feature" is even removed in PHP 6. If you notice that it's on, contact your administrator to turn it off.
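Independent of the register_globals setting, the safe pattern is to read the value explicitly from the superglobal, as in this sketch (the htmlspecialchars call is an extra precaution not in the original snippet):

<?php
$action = isset($_POST['action']) ? $_POST['action'] : null;

if ($action !== null && $action !== '') {
    echo htmlspecialchars($action); // escape user input before echoing it
} else {
    echo 'No variable';
}
?>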
PHP: GET-data automatically being declared as variables
Take this code: <?php if (isset($_POST['action']) && !empty($_POST['action'])) { $action = $_POST['action']; } if ($action) { echo $action; } else { echo 'No variable'; } ?> And then access the file with ?action=test Is there any way of preventing $action from automatically being declared by the GET? Other than of course adding && !isset($_GET['action']) Why would I want the variable to be declared for me?
[ "Check your php.ini for the register_globals setting. It is probably on, you want it off.\n\nWhy would I want the variable to be declared for me?\n\nYou don't. It's a horrible security risk. It makes the Environment, GET, POST, Cookie and Server variables global (PHP manual). These are a handful of reserved variables in PHP.\n", "Looks like register_globals in your php.ini is the culprit. You should turn this off. It's also a huge security risk to have it on.\nIf you're on shared hosting and can't modify php.ini, you can use ini_set() to turn register_globals off.\n", "Set register_globals to off, if I'm understanding your question.\nSee http://us2.php.net/manual/en/language.variables.predefined.php\n", "if you don't have access to the php.ini, a ini_set('register_globals', false) in the php script won't work (variables are already declared) \nAn .htaccess with:\nphp_flag register_globals Off\n\ncan sometimes help.\n", "You can test, whether all variables are declared properly by turning the PHP log-level in PHP.INI to \nerror_reporting = E_ALL \n\nYour code snippet now should generate a NOTICE. \n", "At some point in php's history they made the controversial decision to turn off register_globals by default as it was a huge security hazard. It gives anyone the potential to inject variables in your code, create unthinkable consequences! This \"feature\" is even removed in php6\nIf you notice that it's on contact your administrator to turn it off.\n" ]
[ 27, 4, 2, 2, 1, 1 ]
[]
[]
[ "get", "php", "url" ]
stackoverflow_0000101850_get_php_url.txt
Q: What is the best documentation for snapshots and flow repositories in Spring Web Flow? I'm looking for more and better documentation about snapshots, flow repositories, and flow state serialization in Spring Web Flow. Available docs I've found seem pretty sparse. "Spring in Action" doesn't talk about this. The Spring Web Flow Reference Manual does mention a couple flags here: http://static.springframework.org/spring-webflow/docs/2.0.x/reference/htmlsingle/spring-webflow-reference.html#tuning-flow-execution-repository but doesn't really talk about why you would change these settings, usage patterns, etc. Anyone have a good reference? A: did you try out any of these books ? http://www.ervacon.com/products/swfbook/index.html -- from the original author of WebFlow ? http://www.amazon.com/Expert-Spring-MVC-Web-Flow/dp/159059584X
What is the best documentation for snapshots and flow repositories in Spring Web Flow?
I'm looking for more and better documentation about snapshots, flow repositories, and flow state serialization in Spring Web Flow. Available docs I've found seem pretty sparse. "Spring in Action" doesn't talk about this. The Spring Web Flow Reference Manual does mention a couple flags here: http://static.springframework.org/spring-webflow/docs/2.0.x/reference/htmlsingle/spring-webflow-reference.html#tuning-flow-execution-repository but doesn't really talk about why you would change these settings, usage patterns, etc. Anyone have a good reference?
[ "did you try out any of these books ?\n\nhttp://www.ervacon.com/products/swfbook/index.html -- from the original author of WebFlow ? \nhttp://www.amazon.com/Expert-Spring-MVC-Web-Flow/dp/159059584X\n\n" ]
[ 2 ]
[]
[]
[ "documentation", "java", "spring", "spring_webflow" ]
stackoverflow_0000086487_documentation_java_spring_spring_webflow.txt
Q: STL vector vs map erase In the STL almost all containers have an erase function. The question I have is: in a vector, the erase function returns an iterator pointing to the next element in the vector. The map container does not do this. Instead it returns void. Anyone know why there is this inconsistency? A: See http://www.sgi.com/tech/stl/Map.html Map has the important property that inserting a new element into a map does not invalidate iterators that point to existing elements. Erasing an element from a map also does not invalidate any iterators, except, of course, for iterators that actually point to the element that is being erased. The reason for returning an iterator on erase is so that you can iterate over the list erasing elements as you go. If erasing an item doesn't invalidate existing iterators there is no need to do this. A: erase returns an iterator in C++11. This is due to defect report 130: Table 67 (23.1.1) says that container::erase(iterator) returns an iterator. Table 69 (23.1.2) says that in addition to this requirement, associative containers also say that container::erase(iterator) returns void. That's not an addition; it's a change to the requirements, which has the effect of making associative containers fail to meet the requirements for containers. The standards committee accepted this: the LWG agrees the return type should be iterator, not void. (Alex Stepanov agrees too.) (LWG = Library Working Group). A: The inconsistency is due to use. vector is a sequence having an ordering over the elements. While it's true that the elements in a map are also ordered according to some comparison criterion, this ordering is non-evident from the structure. There is no efficient way to get from one element to the next (efficient = constant time). In fact, to iterate over the map is quite expensive; either the creation of the iterator or the iterator itself involves a walk over the complete tree. This cannot be done in O(n), unless a stack is used, in which case the space required is no longer constant. All in all, there simply is no cheap way of returning the “next” element after erasing. For sequences, there is a way. Additionally, Rob is right. There's no need for the Map to return an iterator. A: Just as an aside, the STL shipped with MS Visual Studio C++ (Dinkumware IIRC) provides a map implementation with an erase function returning an iterator to the next element. They do note it's not standards conforming. A: I have no idea if this is the answer, but one reason might be with the cost of locating the next element. Iterating through a map is inherently "slow".
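The practical upshot of the answers is the pre-C++11 erase-while-iterating idiom, which differs between the two containers; a short sketch (erasing zero values is just an arbitrary example):

#include <map>
#include <vector>

void prune(std::map<int, int>& m, std::vector<int>& v) {
    // map: erase(iterator) returns void pre-C++11, so advance a copy first
    for (std::map<int, int>::iterator it = m.begin(); it != m.end(); ) {
        if (it->second == 0)
            m.erase(it++);   // post-increment moves 'it' on before the erase
        else
            ++it;
    }
    // vector: erase returns the next valid iterator, so assign it back
    for (std::vector<int>::iterator it = v.begin(); it != v.end(); ) {
        if (*it == 0)
            it = v.erase(it);
        else
            ++it;
    }
}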
STL vector vs map erase
In the STL almost all containers have an erase function. The question I have is: in a vector, the erase function returns an iterator pointing to the next element in the vector. The map container does not do this. Instead it returns void. Anyone know why there is this inconsistency?
[ "See http://www.sgi.com/tech/stl/Map.html\n\nMap has the important property that\n inserting a new element into a map\n does not invalidate iterators that\n point to existing elements. Erasing an\n element from a map also does not\n invalidate any iterators, except, of\n course, for iterators that actually\n point to the element that is being\n erased.\n\nThe reason for returning an iterator on erase is so that you can iterate over the list erasing elements as you go. If erasing an item doesn't invalidate existing iterators there is no need to do this.\n", "erase returns an iterator in C++11. This is due to defect report 130:\n\nTable 67 (23.1.1) says that container::erase(iterator) returns an iterator. Table 69 (23.1.2) says that in addition to this requirement, associative containers also say that container::erase(iterator) returns void. That's not an addition; it's a change to the requirements, which has the effect of making associative containers fail to meet the requirements for containers.\n\nThe standards committee accepted this:\n\nthe LWG agrees the return type should be iterator, not void. (Alex Stepanov agrees too.)\n\n(LWG = Library Working Group).\n", "The inconsistency is due to use. vector is a sequence having an ordering over the elements. While it's true that the elements in a map are also ordered according to some comparison criterion, this ordering is non-evident from the structure. There is no efficient way to get from one element to the next (efficient = constant time). In fact, to iterate over the map is quite expensive; either the creation of the iterator or the iterator itself involves a walk over the complete tree. This cannot be done in O(n), unless a stack is used, in which case the space required is no longer constant.\nAll in all, there simply is no cheap way of returning the “next” element after erasing. For sequences, there is a way.\nAdditionally, Rob is right. There's no need for the Map to return an iterator.\n", "Just as an aside, the STL shipped with MS Visual Studio C++ (Dinkumware IIRC) provides a map implementation with an erase function returning an iterator to the next element.\nThey do note it's not standards conforming.\n", "I have no idea if this is the answer, but one reason might be with the cost of locating the next element. Iterating through a map is inherently \"slow\".\n" ]
[ 27, 12, 5, 3, 1 ]
[]
[]
[ "c++", "stl" ]
stackoverflow_0000052714_c++_stl.txt
Q: What's the best way to store changes to database records that require approval before being visible? I need to store user entered changes to a particular table, but not show those changes until they have been viewed and approved by an administrative user. While those changes are still in a pending state, I would still display the old version of the data. What would be the best way of storing these changes waiting for approval? I have thought of several ways, but can't figure out what is the best method. This is a very small web app. One way would be to have a PendingChanges table that mimics the other table's schema, and then once the change is approved, I could update the real table with the information. Another approach would be to do some sort of record versioning where I store multiple versions of the data in the table and then always pull the record with the highest version number that has been marked approved. That would limit the number of extra tables (I need to do this for multiple tables), but would require me to do extra processing every time I pull out a set of records to make sure I get the right ones. Any personal experiences with these methods or others that might be good? Update: Just to clarify, in this particular situation I am not interested so much in historical data. I just need some way of approving any changes that are made by a user before they go live on the site. So, a user will edit their "profile" and then an administrator will look at that modification and approve it. Once approved, that will become the displayed value and the old version does not need to be kept. Anybody tried the solution below where you store pending changes from any table that needs to track them as XML in a special PendingChanges table? Each record would have a column that said which table the changes were for, a column that maybe stored the id of the record that would be changed (null if it's a new record), a datetime column to store when the change was made, and a column to store the xml of the changed record (could maybe serialize my data object). Since I don't need history, after a change was approved, the real table would be updated and the PendingChange record could be deleted. Any thoughts about that method? A: Definitely store them in the main table with a column to indicate whether the data is approved or not. When the change is approved, no copying is required. The extra work to filter the unapproved data is the sort of thing databases are supposed to do, when you think about it. If you index the approved column, it shouldn't be too burdensome to do the right thing. A: Size is your enemy. If you are dealing with lots of data and large numbers of rows, then having the historical mixed in with the current will hammer you. You'll also have problems making sure you've got the right rows when you join out to other data. If you need to save the historical data to show changes over time, I would go with a separate historical table that updates the live, real data once it's approved. It's just all-around cleaner. If you have a lot of datatypes that will have this mechanism but don't need to keep a historical record, I would suggest a common queue table for reviewing pending items, say stored as XML. This would allow just one table to be read by administrators and would enable you to add this functionality to any table in your system fairly easily. A: I work in a banking domain and we have this need - that the changes done by one user must only be reflected after being approved by another. The design we use is as below Main Table A Another Table B that stores the changed record (and so is exactly similar to the first) + 2 additional columns (an FKey to C and a code to indicate the kind of change) A third table C that stores all such records that need approval A fourth table D that stores history (you probably don't need this). I recommend this approach. It handles all scenarios including updates and deletions very gracefully. A: Given the SOx compliance movement that has been shoved in the face of most publicly traded companies, I've had quite a bit of experience in this area. Usually I have been using a separate table of time-stamped pending changes with some sort of flag column. The person in charge of administration of this data gets a list of pending changes and can choose to accept or not to accept. When a piece of data gets accepted, I use triggers to integrate the new data into the table. Though some people don't like the trigger method and would rather code this into the stored procs. This has worked well for me, even in rather large databases. The complexity can get a little difficult to deal with, especially in dealing with a situation where one change directly conflicts with another change and what order to process these changes in. The table holding the request data can never be deleted, since it holds the "bread crumbs" so to speak that are required in case there is a need to trace back what happened in a particular situation. But in any approach, the risks need to be assessed, such as what I mentioned with the conflicting data, and a business logic layer needs to be in place to determine the process in these situations. I personally don't like the same-table method, because in the cases of data stores that are constantly being changed, this extra data in a table can unnecessarily bog down requests on the table, and would require a lot more attention to how you are indexing the table and your execution plans. A: Yet another idea would be to have three tables. One would be the main table to hold the original data. The second would hold the proposed data. The third would hold the historical data. This approach gives you the ability to quickly and easily roll back and also gives you an audit trail if you need it. A: I would create a table with a flag and create a view like CREATE OR REPLACE VIEW approved_records AS SELECT * FROM my_table WHERE approved = 1 It can help to separate dependencies between the approval and the queries. But it may not be the best idea if you need to make updates to the view. Moving records might have some performance considerations. But partitioned tables could do something quite similar. A: As this is a web app, I'm going to assume there are more reads than writes, and you want something reasonably fast, and your conflict resolution (i.e. out-of-order approvals) results in the same behaviour -- latest update is the one that is used. Both of the strategies you propose are similar in that they both hold one row per change set, have to deal with conflicts etc., the only difference being whether to store the data in one table or two. Given the scenario, two tables seems the better solution for performance reasons. You could also solve this with the one table and a view of the most recent approved changes if your database supports it. A: I think the second way is the better approach, simply because it scales better to multiple tables. Also, the extra processing would be minimal, as you can create an index to the table based on the 'approved' bit, and you can specialize your queries to either pull approved (for viewing) or unapproved (for approving) entries.
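For the generic PendingChanges idea floated in the update, a hedged SQL Server sketch; every name here is illustrative:

CREATE TABLE PendingChanges (
    PendingChangeID INT IDENTITY(1,1) PRIMARY KEY,
    TargetTable     VARCHAR(128) NOT NULL,  -- which table the change is for
    TargetRecordID  INT NULL,               -- NULL = a brand-new record
    ChangedOn       DATETIME NOT NULL DEFAULT GETDATE(),
    ChangeXml       XML NOT NULL            -- serialized copy of the edited row
);

-- On approval: apply ChangeXml to the real table, then
-- DELETE FROM PendingChanges WHERE PendingChangeID = @id;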
What's the best way to store changes to database records that require approval before being visible?
I need to store user entered changes to a particular table, but not show those changes until they have been viewed and approved by an administrative user. While those changes are still in a pending state, I would still display the old version of the data. What would be the best way of storing these changes waiting for approval? I have thought of several ways, but can't figure out what is the best method. This is a very small web app. One way would be to have a PendingChanges table that mimics the other table's schema, and then once the change is approved, I could update the real table with the information. Another approach would be to do some sort of record versioning where I store multiple versions of the data in the table and then always pull the record with the highest version number that has been marked approved. That would limit the number of extra tables (I need to do this for multiple tables), but would require me to do extra processing every time I pull out a set of records to make sure I get the right ones. Any personal experiences with these methods or others that might be good? Update: Just to clarify, in this particular situation I am not interested so much in historical data. I just need some way of approving any changes that are made by a user before they go live on the site. So, a user will edit their "profile" and then an administrator will look at that modification and approve it. Once approved, that will become the displayed value and the old version does not need to be kept. Anybody tried the solution below where you store pending changes from any table that needs to track them as XML in a special PendingChanges table? Each record would have a column that said which table the changes were for, a column that maybe stored the id of the record that would be changed (null if it's a new record), a datetime column to store when the change was made, and a column to store the xml of the changed record (could maybe serialize my data object). Since I don't need history, after a change was approved, the real table would be updated and the PendingChange record could be deleted. Any thoughts about that method?
[ "Definitely store them in the main table with a column to indicate whether the data is approved or not.\nWhen the change is approved, no copying is required. The extra work to filter the unapproved data is the sort of thing databases are supposed to do, when you think about it. If you index the approved column, it shouldn't be too burdensome to do the right thing.\n", "Size is your enemy. If you are dealing with lots of data and large numbers of rows, then having the historical mixed in with the current will hammer you. You'll also have problems if you join out to other data with making sure you've got the right rows.\nIf you need to save the historical data to show changes over time, I would go with the separate historical, table that updates the live, real data once it's approved. It's just all-around cleaner.\nIf you have a lot of datatypes that will have this mechanism but don't need to keep a historical record, I would suggest a common queue talbe for reviewing pending items, say stored as xml. This would allow just one table to be read by administrators and would enable you to add this functionality to any table in you system fairly easily.\n", "I work in a banking domain and we have this need - that the changes done by one user must only be reflected after being approved by another. The design we use is as below\n\nMain Table A\nAnother Table B that stores the changed record (and so is exactly similar to the first) + 2 additional columns (an FKey to C and a code to indicate the kind of change)\nA third table C that stores all such records that need approval\nA fourth table D that stores history (you probably don't need this). \n\nI recommend this approach. It handles all scenarios including updates and deletions very gracefully.\n", "Given the SOx compliance movement that has been shoved in the face of most publically traded companies, I've had quite a bit of experience in this area. Usually I have been using a separate table with a time stamped pending changes with some sort of flag column. The person in charge of administration of this data gets a list of pending changes and can choose to accept or not to accept. When a piece of data gets accepted, I use triggers to integrate the new data into the table. Though some people don't like the trigger method and would rather code this into the stored procs. This has worked well for me, even in rather large databases. The complexity can get a little difficult to deal with, especially in dealing with a situation where one change directly conflicts with another change and what order to process these changes in. The table holding the request data can never be able to be deleted, since it holds the \"bread crumbs\" so to speak that are required in case there is a need to trace back what happened in a particular situation. But in any approach, the risks need to be assessed, such as what I mentioned with the conflicting data, and a business logic layer needs to be in place to determine the process in these situations. \nI personally don't like the same table method, because in the cases of data stores that are constantly being changed, this extra data in a table can unnecessarily bog down the request on the table, and would require a lot more detail to how you are indexing the table and your execution plans. 
\n", "Yet another idea would be to have three tables.\n\nOne would be the main table to hold the original data.\nThe second would hold the proposed data.\nThe third would hold the historical data.\n\nThis approach gives you the ability to quickly and easily roll back and also gives you an audit trail if you need it.\n", "I would create a table with an flag and create a view like\n CREATE OR REPLACE VIEW AS \n\n SELECT * FROM my_table where approved = 1\n\nIt can help to separate dependencies between the aprovement and the queries. But may be is not the best idea if need to make updates to the view. \nMoving records might have some performance considerations. But Partitioned tables could do something quite similar. \n", "As this is a web app i'm going to assume there are more reads than writes, and you want something reasonably fast, and your conflict resolution (i.e out of order approvals) results in the same behaviour -- latest update is the one that is used.\nBoth of the strategies you propose are similar in they both hold one row per change set, have to deal with conflicts etc, the only difference being whether to store the data in one table or two. Given the scenario, two tables seems the better solution for performance reasons. You could also solve this with the one table and a view of the most recent approved changes if your database supports it.\n", "I think the second way is the better approach, simply because it scales better to multiple tables. Also, the extra processing would be minimal, as you can create an index to the table based on the 'approved' bit, and you can specialize your queries to either pull approved (for viewing) or unapproved (for approving) entries.\n" ]
[ 21, 12, 6, 4, 2, 2, 1, 0 ]
[]
[]
[ "database_design" ]
stackoverflow_0000103766_database_design.txt
Q: What's wrong with my XPath/XML? I'm trying a very basic XPath on this xml (same as below), and it doesn't find anything. I'm trying both .NET and this website, and XPaths such as //PropertyGroup, /PropertyGroup and //MSBuildCommunityTasksPath are simply not working for me (they compiled but return zero results). Source XML: <?xml version="1.0" encoding="utf-8"?> <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <!-- $Id: FxCop.proj 114 2006-03-14 06:32:46Z pwelter34 $ --> <PropertyGroup> <MSBuildCommunityTasksPath>$(MSBuildProjectDirectory)\MSBuild.Community.Tasks\bin\Debug</MSBuildCommunityTasksPath> </PropertyGroup> <Import Project="$(MSBuildProjectDirectory)\MSBuild.Community.Tasks\MSBuild.Community.Tasks.Targets" /> <Target Name="DoFxCop"> <FxCop TargetAssemblies="$(MSBuildCommunityTasksPath)\MSBuild.Community.Tasks.dll" RuleLibraries="@(FxCopRuleAssemblies)" AnalysisReportFileName="Test.html" DependencyDirectories="$(MSBuildCommunityTasksPath)" FailOnError="True" ApplyOutXsl="True" OutputXslFileName="C:\Program Files\Microsoft FxCop 1.32\Xml\FxCopReport.xsl" /> </Target> </Project> A: You can add namespaces in your code and all that, but you can effectively wildcard the namespace. Try the following XPath idiom. //*[local-name()='PropertyGroup'] //*[local-name()='MSBuildCommunityTasksPath'] name() usually works as well, as in: //*[name()='PropertyGroup'] //*[name()='MSBuildCommunityTasksPath'] EDIT: Namespaces are great and i'm not suggesting they're not important, but wildcarding them comes in handy when cobbling together prototype code, one-off desktop tools, experimenting with XSLT, and so forth. Balance your need for convenience against acceptable risk for the task at hand. FYI, if need be, you can also strip or reassign namespaces. A: The tags in the document end up in the "default" namespace created by the xmlns attribute with no prefix. Unfortunately, XPath alone can not query elements in the default namespace. I'm actually not sure of the semantic details, but you have to explicitly attach a prefix to that namespace using whatever tool is hosting XPath. There may be a shorter way to do this in .NET, but the only way I've seen is via a NameSpaceManager. 
After you explicitly add a namespace, you can query using the namespace manager as if all the tags in the namespaced element have that prefix (I chose 'msbuild'): using System; using System.Xml; public class XPathNamespace { public static void Main(string[] args) { XmlDocument xmlDocument = new XmlDocument(); xmlDocument.LoadXml( @"<?xml version=""1.0"" encoding=""utf-8""?> <Project xmlns=""http://schemas.microsoft.com/developer/msbuild/2003""> <!-- $Id: FxCop.proj 114 2006-03-14 06:32:46Z pwelter34 $ --> <PropertyGroup> <MSBuildCommunityTasksPath>$(MSBuildProjectDirectory)\MSBuild.Community.Tasks\bin\Debug</MSBuildCommunityTasksPath> </PropertyGroup> <Import Project=""$(MSBuildProjectDirectory)\MSBuild.Community.Tasks\MSBuild.Community.Tasks.Targets""/> <Target Name=""DoFxCop""> <FxCop TargetAssemblies=""$(MSBuildCommunityTasksPath)\MSBuild.Community.Tasks.dll"" RuleLibraries=""@(FxCopRuleAssemblies)"" AnalysisReportFileName=""Test.html"" DependencyDirectories=""$(MSBuildCommunityTasksPath)"" FailOnError=""True"" ApplyOutXsl=""True"" OutputXslFileName=""C:\Program Files\Microsoft FxCop 1.32\Xml\FxCopReport.xsl"" /> </Target> </Project>"); XmlNamespaceManager namespaceManager = new XmlNamespaceManager(xmlDocument.NameTable); namespaceManager.AddNamespace("msbuild", "http://schemas.microsoft.com/developer/msbuild/2003"); foreach (XmlNode n in xmlDocument.SelectNodes("//msbuild:MSBuildCommunityTasksPath", namespaceManager)) { Console.WriteLine(n.InnerText); } } } A: Your issue is with the namespace (xmlns="http://schemas.microsoft.com/developer/msbuild/2003"). You're receiving zero nodes because you aren't qualifying it with the namespace. If you remove the xmlns attribute, your "//PropertyGroup" XPath will work. How you query with namespace usually involves aliasing a default xmlns to an identifier (since one is not specified on the attribute), and selecting like "//myXMLNStoken:PropertyGroup".
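If you would rather not set up a namespace manager at all, the wildcard idiom from the first answer drops straight into .NET; a quick hedged sketch, reusing the xmlDocument from the snippet above:

    // Namespace-agnostic query via local-name(); no XmlNamespaceManager needed.
    foreach (XmlNode n in xmlDocument.SelectNodes("//*[local-name()='MSBuildCommunityTasksPath']"))
    {
        Console.WriteLine(n.InnerText);
    }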
What's wrong with my XPath/XML?
I'm trying a very basic XPath on this xml (same as below), and it doesn't find anything. I'm trying both .NET and this website, and XPaths such as //PropertyGroup, /PropertyGroup and //MSBuildCommunityTasksPath are simply not working for me (they compiled but return zero results). Source XML: <?xml version="1.0" encoding="utf-8"?> <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <!-- $Id: FxCop.proj 114 2006-03-14 06:32:46Z pwelter34 $ --> <PropertyGroup> <MSBuildCommunityTasksPath>$(MSBuildProjectDirectory)\MSBuild.Community.Tasks\bin\Debug</MSBuildCommunityTasksPath> </PropertyGroup> <Import Project="$(MSBuildProjectDirectory)\MSBuild.Community.Tasks\MSBuild.Community.Tasks.Targets" /> <Target Name="DoFxCop"> <FxCop TargetAssemblies="$(MSBuildCommunityTasksPath)\MSBuild.Community.Tasks.dll" RuleLibraries="@(FxCopRuleAssemblies)" AnalysisReportFileName="Test.html" DependencyDirectories="$(MSBuildCommunityTasksPath)" FailOnError="True" ApplyOutXsl="True" OutputXslFileName="C:\Program Files\Microsoft FxCop 1.32\Xml\FxCopReport.xsl" /> </Target> </Project>
[ "You can add namespaces in your code and all that, but you can effectively wildcard the namespace. Try the following XPath idiom.\n//*[local-name()='PropertyGroup']\n//*[local-name()='MSBuildCommunityTasksPath']\n\nname() usually works as well, as in:\n//*[name()='PropertyGroup']\n//*[name()='MSBuildCommunityTasksPath']\n\nEDIT: Namespaces are great and i'm not suggesting they're not important, but wildcarding them comes in handy when cobbling together prototype code, one-off desktop tools, experimenting with XSLT, and so forth. Balance your need for convenience against acceptable risk for the task at hand. FYI, if need be, you can also strip or reassign namespaces.\n", "The tags in the document end up in the \"default\" namespace created by the xmlns attribute with no prefix. Unfortunately, XPath alone can not query elements in the default namespace. I'm actually not sure of the semantic details, but you have to explicitly attach a prefix to that namespace using whatever tool is hosting XPath.\nThere may be a shorter way to do this in .NET, but the only way I've seen is via a NameSpaceManager. After you explicitly add a namespace, you can query using the namespace manager as if all the tags in the namespaced element have that prefix (I chose 'msbuild'):\nusing System;\nusing System.Xml;\n\npublic class XPathNamespace {\n public static void Main(string[] args) {\n XmlDocument xmlDocument = new XmlDocument();\n xmlDocument.LoadXml(\n @\"<?xml version=\"\"1.0\"\" encoding=\"\"utf-8\"\"?>\n<Project xmlns=\"\"http://schemas.microsoft.com/developer/msbuild/2003\"\">\n <!-- $Id: FxCop.proj 114 2006-03-14 06:32:46Z pwelter34 $ -->\n\n <PropertyGroup>\n <MSBuildCommunityTasksPath>$(MSBuildProjectDirectory)\\MSBuild.Community.Tasks\\bin\\Debug</MSBuildCommunityTasksPath>\n </PropertyGroup>\n\n <Import Project=\"\"$(MSBuildProjectDirectory)\\MSBuild.Community.Tasks\\MSBuild.Community.Tasks.Targets\"\"/>\n\n <Target Name=\"\"DoFxCop\"\">\n\n <FxCop \n TargetAssemblies=\"\"$(MSBuildCommunityTasksPath)\\MSBuild.Community.Tasks.dll\"\"\n RuleLibraries=\"\"@(FxCopRuleAssemblies)\"\" \n AnalysisReportFileName=\"\"Test.html\"\"\n DependencyDirectories=\"\"$(MSBuildCommunityTasksPath)\"\"\n FailOnError=\"\"True\"\"\n ApplyOutXsl=\"\"True\"\"\n OutputXslFileName=\"\"C:\\Program Files\\Microsoft FxCop 1.32\\Xml\\FxCopReport.xsl\"\"\n />\n </Target>\n\n</Project>\");\n\n XmlNamespaceManager namespaceManager = new\n XmlNamespaceManager(xmlDocument.NameTable);\n namespaceManager.AddNamespace(\"msbuild\", \"http://schemas.microsoft.com/developer/msbuild/2003\");\n foreach (XmlNode n in xmlDocument.SelectNodes(\"//msbuild:MSBuildCommunityTasksPath\", namespaceManager)) {\n Console.WriteLine(n.InnerText);\n }\n }\n}\n\n", "Your issue is with the namespace (xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\"). You're receiving zero nodes because you aren't qualifying it with the namespace. If you remove the xmlns attribute, your \"//PropertyGroup\" XPath will work. How you query with namespace usually involves aliasing a default xmlns to an identifier (since one is not specified on the attribute), and selecting like \"//myXMLNStoken:PropertyGroup\".\n" ]
[ 16, 2, 1 ]
[]
[]
[ "namespaces", "xml", "xpath" ]
stackoverflow_0000103576_namespaces_xml_xpath.txt
Q: How do I match one letter or many in a PHP preg_split style regex I'm having an issue with my regex. I want to capture <% some stuff %> and I need what's inside the <% and the %> This regex works quite well for that. $matches = preg_split("/<%[\s]*(.*?)[\s]*%>/i",$markup,-1,(PREG_SPLIT_NO_EMPTY | PREG_SPLIT_DELIM_CAPTURE)); I also want to catch &lt;% some stuff %&gt; so I need to capture <% or &lt;% and %> or %&gt; respectively. If I put in a second set of parens, it makes preg_split function differently (because as you can see from the flag, I'm trying to capture what's inside the parens). Preferably, it would only match &lt; to &gt; and < to > as well, but that's not completely necessary EDIT: The SUBJECT may contain multiple matches, and I need all of them A: In your case, it's better to use preg_match with its additional parameter and parentheses: preg_match("#((?:<|&lt;)%)([\s]*(?:[^ø]*)[\s]*?)(%(?:>|&gt;))#i",$markup, $out); print_r($out); Array ( [0] => <% your stuff %> [1] => <% [2] => your stuff [3] => %> ) By the way, check this online tool to debug PHP regexp, it's so useful! http://regex.larsolavtorvik.com/ EDIT: I hacked the regexp a bit so it's faster. Tested it, it works :-) Now let's explain all that stuff: preg_match will store everything it captures in the var passed as third param (here $out) if preg_match matches something, it will be stored in $out[0] anything that is inside () but not (?:) in the pattern will be stored in $out The pattern in detail: #((?:<|&lt;)%)([\s]*(?:[^ø]*)[\s]*?)(%(?:>|&gt;))#i can be viewed as ((?:<|&lt;)%) + ([\s]*(?:[^ø]*)[\s]*?) + (%(?:>|&gt;)). ((?:<|&lt;)%) is capturing < or &lt; then % (%(?:>|&gt;)) is capturing % then > or &gt; ([\s]*(?:[^ø]*)[\s]*?) means 0 or more spaces, then 0 or more times anything that is not the ø symbol, then 0 or more spaces. Why do we use [^ø] instead of . ? It's because . is very time consuming, the regexp engine will check among all the existing characters. [^ø] just checks if the char is not ø. Nobody uses ø, it's an international money symbol, but if you care, you can replace it by chr(7) which is the shell bell char that obviously will never be typed in a web page. EDIT2: I just read your edit about capturing all the matches. In that case, you'll use preg_match_all the same way. A: <?php $code = 'Here is a <% test %> and &lt;% another test %&gt; for you'; preg_match_all('/(<|&lt;)%\s*(.*?)\s*%(>|&gt;)/', $code, $matches); print_r($matches[2]); ?> Result: Array ( [0] => test [1] => another test ) A: Why are you using preg_split if what you really want is what matches inside the parentheses? Seems like it would be simpler to just use preg_match. It's often an issue with regex that parens are used both for grouping your logic and for capturing patterns. According to the PHP doc on regex syntax, The fact that plain parentheses fulfil two functions is not always helpful. There are often times when a grouping subpattern is required without a capturing requirement. If an opening parenthesis is followed by "?:", the subpattern does not do any capturing, and is not counted when computing the number of any subsequent capturing subpatterns. A: If you want to match, give preg_match_all a shot with a regular expression like this: preg_match_all('/((\<\%)(\s)(.*?)(\s)(\%\>))/i', '<% wtf %> <% sadfdsafds %>', $result); This results in a match of just about everything under the sun. 
You can add/remove parens to match more/less: Array ( [0] => Array ( [0] => <% wtf %> [1] => <% sadfdsafds %> ) [1] => Array ( [0] => <% wtf %> [1] => <% sadfdsafds %> ) [2] => Array ( [0] => <% [1] => <% ) [3] => Array ( [0] => [1] => ) [4] => Array ( [0] => wtf [1] => sadfdsafds ) [5] => Array ( [0] => [1] => ) [6] => Array ( [0] => %> [1] => %> ) ) A: One possible solution is to use the extra parens, like so, but to ditch those in the results, so you actually only use half of the total results. This regex $matches = preg_split("/(<|&lt;)%[\s]*(.*?)[\s]*%(>|&gt;)/i",$markup,-1,(PREG_SPLIT_NO_EMPTY | PREG_SPLIT_DELIM_CAPTURE)); for input Hi my name is <h1>Issac</h1><% some stuff %>here&lt;% more stuff %&gt; the output would be Array( [0]=>Hi my name is <h1>Issac</h1> [1]=>< [2]=>some stuff [3]=>> [4]=>here [5]=>&lt; [6]=>more stuff [7]=>&gt; ) which would give the desired results, if I only used the even numbers
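One more hedged sketch: if the parallel arrays that preg_match_all returns feel awkward, the PREG_SET_ORDER flag groups each match's captures together (the pattern reuses the non-capturing groups shown above):

    <?php
    // Each element of $matches is now one complete match with its own captures.
    $code = 'Here is a <% test %> and &lt;% another test %&gt; for you';
    preg_match_all('/(?:<|&lt;)%\s*(.*?)\s*%(?:>|&gt;)/', $code, $matches, PREG_SET_ORDER);
    foreach ($matches as $m) {
        echo $m[1], "\n"; // prints "test", then "another test"
    }
    ?>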
How do I match one letter or many in a PHP preg_split style regex
I'm having an issue with my regex. I want to capture <% some stuff %> and I need what's inside the <% and the %> This regex works quite well for that. $matches = preg_split("/<%[\s]*(.*?)[\s]*%>/i",$markup,-1,(PREG_SPLIT_NO_EMPTY | PREG_SPLIT_DELIM_CAPTURE)); I also want to catch &lt;% some stuff %&gt; so I need to capture <% or &lt;% and %> or %&gt; respectively. If I put in a second set of parens, it makes preg_split function differently (because as you can see from the flag, I'm trying to capture what's inside the parens). Preferably, it would only match &lt; to &gt; and < to > as well, but that's not completely necessary EDIT: The SUBJECT may contain multiple matches, and I need all of them
[ "In your case, it's better to use preg_match with its additional parameter and parenthesis:\npreg_match(\"#((?:<|&lt;)%)([\\s]*(?:[^ø]*)[\\s]*?)(%(?:>|&gt;))#i\",$markup, $out);\nprint_r($out);\n\nArray\n(\n [0] => <% your stuff %>\n [1] => <%\n [2] => your stuff\n [3] => %>\n)\n\nBy the way, check this online tool to debug PHP regexp, it's so useful !\nhttp://regex.larsolavtorvik.com/\nEDIT : I hacked the regexp a bit so it's faster. Tested it, it works :-)\nNow let's explain all that stuff :\n\npreg_match will store everything he captures in the var passed as third param (here $out)\nif preg_match matches something, it will be store in $out[0]\nanything that is inside () but not (?:) in the pattern will be stored in $out\n\nThe patten in details :\n#((?:<|&lt;)%)([\\s]*(?:[^ø]*)[\\s]*?)(%(?:>|&gt;))#i can be viewed as ((?:<|&lt;)%) + ([\\s]*(?:[^ø]*)[\\s]*?) + (%(?:>|&gt;)).\n\n((?:<|&lt;)%) is capturing < or &lt; then %\n(%(?:>|&gt;)) is capturing % then < or &gt; \n([\\s]*(?:[^ø]*)[\\s]*?) means 0 or more spaces, then 0 or more times anything that is not the ø symbol, the 0 or more spaces.\n\nWhy do we use [^ø] instead of . ? It's because . is very time consuming, the regexp engine will check among all the existing characters. [^ø] just check if the char is not ø. Nobody uses ø, it's an international money symbol, but if you care, you can replace it by chr(7) wich is the shell bell char that's obviously will never be typed in a web page.\nEDIT2 : I just read your edit about capturing all the matches. In that case, you´ll use preg_match_all the same way.\n", "<?php\n$code = 'Here is a <% test %> and &lt;% another test %&gt; for you';\npreg_match_all('/(<|&lt;)%\\s*(.*?)\\s*%(>|&gt;)/', $code, $matches);\nprint_r($matches[2]);\n?>\n\nResult:\nArray\n(\n [0] => test\n [1] => another test\n)\n\n", "Why are you using preg_split if what you really want is what matches inside the parentheses? Seems like it would be simpler to just use preg_match.\nIt's often an issue with regex that parens are used both for grouping your logic and for capturing patterns.\nAccording to the PHP doc on regex syntax,\n\nThe fact that plain parentheses fulfil two functions is not always helpful. There are often times when a grouping subpattern is required without a capturing requirement. If an opening parenthesis is followed by \"?:\", the subpattern does not do any capturing, and is not counted when computing the number of any subsequent capturing subpatterns.\n\n", "If you want to match, give preg_match_all a shot with a regular expression like this:\npreg_match_all('/((\\<\\%)(\\s)(.*?)(\\s)(\\%\\>))/i', '<% wtf %> <% sadfdsafds %>', $result);\n\nThis results in a match of just about everything under the sun. 
You can add/remove parens to match more/less:\nArray\n(\n [0] => Array\n (\n [0] => <% wtf %>\n [1] => <% sadfdsafds %>\n )\n\n[1] => Array\n (\n [0] => <% wtf %>\n [1] => <% sadfdsafds %>\n )\n\n[2] => Array\n (\n [0] => <%\n [1] => <%\n )\n\n[3] => Array\n (\n [0] => \n [1] => \n )\n\n[4] => Array\n (\n [0] => wtf\n [1] => sadfdsafds\n )\n\n[5] => Array\n (\n [0] => \n [1] => \n )\n\n[6] => Array\n (\n [0] => %>\n [1] => %>\n )\n\n)\n\n", "One possible solution is to use the extra parens, like so, but to ditch those in the results, so you actually only use 1/2 of the total restults.\nthis regex\n$matches = preg_split(\"/(<|&lt;)%[\\s]*(.*?)[\\s]*%(>|&gt;)/i\",$markup,-1,(PREG_SPLIT_NO_EMPTY | PREG_SPLIT_DELIM_CAPTURE));\n\nfor input\nHi my name is <h1>Issac</h1><% some stuff %>here&lt;% more stuff %&gt; \n\noutput would be\nArray(\n [0]=>Hi my name is <h1>Issac</h1>\n [1]=><\n [2]=>some stuff\n [3]=>>\n [4]=>here\n [5]=>&;lt;\n [6]=>more stuff\n [7]=>&gt;\n)\n\nWhich would give the desired resutls, if I only used the even numbers\n" ]
[ 9, 2, 1, 1, 0 ]
[]
[]
[ "php", "regex" ]
stackoverflow_0000104238_php_regex.txt
Q: How do I download the source for BIRT? The Eclipse projects are all stored in the Eclipse Foundation CVS servers. Using the source is a great way to debug your code and to figure out how to do new things. Unfortunately in a large software project like BIRT, it can be difficult to know which projects and versions are required for a particular build. So what is the best way to get the source for a particular build? A: Okay, I know the answer to this one... Eclipse has a feature named Team Project Sets which allows you to define a collection of projects, stored in various version control systems that can be downloaded as a package. I have published a collection of team project set files that can be used to get the BIRT source. The files are stored in a Subversion repository here I have a short article with a bit more detail on the BirtWorld blog. A: Go to the BIRT website and follow their Directions.
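For context, a team project set is just a small XML file. The sketch below shows the rough shape of a CVS-based .psf file -- the repository location and module names here are invented placeholders, so use the published files rather than typing this in:

    <?xml version="1.0" encoding="UTF-8"?>
    <psf version="2.0">
      <provider id="org.eclipse.team.cvs.core.cvsnature">
        <!-- reference format: version,repository-location,module,project-name -->
        <project reference="1.0,:pserver:anonymous@dev.eclipse.org:/cvsroot/birt,source/org.eclipse.birt.core,org.eclipse.birt.core"/>
      </provider>
    </psf>

Importing such a file (File > Import > Team Project Set) checks out every listed project at once.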
How do I download the source for BIRT?
The Eclipse projects are all stored in the Eclipse Foundation CVS servers. Using the source is a great way to debug your code and to figure out how to do new things. Unfortunately in a large software project like BIRT, it can be difficult to know which projects and versions are required for a particular build. So what is the best way to get the source for a particular build?
[ "Okay, I know the answer to this one...\nEclipse has a feature named Team Project Sets which allows you to define a collection of projects, stored in various version control systems that can be downloaded as a package. I have published a collection of team project set files that can be used to get the BIRT source. The files are stored in a Subversion repository here\nI have a short article with a bit more detail on the BirtWorld blog.\n", "Go to the BIRT website and follow their Directions.\n" ]
[ 1, 1 ]
[]
[]
[ "birt", "cvs", "eclipse" ]
stackoverflow_0000104439_birt_cvs_eclipse.txt
Q: Tips on refactoring an outdated database schema Being stuck with a legacy database schema that no longer reflects your data model is every developer's nightmare. Yet with all the talk of refactoring code for maintainability I have not heard much of refactoring outdated database schemas. What are some tips on how to transition to a better schema without breaking all the code that relies on the old one? I will propose a specific problem I am having to illustrate my point but feel free to give advice on other techniques that have proven helpful - those will likely come in handy as well. My example: My company receives and ships products. Now a product receipt and a product shipment have some very different data associated with them so the original database designers created a separate table for receipts and for shipments. In my one year working with this system I have come to the realization that the current schema doesn't make a lick of sense. After all, both a receipt and a shipment are basically a transaction, they each involve changing the amount of a product, at heart only the +/- sign is different. Indeed, we frequently need to find the total amount that the product has changed over a period of time, a problem for which this design is downright intractable. Obviously the appropriate design would be to have a single Transactions table with the Id being a foreign key of either a ReceiptInfo or a ShipmentInfo table. Unfortunately, the wrong schema has already been in production for some years and has hundreds of stored procedures, and thousands of lines of code written off of it. How then can I transition the schema to work correctly? A: Here's a whole catalogue of database refactorings: http://databaserefactoring.com/ A: That's a very difficult thing to work around; A couple quick options after refactoring the database are: Create views that match the original schema but pull from the new schema; You may need triggers here so any updates to the views can be handled. Create the new schema and put in triggers on each side to maintain the other side. A: This book (Refactoring Databases) has been a God-send to me when dealing with legacy database schemas, including when I had to deal with almost the exact same issue for our inventory database. Also, having a system in place to track changes to the database schema (like a series of alter scripts that is stored int he source control repository) helps immensely in figuring out code-to-database dependencies. A: Stored procedures and views are your friend here. Even if the system doesn't use them, change it to use them, then refactor the database underneath. Your receipts and shipments then become views. Beware, receipts and shipments are actually two very different beasts in most systems I have worked with. Receipts are linked to suppliers, while shipments are linked to customers (or customer/ship-to locations). At the inventory level, they are often represented the same. A: Is all data access limited to stored procedures? If not, the task could be nearly impossible. If so, you just have to make sure your data migration scripts work well transitioning from the old to the new schema, and then make sure your stored procedures honor theur inputs and outputs. Hopefully none of them have "select *" queries. If they do, use 'sp_help tablename' to get the complete list of columns, copy that out and replace each * with the complete column list, just to make sure you don't break client code. 
I would recommend making the changes gradually and doing lots of integration testing. It's hard to do a significant remodel without introducing a few bugs. A: The first thing is to create the table schema. I already did that for a legacy database using Enterprise Architect. You can select the DB and it will create every table and field for you. Then, you will need to split everything into categories. For example, all your receive and ship products together, client stuff in another category. Once everything is cleared up, you will be able to refactor fields by creating new tables, new relationships and new fields. Of course, this will need a lot of changes if everything is accessed without stored procedures. A: I don't think it's obvious that the id of the transactions table should be a foreign key to either ReceiptInfo or a ShipmentInfo. Think the other way around. In an object oriented model you should have a transaction table and the ReceiptInfo or a ShipmentInfo should have a foreign key to the transaction table. If you are lucky, there will be only 1 or 2 points in code where new records in ReceiptInfo or a ShipmentInfo are made. There you should add code where you add an entry in the Transaction table and after that create the entry in ReceiptInfo or ShipmentInfo with the foreign key to Transaction. A: Sometimes you can create new tables that have better structures and then create views with the names of your old tables but are based on the data in the new tables. That way, your code doesn't break while you start to move to a better structure. Be careful with this though, as sometimes you move from a non-relational table to a relational structure where you have multiple records while the code will be expecting only one. This is particularly true if you have developers who use subqueries. Then as each thing is changed, it will move away from the views to the real table. Eventually you can drop the views. This at least allows you to work incrementally to keep things working as you move stuff, but start to fix things to use a better design.
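A minimal sketch of that view trick applied to the receipts/shipments example from the question; every name here is invented for illustration:

    -- New unified schema
    CREATE TABLE Transactions (
        TransactionId INT PRIMARY KEY,
        ProductId     INT NOT NULL,
        Quantity      INT NOT NULL,      -- positive = receipt, negative = shipment
        TransactedAt  DATETIME NOT NULL
    );

    -- Compatibility view keeping the old table name alive for legacy code
    CREATE VIEW Receipts AS
    SELECT TransactionId AS ReceiptId,
           ProductId,
           Quantity,
           TransactedAt AS ReceivedAt
    FROM   Transactions
    WHERE  Quantity > 0;

Legacy code keeps selecting from Receipts while new code sums Quantity over Transactions directly; INSTEAD OF triggers can cover writes through the view if your database supports them.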
Tips on refactoring an outdated database schema
Being stuck with a legacy database schema that no longer reflects your data model is every developer's nightmare. Yet with all the talk of refactoring code for maintainability I have not heard much of refactoring outdated database schemas. What are some tips on how to transition to a better schema without breaking all the code that relies on the old one? I will propose a specific problem I am having to illustrate my point but feel free to give advice on other techniques that have proven helpful - those will likely come in handy as well. My example: My company receives and ships products. Now a product receipt and a product shipment have some very different data associated with them so the original database designers created a separate table for receipts and for shipments. In my one year working with this system I have come to the realization that the current schema doesn't make a lick of sense. After all, both a receipt and a shipment are basically a transaction, they each involve changing the amount of a product, at heart only the +/- sign is different. Indeed, we frequently need to find the total amount that the product has changed over a period of time, a problem for which this design is downright intractable. Obviously the appropriate design would be to have a single Transactions table with the Id being a foreign key of either a ReceiptInfo or a ShipmentInfo table. Unfortunately, the wrong schema has already been in production for some years and has hundreds of stored procedures, and thousands of lines of code written off of it. How then can I transition the schema to work correctly?
[ "Here's a whole catalogue of database refactorings:\nhttp://databaserefactoring.com/\n", "That's a very difficult thing to work around; A couple quick options after refactoring the database are:\n\nCreate views that match the original schema but pull from the new schema; You may need triggers here so any updates to the views can be handled.\nCreate the new schema and put in triggers on each side to maintain the other side.\n\n", "This book (Refactoring Databases) has been a God-send to me when dealing with legacy database schemas, including when I had to deal with almost the exact same issue for our inventory database. \nAlso, having a system in place to track changes to the database schema (like a series of alter scripts that is stored int he source control repository) helps immensely in figuring out code-to-database dependencies.\n", "Stored procedures and views are your friend here. Even if the system doesn't use them, change it to use them, then refactor the database underneath.\nYour receipts and shipments then become views.\nBeware, receipts and shipments are actually two very different beasts in most systems I have worked with. Receipts are linked to suppliers, while shipments are linked to customers (or customer/ship-to locations). At the inventory level, they are often represented the same.\n", "Is all data access limited to stored procedures? If not, the task could be nearly impossible. If so, you just have to make sure your data migration scripts work well transitioning from the old to the new schema, and then make sure your stored procedures honor theur inputs and outputs. \nHopefully none of them have \"select *\" queries. If they do, use 'sp_help tablename' to get the complete list of columns, copy that out and replace each * with the complete column list, just to make sure you don't break client code.\nI would recommend making the changes gradually, and do lots of integration testing. It's hard to do a significant remodel without introducing a few bugs.\n", "The first thing is to create the table schema. I already did that for a Legacy database using Enterprise Architect. You can select the DB and it will create you every tables/fields. Then, you will need to split everything in categories. Exemple all your receives and ships products together, client stuff in an other category. Once everything is clear up, you will be able to refactor field by creating new table, new releashionship and new fields. Of course, this will need lot of change if all is accessed without Stored Procedure.\n", "I don't think its obvious that the id of the transactions table should be a foreign key to either ReceiptInfo or a ShipmentInfo. Think the other way around. In an object oriented model you should have a transaction table and the ReceiptInfo or a ShipmentInfo should have a foreign key to the transaction table. If you are lucky, there will be only 1 or 2 points in code where new records in ReceiptInfo or a ShipmentInfo are made. There you should add code where you add an entry in the Transaction table and after that create the entry in ReceiptInfo or ShipmentInfo with the foreign key to Transaction.\n", "Sometimes you can create new tables that have better structures and then create views with the names of your old tables but are based on the data in the new tables. That way, you code doesnt break while you start to move to a better structure. 
Be careful with thsi though as sometimes you move from a non-relational table to a relational structure where you have multiple records while the code will be expecting only one. This is particulalry true if you have developers who use subqueries. \nThen as each thing is changed, it will move away from the views to the real table. Eventually you can drop the views. This at least allows you to work incrementally to keep things working as you move stuff, but start to fix things to use a better design.\n" ]
[ 5, 3, 3, 1, 0, 0, 0, 0 ]
[]
[]
[ "database", "refactoring", "schema" ]
stackoverflow_0000104380_database_refactoring_schema.txt
Q: Delphi 7 and Windows Vista I have a simple piece of software made in Delphi 7, and it crashes on Vista after a while. These are totally random crashes; nothing is written in any crash log, the app just stops working and then Vista tries to find a solution. Does anyone have any ideas? A: Try one of the exception catchers, like madExcept. It can often help you find out what is happening inside your app at the time of trouble. In general though Delphi apps are fine in Vista, so there must be some interaction, perhaps user rights, that is causing trouble. A: A few ideas: DEP - try disabling DEP for the program and see if it solves the problem ASLR It fails to get access to some resource, gets a NULL pointer (a common way for functions to signal that they failed) and tries to use that (with predictable results) The best thing would be to run with a debugger (preferably Delphi 7 - it sounds like you have source code) attached and check the exact location of the crash. A: Just to point out--madExcept has a "hang" detection option that should help.
Delphi 7 and Windows Vista
I have a simple piece of software made in Delphi 7, and it crashes on Vista after a while. These are totally random crashes; nothing is written in any crash log, the app just stops working and then Vista tries to find a solution. Does anyone have any ideas?
[ "Try one of the exception catchers, like madExcept. It can often help you find out what is happening inside your app at the time of trouble. In general though Delphi apps are fine in Vista, so there must be some interaction, perhaps user rights, that is causing trouble.\n", "A few ideas:\n\nDEP - try disabling DEP for the program an see if it solves the problem\nASLR\nIt fails to get access to some resource, gets a NULL pointer (a common way of functions to signal that they failed) and tries to use that (with predictable results)\n\nThe best thing would be to run with a debugger (preferably Delphi 7 - it sounds like you have source code) attached and check the exact location of the crash.\n", "just to point out--madExcept has a \"hang\" detection option that should help.\n" ]
[ 7, 2, 0 ]
[]
[]
[ "delphi", "windows_vista" ]
stackoverflow_0000100998_delphi_windows_vista.txt
Q: Automate a Ruby Gem install that has input I am trying to install the ibm_db gem so that I can access DB2 from Ruby. When I try: sudo gem install ibm_db I get the following request for clarification: Select which gem to install for your platform (i486-linux) 1. ibm_db 0.10.0 (ruby) 2. ibm_db 0.10.0 (mswin32) 3. ibm_db 0.9.5 (mswin32) 4. ibm_db 0.9.5 (ruby) 5. Skip this gem 6. Cancel installation I am always going to be installing the linux version (which I assume is the "ruby" version), so is there a way to pick which one I will install straight from the gem install command? The reason this is a problem is that I need to automate this install via a bash script, so I would like to select that I want the "ruby" version ahead of time. A: You can use a 'here document'. That is: sudo gem install ibm_db <<heredoc 1 heredoc What's between the <<SOMETHING and the closing SOMETHING gets passed as input to the previous command (somewhat like Ruby's own here documents). The 1 there alone, of course, is the selection of the "ibm_db 0.10.0 (ruby)" platform. Hope it's enough. A: Try this: sudo gem install --platform ruby ibm_db Note that you can get help on the install command using: gem help install UPDATE: Looks like this option only works for RubyGems 0.9.5 or above. A: @John Topley I already tried gem help install, and --platform is not an option, both in help and in practice: $ sudo gem install ibm_db --platform ruby ERROR: While executing gem ... (OptionParser::InvalidOption) invalid option: --platform UPDATE: The Ubuntu repos have the 0.9.4 version of rubygems, which doesn't have the --platform option. It appears it may be a new feature in 0.9.5, but there is still no online documentation for it, and regardless, it won't work on Ubuntu which is the platform I need it to work on. A: Try this, I think it only works on Bash though sudo gem install ibm_db < <(echo 1) A: Versions of Rubygems from 1.0 and up automatically detect the platform you are running and thus do not ask that question. Are you able to update your gems to the latest? $ sudo gem update --system Be warned if you are on Windows once you have updated; you might run into this issue. A: Another option is to download the .gem file and install it manually as such: sudo gem install path/to/ibm_db-0.10.0.gem This particular gem was at rubyforge.
Automate a Ruby Gem install that has input
I am trying to install the ibm_db gem so that I can access DB2 from Ruby. When I try: sudo gem install ibm_db I get the following request for clarification: Select which gem to install for your platform (i486-linux) 1. ibm_db 0.10.0 (ruby) 2. ibm_db 0.10.0 (mswin32) 3. ibm_db 0.9.5 (mswin32) 4. ibm_db 0.9.5 (ruby) 5. Skip this gem 6. Cancel installation I am always going to be installing the linux version (which I assume is the "ruby" version), so is there a way to pick which one I will install straight from the gem install command? The reason this is a problem is that I need to automate this install via a bash script, so I would like to select that I want the "ruby" version ahead of time.
[ "You can use a 'here document'. That is:\nsudo gem install ibm_db <<heredoc\n 1\nheredoc\n\nWhat's between the \\<\\<\\SOMETHING and SOMETHING gets inputted as entry to the previous command (somewhat like ruby's own heredocuments). The 1 there alone, of course, is the selection of the \"ibm_db 0.10.0 (ruby)\" platform.\nHope it's enough.\n", "Try this:\nsudo gem install --platform ruby ibm_db\n\nNote that you can get help on the install command using:\ngem help install\n\n\nUPDATE: Looks like this option only works for RubyGems 0.9.5 or above.\n", "@John Topley\nI already tried gem help install, and --platform is not an option, both in help and in practice:\n\n$ sudo gem install ibm_db --platform ruby\nERROR: While executing gem ... (OptionParser::InvalidOption)\n invalid option: --platform\n\n\nUPDATE: The Ubuntu repos have 0.9.4 version of rubygems, which doesn't have the --platform option. It appears it may be a new feature in 0.9.5, but there is still no online documentation for it, and regardless, it won't work on Ubuntu which is the platform I need it to work on.\n", "Try this, I think it only works on Bash though\nsudo gem install ibm_db < <(echo 1)\n\n", "Versions of Rubygems from 1.0 and up automatically detect the platform you are running and thus do not ask that question. Are you able to update your gems to the latest?\n$ sudo gem update --system\n\nBe warned if you are on Windows once you have updated; you might run into this issue.\n", "Another option is to download the .gem file and install it manually as such:\nsudo gem install path/to/ibm_db-0.10.0.gem\n\nThis particular gem was at rubyforge.\n" ]
[ 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "automation", "database", "db2", "ruby", "rubygems" ]
stackoverflow_0000103918_automation_database_db2_ruby_rubygems.txt
Q: How do I change the build directory that MSBuild uses under Team Foundation Build? I'm getting the following error when trying to build my app using Team Foundation Build: C:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(1682,9): error MSB3554: Cannot write to the output file "obj\Release\Company.Redacted.BlahBlah.Localization.Subsystems. Startup_Shutdown_Processing.StartupShutdownProcessingMessages.de.resources". The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters. My project builds fine on my development machine as the source is only two folders deep, but TF Build seems to use a really deep directory that is causing it to break. How do I change the folders that are used? Edit: I checked the .proj file for my build that is stored in source control and found the following: <!-- BUILD DIRECTORY This property is included only for backwards compatibility. The build directory used for a build definition is now stored in the database, as the BuildDirectory property of the definition's DefaultBuildAgent. For compatibility with V1 clients, keep this property in sync with the value in the database. --> <BuildDirectoryPath>UNKNOWN</BuildDirectoryPath> If this is stored in the database how do I change it? Edit: Found the following blog post which may be pointing me toward the solution. Now I just need to figure out how to change the setting in the Build Agent. http://blogs.msdn.com/jpricket/archive/2007/04/30/build-type-builddirectorypath-build-agent-working-directory.aspx Currently my working directory is "$(Temp)\$(BuildDefinitionPath)" but now I don't know what wildcards are available to specify a different folder. A: You need to edit the build working directory of your Build Agent so that the beginning of the path is a little smaller. To edit the build agent, right click on the "Builds" node and select "Manage Build Agents..." I personally use something like c:\bw\$(BuildDefinitionId). $(BuildDefinitionId) translates into the id of the build definition (hence the name :-) ), which means you get a build path starting with something like c:\bw\36 rather than c:\Documents and Settings\tfsbuild\Local Settings\Temp\BuildDefinitionName Good luck, Martin. A: You have to check out the build script file from the Source Control Explorer and get your elbows dirty replacing the path.
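If shortening the agent's working directory still isn't enough, another option is to move the intermediate obj output to a short path from inside the project file itself; BaseIntermediateOutputPath is a standard MSBuild property, though the path below is only an example:

    <!-- Redirect obj\ output to a short path to stay under MAX_PATH -->
    <PropertyGroup>
      <BaseIntermediateOutputPath>C:\bi\$(MSBuildProjectName)\</BaseIntermediateOutputPath>
    </PropertyGroup>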
How do I change the build directory that MSBuild uses under Team Foundation Build?
I'm getting the following error when trying to build my app using Team Foundation Build: C:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(1682,9): error MSB3554: Cannot write to the output file "obj\Release\Company.Redacted.BlahBlah.Localization.Subsystems. Startup_Shutdown_Processing.StartupShutdownProcessingMessages.de.resources". The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters. My project builds fine on my development machine as the source is only two folders deep, but TF Build seems to use a really deep directory that is causing it to break. How do I change the folders that are used? Edit: I checked the .proj file for my build that is stored in source control and found the following: <!-- BUILD DIRECTORY This property is included only for backwards compatibility. The build directory used for a build definition is now stored in the database, as the BuildDirectory property of the definition's DefaultBuildAgent. For compatibility with V1 clients, keep this property in sync with the value in the database. --> <BuildDirectoryPath>UNKNOWN</BuildDirectoryPath> If this is stored in the database how do I change it? Edit: Found the following blog post which may be pointing me toward the solution. Now I just need to figure out how to change the setting in the Build Agent. http://blogs.msdn.com/jpricket/archive/2007/04/30/build-type-builddirectorypath-build-agent-working-directory.aspx Currently my working directory is "$(Temp)\$(BuildDefinitionPath)" but now I don't know what wildcards are available to specify a different folder.
[ "You need to edit the build working directory of your Build Agent so that the begging path is a little smaller. To edit the build agent, right click on the \"Builds\" node and select \"Manage Build Agents...\"\nI personally use something like c:\\bw\\$(BuildDefinitionId). $(BuildDefinitionId) translates into the id of the build definition (hence the name :-) ), which means you get a build path starting with something like c:\\bw\\36 rather than c:\\Documents and Settings\\tfsbuild\\Local Settings\\Temp\\BuildDefinitionName\nGood luck,\nMartin.\n", "you have to checkout the build script file, from the source control explorer, and get your elbows dirty replacing the path.\n" ]
[ 16, 0 ]
[]
[]
[ "msbuild", "tfs", "tfsbuild" ]
stackoverflow_0000104292_msbuild_tfs_tfsbuild.txt
Q: How can I add an HyperLink in TRichEdit using Delphi How can I add an HyperLink in a TRichEdit (using Delphi). I need to have something like: "This is my text, click here to do something." A: According to this article on delphi.about.com Unfortunately, Delphi's implementation of the RichEdit control leaves out a lot of the functionality found in more recent versions of this control (from Microsoft). You can add your own functionality as discussed here. NOTE: Delphi 2009 has just been released, so the TRichEdit control may have been updated to support more features. A: If you really want hyperlinks and more, you could check out TRichView. There is a good demonstration of its capabilities at link text. A: I don't know if it's mentioned in the About.com article but I think it's worth mentioning that the hyperlink in TRichEdit only works if the TRichEdit itself is directly placed on the form (not in a panel). http://www.scalabium.com/faq/dct0146.htm A: The richedit in Infopower supports hyperlinks.
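A rough sketch of the Win32 approach those articles describe: turn on the control's URL detection, then react to EN_LINK notifications yourself. The message constants are standard richedit API; the form/control names are placeholders, TForm1 is assumed to declare WndProc as an override, and record/type names may need small adjustments per Delphi version:

    uses Windows, Messages, RichEdit, ShellAPI, CommCtrl;

    procedure TForm1.FormCreate(Sender: TObject);
    var
      Mask: Longint;
    begin
      // Turn on automatic URL detection and ask for EN_LINK notifications
      SendMessage(RichEdit1.Handle, EM_AUTOURLDETECT, 1, 0);
      Mask := SendMessage(RichEdit1.Handle, EM_GETEVENTMASK, 0, 0);
      SendMessage(RichEdit1.Handle, EM_SETEVENTMASK, 0, Mask or ENM_LINK);
    end;

    procedure TForm1.WndProc(var Msg: TMessage);
    var
      Link: ^TENLink;
      Range: TTextRange;
      URL: array[0..1023] of Char;
    begin
      // EN_LINK arrives as WM_NOTIFY on the control's parent (the form here)
      if (Msg.Msg = WM_NOTIFY) and (PNMHdr(Msg.LParam)^.code = EN_LINK) then
      begin
        Link := Pointer(Msg.LParam);
        if Link^.msg = WM_LBUTTONUP then
        begin
          Range.chrg := Link^.chrg;
          Range.lpstrText := URL;
          SendMessage(RichEdit1.Handle, EM_GETTEXTRANGE, 0, Longint(@Range));
          ShellExecute(Handle, 'open', URL, nil, nil, SW_SHOWNORMAL);
        end;
      end;
      inherited WndProc(Msg);
    end;

This is also why the third answer's caveat matters: EN_LINK is sent to the control's parent, so if the TRichEdit sits inside a panel, the panel -- not the form -- receives the WM_NOTIFY and the form-level WndProc never sees the click.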
How can I add an HyperLink in TRichEdit using Delphi
How can I add an HyperLink in a TRichEdit (using Delphi). I need to have something like: "This is my text, click here to do something."
[ "According to this article on delphi.about.com\n\nUnfortunately, Delphi's implementation of the RichEdit control leaves out a lot of the functionality found in more recent versions of this control (from Microsoft). \n\nYou can add your own functionality as discussed here.\nNOTE: Delphi 2009 has just been released, so the TRichEdit control may have been updated to support mode features.\n", "If you really want hyperlinks and more, you could check out TRichView. There is a good demonstration of its capabilities at link text.\n", "i don't know if it's mentioned in the About.com article but i think it's worth mentioning that the hyperlink in TRichEdit only works if the TRichEdit itself is directly placed on the form (not in a panel).\nhttp://www.scalabium.com/faq/dct0146.htm\n", "The richedit in Infopower supports hyperlinks.\n" ]
[ 4, 4, 3, 1 ]
[]
[]
[ "delphi", "hyperlink", "richedit" ]
stackoverflow_0000093517_delphi_hyperlink_richedit.txt
Q: Flex best practices? I have the feeling that it is easy to find samples, tutorials and simple examples on Flex. It seems harder to find tips and good practices based on real-life projects. Any tips on the following: How to write maintainable actionscript code How to ensure a clean separation of concerns. Has anybody used an MVC framework such as cairngorm, puremvc or easymvc on a real Flex project? How to fetch data from a server with blazeds/amfphp? How to reduce latency for the end-user? ... A: I work often with Flex in my job, and I will be happy to help.. but your questions deserve an article for each one :) I'll try some short answers. Maintainable code: I think that the same rules of any other OO languages apply. Some Flex-specific rules I'm used to following: use strongly typed variables, always consider dispatching events as the way for your UI components to talk to each other (a little more initial work, very flexible and decoupled later). Frameworks: looked at them, read the documentation.. very nice, but I still feel that their complications are not balanced by the benefits they provide. Anyway I'd like to change my mind on this point.. Talking with the server: Right now I'm using BlazeDS, it works very well.. there are many tutorials on the subject out there, if you find any trouble setting it up I would be happy to help. Latency: Do you mean in client/server communications? If so, you should explore the various types of channels BlazeDS implements.. pull-only, two-way http polling, near real-time on http (comet).. if you need more, LiveCycle Data Services ES, the commercial implementation from which BlazeDS was born, among other things offers another protocol called RTMP; it isn't http-tunnelled so there can be problems with firewalls and proxies, but it offers better performance (there is a free closed-source version of LCDS). I use the standard http channels in intranet environments, and found no real performance problems even with large datasets. Well.. quite a lot of stuff, can't be more specific now on each of these points, ask if you need :) A: Here are a couple of great resources to do with Flex/AS3 best practices and standards: Flex SDK coding conventions and best practices Flex best practices – Part 1: Setting up your Flex project The first one I found especially useful and I try to make sure any team I work with have all read it A: I have found the MVC framework RIAWave to be absolutely incredible. It is super lightweight and easy to use. I found Cairngorm and PureMVC to have a pretty steep learning curve and they both feel a bit too bulky for me. RIAWave stays out of the way and just gives you the MVC basics to work with. AMFPHP on the backend is very nice as well. AMFPHP also has an Apache module that will take care of serializing/unserializing the sent and received data all in C, which is blazing fast. If latency is a worry, you will want to make sure you get a good webhost or even deploy to multiple data centers so that your users are never far from a server. It sounds a bit early to be worrying about that, though.
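To make the "components talk through events" advice from the first answer concrete, a typical custom-event sketch (class and property names are invented):

    // A custom event lets a deeply nested component announce a selection
    // without knowing who is listening.
    package {
        import flash.events.Event;

        public class ProductSelectedEvent extends Event {
            public static const SELECTED:String = "productSelected";
            public var productId:int;

            public function ProductSelectedEvent(productId:int) {
                super(SELECTED, true); // bubbles up the display list
                this.productId = productId;
            }

            override public function clone():Event {
                return new ProductSelectedEvent(productId);
            }
        }
    }

Any ancestor can then addEventListener(ProductSelectedEvent.SELECTED, handler) without the two components ever referencing each other.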
Flex best practices?
I have the feeling that it is easy to find samples, tutorials and simple examples on Flex. It seems harder to find tips and good practices based on real-life projects. Any tips on the following: How to write maintainable actionscript code How to ensure a clean separation of concerns. Has anybody used an MVC framework such as cairngorm, puremvc or easymvc on a real Flex project? How to fetch data from a server with blazeds/amfphp? How to reduce latency for the end-user? ...
[ "I work often with Flex in my job, and I will be happy to help.. but your questions deserve an article for each one :) I'll try some short answer.\nMaintenable code: I think that the same rules of any other OO languages apply. Some flex-specific rules I'm use to follow: use strong typed variables, always consider dispatching events as the way for your UI components talk each other (a little more initial work, very flexible and decoupled later).\nFrameworks: looked at it, read the documentation.. very nice, but I still feel that their complications are not balanced by the benefits they provide. Anyway I'd like to change my mind on this point..\nTalking with server: Right now I'm using BlazeDS, it works very well.. there are many tutorials on the subject out there, if you find any trouble setting up it I would be happy to help.\nLatency: Do you mean in client/server comunications? If so, you should explore the various type of channels BlazeDS implements.. pull-only, two-way http polling, near real-time on http (comet).. if you need more, LiveCycle Data Services ES, the commrcial implementation from which BlazeDS is born, among other things offer another protocol called RTMP, it isn't http-tunnelled so there can be problem with firewalls and proxies, but it offers better performance (there is a free closed-source version of LCDS). I use the standard http channels in intranet environments, and found no real performance problems even with large datasets.\nWell.. quite a lot of stuff, can't be more specific now on each of this points, ask you if need :)\n", "Here are a couple of great resources to do with Flex/AS3 best practices and standards: \nFlex SDK coding conventions and best practices\nFlex best practices – Part 1: Setting up your Flex project\nThe first one I found especially useful and I try to make sure any team I work with have all read it\n", "I have found the MVC framework RIAWave link to be absolutely incredible. It is super lightweight and easy to use. I found Cairngorm and PureMVC to have a pretty steep learning curve and they both feel a bit too bulky for me. RIAWave stays out of the way and just gives you the MVC basics to work with.\nAMFPHP on the backend is very nice as well. AMFPHP also has an apache module that will take care of serializing/unserializing the sent and received data all in C which is blazing fast.\nIf latency is a worry, you will want to make sure you get a good webhost or even deploy to multiple data centers so that your users are never far from a server. Sounds like a bit early to be worrying about that though.\n" ]
[ 5, 3, 0 ]
[]
[]
[ "actionscript", "apache_flex", "blazeds", "puremvc" ]
stackoverflow_0000096040_actionscript_apache_flex_blazeds_puremvc.txt
Q: WCSF Random assembly manifest definition does not match assembly ref in .NET 2.0 I'm running WCSF Feb 2008 along with Enterprise Library 3.1 and noticed that randomly I get the "fun" Could not load file or assembly Microsoft.Practices.EnterpriseLibrary.Common, Version=3.1.0.0, Culture=neutral, Public ... The located assembly's manifest definition does not match the assembly reference. Usually this wouldn't be worth mentioning on stackoverflow, but the strange thing is that the first time I fire this up it breaks, but if I close it down and simply hit F11 again - it works .... strange. Does anyone know why this might break sometimes, but not others? A: The problem was related to my version of the data access DLL I was adding. I found that if I went to the following: C:\Program Files\Microsoft Web Client Software Factory February 2008\Microsoft Practices Library and imported this specific data access DLL instead of the one I compiled myself from the Enterprise Library 3.1 installer, everything worked great.
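For anyone hitting the same manifest-mismatch error with a different root cause, the generic workaround is a binding redirect in the app's .config file. This is only a sketch: the publicKeyToken below is a placeholder that must be replaced with the token of the assembly you actually ship (readable with sn -T on the DLL):

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <!-- placeholder token: take the real value from the installed DLL -->
            <assemblyIdentity name="Microsoft.Practices.EnterpriseLibrary.Common"
                              publicKeyToken="0000000000000000" culture="neutral" />
            <bindingRedirect oldVersion="0.0.0.0-3.1.0.0" newVersion="3.1.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>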
WCSF Random assembly manifest definition does not match assembly ref in .NET 2.0
I'm running WCSF Feb 2008 along with Enterprise Library 3.1 and noticed that randomly I get the "fun" Could not load file or assembly Microsoft.Practices.EnterpriseLibrary.Common, Version=3.1.0.0, Culture=neutral, Public ... The located assembly's manifest definition does not match the assembly reference. Usually this wouldn't be worth mentioning on stackoverflow, but the strange thing is that the first time I fire this up it breaks, but if I close it down and simply hit F11 again - it works .... strange. Does anyone know why this might break sometimes, but not others?
[ "The problem was related to my version of the data access DLL I was adding. I found that if I went to the following:\n\nC:\\Program Files\\Microsoft Web Client\n Software Factory February\n 2008\\Microsoft Practices Library\n\nand imported this specific data access DLL instead of the one I compiled myself from the Enterprise Library 3.1 installer, everything worked great.\n" ]
[ 1 ]
[]
[]
[ "enterprise_library", "wcsf" ]
stackoverflow_0000087168_enterprise_library_wcsf.txt
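For readers hitting the same "manifest definition does not match" error without a matching DLL to swap in, the other standard remedy is a binding redirect in web.config or app.config. A rough sketch — the publicKeyToken below is a placeholder; print the real one with sn -T Microsoft.Practices.EnterpriseLibrary.Common.dll before using it:

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="Microsoft.Practices.EnterpriseLibrary.Common"
                              publicKeyToken="TOKEN-FROM-SN-T" culture="neutral" />
            <!-- Force every reference onto the one version actually deployed. -->
            <bindingRedirect oldVersion="0.0.0.0-3.1.0.0" newVersion="3.1.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>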
Q: Python implementation of Parsec? I recently wrote a parser in Python using Ply (it's a Python reimplementation of yacc). When I was almost done with the parser I discovered that the grammar I need to parse requires me to do some lookup during parsing to inform the lexer. Without doing a lookup to inform the lexer I cannot correctly parse the strings in the language. Given that I can control the state of the lexer from the grammar rules I think I'll be solving my use case using a lookup table in the parser module, but it may become too difficult to maintain/test. So I want to know about some of the other options. In Haskell I would use Parsec, a library of parsing functions (known as combinators). Is there a Python implementation of Parsec? Or perhaps some other production quality library full of parsing functionality so I can build a context sensitive parser in Python? EDIT: All my attempts at context free parsing have failed. For this reason, I don't expect ANTLR to be useful here. A: I believe that pyparsing is based on the same principles as Parsec. A: PySec is another monadic parser, I don't know much about it, but it's worth looking at here A: An option you may consider, if an LL parser is OK for you, is to give ANTLR a try, it can generate Python too (actually it is LL(*) as they name it, * stands for the quantity of lookahead it can cope with). A: Nothing prevents you from diverting your parser from the "context free" path using PLY. You can pass information to the lexer during parsing, and in this way achieve full flexibility. I'm pretty sure that you can parse anything you want with PLY this way. For a hands-on example, consider - it is a parser for ANSI C written in Python with PLY. It solves the classic C typedef - identifier problem (that makes C's grammar not context-free) by populating a symbol table in the parser that is being used in the lexer to resolve symbol names as either types or not. A: There's ANTLR, which is LL(*), there's PyParsing, which is more object friendly and is sort of like a DSL, and then there's Parsing which is like OCaml's Menhir. A: ANTLR is great and has the added benefit of working across multiple languages.
Python implementation of Parsec?
I recently wrote a parser in Python using Ply (it's a Python reimplementation of yacc). When I was almost done with the parser I discovered that the grammar I need to parse requires me to do some lookup during parsing to inform the lexer. Without doing a lookup to inform the lexer I cannot correctly parse the strings in the language. Given that I can control the state of the lexer from the grammar rules I think I'll be solving my use case using a lookup table in the parser module, but it may become too difficult to maintain/test. So I want to know about some of the other options. In Haskell I would use Parsec, a library of parsing functions (known as combinators). Is there a Python implementation of Parsec? Or perhaps some other production quality library full of parsing functionality so I can build a context sensitive parser in Python? EDIT: All my attempts at context free parsing have failed. For this reason, I don't expect ANTLR to be useful here.
[ "I believe that pyparsing is based on the same principles as parsec.\n", "PySec is another monadic parser, I don't know much about it, but it's worth looking at here\n", "An option you may consider, if an LL parser is ok to you, is to give ANTLR a try, it can generate python too (actually it is LL(*) as they name it, * stands for the quantity of lookahead it can cope with).\n", "Nothing prevents you for diverting your parser from the \"context free\" path using PLY. You can pass information to the lexer during parsing, and in this way achieve full flexibility. I'm pretty sure that you can parse anything you want with PLY this way.\nFor a hands-on example, consider - it is a parser for ANSI C written in Python with PLY. It solves the classic C typedef - identifier problem (that makes C's grammar non context-sensitive) by populating a symbol table in the parser that is being used in the lexer to resolve symbol names as either types or not.\n", "There's ANTLR, which is LL(*), there's PyParsing, which is more object friendly and is sort of like a DSL, and then there's Parsing which is like OCaml's Menhir.\n", "ANTLR is great and has the added benefit of working across multiple languages.\n" ]
[ 9, 6, 5, 2, 1, 0 ]
[]
[]
[ "combinators", "parsec", "parsing", "python" ]
stackoverflow_0000094952_combinators_parsec_parsing_python.txt
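To make the pyparsing suggestion concrete, here is a minimal, self-contained sketch; the grammar is invented for illustration and has nothing to do with the PLY code in the question:

    from pyparsing import Word, alphas, alphanums, Suppress, delimitedList

    # A tiny combinator-style grammar: an identifier followed by a
    # parenthesised, comma-separated argument list.
    ident = Word(alphas + "_", alphanums + "_")
    call = ident("name") + Suppress("(") + delimitedList(ident)("args") + Suppress(")")

    result = call.parseString("log(msg, level)")
    print(result["name"])        # -> log
    print(list(result["args"]))  # -> ['msg', 'level']

For the context-sensitive part of the question, pyparsing parse actions can consult and update external state (e.g. a symbol table) while the parse is running, much like the PLY trick described in the fourth answer.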
Q: For std::map, how will insert behave if it has to resize the container and the memory is not available? For std::map, how will insert behave if it has to resize the container and the memory is not available? A: STL map does not have to "resize" the container. map (just like list) is a node-based container; each insert allocates memory. That said, an out-of-memory situation is handled just like any other out-of-memory situation in C++: it throws a std::bad_alloc. STL containers with default allocators don't do anything fancy, they all end up allocating via standard new/delete operators somehow. In STL map's case, it will throw an exception and will otherwise behave as if it was not called. That is, the container will remain unmodified. A: New will throw an exception. Easy as that. The insert will not happen, and neither will the content of the dictionary be modified or corrupted. A: To expand on Nils' answer (yes, it will throw): what happens when it throws is sometimes confusing in the spec. In 17.2.2 of the specification (regarding maps / exceptions), if insert() throws, that function has no effect. This is a strong guarantee for map. This differs from containers using contiguous allocation like vector or deque.
For std::map, how will insert behave if it has to resize the container and the memory is not available?
For std::map, how will insert behave if it has to resize the container and the memory is not available?
[ "STL map does not have to \"resize\" container. map (just like list) is a node based container; each insert allocates memory.\nThat said, out of memory situation is handled just like any other out-of-memory situation in C++: it throws a std::bad_alloc. STL containers with default allocators don't do anything fancy, they all end up allocating via standard new/delete operators somehow.\nIn STL map's case, it will throw exception and will otherwise behave as if it was not called. That is, the container will remain unmodified.\n", "New will throw an exception. Easy as that.\nThe insert will not happen, and neither will the content of the dictionary be modified or corrupted.\n", "To expand on Nils answer (yes it will throw), but what happens when it throws is sometimes confusing in the spec.\nIn 17.2.2 of the specification (regarding maps / exceptions), if insert() throws, that function has no effect. This is a strong guarantee for map. This differs from containers using contiguous allocation like vector or deque.\n" ]
[ 6, 0, 0 ]
[]
[]
[ "insert", "stdmap" ]
stackoverflow_0000104483_insert_stdmap.txt
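A minimal sketch of the behaviour described above (nothing project-specific is assumed):

    #include <iostream>
    #include <map>
    #include <new>      // std::bad_alloc
    #include <string>

    int main() {
        std::map<int, std::string> m;
        try {
            m.insert(std::make_pair(1, std::string("one")));
        } catch (const std::bad_alloc&) {
            // The node allocation failed; thanks to insert()'s strong
            // guarantee the map is exactly as it was before the call.
            std::cerr << "out of memory, map still holds " << m.size() << " entries\n";
        }
        return 0;
    }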
Q: SWFAddress Deeplinks and C# library? Is there a C# class for interacting with SWFAddress deeplink URL strings (reading deeplink parameters, building SWFAddress URLs, etc.)? Planning to write one myself otherwise; but I wanted to make sure I wasn't reinventing the wheel first. A: If you're trying to read those deep linking URLs on the server side (which I assume you are), know that it's not possible. Those deep linking systems use the fragment part of URLs (the part that comes after the hash (#) symbol) for designating specific parts of the flash apps in the browser URL and fragments are not sent to the web server by browsers when making requests -- they're simply meant for browsers to be able to move to a certain part of the page by themselves. So in order to access full deep linking URLs, you'll have to write a client-side solution (e.g. with Javascript or AS3).
SWFAddress Deeplinks and C# library?
Is there a C# class for interacting with SWFAddress deeplink URL strings (reading deeplink parameters, building SWFAddress URLs, etc.)? Planning to write one myself otherwise; but I wanted to make sure I wasn't reinventing the wheel first.
[ "If you're trying to read those deep linking URLs on the server side (which I assume you are), know that it's not possible.\nThose deep linking systems use the fragment part of URLs (the part that comes after the hash (#) symbol) for designating specific parts of the flash apps in the browser URL and fragments are not sent to the web server by browsers when making requests -- they're simply meant for browsers to be able to move to a certain part of the page by themselves.\nSo in order to access full deep linking URLs, you'll have to write a client-side solution (e.g. with Javascript or AS3).\n" ]
[ 1 ]
[]
[]
[ "asp.net", "flash" ]
stackoverflow_0000077637_asp.net_flash.txt
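The client-side workaround the answer alludes to is short in JavaScript; the tracking URL is a made-up example of how you might forward the fragment to the server yourself:

    // The fragment never reaches the server, but script can read it...
    var hash = window.location.hash;          // e.g. "#/gallery/42"
    if (hash.length > 1) {
        var deepLink = hash.substring(1);     // drop the leading '#'
        // ...and hand it to the server explicitly if the server needs it:
        // new Image().src = "/track?path=" + encodeURIComponent(deepLink);
    }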
Q: Trialware/licensing strategies I wrote a utility for photographers that I plan to sell online pretty cheap ($10). I'd like to allow the user to try the software out for a week or so before asking for a license. Since this is a personal project and the software is not very expensive, I don't think that purchasing the services of professional licensing providers would be worth it and I'm rolling my own. Currently, the application checks for a registry key that contains an encrypted string that either specifies when the trial expires or that they have a valid license. If the key is not present, a trial period key is created. So all you would need to do to get another week for free is delete the registry key. I don't think many users would do that, especially when the app is only $10, but I'm curious if there's a better way to do this that is not onerous to the legitimate user. I write web apps normally and haven't dealt with this stuff before. The app is in .NET 2.0, if that matters. A: EDIT: You can make your current licensing scheme considerably more difficult to crack by storing the registry information in the Local Security Authority (LSA). Most users will not be able to remove your key information from there. A search for LSA on MSDN should give you the information you need. Opinions on licensing schemes vary with each individual, more among developers than specific user groups (such as photographers). You should take a deep breath and try to see what your target user would accept, given the business need your application will solve. This is my personal opinion on the subject. There will be vocal individuals that disagree. The answer to this depends greatly on how you expect your application to be used. If you expect the application to be used several times every day, you will benefit the most from a very long trial period (several months), to create a lock-in situation. For this to work you will have to have a grace period where the software alerts the user that payment will be needed soon. Before the grace period you will have greater success if the software is silent about the trial period. Whether or not you choose to believe in this quite bold statement is of course entirely up to you. But if you do, you should realize that the less often your application will be used, the shorter the trial period should be. It is also very important that payment is very quick and easy for the user (as little data entry and as few clicks as possible). If you are very uncertain about the usage of the application, you should choose a very short trial period. You will, in my experience, achieve better results if the application is silent about the fact that it is in a trial period in this case. Though effective for licensing purposes, "call home" features are regarded as a privacy threat by many people. Personally I disagree with the notion that this is in any way bad for a customer that is willing to pay for the software he/she is using. Therefore I suggest implementing a licensing scheme where the application checks the license status (trial, paid) on a regular basis, and helps the user pay for the software when it's time. This might be overkill for a small utility application, though. For very small, or even simple, utility applications, I argue that upfront payment without a trial period is the most effective. Regarding the security of the solution, you have to make it proportional to the development effort. 
In my line of work, security is very critical because there are partners and dealers involved, and because the investment made in development is very high. For a small utility application, it makes more sense to price it right and rely on the honest users that will pay for the software that addresses their business needs. A: There's not much point to doing complicated protection schemes. Basically one of two things will happen: Your app is not popular enough, and nobody cracks it. Your app becomes popular, someone cracks it and releases it, then anybody with zero knowledge can simply download that crack if they want to cheat you. In the case of #1, it's not worth putting a lot of effort into the scheme, because you might make one or two extra people buy your app. In the case of #2, it's not worth putting a lot of effort because someone will crack it anyway, and the effort will be wasted. Basically my suggestion is just do something simple, like you already are, and that's just as effective. People who don't want to cheat / steal from you will pay up, people who want to cheat you will do it regardless. A: If you are hosting your homepage on a server that you control, you could have the downloadable trial version of your software automatically compile to a new binary every night. This compile will replace a hardcoded datetime value in your program for when the software expires. That way the only way to "cheat" is to change the date on your computer, and most people won't do that because of the problems it will create. A: Try the Shareware Starter Kit. It was developed by Microsoft and may have some other features you want. http://msdn.microsoft.com/en-us/vs2005/aa718342.aspx A: If you are planning to continue developing your software, you might consider the ransom model: http://en.wikipedia.org/wiki/Street_Performer_Protocol Essentially, you develop improvements to the software, and then ask for a certain amount of donations before you release them (without any DRM). A: One way to do it that's easy for the user but not for you is to hard-code the expiry date and make new versions of the installer every now and then... :) If I were you though, I wouldn't make it any more advanced than what you're already doing. Like you say it's only $10, and if someone really wants to crack your system they will do it no matter how complicated you make it. You could do a slightly more advanced version of your scheme by requiring a net connection and letting a server generate the trial key. If you do something along the lines of sign(hash(unique_computer_id+when_to_expire)) and let the app check with a public key that your server has signed the expiry date, it should require a "real" hack to bypass. This way you can store the unique IDs server-side and refuse to generate an expiry date more than once or twice. Not sure what to use as the unique ID, but there should be some way to get something useful from Windows. A: I am facing the very same problem with an application I'm selling for a very low price as well. Besides obfuscating the app, I came up with a system that uses two keys in the registry, one of which is used to determine the time of installation, the other one the actual license key. The keys are named obscurely and a missing key indicates tampering with the installation. 
I figured it doesn't matter anyway, as someone who wants to crack the app will succeed in doing so, or find a crack by someone who succeeded in doing so. So in the end I'm only achieving the goal of making it not TOO easy to crack the application, and this is what, I guess, will stop 80-90% of the customers from doing so. And after all: as the application is sold for a very low price, there's no justification for me to invest any more time into this issue than I already have. Of course deleting both keys and reinstalling the application will start the evaluation time again. A: Just be cool about the license. Explain up front that this is your passion and a child of your labor. Give people a chance to do the right thing. If someone wants to pirate it, it will happen eventually. I still remember my despair seeing my books on BitTorrent, but it's something you have to just deal with. Don't cave to casual piracy (what you're doing now sounds great) but don't cripple the thing beyond that. I still believe that there are enough honest people out there to make a for-profit coding endeavor worthwhile. A: Don't have the evaluation based on "days since install"; instead do number of days used, or number of times run or something similar. People tend to download shareware, run it once or twice, and then forget it for a few weeks until they need it again. By then, the trial may have expired and so they've only had a few tries to get hooked on using your app, even though they've had it installed for a while. Number of activations/days instead lets them get into a habit of using your app for a task, and also makes a stronger sell (i.e. you've used this app 30 times...). Even better, limiting the features works better than timing out. For example, perhaps your photography app could limit the user to 1 megapixel images, but let them use it for as long as they want. Also, consider pricing your app at $20 (or $19.95). Unless there's already a micropayment setup in place (like the iPhone store or Xbox Live or something) people tend to have an aversion to buying things online below a certain price point (which is around $20 depending on the type of app), and people assume subconsciously that if something is inexpensive, it must not be very good. You can actually raise your conversion rate with a higher price (up to a point of course). A: In these sorts of circumstances, I don't really think it matters what you do. If you have some kind of protection it will stop 90% of your users. The other 10% - if they don't want to pay for your software they'll pretty much find a way around protection no matter what you do. If you want something a little less obvious you can put a file in System32 that sounds like a system file that the application checks the existence of on launch. That can be a little harder to track down.
Trialware/licensing strategies
I wrote a utility for photographers that I plan to sell online pretty cheap ($10). I'd like to allow the user to try the software out for a week or so before asking for a license. Since this is a personal project and the software is not very expensive, I don't think that purchasing the services of professional licensing providers would be worth it and I'm rolling my own. Currently, the application checks for a registry key that contains an encrypted string that either specifies when the trial expires or that they have a valid license. If the key is not present, a trial period key is created. So all you would need to do to get another week for free is delete the registry key. I don't think many users would do that, especially when the app is only $10, but I'm curious if there's a better way to do this that is not onerous to the legitimate user. I write web apps normally and haven't dealt with this stuff before. The app is in .NET 2.0, if that matters.
[ "EDIT: You can make your current licensing scheme considerable more difficult to crack by storing the registry information in the Local Security Authority (LSA). Most users will not be able to remove your key information from there. A search for LSA on MSDN should give you the information you need.\nOpinions on licensing schemes vary with each individual, more among developers than specific user groups (such as photographers). You should take a deep breath and try to see what your target user would accept, given the business need your application will solve.\nThis is my personal opinion on the subject. There will be vocal individuals that disagree.\nThe answer to this depends greatly on how you expect your application to be used. If you expect the application to be used several times every day, you will benefit the most from a very long trial period (several month), to create a lock-in situation. For this to work you will have to have a grace period where the software alerts the user that payment will be needed soon. Before the grace period you will have greater success if the software is silent about the trial period.\nWether or not you choose to believe in this quite bold statement is of course entirely up to you. But if you do, you should realize that the less often your application will be used, the shorter the trial period should be. It is also very important that payment is very quick and easy for the user (as little data entry and as few clicks as possible).\nIf you are very uncertain about the usage of the application, you should choose a very short trial period. You will, in my experience, achieve better results if the application is silent about the fact that it is in trial period in this case.\nThough effective for licensing purposes, \"Call home\" features is regarded as a privacy threat by many people. Personally I disagree with the notion that this is any way bad for a customer that is willing to pay for the software he/she is using. Therefore I suggest implementing a licensing scheme where the application checks the license status (trial, paid) on a regular basis, and helps the user pay for the software when it's time. This might be overkill for a small utility application, though.\nFor very small, or even simple, utility applications, I argue that upfront payment without trial period is the most effective.\nRegarding the security of the solution, you have to make it proportional to the development effort. In my line of work, security is very critical because there are partners and dealers involved, and because the investment made in development is very high. For a small utility application, it makes more sense to price it right and rely on the honest users that will pay for the software that address their business needs.\n", "There's not much point to doing complicated protection schemes. Basically one of two things will happen:\n\nYour app is not popular enough, and nobody cracks it.\nYour app becomes popular, someone cracks it and releases it, then anybody with zero knowledge can simply download that crack if they want to cheat you.\n\nIn the case of #1, it's not worth putting a lot of effort into the scheme, because you might make one or two extra people buy your app. In the case of #2, it's not worth putting a lot of effort because someone will crack it anyway, and the effort will be wasted.\nBasically my suggestion is just do something simple, like you already are, and that's just as effective. 
People who don't want to cheat / steal from you will pay up, people who want to cheat you will do it regardless.\n", "If you are hosting your homepage on a server that you control, you could have the downloadable trial-version of your software automatically compile to a new binary every night. This compile will replace a hardcoded datetime-value in your program for when the software expires. That way the only way to \"cheat\" is to change the date on your computer, and most people wont do that because of the problems that will create.\n", "Try the Shareware Starter Kit. It was developed my Microsoft and may have some other features you want.\nhttp://msdn.microsoft.com/en-us/vs2005/aa718342.aspx\n", "If you are planning to continue developing your software, you might consider the ransom model:\nhttp://en.wikipedia.org/wiki/Street_Performer_Protocol\nEssentially, you develop improvements to the software, and then ask for a certain amount of donations before you release them (without any DRM).\n", "One way to do it that's easy for the user but not for you is to hard-code the expiry date and make new versions of the installer every now and then... :)\nIf I were you though, I wouldn't make it any more advanced than what you're already doing. Like you say it's only $10, and if someone really wants to crack your system they will do it no matter how complicated you make it.\nYou could do a slightly more advanced version of your scheme by requiring a net connection and letting a server generate the trial key. If you do something along the lines of sign(hash(unique_computer_id+when_to_expire)) and let the app check with a public key that your server has signed the expiry date it should require a \"real\" hack to bypass.\nThis way you can store the unique id's serverside and refuse to generate a expiry date more than once or twice. Not sure what to use as the unique id, but there should be some way to get something useful from Windows.\n", "I am facing the very same problem with an application I'm selling for a very low price as well. \nBesides obfuscating the app, I came up with a system that uses two keys in the registry, one of which is used to determine that time of installation, the other one the actual license key. The keys are named obscurely and a missing key indicates tampering with the installation. \nOf course deleting both keys and reinstalling the application will start the evaluation time again.\nI figured it doesn't matter anyway, as someone who wants to crack the app will succeed in doing so, or find a crack by someone who succeeded in doing so. \nSo in the end I'm only achieving the goal of making it not TOO easy to crack the application, and this is what, I guess, will stop 80-90% of the customers from doing so. And afterall: as the application is sold for a very low price, there's no justification for me to invest any more time into this issue than I already have.\n", "just be cool about the license. explain up front that this is your passion and a child of your labor. give people a chance to do the right thing. if someone wants to pirate it, it will happen eventually. i still remember my despair seeing my books on bittorrent, but its something you have to just deal with. 
Don't cave to casual piracy (what you're doing now sounds great) but don't cripple the thing beyond that.\nI still believe that there are enough honest people out there to make a for-profit coding endeavor worth while.\n", "Don't have the evaluation based on \"days since install\", instead do number of days used, or number of times run or something similar. People tend to download shareware, run it once or twice, and then forget it for a few weeks until they need it again. By then, the trial may have expired and so they've only had a few tries to get hooked on using your app, even though they've had it installed for a while. Number of activation/days instead lets them get into a habit of using your app for a task, and also makes a stronger sell (i.e. you've used this app 30 times...).\nEven better, limiting the features works better than timing out. For example, perhaps your photography app could limit the user to 1 megapixel images, but let them use it for as long as they want. \nAlso, consider pricing your app at $20 (or $19.95). Unless there's already a micropayment setup in place (like iPhone store or XBoxLive or something) people tend to have an aversion to buying things online below a certain price point (which is around $20 depending on the type of app), and people assume subconciously if something is inexpensive, it must not be very good. You can actually raise your conversion rate with a higher price (up to a point of course). \n", "In these sort of circumstances, I don't really think it matters what you do. If you have some kind of protection it will stop 90% of your users. The other 10% - if they don't want to pay for your software they'll pretty much find a way around protection no matter what you do.\nIf you want something a little less obvious you can put a file in System32 that sounds like a system file that the application checks the existence of on launch. That can be a little harder to track down.\n" ]
[ 15, 11, 4, 3, 2, 2, 2, 2, 2, 1 ]
[]
[]
[ "licensing", "trialware" ]
stackoverflow_0000104291_licensing_trialware.txt
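As a sketch of the "count days actually used" idea from one of the answers — key names and the 14-day limit are invented, the code targets .NET 2.0, and it is deliberately simple rather than tamper-proof:

    using System;
    using Microsoft.Win32;

    class TrialCheck
    {
        // Counts distinct days of use instead of days since install.
        public static bool TrialExpired()
        {
            const int maxDays = 14;
            using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyPhotoTool"))
            {
                string today = DateTime.Today.ToString("yyyyMMdd");
                string lastUse = (string)key.GetValue("LastUse", "");
                int daysUsed = (int)key.GetValue("DaysUsed", 0);
                if (lastUse != today)
                {
                    key.SetValue("DaysUsed", ++daysUsed);
                    key.SetValue("LastUse", today);
                }
                return daysUsed > maxDays;
            }
        }
    }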
Q: How do you resolve Linked server references in SQL Database project in VS? In a Visual Studio SQL Server Database project, how can you resolve the errors associated with linked server references within the project? A: Try using synonyms for linked databases.
How do you resolve Linked server references in SQL Database project in VS?
In a Visual Studio SQL Server Database project, how can you resolve the errors associated with linked server references within the project?
[ "Trying using synonyms for linked databases.\n" ]
[ 2 ]
[]
[]
[ "database_project", "sql_server", "tfs", "visual_studio" ]
stackoverflow_0000101964_database_project_sql_server_tfs_visual_studio.txt
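Spelled out, the synonym suggestion looks like this (all object names are placeholders):

    -- Point a local name at the four-part linked-server name once...
    CREATE SYNONYM dbo.RemoteOrders
        FOR [LinkedServerName].[RemoteDb].[dbo].[Orders];

    -- ...then reference only the synonym, which the database project can resolve.
    SELECT TOP (10) * FROM dbo.RemoteOrders;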
Q: Mod-rewrites on apache: change all URLs Right now I'm doing something like this: RewriteRule ^/?logout(/)?$ logout.php RewriteRule ^/?config(/)?$ config.php I would much rather have one rule that would do the same thing for each URL, so I don't have to keep adding them every time I add a new file. Also, I'd like to match things like '/config/new' to 'config_new.php' if that is possible. I am guessing some regexp would let me accomplish this? A: Try: RewriteRule ^/?(\w+)/?$ $1.php the $1 is the content of the first captured string in brackets. The brackets around the 2nd slash are not needed. edit: For the other match, try this: RewriteRule ^/?(\w+)/(\w+)/?$ $1_$2.php A: I would do something like this: RewriteRule ^/?(logout|config|foo)/?$ $1.php RewriteRule ^/?(logout|config|foo)/(new|edit|delete)$ $1_$2.php I prefer to explicitly list the url's I want to match, so that I don't have to worry about static content or adding new things later that don't need to be rewritten to php files. The above is ok if all sub url's are valid for all root url's (book/new, movie/new, user/new), but not so good if you want to have different sub url's depending on root action (logout/new doesn't make much sense). You can handle that either with a more complex regex, or by routing everything to a single php file which will determine what files to include and display based on the url. A: Mod rewrite can't do (potentially) boundless replaces like you want to do in the second part of your question. But check out the External Rewriting Engine at the bottom of the Apache URL Rewriting Guide: External Rewriting Engine Description: A FAQ: How can we solve the FOO/BAR/QUUX/etc. problem? There seems no solution by the use of mod_rewrite... Solution: Use an external RewriteMap, i.e. a program which acts like a RewriteMap. It is run once on startup of Apache, receives the requested URLs on STDIN and has to put the resulting (usually rewritten) URL on STDOUT (same order!). RewriteEngine on RewriteMap quux-map prg:/path/to/map.quux.pl RewriteRule ^/~quux/(.*)$ /~quux/${quux-map:$1} #!/path/to/perl # disable buffered I/O which would lead # to deadloops for the Apache server $| = 1; # read URLs one per line from stdin and # generate substitution URL on stdout while (<>) { s|^foo/|bar/|; print $_; } This is a demonstration-only example and just rewrites all URLs /~quux/foo/... to /~quux/bar/.... Actually you can program whatever you like. But notice that while such maps can be used also by an average user, only the system administrator can define it.
Mod-rewrites on apache: change all URLs
Right now I'm doing something like this: RewriteRule ^/?logout(/)?$ logout.php RewriteRule ^/?config(/)?$ config.php I would much rather have one rule that would do the same thing for each URL, so I don't have to keep adding them every time I add a new file. Also, I'd like to match things like '/config/new' to 'config_new.php' if that is possible. I am guessing some regexp would let me accomplish this?
[ "Try:\nRewriteRule ^/?(\\w+)/?$ $1.php\nthe $1 is the content of the first captured string in brackets. The brackets around the 2nd slash are not needed.\nedit: For the other match, try this:\nRewriteRule ^/?(\\w+)/(\\w+)/?$ $1_$2.php\n", "I would do something like this:\nRewriteRule ^/?(logout|config|foo)/?$ $1.php\nRewriteRule ^/?(logout|config|foo)/(new|edit|delete)$ $1_$2.php\n\nI prefer to explicitly list the url's I want to match, so that I don't have to worry about static content or adding new things later that don't need to be rewritten to php files. \nThe above is ok if all sub url's are valid for all root url's (book/new, movie/new, user/new), but not so good if you want to have different sub url's depending on root action (logout/new doesn't make much sense). You can handle that either with a more complex regex, or by routing everything to a single php file which will determine what files to include and display based on the url.\n", "Mod rewrite can't do (potentially) boundless replaces like you want to do in the second part of your question. But check out the External Rewriting Engine at the bottom of the Apache URL Rewriting Guide:\n\nExternal Rewriting Engine\nDescription:\nA FAQ: How can we solve the FOO/BAR/QUUX/etc. problem? There seems no solution by the use of mod_rewrite...\n Solution:\nUse an external RewriteMap, i.e. a program which acts like a RewriteMap. It is run once on startup of Apache receives the requested URLs on STDIN and has to put the resulting (usually rewritten) URL on STDOUT (same order!).\nRewriteEngine on\nRewriteMap quux-map prg:/path/to/map.quux.pl\nRewriteRule ^/~quux/(.*)$ /~quux/${quux-map:$1}\n\n#!/path/to/perl\n\n# disable buffered I/O which would lead\n# to deadloops for the Apache server\n$| = 1;\n\n# read URLs one per line from stdin and\n# generate substitution URL on stdout\nwhile (<>) {\n s|^foo/|bar/|;\n print $_;\n}\n\nThis is a demonstration-only example and just rewrites all URLs /~quux/foo/... to /~quux/bar/.... Actually you can program whatever you like. But notice that while such maps can be used also by an average user, only the system administrator can define it.\n\n" ]
[ 2, 2, 1 ]
[]
[]
[ "apache", "mod_rewrite", "regex" ]
stackoverflow_0000104487_apache_mod_rewrite_regex.txt
Q: Can I customize a "Date Prompt" in Cognos8? I am working with Cognos8 Report Studio. In my report there are two date prompts: START date and END date. Users can select two different dates or make them both the same date. But the report has valid data only for the last business date of each month. For example, if Jan 31 is Sunday, valid data is available only for Jan 29 which is Friday (the last business day of the month). Can I have a customized "Date Prompt" where I can disable all other dates except the last business day of each month? Users should be able to select only month-end dates and no other dates? A: If I understand correctly your users can select different dates but each selection can only be the last business day of any month. So it could be start:29-JAN-2008 and end:30-MAR-2008 or same date start:29-JAN-2008 and end:29-JAN-2008. Why have days at all? Could you model your data to include a month/year field e.g. - "JAN 2008" and present that as a multi-select list box prompt? Are you sure your data source does not have a GL Accounting period field or dictionary that you can use? If that doesn't work than you'll have to try to calculate the last day of the month but then you may need to include any business holidays in your particular jurisdiction because the last weekday of the month is not neccessarily the last business day of the month. A: I don't believe you can customize the standard calendar date prompt in Cognos in the way that you are describing, and quick search of the Cognos knowledgebase didn't uncover any documents. However, it seems that the easiest way to provide a user-friendly prompt would be to just have a simple drop-down value prompt with the month/year combination, since there is only one valid date choice per month.
Can I customize a "Date Prompt" in Cognos8?
I am working with Cognos8 Report Studio. In my report there are two date prompts: START date and END date. Users can select two different dates or make them both the same date. But the report has valid data only for the last business date of each month. For example, if Jan 31 is Sunday, valid data is available only for Jan 29 which is Friday (the last business day of the month). Can I have a customized "Date Prompt" where I can disable all other dates except the last business day of each month? Users should be able to select only month-end dates and no other dates?
[ "If I understand correctly your users can select different dates but each selection can only be the last business day of any month. So it could be start:29-JAN-2008 and end:30-MAR-2008 or same date start:29-JAN-2008 and end:29-JAN-2008.\nWhy have days at all? Could you model your data to include a month/year field e.g. - \"JAN 2008\" and present that as a multi-select list box prompt? Are you sure your data source does not have a GL Accounting period field or dictionary that you can use?\nIf that doesn't work than you'll have to try to calculate the last day of the month but then you may need to include any business holidays in your particular jurisdiction because the last weekday of the month is not neccessarily the last business day of the month.\n", "I don't believe you can customize the standard calendar date prompt in Cognos in the way that you are describing, and quick search of the Cognos knowledgebase didn't uncover any documents. However, it seems that the easiest way to provide a user-friendly prompt would be to just have a simple drop-down value prompt with the month/year combination, since there is only one valid date choice per month.\n" ]
[ 1, 1 ]
[]
[]
[ "cognos", "date", "prompt", "reporting" ]
stackoverflow_0000062865_cognos_date_prompt_reporting.txt
Q: Is it possible in W3C's XML Schema language (XSD) to allow a series of elements to be in any order but still limit occurrences? I know about all and choice, but they don't account for a case where I do want some elements to be able to occur more than once, such as: <Root> <ThingA/> <ThingB/> <ThingC/> <ThingC/> <ThingC/> </Root> I could use sequence, but I'd prefer to allow these children to be in any order. I could use any, but then I couldn't have more than one ThingC. I could use choice, but then I couldn't limit ThingA and ThingB to 0 or 1. I think I may have read somewhere that this was either difficult or impossible in XSD, but might be possible with RELAX NG. I don't remember where I read that, unfortunately. Thanks for any help! A: That's right: you can't do what you want to do in XML Schema, but you can in RELAX NG with: <element name="Root"> <interleave> <element name="ThingA"><empty /></element> <element name="ThingB"><empty /></element> <oneOrMore><element name="ThingC"><empty /></element></oneOrMore> </interleave> </element> Your options in XML Schema are: add a preprocessing step that normalises your input XML into a particular order, and then use <xs:sequence> use <xs:choice>, and add extra validation (for example using Schematron) to check that there's not more than one <ThingA> or <ThingB> decide to fix the order of the elements in your markup language It turns out that the third is usually the best option; there's usually not much cost for generators of XML to output elements in a particular order, and not only does it help validation but it also aids consumption of the XML if the order can be known in advance.
Is it possible in W3C's XML Schema language (XSD) to allow a series of elements to be in any order but still limit occurrences?
I know about all and choice, but they don't account for a case where I do want some elements to be able to occur more than once, such as: <Root> <ThingA/> <ThingB/> <ThingC/> <ThingC/> <ThingC/> </Root> I could use sequence, but I'd prefer to allow these children to be in any order. I could use any, but then I couldn't have more than one ThingC. I could use choice, but then I couldn't limit ThingA and ThingB to 0 or 1. I think I may have read somewhere that this was either difficult or impossible in XSD, but might be possible with RELAX NG. I don't remember where I read that, unfortunately. Thanks for any help!
[ "That's right: you can't do what you want to do in XML Schema, but you can in RELAX NG with:\n<element name=\"Root\">\n <interleave>\n <element name=\"ThingA\"><empty /></element>\n <element name=\"ThingB\"><empty /></element>\n <oneOrMore><element name=\"ThingC\"><empty /></element></oneOrMore>\n </interleave>\n</element>\n\nYour options in XML Schema are:\n\nadd a preprocessing step that normalises your input XML into a particular order, and then use <xs:sequence>\nuse <xs:choice>, and add extra validation (for example using Schematron) to check that there's not more than one <ThingA> or <ThingB>\ndecide to fix the order of the elements in your markup language\n\nIt turns out that the third is usually the best option; there's usually not much cost for generators of XML to output elements in a particular order, and not only does it help validation but it also aids consumption of the XML if the order can be known in advance.\n" ]
[ 6 ]
[]
[]
[ "schema", "xml", "xsd" ]
stackoverflow_0000104248_schema_xml_xsd.txt
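For comparison with the RELAX NG sample in the answer, its second XML Schema option — a repeated xs:choice, which allows any order but cannot by itself cap ThingA and ThingB at one occurrence — looks like this:

    <xs:element name="Root">
      <xs:complexType>
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element name="ThingA"/>
          <xs:element name="ThingB"/>
          <xs:element name="ThingC"/>
        </xs:choice>
      </xs:complexType>
    </xs:element>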
Q: How to remove published wmi schema? I've published schema, and no longer have the dll's that contained the wmi provider that the schema was published from. How can I remove the schema? A: If you are talking about the assembly from your other question, you can simply use wbemtest.exe: Connect to Root namespace Enum instances... button (Superclass name: __Namespace) Delete instance named Test or MyTest That will delete the entire namespace including all the classes you created. If you want to delete a class and leave the namespace Connect to Root\Test Enum classes... button (Recursive) Delete the classes you want If there are multiple machines this can be automated using WMI scripting library or System.Management. With MOF you can use #pragma deleteclass. If the schema was created with #pragma autorecover you need to remove the entry from HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WBEM\CIMOM\autorecover mofs
How to remove published wmi schema?
I've published schema, and no longer have the dll's that contained the wmi provider that the schema was published from. How can I remove the schema?
[ "If you are talking about the assembly from your other question, you can simply use wbemtest.exe:\n\nConnect to Root namespace\nEnum instances... button (Superclass\nname: __Namespace)\nDelete instance named Test or MyTest\n\nThat will delete the entire namespace including all the classes you created. If you want to delete a class and leave the namespace\n\nConnect to Root\\Test\nEnum classes... button (Recursive)\nDelete the classes you want\n\nIf there are multiple machines this can be automated using WMI scripting library or System.Management. With MOF you can use #pragma deleteclass. If the schema was created with #pragma autorecover you need to remove the entry from\nHKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\WBEM\\CIMOM\\autorecover mofs\n" ]
[ 4 ]
[]
[]
[ "wmi" ]
stackoverflow_0000104188_wmi.txt
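The #pragma deleteclass route from the answer takes a small MOF file compiled with mofcomp; the class name below is an example matching the Root\Test namespace mentioned above:

    // delete-schema.mof -- run with: mofcomp delete-schema.mof
    #pragma namespace("\\\\.\\Root\\Test")
    #pragma deleteclass("MyProviderClass", NOFAIL)   // NOFAIL: no error if the class is already gone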
Q: Is there a way to force a style to a div element which already has a style="" attribute I'm trying to skin HTML output which I don't have control over. One of the elements is a div with a style="overflow: auto" attribute. Is there a way in CSS to force that div to use overflow: hidden;? A: You can add !important to the end of your style, like this: element { overflow: hidden !important; } This is something you should not rely on normally, but in your case that's the best option. Changing the value in Javascript strays from the best practice of separating markup, presentation, and behavior (html/css/javascript). A: Have you tried setting !important in the CSS file? Something like: #mydiv { overflow: hidden !important; } A: Not sure if this would work or not, haven't tested it with overflow. overflow:hidden !important maybe? A: If the div has an inline style declaration, the only way to modify it without changing the source is with JavaScript. Inline style attributes 'win' every time in CSS. A: Magnar is correct as explained by the W3C spec pasted below. Seems the !important keyword was added to allow users to override even "baked in" style settings at the element level. Since you are in the situation where you do not have control over the html this may be your best option, though it would not be a normal design pattern. W3C CSS Specs Excerpt: 6.4.2 !important rules CSS attempts to create a balance of power between author and user style sheets. By default, rules in an author's style sheet override those in a user's style sheet (see cascade rule 3). However, for balance, an "!important" declaration (the keywords "!" and "important" follow the declaration) takes precedence over a normal declaration. Both author and user style sheets may contain "!important" declarations, and user "!important" rules override author "!important" rules. This CSS feature improves accessibility of documents by giving users with special requirements (large fonts, color combinations, etc.) control over presentation. Note. This is a semantic change since CSS1. In CSS1, author "!important" rules took precedence over user "!important" rules. Declaring a shorthand property (e.g., 'background') to be "!important" is equivalent to declaring all of its sub-properties to be "!important". Example(s): The first rule in the user's style sheet in the following example contains an "!important" declaration, which overrides the corresponding declaration in the author's styles sheet. The second declaration will also win due to being marked "!important". However, the third rule in the user's style sheet is not "!important" and will therefore lose to the second rule in the author's style sheet (which happens to set style on a shorthand property). Also, the third author rule will lose to the second author rule since the second rule is "!important". This shows that "!important" declarations have a function also within author style sheets. /* From the user's style sheet */ P { text-indent: 1em ! important } P { font-style: italic ! important } P { font-size: 18pt } /* From the author's style sheet */ P { text-indent: 1.5em !important } P { font: 12pt sans-serif !important } P { font-size: 24pt } A: As far as I know, styles on the actual HTML elements override anything you can do in separate CSS style. You can, however, use Javascript to override it.
Is there a way to force a style to a div element which already has a style="" attribute
I'm trying to skin HTML output which I don't have control over. One of the elements is a div with a style="overflow: auto" attribute. Is there a way in CSS to force that div to use overflow: hidden;?
[ "You can add !important to the end of your style, like this:\nelement {\n overflow: hidden !important;\n}\n\nThis is something you should not rely on normally, but in your case that's the best option. Changing the value in Javascript strays from the best practice of separating markup, presentation, and behavior (html/css/javascript).\n", "Have you tried setting !important in the CSS file? Something like:\n#mydiv { overflow: hidden !important; }\n\n", "Not sure if this would work or not, haven't tested it with overflow.\noverflow:hidden !important\n\nmaybe?\n", "If the div has an inline style declaration, the only way to modify it without changing the source is with JavaScript. Inline style attributes 'win' every time in CSS.\n", "Magnar is correct as explained by the W3C spec pasted below. Seems the !important keyword was added to allow users to override even \"baked in\" style settings at the element level. Since you are in the situation where you do not have control over the html this may be your best option, though it would not be a normal design pattern.\nW3C CSS Specs\nExcerpt:\n\n6.4.2 !important rules\n CSS attempts to create a balance of power between author and user style\n sheets. By default, rules in an\n author's style sheet override those in\n a user's style sheet (see cascade rule\n 3). \nHowever, for balance, an \"!important\" declaration (the keywords\n\n\"!\" and \"important\" follow the\n declaration) takes precedence over a\n normal declaration. Both author and\n user style sheets may contain\n \"!important\" declarations, and user\n \"!important\" rules override author\n \"!important\" rules. This CSS feature\n improves accessibility of documents by\n giving users with special requirements\n (large fonts, color combinations,\n etc.) control over presentation. \nNote. This is a semantic change since CSS1. In CSS1, author\n\n\"!important\" rules took precedence\n over user \"!important\" rules. \nDeclaring a shorthand property (e.g., 'background') to be\n\n\"!important\" is equivalent to\n declaring all of its sub-properties to\n be \"!important\". \nExample(s):\n\nThe first rule in the user's style sheet in the following example\n\ncontains an \"!important\" declaration,\n which overrides the corresponding\n declaration in the author's styles\n sheet. The second declaration will\n also win due to being marked\n \"!important\". However, the third rule\n in the user's style sheet is not\n \"!important\" and will therefore lose\n to the second rule in the author's\n style sheet (which happens to set\n style on a shorthand property). Also,\n the third author rule will lose to the\n second author rule since the second\n rule is \"!important\". This shows that\n \"!important\" declarations have a\n function also within author style\n sheets. \n/* From the user's style sheet */\nP { text-indent: 1em ! important }\nP { font-style: italic ! important }\nP { font-size: 18pt }\n\n/* From the author's style sheet */\nP { text-indent: 1.5em !important }\nP { font: 12pt sans-serif !important }\nP { font-size: 24pt }\n\n\n", "As far as I know, styles on the actual HTML elements override anything you can do in separate CSS style.\nYou can, however, use Javascript to override it.\n" ]
[ 89, 14, 6, 2, 2, 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0000104485_css_html.txt
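If CSS alone loses to the inline style and you do fall back to script, the override mentioned in the answers is short; the element id is hypothetical, and the 'important' priority flag of setProperty was not supported by old IE versions:

    var div = document.getElementById("theDiv");
    div.style.overflow = "hidden";  // inline assignment beats the stylesheet
    // If another script keeps resetting it, pin it with the priority flag:
    div.style.setProperty("overflow", "hidden", "important");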
Q: Specific Client Detection based on headers. Firefox extension? I have a website in which I want to be able to detect a certain user based upon a permanent attribute of a specific user. My original plan was to use an IP address but those are difficult to maintain since they can change frequently. Cookies and sessions are almost out of the question because they expire and tend to be difficult to manipulate. Basically what I want to be able to do is detect if the current client visiting the website is a special user without having to deal with logins / passwords. To use something more permanent. The user agent plugin could work but then, if I ever upgrade Firefox or whatever I would have to go in and manually update the user agent string. I found this script: https://addons.mozilla.org/en-US/firefox/addon/6895 but it doesn't work for the newest version of Firefox 3. It would be a perfect solution because it sends special headers at specific websites. Short of writing my own extension does anyone have ideas of what to do? Do I need an extension? Should I try to write my own? A: You could generate an SSL client certificate, and have your users install it. From then on, their browser would identify them using their certificate. HOWTO: Securing A Website With Client SSL Certificates SSL and Certificates (IIS 6.0)
Specific Client Detection based on headers. Firefox extension?
I have a website in which I want to be able to detect a certain user based upon a permanent attribute of a specific user. My original plan was to use an IP address but those are difficult to maintain since they can change frequently. Cookies and sessions are almost out of the question because they expire and tend to be difficult to manipulate. Basically what I want to be able to do is detect if the current client visiting the website is a special user without having to deal with logins / passwords. To use something more permanent. The user agent plugin could work but then, if I ever upgrade Firefox or whatever I would have to go in and manually update the user agent string. I found this script: https://addons.mozilla.org/en-US/firefox/addon/6895 but it doesn't work for the newest version of Firefox 3. It would be a perfect solution because it sends special headers at specific websites. Short of writing my own extension does anyone have ideas of what to do? Do I need an extension? Should I try to write my own?
[ "You could generate a SSL client certificate, and have your users install it. From then on, their browser would identify them using their certificate.\n\nHOWTO: Securing A Website With Client SSL Certificates\nSSL and Certificats (IIS 6.0)\n\n" ]
[ 1 ]
[]
[]
[ "firefox", "http_headers" ]
stackoverflow_0000104665_firefox_http_headers.txt
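On the server side, the client-certificate approach from the answer boils down to a few Apache directives; the paths and location are placeholders and the details vary with your setup:

    SSLEngine on
    # CA that issued the certificates handed to your special users
    SSLCACertificateFile /etc/ssl/ca/my-ca.crt
    <Location /special>
        SSLVerifyClient require
        SSLVerifyDepth 1
    </Location>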
Q: ReportViewer displaying black background in Print Layout mode In my ReportViewer control, when I click on Print Layout, the background turns black on the report. This must be a bug. Is there a workaround? A: Microsoft was already aware of this issue, its KB is located here. You can solve the problem by installing Cumulative Update 1 (build 3161), which can be requested for download through the following Microsoft page If you can wait a little while more I think the fix will come out with SQL Server 2005 SP3
ReportViewer displaying black background in Print Layout mode
In my ReportViewer control, when I click on Print Layout, the background turns black on the report. This must be a bug. Is there a workaround?
[ "Microsoft was already aware of this issue, its KB is located here.\nYou can solve the problem by installing Cumulative Update 1 (build 3161), which can be requested for download through the following Microsoft page\nIf you can wait a little while more I think the fix will come out with SQL Server 2005 SP3\n" ]
[ 7 ]
[]
[]
[ "reporting_services", "reportviewer", "visual_studio" ]
stackoverflow_0000103688_reporting_services_reportviewer_visual_studio.txt
Q: What was the first version of MS Office to officially support Unicode? I am doing some research on Unicode for a white-paper I am writing. Does anyone remember the first version of MS Office on the Windows platform that was fully Unicode compliant? Not having much luck Googling this answer out of the net. A: office 97: "The universal character set provided by Unicode overcomes this problem. Office 97 was the first version of Office to support Unicode in all applications except Microsoft Access and Microsoft Outlook®. In Office 2000, Access and Microsoft Publisher gain Unicode support. Microsoft FrontPage® 2000 also supports Unicode on Web pages, but text typed into dialog boxes and other elements of the user interface are limited to characters defined by the user’s code page." - Microsoft url: http://office.microsoft.com/en-us/ork2000/HA011382921033.aspx A: Further to DanWoolston's answer, Microsoft Outlook 2003 was the first Outlook version to offer full Unicode support, so depending on your definition of 'Office' (seeing as there are so many different editions), your answer might be Microsoft Office 2003. URL: http://office.microsoft.com/en-us/ork2003/HA011402611033.aspx
What was the first version of MS Office to officially support Unicode?
I am doing some research on Unicode for a white-paper I am writing. Does anyone remember the first version of MS Office on the Windows platform that was fully Unicode compliant? Not having much luck Googling this answer out of the net.
[ "office 97:\n\n\"The universal character set provided by Unicode overcomes this problem. Office 97 was the first version of Office to support Unicode in all applications except Microsoft Access and Microsoft Outlook®. In Office 2000, Access and Microsoft Publisher gain Unicode support. Microsoft FrontPage® 2000 also supports Unicode on Web pages, but text typed into dialog boxes and other elements of the user interface are limited to characters defined by the user’s code page.\" - Microsoft\n\nurl:\nhttp://office.microsoft.com/en-us/ork2000/HA011382921033.aspx\n", "Further to DanWoolston's answer, Microsoft Outlook 2003 was the first Outlook version to offer full Unicode support, so depending on your definition of 'Office' (seeing as there are so many different editions), your answer might be Microsoft Office 2003.\nURL: http://office.microsoft.com/en-us/ork2003/HA011402611033.aspx\n" ]
[ 2, 1 ]
[]
[]
[ "ms_office", "unicode" ]
stackoverflow_0000104661_ms_office_unicode.txt
Q: Why won't my package upgrade with yum? I'm trying to upgrade a package using yum on Fedora 8. The package is elfutils. Here's what I have installed locally: $ yum info elfutils Installed Packages Name : elfutils Arch : x86_64 Version: 0.130 Release: 3.fc8 Size : 436 k Repo : installed Summary: A collection of utilities and DSOs to handle compiled objects There's a bug in this version, and according to the bug report, a newer version has been pushed to the Fedora 8 stable repository. But, if I try to update: $ yum update elfutils Setting up Update Process Could not find update match for elfutils No Packages marked for Update Here are my repositories: $ yum repolist enabled repo id repo name status InstallMedia Fedora 8 enabled fedora Fedora 8 - x86_64 enabled updates Fedora 8 - x86_64 - Updates enabled What am I missing? A: OK, I figured it out. I needed to upgrade the fedora-release package. That allowed me to see all of the updated packages. Thanks to ethyreal for pointing me to the Yum upgrade FAQ. A: I know this seems silly, but did you try removing it and reinstalling? yum remove elfutils then yum install elfutils Alternatively, you could try updating everything: yum update ...if there is no update marked in the repository, you might try: yum upgrade A: If you look at the listing of the repository packages directory at Link to Fedora Repository You will see that you have the latest version in that directory, which is why yum is not upgrading your package. This is the same in both the i386 and x86_64 package directories. So the reason that you are not seeing an update is that there is not a more current version in the repository yet. The notification in the bug report that a new version is in the repository is incorrect.
Why won't my package upgrade with yum?
I'm trying to upgrade a package using yum on Fedora 8. The package is elfutils. Here's what I have installed locally: $ yum info elfutils Installed Packages Name : elfutils Arch : x86_64 Version: 0.130 Release: 3.fc8 Size : 436 k Repo : installed Summary: A collection of utilities and DSOs to handle compiled objects There's a bug in this version, and according to the bug report, a newer version has been pushed to the Fedora 8 stable repository. But, if I try to update: $ yum update elfutils Setting up Update Process Could not find update match for elfutils No Packages marked for Update Here are my repositories: $ yum repolist enabled repo id repo name status InstallMedia Fedora 8 enabled fedora Fedora 8 - x86_64 enabled updates Fedora 8 - x86_64 - Updates enabled What am I missing?
[ "OK, I figured it out. I needed to upgrade the fedora-release package. That allowed me to see all of the updated packages. Thanks to ethyreal for pointing me to the Yum upgrade FAQ.\n", "i know this seems silly but did you try removing it and reinstalling?\nyum remove elfutils\n\nthen\nyum install elfutils\n\nalternatively you could try updating everything:\nyum update\n\n...if their is no update marked in the repository you might try:\nyum upgrade\n\n", "If you look at the listing of the repository packages directory at\nLink to Fedora Repository\nYou will see that you have the latest version in that directory, which is why yum is not upgrading your package. This is the same in both the i386 and x86_64 package directories. So the reason that you are not seeing an update is that there is not a more current version in the repository yet. The notification in the bug report that a new version is in the repository is incorrect.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "fedora", "linux", "yum" ]
stackoverflow_0000104363_fedora_linux_yum.txt
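As a quick way to script the check the answers above do by hand, here is a small Python sketch that leans on yum's exit-code convention for check-update (exit status 100 when updates are available). That convention matches the yum of this era, but treat the sketch as illustrative rather than as part of the original answers.

import subprocess
import sys

def update_available(package: str) -> bool:
    # "yum check-update <pkg>" exits 100 when an update is available,
    # 0 when there is none, and 1 on error.
    result = subprocess.run(
        ["yum", "check-update", package],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True,
    )
    if result.returncode == 100:
        return True
    if result.returncode == 0:
        return False
    raise RuntimeError("yum failed: " + result.stderr.strip())

if __name__ == "__main__":
    pkg = sys.argv[1] if len(sys.argv) > 1 else "elfutils"
    print(pkg, "update available?", update_available(pkg))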
Q: Wrong 'Local Path' in DCOM Config entry I have a component in DCOM Config whose 'Local Path' (on the General tab of the Properties page for that component in dcomcnfg) is pointing to the wrong place. However, when I go to that directory and unregister the component using "componentname.exe /unregserver", the Local Path for that component remains unchanged. I've also tried going to the correct directory and registering the component there, using "componentname.exe /regserver", but the value in 'Local Path' still doesn't change. Any suggestions? A: Sounds to me like componentname.exe is not using the ProgID/GUID that you think it's using. Either that or its register/unregister commands aren't working. Do you have the source? You could step through the registration routine and see what it's doing.
Wrong 'Local Path' in DCOM Config entry
I have a component in DCOM Config whose 'Local Path' (on the General tab of the Properties page for that component in dcomcnfg) is pointing to the wrong place. However, when I go to that directory and unregister the component using "componentname.exe /unregserver", the Local Path for that component remains unchanged. I've also tried going to the correct directory and registering the component there, using "componentname.exe /regserver", but the value in 'Local Path' still doesn't change. Any suggestions?
[ "Sounds to me like that componentname.exe is not using the ProgID/GUID that you think it's using. Either that or its register/unregister commands aren't working. Do you have the source? You could step through the registration routine and see what it's doing.\n" ]
[ 1 ]
[]
[]
[ "dcom" ]
stackoverflow_0000102649_dcom.txt
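To test the ProgID/GUID theory from the answer above without stepping through the registration code, you can read the LocalServer32 path that COM actually has on record for a CLSID and compare it with what dcomcnfg shows. A small Windows-only Python sketch; the GUID below is a placeholder for your component's real CLSID.

import winreg

# Placeholder - substitute the CLSID your component is supposed to use.
CLSID = "{00000000-0000-0000-0000-000000000000}"

def local_server_path(clsid: str) -> str:
    # The default value of CLSID\{...}\LocalServer32 is the EXE path
    # COM will actually launch; compare it with dcomcnfg's Local Path.
    key_path = r"CLSID\%s\LocalServer32" % clsid
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, key_path) as key:
        value, _value_type = winreg.QueryValueEx(key, "")
        return value

if __name__ == "__main__":
    print(local_server_path(CLSID))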
Q: Web application to user instant messaging What options are available for receiving instant alerts from web applications? I have a time-sensitive web application I need to tend to (approving expedited purchase order requests). I have thought of being notified by e-mail and SMS. Are there any programs to let my website send a popup window directly to my screen? Or any other instant notification options? A: Include instant messenger fields in your account details. When your application processes an expedited order, stick the notification in a queue. If their instant messenger field is blank, prompt them. Write an XMPP (Jabber) bot that consumes items in that queue and sends them out. Use transports to enable functionality for other networks, for MSN Messenger users, etc. A: You could always have it send messages via Jabber (or whatever IM network you choose). A: If you have your application open and you want a popup to appear, you could have a javascript timer that does an ajax style poll of your server every so often to see if there is a notification it needs to post. You could then throw up a pop up with the notification? A: You could also have an RSS feed tying into the application that would publish an update as soon as something occurred. Granted that would require you to have an RSS reader constantly pinging the application. Similarly, if you use the log4j/log4net framework, you can log a "fatal" or other special event for these actions and configure the appender to notify you via SMS, e-mail, etc. ...ooh, come to think of it. If you are using a .NET based app, log4net has a NetSendAppender that "Writes logging events to the Windows Messenger service. These messages are displayed in a dialog on a users terminal." A: The question isn't really specific enough to answer. From a high level point of view, presumably this notification would be sent from the server process and not client side via JavaScript. Without knowing the server side development language I can't say what specific libraries might be available, however there is the PhpTocLib PHP library available on SourceForge although I'm not sure how current that is. Another option would be to sign up for twitter or a similar service and send yourself a direct message when you have an update of interest. This could be accomplished with a simple HTTP POST from your application and you could receive notification using whatever methods the various microblogging services offer, including SMS.
Web application to user instant messaging
What options are available for receiving instant alerts from web applications? I have a time-sensitive web application I need to tend to (approving expedited purchase order requests). I have thought of being notified by e-mail and SMS. Are there any programs to let my website send a popup window directly to my screen? Or any other instant notification options?
[ "Include instant messenger fields in your account details. When your application processes an expedited order, stick the notification in a queue. If their instant messenger field is blank, prompt them. Write an XMPP (Jabber) bot that consumes items in that queue and send them out. Use transports to enable functionality for other networks, for MSN Messenger users, etc.\n", "You could always have it send messages via Jabber (or whatever IM network you choose).\n", "If you have your application open and you want a popup to appear, you could have a javascript timer that does an ajax style poll of your server every so often to see if there is a notification it needs to post. You could then throw up a pop up with the notification?\n", "You could also have an RSS feed tying into the application that would publish an update as soon as something occurred. Granted that would require you to have an RSS reader constantly pinging the application.\nSimilarly, if you use the log4j/log4net framework, you can log a \"fatal\" or other special event for these actions and configure the appender to notify you via SMS, e-mail, etc.\n...ooh, come to think of it. If you are using a .NET based app, log4net has a NetSendAppender that \"Writes logging events to the Windows Messenger service. These messages are displayed in a dialog on a users terminal.\"\n", "The question isn't really specific enough to answer. From a high level point of view, presumably this notification would be sent from the server process and not client side via JavaScript. Without knowing the server side development language I can't say what specific libraries might be available, however there is the PhpTocLib PHP library available on SourceForge although I'm not sure how current that is.\nAnother option would be to sign up for twitter or a similar service and send yourself a direct message when you have an update of interest. This could be accomplished with a simple HTTP POST from your application and you could receive notification using whatever methods the various microblogging services offer, including SMS.\n" ]
[ 2, 1, 1, 0, 0 ]
[]
[]
[ "messaging" ]
stackoverflow_0000104733_messaging.txt
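A minimal sketch of the notification-queue idea from the first answer, in Python. SMTP e-mail stands in for the XMPP bot (a real Jabber client would need a third-party library, so this plainly swaps the transport), and the SMTP host, addresses, and message contents are all illustrative placeholders.

import queue
import smtplib
import threading
from email.message import EmailMessage

notifications = queue.Queue()  # the web app puts messages here

def notify_worker():
    # Drains the queue and mails each item; host and addresses are
    # placeholders for whatever your environment actually uses.
    while True:
        text = notifications.get()
        msg = EmailMessage()
        msg["From"] = "orders@example.invalid"
        msg["To"] = "approver@example.invalid"
        msg["Subject"] = "Expedited purchase order awaiting approval"
        msg.set_content(text)
        with smtplib.SMTP("localhost", 25) as smtp:
            smtp.send_message(msg)
        notifications.task_done()

if __name__ == "__main__":
    threading.Thread(target=notify_worker, daemon=True).start()
    notifications.put("PO #1234 is waiting for approval")
    notifications.join()  # wait until the worker has sent everything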
Q: Disappearing checkbox label in ASP.Net 3.5 DetailsView control I have a web form with a button and a DetailsView control on it. In the button's click event I change the DetailsView control to insert mode so I can add records: DetailsView1.ChangeMode(DetailsViewMode.Insert) Everything works fine, except for a checkbox in the DetailsView. When the DetailsView goes into insert mode, the text describing what the checkbox is for disappears. The checkbox itself works fine. Why is my text disappearing and how can I fix it? A: I was able to fix my problem by changing it to a template field. Not sure why it wouldn't work the other way. A: Is the text in a Label that is in the item template? If so, you'd need to add it to the Edit Item Template. Also check that the width of the control is wide enough for all the controls and text. It may be getting hidden due to absolute positioning. A: Thanks for the quick reply. The text isn't in a template. It's just a CheckBoxField with the Text property set to "Active": I've tried widening the field and the DetailsView control, but the text still disappears when I click the button.
Disappearing checkbox label in ASP.Net 3.5 DetailsView control
I have a web form with a button and a DetailsView control on it. In the button's click event I change the DetailsView control to insert mode so I can add records: DetailsView1.ChangeMode(DetailsViewMode.Insert) Everything works fine, except for a checkbox in the DetailsView. When the DetailsView goes into insert mode, the text describing what the checkbox is for disappears. The checkbox itself works fine. Why is my text disappearing and how can I fix it?
[ "I was able to fix my problem by changing it to a template field. Not sure why it wouldn't work the other way. \n", "Is the text in a Label that is in the item template? If so, you'd need to add it to the Edit Item Template.\nAlso check that the width of the control is wide enough for all the controls and text. It may be getting hidden due to absolute positioning.\n", "Thanks for the quick reply. The text isn't in a template. It's just a CheckBoxField with the Text property set to \"Active\":\n\nI've tried widening the field and the DetailsView control, but the text still disappears when I click the button. \n" ]
[ 2, 1, 1 ]
[]
[]
[ "asp.net", "checkbox", "detailsview", "webforms" ]
stackoverflow_0000103708_asp.net_checkbox_detailsview_webforms.txt
Q: Remote Linux server to remote linux server dir copy. How? What is the best way to copy a directory (with sub-dirs and files) from one remote Linux server to another remote Linux server? I have connected to both using SSH client (like Putty). I have root access to both. A: There are two ways I usually do this, both use ssh: scp -r sourcedir/ [email protected]:/dest/dir/ or, the more robust and faster (in terms of transfer speed) method: rsync -auv -e ssh --progress sourcedir/ [email protected]:/dest/dir/ Read the man pages for each command if you want more details about how they work. A: I would modify a previously suggested reply: rsync -avlzp /path/to/sfolder [email protected]:/path/to/remote/dfolder as follows: -a (for archive) implies -rlptgoD so the l and p above are superfluous. I also like to include -H, which copies hard links. It is not part of -a by default because it's expensive. So now we have this: rsync -aHvz /path/to/sfolder [email protected]:/path/to/remote/dfolder You also have to be careful about trailing slashes. You probably want rsync -aHvz /path/to/sfolder/ [email protected]:/path/to/remote/dfolder if the desire is for the contents of the source "sfolder" to appear in the destination "dfolder". Without the trailing slash, an "sfolder" subdirectory would be created in the destination "dfolder". A: rsync -avlzp /path/to/folder [email protected]:/path/to/remote/folder A: scp -r <directory> <username>@<targethost>:<targetdir> A: Log in to one machine $ scp -r /path/to/top/directory user@server:/path/to/copy A: Use rsync so that you can continue if the connection gets broken. And if something changes you can copy them much faster too! Rsync works with SSH so your copy operation is secure. A: Try unison if the task is recurring. http://www.cis.upenn.edu/~bcpierce/unison/ A: Check out scp or rsync, man scp man rsync scp file1 file2 dir3 user@remotehost:path A: I used rdiffbackup http://www.nongnu.org/rdiff-backup/index.html because it does all you need without any fancy options. It's based on the rsync algorithm. If you only need to copy one time, you can later remove the rdiff-backup-data directory on the destination host. rdiff-backup user1@host1::/source-dir user2@host2::/dest-dir from the doc: rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, modification times, extended attributes, acls, and resource forks. which is a bonus over the scp -p proposals, as the -p option does not preserve all (e.g. rights on directories are set badly) install on ubuntu: sudo apt-get install rdiff-backup A: Well, the quick answer would be to take a look at the 'scp' manpage, or perhaps rsync - depending exactly on what you need to copy. If you had to, you could even do tar-over-ssh: tar cvf - | ssh server tar xf - A: I think you can try with: rsync -azvu -e ssh user@host1:/directory/ user@host2:/directory2/ (and I assume you are on host0 and you want to copy from host1 to host2 directly) If the above does not work, you could try: ssh user@host1 "/usr/bin/rsync -azvu -e ssh /directory/ user@host2:/directory2/" In this case, it would work if you have already set up passwordless SSH login from host1 to host2 A: scp will do the job, but there is one wrinkle: the connection to the second remote destination will use the configuration on the first remote destination, so if you use .ssh/config on the local environment, and you expect rsa and dsa keys to work, you have to forward your agent to the first remote host. A: As non-root user ideally: scp -r src $host:$path If you already have some of the content on $host, consider using rsync with ssh as a tunnel. /Allan A: If you are serious about wanting an exact copy, you probably also want to use the -p switch to scp, if you're using that. I've found that scp reads from devices, and I've had problems with cpio, so I personally always use tar, like this: cd /origin; find . -xdev -depth -not -path ./lost+found -print0 \ | tar --create --atime-preserve=system --null --files-from=- --format=posix \ --no-recursion --sparse | ssh targethost 'cd /target; tar --extract \ --overwrite --preserve-permissions --sparse' I keep this incantation around in a file with various other means of copying files around. This one is for copying over SSH; the other ones are for copying to a compressed archive, for copying within the same computer, and for copying over an unencrypted TCP socket when SSH is too slow. A: scp as mentioned above is usually the best way, but don't forget the colon in the remote directory spec, otherwise you'll get a copy of the source directory on the local machine. A: I like to pipe tar through ssh. tar cf - [directory] | ssh [username]@[hostname] tar xf - -C [destination on remote box] This method gives you lots of options. Since you should have root ssh disabled, copying files for multiple user accounts is hard, since you are logging into the remote server as a normal user. To get around this you can create a tar file on the remote box that preserves ownership. tar cf - [directory] | ssh [username]@[hostname] "cat > output.tar" For slow connections you can add compression, z for gzip or j for bzip2. tar cjf - [directory] | ssh [username]@[hostname] "cat > output.tar.bz2" tar czf - [directory] | ssh [username]@[hostname] "cat > output.tar.gz" tar czf - [directory] | ssh [username]@[hostname] tar xzf - -C [destination on remote box]
Remote Linux server to remote linux server dir copy. How?
What is the best way to copy a directory (with sub-dirs and files) from one remote Linux server to another remote Linux server? I have connected to both using SSH client (like Putty). I have root access to both.
[ "There are two ways I usually do this, both use ssh:\nscp -r sourcedir/ [email protected]:/dest/dir/\n\nor, the more robust and faster (in terms of transfer speed) method:\nrsync -auv -e ssh --progress sourcedir/ [email protected]:/dest/dir/\n\nRead the man pages for each command if you want more details about how they work.\n", "I would modify a previously suggested reply:\nrsync -avlzp /path/to/sfolder [email protected]:/path/to/remote/dfolder\n\nas follows:\n-a (for archive) implies -rlptgoD so the l and p above are superfluous. I also like to include -H, which copies hard links. It is not part of -a by default because it's expensive. So now we have this:\nrsync -aHvz /path/to/sfolder [email protected]:/path/to/remote/dfolder\n\nYou also have to be careful about trailing slashes. You probably want\nrsync -aHvz /path/to/sfolder/ [email protected]:/path/to/remote/dfolder\n\nif the desire is for the contents of the source \"sfolder\" to appear in the destination \"dfolder\". Without the trailing slash, an \"sfolder\" subdirectory would be created in the destination \"dfolder\".\n", "rsync -avlzp /path/to/folder [email protected]:/path/to/remote/folder\n", "scp -r <directory> <username>@<targethost>:<targetdir>\n\n", "Log in to one machine\n\n$ scp -r /path/to/top/directory user@server:/path/to/copy\n\n", "Use rsync so that you can continue if the connection gets broken. And if something changes you can copy them much faster too!\nRsync works with SSH so your copy operation is secure.\n", "Try unison if the task is recurring.\nhttp://www.cis.upenn.edu/~bcpierce/unison/\n", "Check out scp or rsync, \nman scp\nman rsync\nscp file1 file2 dir3 user@remotehost:path\n\n", "I used rdiffbackup http://www.nongnu.org/rdiff-backup/index.html because it does all you need without any fancy options. It's based on the rsync algorithm.\nIf you only need to copy one time, you can later remove the rdiff-backup-data directory on the destination host.\nrdiff-backup user1@host1::/source-dir user2@host2::/dest-dir\n\nfrom the doc:\n\nrdiff-backup also preserves \n subdirectories, hard links, dev files,\n permissions, uid/gid ownership, \n modification times, extended\n attributes, acls, and resource forks.\n\nwhich is an bonus to the scp -p proposals as the -p option does not preserve all (e.g. rights on directories are set badly)\ninstall on ubuntu:\nsudo apt-get install rdiff-backup\n\n", "Well, quick answer would to take a look at the 'scp' manpage, or perhaps rsync - depending exactly on what you need to copy. 
If you had to, you could even do tar-over-ssh:\ntar cvf - | ssh server tar xf -\n\n", "I think you can try with:\nrsync -azvu -e ssh user@host1:/directory/ user@host2:/directory2/\n\n(and I assume you are on host0 and you want to copy from host1 to host2 directly)\nIf the above does not work, you could try:\nssh user@host1 \"/usr/bin/rsync -azvu -e ssh /directory/ user@host2:/directory2/\"\n\nin the this, it would work, if you already have setup passwordless SSH login from host1 to host2\n", "scp will do the job, but there is one wrinkle: the connection to the second remote destination will use the configuration on the first remote destination, so if you use .ssh/config on the local environment, and you expect rsa and dsa keys to work, you have to forward your agent to the first remote host.\n", "As non-root user ideally:\nscp -r src $host:$path\nIf you already some of the content on $host consider using rsync with ssh as a tunnel.\n/Allan\n", "If you are serious about wanting an exact copy, you probably also want to use the -p switch to scp, if you're using that. I've found that scp reads from devices, and I've had problems with cpio, so I personally always use tar, like this:\ncd /origin; find . -xdev -depth -not -path ./lost+found -print0 \\\n| tar --create --atime-preserve=system --null --files-from=- --format=posix \\\n--no-recursion --sparse | ssh targethost 'cd /target; tar --extract \\\n--overwrite --preserve-permissions --sparse'\n\nI keep this incantation around in a file with various other means of copying files around. This one is for copying over SSH; the other ones are for copying to a compressed archive, for copying within the same computer, and for copying over an unencrypted TCP socket when SSH is too slow.\n", "scp as mentioned above is usually a best way, but don't forget colon in the remote directory spec otherwise you'll get copy of source directory on local machine.\n", "I like to pipe tar through ssh. \ntar cf - [directory] | ssh [username]@[hostname] tar xf - -C [destination on remote box]\nThis method gives you lots of options. Since you should have root ssh disabled copying files for multiple user accounts is hard since you are logging into the remote server as a normal user. To get around this you can create a tar file on the remote box that still hold that preserves ownership.\ntar cf - [directory] | ssh [username]@[hostname] \"cat > output.tar\"\nFor slow connections you can add compression, z for gzip or j for bzip2.\ntar cjf - [directory] | ssh [username]@[hostname] \"cat > output.tar.bz2\"\ntar czf - [directory] | ssh [username]@[hostname] \"cat > output.tar.gz\"\ntar czf - [directory] | ssh [username]@[hostname] tar xzf - -C [destination on remote box]\n" ]
[ 69, 38, 9, 6, 5, 3, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "data_transfer", "linux" ]
stackoverflow_0000069411_data_transfer_linux.txt
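Since several answers recommend rsync specifically because it can resume a broken transfer, here is a small Python wrapper that simply re-runs the rsync-over-ssh command until it succeeds. The flags mirror the answers above; the retry policy and the paths are assumptions.

import subprocess
import time

def rsync_with_retry(src, dest, attempts=5):
    # --partial keeps half-transferred files so a re-run can resume them.
    cmd = ["rsync", "-aHvz", "--partial", "-e", "ssh", src, dest]
    for attempt in range(1, attempts + 1):
        if subprocess.run(cmd).returncode == 0:
            return
        print("rsync attempt %d failed; retrying..." % attempt)
        time.sleep(5)
    raise RuntimeError("rsync did not complete after %d attempts" % attempts)

if __name__ == "__main__":
    rsync_with_retry("/path/to/sfolder/", "user@host:/path/to/dfolder")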
Q: Is GnuPG compatible with McAfee eBusiness Server 7.1? Right now, we're using PGP command line 9.0. Does anybody know if GnuPG will work? It'd be a lot cheaper. EDIT: Theoretically, GnuPG/PGP/McAfee eBusiness Server should be able to interoperate. In practice, you pretty much just have to test to see. We did not make GnuPG work with McAfee eBusiness Server. A: I've never used McAfee eBusiness Server specifically, but the entire point of GnuPG was to provide Free Software that implemented the OpenPGP spec. Unless McAfee is for some hideously obnoxious reason mandating specific ciphers, there shouldn't be a problem. Note that if some components are going to be checking a key with PGP, and some with GnuPG, you may want to doublecheck the interoperability FAQ question for GnuPG, as you may, in fact, have to limit your cipher and compression algorithms or signature versions. That FAQ is discussing a much older version of PGP, so it may actually no longer be an issue.
Is GnuPG compatible with McAfee eBusiness Server 7.1?
Right now, we're using PGP command line 9.0. Does anybody know if GnuPG will work? It'd be a lot cheaper. EDIT: Theoretically, GnuPG/PGP/McAfee eBusiness Server should be able to interoperate. In practice, you pretty much just have to test to see. We did not make GnuPG work with McAfee eBusiness Server.
[ "I've never used McAfee eBusiness Server specifically, but the entire point of GnuPG was to provide Free Software that implemented the OpenPGP spec. Unless McAfee is for some hideously obnoxious reason mandating specific ciphers, there shouldn't be a problem.\nNote that if some components are going to be checking a key with PGP, and some with GnuPG, you may want to doublecheck the interoperability FAQ question for GnuPG, as you may, in fact, have to limit your cipher and compression algorithms or signature versions. That FAQ is discussing a much older version of PGP, so it may actually no longer be an issue.\n" ]
[ 2 ]
[]
[]
[ "gnupg", "mcafee", "pgp" ]
stackoverflow_0000103831_gnupg_mcafee_pgp.txt
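One way to run the interoperability test the answer recommends is to pin GnuPG to older, widely implemented algorithms and hand the output to the PGP/eBusiness Server side to decrypt. A Python sketch of that smoke test; the recipient key ID is a placeholder, and note that forcing --cipher-algo can override the recipient key's stated preferences (gpg will warn about this).

import subprocess

def encrypt_conservatively(path, recipient):
    # Pin old, broadly supported algorithms so the PGP side has the
    # best chance of decrypting the result.
    out = path + ".gpg"
    subprocess.run(
        ["gpg", "--batch", "--yes",
         "--cipher-algo", "CAST5",
         "--compress-algo", "ZIP",
         "--recipient", recipient,
         "--output", out,
         "--encrypt", path],
        check=True,
    )
    return out

if __name__ == "__main__":
    # Key ID is a placeholder for the eBusiness Server recipient's key.
    print(encrypt_conservatively("test.txt", "ABCD1234"))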
Q: Suggestions on how to add the functionality to import Finale music files on an application? I'm working on a music writing application and would like to add the functionality to import Finale music files. Right now, the only thing I know is that they are enigma binary files. Does anyone have any suggestions on where I could start so that I could be able to parse through these types of files? A: Finale files are not just binary files, but compressed, encrypted binary files. ETF files are text files and do have some documentation in older versions of the Finale plug-in developer kit. But ETF export was removed from Finale several versions ago. As was previously suggested, your best bet is to import MusicXML files instead. This will give you higher-quality imports in much less development time. MusicXML support is built into Finale since 2006, PrintMusic since 2006, Allegro and Songwriter since 2007, and will be coming to NotePad and Reader in 2009. Plug-ins are available that export MusicXML files from Finale all the way back to 2000 on Windows, 2004 on Mac OS X PPC, and 2007 on Mac OS X Intel. The MusicXML support in Finale has been under development for nearly 10 years and provides a near-lossless export of Finale files into an open, standard, royalty-free format. MusicXML is supported by over 150 programs, so by adding MusicXML support you not only get Finale file support, but support for files originally created with Sibelius, capella, Encore, or (via PDFtoMusic Pro) any program that can print a PDF version of a musical score. There is lots of information about MusicXML at http://www.makemusic.com/musicxml. This includes the MusicXML DTD and XSD, a tutorial,sample files, and more. There is also a MusicXML developer mailing list available for signup at http://www.makemusic.com/musicxml/mailing-list. MusicXML has a lot of features, so do not try to tackle all of it at once. Start off supporting the basics of pitches and rhythms, then add more and more features over time based on what your customers need. A: Get a good hex editor and start looking inside some files. Look for common structure. Do some detective work. Look for fields that might be counts, sizes or offsets within the file. Make trivial changes in Finale and observe the changes in the file. Make changes with the hex editor, then load the changed file back into Finale and see if the change does what you thought it would. So this is a completely unhelpful answer, but the best way to reverse the file-format is to jump in and just do it. You're probably in for a very long process BTW, but at least it's fun. Oh, and pray the file-format isn't compressed... A: I don't know about the older .mus files, but the newer .eft files are partially described here: http://www.lilypond.org/web/devel/misc/etfformat. A: I would look into the MusicXml format, http://www.recordare.com/xml.html. Finale should have the ability to export to MusicXml. (I think it is with a plug-in shipped with newer versions of Finale). From there, it should be relatively straightforward, because it is xml, after all.
Suggestions on how to add the functionality to import Finale music files on an application?
I'm working on a music writing application and would like to add the functionality to import Finale music files. Right now, the only thing I know is that they are enigma binary files. Does anyone have any suggestions on where I could start so that I could be able to parse through these types of files?
[ "Finale files are not just binary files, but compressed, encrypted binary files. ETF files are text files and do have some documentation in older versions of the Finale plug-in developer kit. But ETF export was removed from Finale several versions ago.\nAs was previously suggested, your best bet is to import MusicXML files instead. This will give you higher-quality imports in much less development time. MusicXML support is built into Finale since 2006, PrintMusic since 2006, Allegro and Songwriter since 2007, and will be coming to NotePad and Reader in 2009. Plug-ins are available that export MusicXML files from Finale all the way back to 2000 on Windows, 2004 on Mac OS X PPC, and 2007 on Mac OS X Intel. The MusicXML support in Finale has been under development for nearly 10 years and provides a near-lossless export of Finale files into an open, standard, royalty-free format.\nMusicXML is supported by over 150 programs, so by adding MusicXML support you not only get Finale file support, but support for files originally created with Sibelius, capella, Encore, or (via PDFtoMusic Pro) any program that can print a PDF version of a musical score.\nThere is lots of information about MusicXML at http://www.makemusic.com/musicxml. This includes the MusicXML DTD and XSD, a tutorial,sample files, and more. There is also a MusicXML developer mailing list available for signup at http://www.makemusic.com/musicxml/mailing-list.\nMusicXML has a lot of features, so do not try to tackle all of it at once. Start off supporting the basics of pitches and rhythms, then add more and more features over time based on what your customers need.\n", "Get a good hex editor and start looking inside some files. Look for common structure. Do some detective work. Look for fields that might be counts, sizes or offsets within the file. Make trivial changes in Finale and observe the changes in the file. Make changes with the hex editor, then load the changed file back into Finale and see if the change does what you thought it would.\nSo this is a completely unhelpful answer, but the best way to reverse the file-format is to jump in and just do it. You're probably in for a very long process BTW, but at least it's fun.\nOh, and pray the file-format isn't compressed...\n", "I don't know about the older .mus files, but the newer .eft files are partially described here:\nhttp://www.lilypond.org/web/devel/misc/etfformat.\n", "I would look into the MusicXml format, http://www.recordare.com/xml.html.\nFinale should have the ability to export to MusicXml. (I think it is with a plug-in shipped with newer versions of Finale). From there, it should be relatively straightforward, because it is xml, after all.\n" ]
[ 4, 3, 2, 1 ]
[]
[]
[ "parsing" ]
stackoverflow_0000098711_parsing.txt
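Following the advice above to start with pitches and rhythms, here is a minimal Python sketch that walks an uncompressed MusicXML (score-partwise) file and prints each note. It covers only the basics; ties, chords, voices, and the compressed .mxl container are deliberately out of scope, and the input filename is hypothetical.

import xml.etree.ElementTree as ET

def dump_notes(path):
    # Expects an uncompressed score-partwise MusicXML file; the
    # compressed .mxl container is a zip archive and is not handled here.
    root = ET.parse(path).getroot()
    for part in root.iter("part"):
        for measure in part.iter("measure"):
            for note in measure.findall("note"):
                duration = note.findtext("duration", default="?")
                if note.find("rest") is not None:
                    print("rest, duration", duration)
                    continue
                step = note.findtext("pitch/step", default="?")
                octave = note.findtext("pitch/octave", default="?")
                alter = note.findtext("pitch/alter")
                accidental = {"1": "#", "-1": "b"}.get(alter, "")
                print("%s%s%s, duration %s" % (step, accidental, octave, duration))

if __name__ == "__main__":
    dump_notes("score.xml")  # hypothetical file exported via MusicXML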
Q: Using multiple web projects with different languages in Visual Studio I need to combine a VB web project and a C# web project and have them run alongside each other in the same web root. For instance, I need to be able to navigate to localhost:1234/vbProjPage.aspx and then redirect to localhost:1234/cSharpProjPage.aspx. Is this possible from within Visual Studio 2008? I know you have the ability to create a web site and throw everything into the root, but it would be best in this scenario to keep each project separate from each other. UPDATE: To answer Wes' question, it is possible but not desirable to change paths like that (/vb/vbPage.aspx & /cs/csPage.aspx) UPDATE: Travis suggested using sub-web projects. This link explains how to do it, but the solution involves putting a project inside of a project, which is exactly what I am trying to avoid. I need the projects physically separated. A: You can do this using sub-web projects. This has been available in Visual Studio since 2005 and works with the Web Application Project style of web site. ScottGu has a great blog entry describing the process. You may face some interesting challenges getting pages to commingle in the same folder, but the sub-web project structure should still lend you some ideas. A: I don't think you'll be able to have two separate projects, but intermixing them within one project isn't a problem. You could always organize the files into folders to keep things separate if you felt the need. A: Just out of curiosity, would changing your url paths be out of the question? Instead of 1234/vbpage and 1234/cspage how about something like 1234/vb/page and 1234/cs/page ? I know you said same web root, but I'm just curious :) A: You could use URL rewriting with a filter to look for "cs" or "vb" at the beginning of each file and direct it to the appropriate directory.
Using multiple web projects with different languages in Visual Studio
I need to combine a VB web project and a C# web project and have them run alongside each other in the same web root. For instance, I need to be able to navigate to localhost:1234/vbProjPage.aspx and then redirect to localhost:1234/cSharpProjPage.aspx. Is this possible from within Visual Studio 2008? I know you have the ability to create a web site and throw everything into the root, but it would be best in this scenario to keep each project separate from each other. UPDATE: To answer Wes' question, it is possible but not desirable to change paths like that (/vb/vbPage.aspx & /cs/csPage.aspx) UPDATE: Travis suggested using sub-web projects. This link explains how to do it, but the solution involves putting a project inside of a project, which is exactly what I am trying to avoid. I need the projects physically separated.
[ "You can do this using sub-web projects. This has been available in Visual Studio since 2005 and works with the Web Application Project style of web site. ScottGu has a great blog entry describing the process. You may face some interesting challenges getting pages to commingle in the same folder, but the sub-web project structure should still lend you some ideas.\n", "I don't think you'll be able to have two seperate projects but intermixing them within one project isn't a problem. You could always organize the files into folder to keep things seperate if you felt the need.\n", "Just out of curiosity, would changing your url paths be out of the question? Instead of 1234/vbpage and 1234/cspage how about something like 1234/vb/page and 1234/cs/page ? I know you said same web root, but I'm just curious :)\n", "you could use URL re-writting with a filter to look for \"cs\" or \"vb\" at the begining of each file and direct it to the appropriate directory.\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "c#", "ide", "vb.net", "visual_studio" ]
stackoverflow_0000104133_c#_ide_vb.net_visual_studio.txt
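The URL-rewriting suggestion in the last answer is really just prefix-based dispatch. The sketch below illustrates that routing idea as a tiny Python WSGI dispatcher rather than an IIS/ASP.NET filter, so the mechanics are visible; it is not how Visual Studio would actually wire the two projects together.

from wsgiref.simple_server import make_server

def make_app(label):
    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        text = "served by the %s project: %s" % (label, environ["PATH_INFO"])
        return [text.encode()]
    return app

# Two independent "projects", selected purely by URL prefix.
SUB_APPS = {"/cs": make_app("C#"), "/vb": make_app("VB")}

def dispatcher(environ, start_response):
    path = environ.get("PATH_INFO", "")
    for prefix, app in SUB_APPS.items():
        if path.startswith(prefix + "/"):
            return app(environ, start_response)
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"no project owns this path"]

if __name__ == "__main__":
    make_server("", 1234, dispatcher).serve_forever()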
Q: Automating interaction with an HTML game using C#? There are some HTML-based games (e.g. bootleggers.us) that have a simple login form, and after that your entire game experience revolves around submitting various forms and reading information from the website itself. My question is, what is the best way to go about writing a bot to automate the HTML-based game using C#? My initial thought is to use the System.Net.HttpRequest and WebRequest classes to get the source HTML and parse it using regexes to get the desired information. However, I would like to avoid this if it is at all possible. Are there any solutions that abstract away some of this and make automating website interaction easier? I.e., filling out forms, submitting forms, reading values from the website, etc.? Some library?
Automating interaction with an HTML game using C#?
There are some HTML-based games (e.g. bootleggers.us) that have a simple login form, and after that your entire game experience revolves around submitting various forms and reading information from the website itself. My question is, what is the best way to go about writing a bot to automate the HTML-based game using C#? My initial thought is to use the System.Net.HttpRequest and WebRequest classes to get the source HTML and parse it using regexes to get the desired information. However, I would like to avoid this if it is at all possible. Are there any solutions that abstract away some of this and make automating website interaction easier? I.e., filling out forms, submitting forms, reading values from the website, etc.? Some library?
[ "You could use Watin: http://watin.sourceforge.net/\n", "No, you're pretty much going to have to \"screen scrape\" every page. You might consider writing most this in JavaScript instead of C#. Depending on the HTML of the game site, this could be more or less difficult depending on whether they provide good id attributes on the page elements, etc...\n" ]
[ 3, 0 ]
[]
[]
[ "automation", "c#" ]
stackoverflow_0000104878_automation_c#.txt
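The pattern the question describes (log in once, keep the session, post forms, scrape values) looks like this when sketched in Python with the third-party requests library instead of C#'s WebRequest. Every URL and form-field name below is hypothetical; read the real ones out of the game's HTML forms.

import re
import requests

BASE = "http://game.example.invalid"  # placeholder for the game's URL

with requests.Session() as session:
    # One Session keeps the login cookie across all later requests.
    session.post(BASE + "/login.php",
                 data={"username": "me", "password": "secret"})
    page = session.get(BASE + "/status.php").text

    # Crude scrape; a real HTML parser is more robust than a regex.
    match = re.search(r"Cash:\s*\$([\d,]+)", page)
    cash = int(match.group(1).replace(",", "")) if match else 0
    print("cash on hand:", cash)

    if cash > 1000:
        session.post(BASE + "/bank.php",
                     data={"action": "deposit", "amount": str(cash)})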
Q: Default Printer in Unmanaged C++ I'm looking for a way to find the name of the Windows default printer using unmanaged C++ (found plenty of .NET examples, but no success unmanaged). Thanks. A: Here is how to get a list of current printers and the default one if there is one set as the default. Also note: getting zero for the default printer name length is valid if the user has no printers or has not set one as default. Also being able to handle long printer names should be supported so calling GetDefaultPrinter with NULL as a buffer pointer first will return the name length and then you can allocate a name buffer big enough to hold the name. DWORD numprinters; DWORD defprinter=0; DWORD dwSizeNeeded=0; DWORD dwItem; LPPRINTER_INFO_2 printerinfo = NULL; // Get buffer size EnumPrinters ( PRINTER_ENUM_LOCAL | PRINTER_ENUM_CONNECTIONS , NULL, 2, NULL, 0, &dwSizeNeeded, &numprinters ); // allocate memory //printerinfo = (LPPRINTER_INFO_2)HeapAlloc ( GetProcessHeap (), HEAP_ZERO_MEMORY, dwSizeNeeded ); printerinfo = (LPPRINTER_INFO_2)new char[dwSizeNeeded]; if ( EnumPrinters ( PRINTER_ENUM_LOCAL | PRINTER_ENUM_CONNECTIONS, // what to enumerate NULL, // printer name (NULL for all) 2, // level (LPBYTE)printerinfo, // buffer dwSizeNeeded, // size of buffer &dwSizeNeeded, // returns size &numprinters // return num. items ) == 0 ) { numprinters=0; } { DWORD size=0; // Get the size of the default printer name. GetDefaultPrinter(NULL, &size); if(size) { // Allocate a buffer large enough to hold the printer name. TCHAR* buffer = new TCHAR[size]; // Get the printer name. GetDefaultPrinter(buffer, &size); for ( dwItem = 0; dwItem < numprinters; dwItem++ ) { if(!strcmp(buffer,printerinfo[dwItem].pPrinterName)) defprinter=dwItem; } delete buffer; } } A: The following works great for printing with the win32api from C++ char szPrinterName[255]; unsigned long lPrinterNameLength; GetDefaultPrinter( szPrinterName, &lPrinterNameLength ); HDC hPrinterDC; hPrinterDC = CreateDC("WINSPOOL\0", szPrinterName, NULL, NULL); In the future instead of googling "unmanaged" try googling "win32 /subject/" or "win32 api /subject/" A: How to retrieve and set the default printer in Windows: http://support.microsoft.com/default.aspx?scid=kb;EN-US;246772 Code from now-unavailable article: // You are explicitly linking to GetDefaultPrinter because linking // implicitly on Windows 95/98 or NT4 results in a runtime error. // This block specifies which text version you explicitly link to. #ifdef UNICODE #define GETDEFAULTPRINTER "GetDefaultPrinterW" #else #define GETDEFAULTPRINTER "GetDefaultPrinterA" #endif // Size of internal buffer used to hold "printername,drivername,portname" // string. You may have to increase this for huge strings. #define MAXBUFFERSIZE 250 /*----------------------------------------------------------------*/ /* DPGetDefaultPrinter */ /* */ /* Parameters: */ /* pPrinterName: Buffer alloc'd by caller to hold printer name. */ /* pdwBufferSize: On input, ptr to size of pPrinterName. */ /* On output, min required size of pPrinterName. */ /* */ /* NOTE: You must include enough space for the NULL terminator! */ /* */ /* Returns: TRUE for success, FALSE for failure. 
*/ /*----------------------------------------------------------------*/ BOOL DPGetDefaultPrinter(LPTSTR pPrinterName, LPDWORD pdwBufferSize) { BOOL bFlag; OSVERSIONINFO osv; TCHAR cBuffer[MAXBUFFERSIZE]; PRINTER_INFO_2 *ppi2 = NULL; DWORD dwNeeded = 0; DWORD dwReturned = 0; HMODULE hWinSpool = NULL; PROC fnGetDefaultPrinter = NULL; // What version of Windows are you running? osv.dwOSVersionInfoSize = sizeof(OSVERSIONINFO); GetVersionEx(&osv); // If Windows 95 or 98, use EnumPrinters. if (osv.dwPlatformId == VER_PLATFORM_WIN32_WINDOWS) { // The first EnumPrinters() tells you how big our buffer must // be to hold ALL of PRINTER_INFO_2. Note that this will // typically return FALSE. This only means that the buffer (the 4th // parameter) was not filled in. You do not want it filled in here. SetLastError(0); bFlag = EnumPrinters(PRINTER_ENUM_DEFAULT, NULL, 2, NULL, 0, &dwNeeded, &dwReturned); { if ((GetLastError() != ERROR_INSUFFICIENT_BUFFER) || (dwNeeded == 0)) return FALSE; } // Allocate enough space for PRINTER_INFO_2. ppi2 = (PRINTER_INFO_2 *)GlobalAlloc(GPTR, dwNeeded); if (!ppi2) return FALSE; // The second EnumPrinters() will fill in all the current information. bFlag = EnumPrinters(PRINTER_ENUM_DEFAULT, NULL, 2, (LPBYTE)ppi2, dwNeeded, &dwNeeded, &dwReturned); if (!bFlag) { GlobalFree(ppi2); return FALSE; } // If specified buffer is too small, set required size and fail. if ((DWORD)lstrlen(ppi2->pPrinterName) >= *pdwBufferSize) { *pdwBufferSize = (DWORD)lstrlen(ppi2->pPrinterName) + 1; GlobalFree(ppi2); return FALSE; } // Copy printer name into passed-in buffer. lstrcpy(pPrinterName, ppi2->pPrinterName); // Set buffer size parameter to minimum required buffer size. *pdwBufferSize = (DWORD)lstrlen(ppi2->pPrinterName) + 1; } // If Windows NT, use the GetDefaultPrinter API for Windows 2000, // or GetProfileString for version 4.0 and earlier. else if (osv.dwPlatformId == VER_PLATFORM_WIN32_NT) { if (osv.dwMajorVersion >= 5) // Windows 2000 or later (use explicit call) { hWinSpool = LoadLibrary("winspool.drv"); if (!hWinSpool) return FALSE; fnGetDefaultPrinter = GetProcAddress(hWinSpool, GETDEFAULTPRINTER); if (!fnGetDefaultPrinter) { FreeLibrary(hWinSpool); return FALSE; } bFlag = fnGetDefaultPrinter(pPrinterName, pdwBufferSize); FreeLibrary(hWinSpool); if (!bFlag) return FALSE; } else // NT4.0 or earlier { // Retrieve the default string from Win.ini (the registry). // String will be in form "printername,drivername,portname". if (GetProfileString("windows", "device", ",,,", cBuffer, MAXBUFFERSIZE) <= 0) return FALSE; // Printer name precedes first "," character. strtok(cBuffer, ","); // If specified buffer is too small, set required size and fail. if ((DWORD)lstrlen(cBuffer) >= *pdwBufferSize) { *pdwBufferSize = (DWORD)lstrlen(cBuffer) + 1; return FALSE; } // Copy printer name into passed-in buffer. lstrcpy(pPrinterName, cBuffer); // Set buffer size parameter to minimum required buffer size. *pdwBufferSize = (DWORD)lstrlen(cBuffer) + 1; } } // Clean up. if (ppi2) GlobalFree(ppi2); return TRUE; } #undef MAXBUFFERSIZE #undef GETDEFAULTPRINTER // You are explicitly linking to SetDefaultPrinter because implicitly // linking on Windows 95/98 or NT4 results in a runtime error. // This block specifies which text version you explicitly link to. 
#ifdef UNICODE #define SETDEFAULTPRINTER "SetDefaultPrinterW" #else #define SETDEFAULTPRINTER "SetDefaultPrinterA" #endif /*-----------------------------------------------------------------*/ /* DPSetDefaultPrinter */ /* */ /* Parameters: */ /* pPrinterName: Valid name of existing printer to make default. */ /* */ /* Returns: TRUE for success, FALSE for failure. */ /*-----------------------------------------------------------------*/ BOOL DPSetDefaultPrinter(LPTSTR pPrinterName) { BOOL bFlag; OSVERSIONINFO osv; DWORD dwNeeded = 0; HANDLE hPrinter = NULL; PRINTER_INFO_2 *ppi2 = NULL; LPTSTR pBuffer = NULL; LONG lResult; HMODULE hWinSpool = NULL; PROC fnSetDefaultPrinter = NULL; // What version of Windows are you running? osv.dwOSVersionInfoSize = sizeof(OSVERSIONINFO); GetVersionEx(&osv); if (!pPrinterName) return FALSE; // If Windows 95 or 98, use SetPrinter. if (osv.dwPlatformId == VER_PLATFORM_WIN32_WINDOWS) { // Open this printer so you can get information about it. bFlag = OpenPrinter(pPrinterName, &hPrinter, NULL); if (!bFlag || !hPrinter) return FALSE; // The first GetPrinter() tells you how big our buffer must // be to hold ALL of PRINTER_INFO_2. Note that this will // typically return FALSE. This only means that the buffer (the 3rd // parameter) was not filled in. You do not want it filled in here. SetLastError(0); bFlag = GetPrinter(hPrinter, 2, 0, 0, &dwNeeded); if (!bFlag) { if ((GetLastError() != ERROR_INSUFFICIENT_BUFFER) || (dwNeeded == 0)) { ClosePrinter(hPrinter); return FALSE; } } // Allocate enough space for PRINTER_INFO_2. ppi2 = (PRINTER_INFO_2 *)GlobalAlloc(GPTR, dwNeeded); if (!ppi2) { ClosePrinter(hPrinter); return FALSE; } // The second GetPrinter() will fill in all the current information // so that all you have to do is modify what you are interested in. bFlag = GetPrinter(hPrinter, 2, (LPBYTE)ppi2, dwNeeded, &dwNeeded); if (!bFlag) { ClosePrinter(hPrinter); GlobalFree(ppi2); return FALSE; } // Set default printer attribute for this printer. ppi2->Attributes |= PRINTER_ATTRIBUTE_DEFAULT; bFlag = SetPrinter(hPrinter, 2, (LPBYTE)ppi2, 0); if (!bFlag) { ClosePrinter(hPrinter); GlobalFree(ppi2); return FALSE; } // Tell all open programs that this change occurred. // Allow each program 1 second to handle this message. lResult = SendMessageTimeout(HWND_BROADCAST, WM_SETTINGCHANGE, 0L, (LPARAM)(LPCTSTR)"windows", SMTO_NORMAL, 1000, NULL); } // If Windows NT, use the SetDefaultPrinter API for Windows 2000, // or WriteProfileString for version 4.0 and earlier. else if (osv.dwPlatformId == VER_PLATFORM_WIN32_NT) { if (osv.dwMajorVersion >= 5) // Windows 2000 or later (use explicit call) { hWinSpool = LoadLibrary("winspool.drv"); if (!hWinSpool) return FALSE; fnSetDefaultPrinter = GetProcAddress(hWinSpool, SETDEFAULTPRINTER); if (!fnSetDefaultPrinter) { FreeLibrary(hWinSpool); return FALSE; } bFlag = fnSetDefaultPrinter(pPrinterName); FreeLibrary(hWinSpool); if (!bFlag) return FALSE; } else // NT4.0 or earlier { // Open this printer so you can get information about it. bFlag = OpenPrinter(pPrinterName, &hPrinter, NULL); if (!bFlag || !hPrinter) return FALSE; // The first GetPrinter() tells you how big our buffer must // be to hold ALL of PRINTER_INFO_2. Note that this will // typically return FALSE. This only means that the buffer (the 3rd // parameter) was not filled in. You do not want it filled in here. 
SetLastError(0); bFlag = GetPrinter(hPrinter, 2, 0, 0, &dwNeeded); if (!bFlag) { if ((GetLastError() != ERROR_INSUFFICIENT_BUFFER) || (dwNeeded == 0)) { ClosePrinter(hPrinter); return FALSE; } } // Allocate enough space for PRINTER_INFO_2. ppi2 = (PRINTER_INFO_2 *)GlobalAlloc(GPTR, dwNeeded); if (!ppi2) { ClosePrinter(hPrinter); return FALSE; } // The second GetPrinter() fills in all the current<BR/> // information. bFlag = GetPrinter(hPrinter, 2, (LPBYTE)ppi2, dwNeeded, &dwNeeded); if ((!bFlag) || (!ppi2->pDriverName) || (!ppi2->pPortName)) { ClosePrinter(hPrinter); GlobalFree(ppi2); return FALSE; } // Allocate buffer big enough for concatenated string. // String will be in form "printername,drivername,portname". pBuffer = (LPTSTR)GlobalAlloc(GPTR, lstrlen(pPrinterName) + lstrlen(ppi2->pDriverName) + lstrlen(ppi2->pPortName) + 3); if (!pBuffer) { ClosePrinter(hPrinter); GlobalFree(ppi2); return FALSE; } // Build string in form "printername,drivername,portname". lstrcpy(pBuffer, pPrinterName); lstrcat(pBuffer, ","); lstrcat(pBuffer, ppi2->pDriverName); lstrcat(pBuffer, ","); lstrcat(pBuffer, ppi2->pPortName); // Set the default printer in Win.ini and registry. bFlag = WriteProfileString("windows", "device", pBuffer); if (!bFlag) { ClosePrinter(hPrinter); GlobalFree(ppi2); GlobalFree(pBuffer); return FALSE; } } // Tell all open programs that this change occurred. // Allow each app 1 second to handle this message. lResult = SendMessageTimeout(HWND_BROADCAST, WM_SETTINGCHANGE, 0L, 0L, SMTO_NORMAL, 1000, NULL); } // Clean up. if (hPrinter) ClosePrinter(hPrinter); if (ppi2) GlobalFree(ppi2); if (pBuffer) GlobalFree(pBuffer); return TRUE; } #undef SETDEFAULTPRINTER A: GetDefaultPrinter (MSDN) ought to do the trick. That will get you the name to pass to CreateDC for printing.
Default Printer in Unmanaged C++
I'm looking for a way to find the name of the Windows default printer using unmanaged C++ (found plenty of .NET examples, but no success unmanaged). Thanks.
[ "Here is how to get a list of current printers and the default one if there is one set as the default.\nAlso note: getting zero for the default printer name length is valid if the user has no printers or has not set one as default. \nAlso being able to handle long printer names should be supported so calling GetDefaultPrinter with NULL as a buffer pointer first will return the name length and then you can allocate a name buffer big enough to hold the name.\nDWORD numprinters;\nDWORD defprinter=0;\nDWORD dwSizeNeeded=0;\nDWORD dwItem;\nLPPRINTER_INFO_2 printerinfo = NULL;\n\n// Get buffer size\n\nEnumPrinters ( PRINTER_ENUM_LOCAL | PRINTER_ENUM_CONNECTIONS , NULL, 2, NULL, 0, &dwSizeNeeded, &numprinters );\n\n// allocate memory\n//printerinfo = (LPPRINTER_INFO_2)HeapAlloc ( GetProcessHeap (), HEAP_ZERO_MEMORY, dwSizeNeeded );\nprinterinfo = (LPPRINTER_INFO_2)new char[dwSizeNeeded];\n\nif ( EnumPrinters ( PRINTER_ENUM_LOCAL | PRINTER_ENUM_CONNECTIONS, // what to enumerate\n NULL, // printer name (NULL for all)\n 2, // level\n (LPBYTE)printerinfo, // buffer\n dwSizeNeeded, // size of buffer\n &dwSizeNeeded, // returns size\n &numprinters // return num. items\n ) == 0 )\n{\n numprinters=0;\n}\n\n{\n DWORD size=0; \n\n // Get the size of the default printer name.\n GetDefaultPrinter(NULL, &size);\n if(size)\n {\n // Allocate a buffer large enough to hold the printer name.\n TCHAR* buffer = new TCHAR[size];\n\n // Get the printer name.\n GetDefaultPrinter(buffer, &size);\n\n for ( dwItem = 0; dwItem < numprinters; dwItem++ )\n {\n if(!strcmp(buffer,printerinfo[dwItem].pPrinterName))\n defprinter=dwItem;\n }\n delete buffer;\n }\n}\n\n", "The following works great for printing with the win32api from C++\nchar szPrinterName[255];\nunsigned long lPrinterNameLength;\nGetDefaultPrinter( szPrinterName, &lPrinterNameLength );\nHDC hPrinterDC;\nhPrinterDC = CreateDC(\"WINSPOOL\\0\", szPrinterName, NULL, NULL);\n\nIn the future instead of googling \"unmanaged\" try googling \"win32 /subject/\" or \"win32 api /subject/\"\n", "How to retrieve and set the default printer in Windows:\nhttp://support.microsoft.com/default.aspx?scid=kb;EN-US;246772\nCode from now-unavailable article:\n// You are explicitly linking to GetDefaultPrinter because linking \n// implicitly on Windows 95/98 or NT4 results in a runtime error.\n// This block specifies which text version you explicitly link to.\n#ifdef UNICODE\n #define GETDEFAULTPRINTER \"GetDefaultPrinterW\"\n#else\n #define GETDEFAULTPRINTER \"GetDefaultPrinterA\"\n#endif\n\n// Size of internal buffer used to hold \"printername,drivername,portname\"\n// string. You may have to increase this for huge strings.\n#define MAXBUFFERSIZE 250\n\n/*----------------------------------------------------------------*/ \n/* DPGetDefaultPrinter */ \n/* */ \n/* Parameters: */ \n/* pPrinterName: Buffer alloc'd by caller to hold printer name. */ \n/* pdwBufferSize: On input, ptr to size of pPrinterName. */ \n/* On output, min required size of pPrinterName. */ \n/* */ \n/* NOTE: You must include enough space for the NULL terminator! */ \n/* */ \n/* Returns: TRUE for success, FALSE for failure. 
*/ \n/*----------------------------------------------------------------*/ \nBOOL DPGetDefaultPrinter(LPTSTR pPrinterName, LPDWORD pdwBufferSize)\n{\n BOOL bFlag;\n OSVERSIONINFO osv;\n TCHAR cBuffer[MAXBUFFERSIZE];\n PRINTER_INFO_2 *ppi2 = NULL;\n DWORD dwNeeded = 0;\n DWORD dwReturned = 0;\n HMODULE hWinSpool = NULL;\n PROC fnGetDefaultPrinter = NULL;\n\n // What version of Windows are you running?\n osv.dwOSVersionInfoSize = sizeof(OSVERSIONINFO);\n GetVersionEx(&osv);\n\n // If Windows 95 or 98, use EnumPrinters.\n if (osv.dwPlatformId == VER_PLATFORM_WIN32_WINDOWS)\n {\n // The first EnumPrinters() tells you how big our buffer must\n // be to hold ALL of PRINTER_INFO_2. Note that this will\n // typically return FALSE. This only means that the buffer (the 4th\n // parameter) was not filled in. You do not want it filled in here.\n SetLastError(0);\n bFlag = EnumPrinters(PRINTER_ENUM_DEFAULT, NULL, 2, NULL, 0, &dwNeeded, &dwReturned);\n {\n if ((GetLastError() != ERROR_INSUFFICIENT_BUFFER) || (dwNeeded == 0))\n return FALSE;\n }\n\n // Allocate enough space for PRINTER_INFO_2.\n ppi2 = (PRINTER_INFO_2 *)GlobalAlloc(GPTR, dwNeeded);\n if (!ppi2)\n return FALSE;\n\n // The second EnumPrinters() will fill in all the current information.\n bFlag = EnumPrinters(PRINTER_ENUM_DEFAULT, NULL, 2, (LPBYTE)ppi2, dwNeeded, &dwNeeded, &dwReturned);\n if (!bFlag)\n {\n GlobalFree(ppi2);\n return FALSE;\n }\n\n // If specified buffer is too small, set required size and fail.\n if ((DWORD)lstrlen(ppi2->pPrinterName) >= *pdwBufferSize)\n {\n *pdwBufferSize = (DWORD)lstrlen(ppi2->pPrinterName) + 1;\n GlobalFree(ppi2);\n return FALSE;\n }\n\n // Copy printer name into passed-in buffer.\n lstrcpy(pPrinterName, ppi2->pPrinterName);\n\n // Set buffer size parameter to minimum required buffer size.\n *pdwBufferSize = (DWORD)lstrlen(ppi2->pPrinterName) + 1;\n }\n\n // If Windows NT, use the GetDefaultPrinter API for Windows 2000,\n // or GetProfileString for version 4.0 and earlier.\n else if (osv.dwPlatformId == VER_PLATFORM_WIN32_NT)\n {\n if (osv.dwMajorVersion >= 5) // Windows 2000 or later (use explicit call)\n {\n hWinSpool = LoadLibrary(\"winspool.drv\");\n if (!hWinSpool)\n return FALSE;\n fnGetDefaultPrinter = GetProcAddress(hWinSpool, GETDEFAULTPRINTER);\n if (!fnGetDefaultPrinter)\n {\n FreeLibrary(hWinSpool);\n return FALSE;\n }\n\n bFlag = fnGetDefaultPrinter(pPrinterName, pdwBufferSize);\n FreeLibrary(hWinSpool);\n if (!bFlag)\n return FALSE;\n }\n\n else // NT4.0 or earlier\n {\n // Retrieve the default string from Win.ini (the registry).\n // String will be in form \"printername,drivername,portname\".\n if (GetProfileString(\"windows\", \"device\", \",,,\", cBuffer, MAXBUFFERSIZE) <= 0)\n return FALSE;\n\n // Printer name precedes first \",\" character.\n strtok(cBuffer, \",\");\n\n // If specified buffer is too small, set required size and fail.\n if ((DWORD)lstrlen(cBuffer) >= *pdwBufferSize)\n {\n *pdwBufferSize = (DWORD)lstrlen(cBuffer) + 1;\n return FALSE;\n }\n\n // Copy printer name into passed-in buffer.\n lstrcpy(pPrinterName, cBuffer);\n\n // Set buffer size parameter to minimum required buffer size.\n *pdwBufferSize = (DWORD)lstrlen(cBuffer) + 1;\n }\n }\n\n // Clean up.\n if (ppi2)\n GlobalFree(ppi2);\n\n return TRUE;\n}\n#undef MAXBUFFERSIZE\n#undef GETDEFAULTPRINTER\n\n\n// You are explicitly linking to SetDefaultPrinter because implicitly\n// linking on Windows 95/98 or NT4 results in a runtime error.\n// This block specifies which text version you explicitly link to.\n#ifdef 
UNICODE\n #define SETDEFAULTPRINTER \"SetDefaultPrinterW\"\n#else\n #define SETDEFAULTPRINTER \"SetDefaultPrinterA\"\n#endif\n\n/*-----------------------------------------------------------------*/ \n/* DPSetDefaultPrinter */ \n/* */ \n/* Parameters: */ \n/* pPrinterName: Valid name of existing printer to make default. */ \n/* */ \n/* Returns: TRUE for success, FALSE for failure. */ \n/*-----------------------------------------------------------------*/ \nBOOL DPSetDefaultPrinter(LPTSTR pPrinterName)\n\n{\n BOOL bFlag;\n OSVERSIONINFO osv;\n DWORD dwNeeded = 0;\n HANDLE hPrinter = NULL;\n PRINTER_INFO_2 *ppi2 = NULL;\n LPTSTR pBuffer = NULL;\n LONG lResult;\n HMODULE hWinSpool = NULL;\n PROC fnSetDefaultPrinter = NULL;\n\n // What version of Windows are you running?\n osv.dwOSVersionInfoSize = sizeof(OSVERSIONINFO);\n GetVersionEx(&osv);\n\n if (!pPrinterName)\n return FALSE;\n\n // If Windows 95 or 98, use SetPrinter.\n if (osv.dwPlatformId == VER_PLATFORM_WIN32_WINDOWS)\n {\n // Open this printer so you can get information about it.\n bFlag = OpenPrinter(pPrinterName, &hPrinter, NULL);\n if (!bFlag || !hPrinter)\n return FALSE;\n\n // The first GetPrinter() tells you how big our buffer must\n // be to hold ALL of PRINTER_INFO_2. Note that this will\n // typically return FALSE. This only means that the buffer (the 3rd\n // parameter) was not filled in. You do not want it filled in here.\n SetLastError(0);\n bFlag = GetPrinter(hPrinter, 2, 0, 0, &dwNeeded);\n if (!bFlag)\n {\n if ((GetLastError() != ERROR_INSUFFICIENT_BUFFER) || (dwNeeded == 0))\n {\n ClosePrinter(hPrinter);\n return FALSE;\n }\n }\n\n // Allocate enough space for PRINTER_INFO_2.\n ppi2 = (PRINTER_INFO_2 *)GlobalAlloc(GPTR, dwNeeded);\n if (!ppi2)\n {\n ClosePrinter(hPrinter);\n return FALSE;\n }\n\n // The second GetPrinter() will fill in all the current information\n // so that all you have to do is modify what you are interested in.\n bFlag = GetPrinter(hPrinter, 2, (LPBYTE)ppi2, dwNeeded, &dwNeeded);\n if (!bFlag)\n {\n ClosePrinter(hPrinter);\n GlobalFree(ppi2);\n return FALSE;\n }\n\n // Set default printer attribute for this printer.\n ppi2->Attributes |= PRINTER_ATTRIBUTE_DEFAULT;\n bFlag = SetPrinter(hPrinter, 2, (LPBYTE)ppi2, 0);\n if (!bFlag)\n {\n ClosePrinter(hPrinter);\n GlobalFree(ppi2);\n return FALSE;\n }\n\n // Tell all open programs that this change occurred. \n // Allow each program 1 second to handle this message.\n lResult = SendMessageTimeout(HWND_BROADCAST, WM_SETTINGCHANGE, 0L,\n (LPARAM)(LPCTSTR)\"windows\", SMTO_NORMAL, 1000, NULL);\n }\n\n // If Windows NT, use the SetDefaultPrinter API for Windows 2000,\n // or WriteProfileString for version 4.0 and earlier.\n else if (osv.dwPlatformId == VER_PLATFORM_WIN32_NT)\n {\n if (osv.dwMajorVersion >= 5) // Windows 2000 or later (use explicit call)\n {\n hWinSpool = LoadLibrary(\"winspool.drv\");\n if (!hWinSpool)\n return FALSE;\n fnSetDefaultPrinter = GetProcAddress(hWinSpool, SETDEFAULTPRINTER);\n if (!fnSetDefaultPrinter)\n {\n FreeLibrary(hWinSpool);\n return FALSE;\n }\n\n bFlag = fnSetDefaultPrinter(pPrinterName);\n FreeLibrary(hWinSpool);\n if (!bFlag)\n return FALSE;\n }\n\n else // NT4.0 or earlier\n {\n // Open this printer so you can get information about it.\n bFlag = OpenPrinter(pPrinterName, &hPrinter, NULL);\n if (!bFlag || !hPrinter)\n return FALSE;\n\n // The first GetPrinter() tells you how big our buffer must\n // be to hold ALL of PRINTER_INFO_2. Note that this will\n // typically return FALSE. 
This only means that the buffer (the 3rd\n // parameter) was not filled in. You do not want it filled in here.\n SetLastError(0);\n bFlag = GetPrinter(hPrinter, 2, 0, 0, &dwNeeded);\n if (!bFlag)\n {\n if ((GetLastError() != ERROR_INSUFFICIENT_BUFFER) || (dwNeeded == 0))\n {\n ClosePrinter(hPrinter);\n return FALSE;\n }\n }\n\n // Allocate enough space for PRINTER_INFO_2.\n ppi2 = (PRINTER_INFO_2 *)GlobalAlloc(GPTR, dwNeeded);\n if (!ppi2)\n {\n ClosePrinter(hPrinter);\n return FALSE;\n }\n\n // The second GetPrinter() fills in all the current<BR/>\n // information.\n bFlag = GetPrinter(hPrinter, 2, (LPBYTE)ppi2, dwNeeded, &dwNeeded);\n if ((!bFlag) || (!ppi2->pDriverName) || (!ppi2->pPortName))\n {\n ClosePrinter(hPrinter);\n GlobalFree(ppi2);\n return FALSE;\n }\n\n // Allocate buffer big enough for concatenated string.\n // String will be in form \"printername,drivername,portname\".\n pBuffer = (LPTSTR)GlobalAlloc(GPTR,\n lstrlen(pPrinterName) +\n lstrlen(ppi2->pDriverName) +\n lstrlen(ppi2->pPortName) + 3);\n if (!pBuffer)\n {\n ClosePrinter(hPrinter);\n GlobalFree(ppi2);\n return FALSE;\n }\n\n // Build string in form \"printername,drivername,portname\".\n lstrcpy(pBuffer, pPrinterName); lstrcat(pBuffer, \",\");\n lstrcat(pBuffer, ppi2->pDriverName); lstrcat(pBuffer, \",\");\n lstrcat(pBuffer, ppi2->pPortName);\n\n // Set the default printer in Win.ini and registry.\n bFlag = WriteProfileString(\"windows\", \"device\", pBuffer);\n if (!bFlag)\n {\n ClosePrinter(hPrinter);\n GlobalFree(ppi2);\n GlobalFree(pBuffer);\n return FALSE;\n }\n }\n\n // Tell all open programs that this change occurred. \n // Allow each app 1 second to handle this message.\n lResult = SendMessageTimeout(HWND_BROADCAST, WM_SETTINGCHANGE, 0L, 0L,\n SMTO_NORMAL, 1000, NULL);\n }\n\n // Clean up.\n if (hPrinter)\n ClosePrinter(hPrinter);\n if (ppi2)\n GlobalFree(ppi2);\n if (pBuffer)\n GlobalFree(pBuffer);\n\n return TRUE;\n}\n#undef SETDEFAULTPRINTER\n\n", "GetDefaultPrinter (MSDN) ought to do the trick. That will get you the name to pass to CreateDC for printing.\n" ]
[ 4, 3, 2, 1 ]
[ "Unmanaged C++ doesn't exist (and managed C++ is now C++/CLI), if you are referring to C++, using unmanaged as a tag is just sad...\n" ]
[ -1 ]
[ "c++", "default", "printing", "unmanaged", "windows" ]
stackoverflow_0000104844_c++_default_printing_unmanaged_windows.txt
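The answers above describe calling GetDefaultPrinter with a NULL buffer first so it reports the required name length before you allocate. A minimal standalone C++ sketch of that two-call pattern (an ANSI build is assumed, hence GetDefaultPrinterA; link against winspool.lib; error handling trimmed to the essentials):

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        DWORD size = 0;

        // First call with a NULL buffer only reports the required length
        // (including the terminating NUL) in 'size'.
        GetDefaultPrinterA(NULL, &size);
        if (size == 0)
            return 1; // no default printer is configured

        char* name = new char[size];
        if (GetDefaultPrinterA(name, &size))
            printf("Default printer: %s\n", name);

        delete[] name; // array delete, since new[] was used
        return 0;
    }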
Q: Big things to do when deploying a rails app In the question What little things do I need to do before deploying a rails application I am getting a lot of answers that are bigger than "little things". So this question is slightly different. What reasonably major steps do I need to take before deploying a rails application? In this case, I mean things which are going to take more than 5 mins, and so need to be scheduled. For small one-line config changes, please use the little things question. A: Set up Capistrano to deploy You'll want to learn capistrano if you don't already know it, and use it to deploy your code in an automated way. This will involve setting up your shared directory and shared resources like database.yml. Install C Based MySQL gem If you don't have all the required libs, this can take a little while, but less than 20 minutes. Make sure you aren't vulnerable to common web application attacks Session fixation, session hijacking, cross-site scripting, SQL injection (probably you don't have to worry much about SQL injection). Be sure you use h() when outputting user-entered data in a view screen. Lots of good material online about this. Choose a server architecture Nginx, Mongrel, FastCGI, CGI, Apache, Passenger: there is a lot to choose from. Think about how your app will be used and decide on the best architecture, then set it up. Set up Exception Notifier or Exception Logger You will want your app to warn you when it breaks. Set one of these tools up to track production exceptions. Note: Exception notifier will warn you when routing errors occur (i.e. when people fat-finger URLs or script kiddies attack you): so think about what you want the framework to do when that happens and adjust accordingly. Make sure all of your passwords are out of source control If you have database.yml, mail.yml (if you use yaml_mail_config) or other sensitive files in source control, get them out of there, replace them with database.yml.example, and put them in the shared/ folder on your server. Ensure that your DB is locked down. A lot of people forget to secure MySQL when setting up their new production Rails box. Don't be like them. Make sure all of the little web-files are in place If you are planning to be listed in Google, generate a sitemap.xml file. If you are planning to use an .htaccess file for something, make sure it's there. If you need a robots.txt file to prevent certain areas of your site from being indexed, make one. If you want a good-looking 404 Page, make sure it's set up correctly. If you want a "Be Right Back" page to be present when you deploy, make sure that you have a Capistrano maintenance file specified and Nginx or Apache knows how and when to redirect to it. Get your SSL Certs in place If you are going to use SSL, make sure you get certificates that are valid on your production domain, and set them up. A: Use some process monitoring Sometimes your processes (mongrels in many cases) will die or other bad things will happen to them. For example a memory leak could cause the memory consumption to increase indefinitely or a process could start using all your CPU. monit and god are both good choices to save you from this fate. They can also be set to hit a URL on your site to check for a 200 response code. Set up server monitoring Some suggestions in this space: fiveruns, newrelic, scout These tools will record detailed metrics on your servers and are invaluable when something goes wrong and you need to see what actually happened.
They also give you real-time information on server load. If you have a cluster, this kind of reporting is even more critical. Backup Write a script to periodically back up your database and any other assets that your users can upload. S3 might be a good choice for this. A: Choose a web server / load balancer My preferred server is nginx, but the common pattern is to start with apache + mod_proxy_http.
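To make the Capistrano and shared-database.yml advice above concrete, here is a minimal Capistrano 2 deploy.rb sketch; the application name, repository URL, and host name are placeholders, not part of the original answers:

    # Minimal Capistrano 2 sketch: deploy from source control and symlink the
    # server-side database.yml (kept out of version control) into each release.
    set :application, "myapp"
    set :repository,  "git@example.com:myapp.git"
    set :deploy_to,   "/var/www/myapp"

    role :app, "app1.example.com"
    role :web, "app1.example.com"
    role :db,  "app1.example.com", :primary => true

    namespace :deploy do
      task :link_database_yml do
        run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml"
      end
    end

    after "deploy:update_code", "deploy:link_database_yml"

Keeping database.yml in shared/config on the server and symlinking it per release is what lets the checked-in database.yml.example stand in for the real credentials.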
Big things to do when deploying a rails app
In the question What little things do I need to do before deploying a rails application I am getting a lot of answers that are bigger than "little things". So this question is slightly different. What reasonably major steps do I need to take before deploying a rails application? In this case, I mean things which are going to take more than 5 mins, and so need to be scheduled. For small one-line config changes, please use the little things question.
[ "Set up Capistrano to deploy You'll want to learn capistrano if you don't already know it, and use it to deploy your code in an automated way. This will involve setting up your shared directory and shared resources like database.yml.\nInstall C Based MySQL gem If you don't have all the required libs, this can take a little while, but less than 20 minutes. \nMake sure you aren't vulnerable to common web application attacks Session fixation, session hijacking, cross-site scripting, SQL injection (probably you don't have to worry much about SQL injection). Be sure you use h() when outputting user-entered data in a view screen. Lots of good material online about this. \nChoose a server architecture Nginx, Mongrel, FastCGI, CGI, Apache, Passenger: there is a lot to choose from. Think about how your app will be used and decide on the best architecture, then set it up. \nSet up Exception Notifier or Exception Logger You will want your app to warn you when it breaks. Set one of these tools up to track production exceptions. Note: Exception notifier will warn you when routing errors occur (i.e. when people fat-finger URLs or script kiddies attack you): so think about what you want the framework to do when that happens and adjust accordingly. \nMake sure all of your passwords are out of source control If you have database.yml, mail.yml (if you use yaml_mail_config) or other sensitive files in source control, get them out of there, replace them with database.yml.example, and put them in the shared/ folder on your server. \nEnsure that your DB is locked down. A lot of people forget to secure MySQL when setting up their new production Rails box. Don't be like them. \nMake sure all of the little web-files are in place If you are planning to be listed in Google, generate a sitemap.xml file. If you are planning to use an .htaccess file for something, make sure it's there. If you need a robots.txt file to prevent certain areas of your site from being indexed, make one. If you want a good looking 404 Page, make sure it's set up correctly. If you want a \"Be Right Back\" page to be present when you deploy, make sure that you have a Capistrano maintenance file specified and Nginx or Apache knows how and when to redirect to it. \nGet your SSL Certs in place If you are going to use SSL, make sure you get certificates that are valid on your production domain, and set them up. \n", "Use some process monitoring\nSometimes your processes (mongrels in many cases) will die or other bad things will happen to them. For example a memory leak could cause the memory consumption to increase indefinitely or a process could start using all your CPU.\nmonit and god are both good choices to save you from this fate. They can also be set to hit a url on your site to check for a 200 response code.\nSet up server monitoring\nSome suggestions in this space: fiveruns, newrelic, scout\nThese tools will record detailed metrics on your servers and are invaluable when something goes wrong and you need to see what actually happened. They also give you real time information on server load.\nIf you have a cluster this kind of reporting is even more critical.\nBackup\nWrite a script to periodically backup your database and any other assets that your users can upload. S3 might be a good choice for this.\n", "Choose a web server / load balancer\nMy preferred server is nginx, but the common pattern is to start with apache + mod_proxy_http. \n" ]
[ 11, 3, 1 ]
[]
[]
[ "deployment", "ruby_on_rails" ]
stackoverflow_0000101275_deployment_ruby_on_rails.txt
Q: Find the best combination from a given set of multiple sets Say you have a shipment. It needs to go from point A to point B, point B to point C and finally point C to point D. You need it to get there in five days for the least amount of money possible. There are three possible shippers for each leg, each with their own different time and cost for each leg: Array ( [leg0] => Array ( [UPS] => Array ( [days] => 1 [cost] => 5000 ) [FedEx] => Array ( [days] => 2 [cost] => 3000 ) [Conway] => Array ( [days] => 5 [cost] => 1000 ) ) [leg1] => Array ( [UPS] => Array ( [days] => 1 [cost] => 3000 ) [FedEx] => Array ( [days] => 2 [cost] => 3000 ) [Conway] => Array ( [days] => 3 [cost] => 1000 ) ) [leg2] => Array ( [UPS] => Array ( [days] => 1 [cost] => 4000 ) [FedEx] => Array ( [days] => 1 [cost] => 3000 ) [Conway] => Array ( [days] => 2 [cost] => 5000 ) ) ) How would you go about finding the best combination programmatically? My best attempt so far (third or fourth algorithm) is: Find the longest shipper for each leg Eliminate the most "expensive" one Find the cheapest shipper for each leg Calculate the total cost & days If days are acceptable, finish, else, goto 1 Quickly mocked-up in PHP (note that the test array below works swimmingly, but if you try it with the test array from above, it does not find the correct combination): $shippers["leg1"] = array( "UPS" => array("days" => 1, "cost" => 4000), "Conway" => array("days" => 3, "cost" => 3200), "FedEx" => array("days" => 8, "cost" => 1000) ); $shippers["leg2"] = array( "UPS" => array("days" => 1, "cost" => 3500), "Conway" => array("days" => 2, "cost" => 2800), "FedEx" => array("days" => 4, "cost" => 900) ); $shippers["leg3"] = array( "UPS" => array("days" => 1, "cost" => 3500), "Conway" => array("days" => 2, "cost" => 2800), "FedEx" => array("days" => 4, "cost" => 900) ); $times = 0; $totalDays = 9999999; print "<h1>Shippers to Choose From:</h1><pre>"; print_r($shippers); print "</pre><br />"; while($totalDays > $maxDays && $times < 500){ $totalDays = 0; $times++; $worstShipper = null; $longestShippers = null; $cheapestShippers = null; foreach($shippers as $legName => $leg){ //find longest shipment for each leg (in terms of days) unset($longestShippers[$legName]); $longestDays = null; if(count($leg) > 1){ foreach($leg as $shipperName => $shipper){ if(empty($longestDays) || $shipper["days"] > $longestDays){ $longestShippers[$legName]["days"] = $shipper["days"]; $longestShippers[$legName]["cost"] = $shipper["cost"]; $longestShippers[$legName]["name"] = $shipperName; $longestDays = $shipper["days"]; } } } } foreach($longestShippers as $leg => $shipper){ $shipper["totalCost"] = $shipper["days"] * $shipper["cost"]; //print $shipper["totalCost"] . " &lt;?&gt; " . $worstShipper["totalCost"] . ";"; if(empty($worstShipper) || $shipper["totalCost"] > $worstShipper["totalCost"]){ $worstShipper = $shipper; $worstShipperLeg = $leg; } } //print "worst shipper is: shippers[$worstShipperLeg][{$worstShipper['name']}]" . 
$shippers[$worstShipperLeg][$worstShipper["name"]]["days"]; unset($shippers[$worstShipperLeg][$worstShipper["name"]]); print "<h1>Next:</h1><pre>"; print_r($shippers); print "</pre><br />"; foreach($shippers as $legName => $leg){ //find cheapest shipment for each leg (in terms of cost) unset($cheapestShippers[$legName]); $lowestCost = null; foreach($leg as $shipperName => $shipper){ if(empty($lowestCost) || $shipper["cost"] < $lowestCost){ $cheapestShippers[$legName]["days"] = $shipper["days"]; $cheapestShippers[$legName]["cost"] = $shipper["cost"]; $cheapestShippers[$legName]["name"] = $shipperName; $lowestCost = $shipper["cost"]; } } //recalculate days and see if we are under max days... $totalDays += $cheapestShippers[$legName]['days']; } //print "<h2>totalDays: $totalDays</h2>"; } print "<h1>Chosen Shippers:</h1><pre>"; print_r($cheapestShippers); print "</pre>"; I think I may have to actually do some sort of thing where I literally make each combination one by one (with a series of loops) and add up the total "score" of each, and find the best one.... EDIT: To clarify, this isn't a "homework" assignment (I'm not in school). It is part of my current project at work. The requirements (as always) have been constantly changing. If I were given the current constraints at the time I began working on this problem, I would be using some variant of the A* algorithm (or Dijkstra's or shortest path or simplex or something). But everything has been morphing and changing, and that brings me to where I'm at right now. So I guess that means I need to forget about all the crap I've done to this point and just go with what I know I should go with, which is a path finding algorithm. A: Could alter some of the shortest path algorithms, like Dijkstra's, to weight each path by cost but also keep track of time and stop going along a certain path if the time exceeds your threshold. Should find the cheapest that gets you in under your threshold that way A: Sounds like what you have is called a "linear programming problem". It also sounds like a homework problem, no offense. The classical solution to a LP problem is called the "Simplex Method". Google it. However, to use that method, you must have the problem correctly formulated to describe your requirements. Still, it may be possible to enumerate all possible paths, since you have such a small set. Such a thing won't scale, though. A: Sounds like a job for Dijkstra's algorithm: Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1959, 1 is a graph search algorithm that solves the single-source shortest path problem for a graph with non negative edge path costs, outputting a shortest path tree. This algorithm is often used in routing. There are also implementation details in the Wikipedia article. A: If I knew I only had to deal with 5 cities, in a predetermined order, and that there were only 3 routes between adjacent cities, I'd brute force it. No point in being elegant. If, on the other hand, this were a homework assignment and I were supposed to produce an algorithm that could actually scale, I'd probably take a different approach. A: This is a knapsack problem. The weights are the days in transit, and the profit should be $5000 - cost of leg. Eliminate all negative costs and go from there! A: As Baltimark said, this is basically a Linear programming problem. If only the coefficients for the shippers (1 for included, 0 for not included) were not (binary) integers for each leg, this would be more easily solveable. 
Now you need to find some (binary) integer linear programming (ILP) heuristics, as the problem is NP-hard. See Wikipedia on integer linear programming for links; in my linear programming course we used at least branch and bound. Actually, now that I think of it, this special case is solvable without actual ILP, as the number of days does not matter as long as it is <= 5. Start by choosing the cheapest carrier for the first leg (Conway 5:1000). Next you choose the cheapest again, resulting in 8 days and 4000 currency units, which is too much, so we abort that. By trying the others too we see that they all result in days > 5, so we go back to the first leg and try the second cheapest (FedEx 2:3000), then UPS for the second leg and FedEx for the last. This gives us a total of 4 days and 9000 currency units. We could then use this cost to prune other searches in the tree that would at some subtree stage result in costs larger than the one we've found already, and leave that subtree unsearched from that point on. This only works as long as we know that searching the subtree cannot produce a better result, as we do here since costs cannot be negative. Hope this rambling helped a bit :).
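Given only three carriers per leg, the exhaustive approach the questioner describes is entirely practical: 3^3 = 27 combinations. A minimal sketch of that brute force with deadline pruning, assuming the $shippers array shape from the question and modern PHP closure syntax:

    function cheapestCombination(array $shippers, $maxDays)
    {
        $legs = array_keys($shippers);
        $best = null;

        $recurse = function ($i, $days, $cost, $chosen)
                use (&$recurse, &$best, $shippers, $legs, $maxDays) {
            if ($days > $maxDays) {
                return; // prune: this branch already misses the deadline
            }
            if ($i === count($legs)) { // all legs assigned: compare totals
                if ($best === null || $cost < $best['cost']) {
                    $best = array('days' => $days, 'cost' => $cost, 'shippers' => $chosen);
                }
                return;
            }
            foreach ($shippers[$legs[$i]] as $name => $option) {
                $chosen[$legs[$i]] = $name;
                $recurse($i + 1, $days + $option['days'], $cost + $option['cost'], $chosen);
            }
        };

        $recurse(0, 0, 0, array());
        return $best; // null if no combination meets the deadline
    }

    // e.g. print_r(cheapestCombination($shippers, 5));

Because any branch that already exceeds $maxDays is cut off, this is effectively the depth-first branch-and-bound the last answer sketches, just without the cost-based pruning.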
Find the best combination from a given set of multiple sets
Say you have a shipment. It needs to go from point A to point B, point B to point C and finally point C to point D. You need it to get there in five days for the least amount of money possible. There are three possible shippers for each leg, each with their own different time and cost for each leg: Array ( [leg0] => Array ( [UPS] => Array ( [days] => 1 [cost] => 5000 ) [FedEx] => Array ( [days] => 2 [cost] => 3000 ) [Conway] => Array ( [days] => 5 [cost] => 1000 ) ) [leg1] => Array ( [UPS] => Array ( [days] => 1 [cost] => 3000 ) [FedEx] => Array ( [days] => 2 [cost] => 3000 ) [Conway] => Array ( [days] => 3 [cost] => 1000 ) ) [leg2] => Array ( [UPS] => Array ( [days] => 1 [cost] => 4000 ) [FedEx] => Array ( [days] => 1 [cost] => 3000 ) [Conway] => Array ( [days] => 2 [cost] => 5000 ) ) ) How would you go about finding the best combination programmatically? My best attempt so far (third or fourth algorithm) is: Find the longest shipper for each leg Eliminate the most "expensive" one Find the cheapest shipper for each leg Calculate the total cost & days If days are acceptable, finish, else, goto 1 Quickly mocked-up in PHP (note that the test array below works swimmingly, but if you try it with the test array from above, it does not find the correct combination): $shippers["leg1"] = array( "UPS" => array("days" => 1, "cost" => 4000), "Conway" => array("days" => 3, "cost" => 3200), "FedEx" => array("days" => 8, "cost" => 1000) ); $shippers["leg2"] = array( "UPS" => array("days" => 1, "cost" => 3500), "Conway" => array("days" => 2, "cost" => 2800), "FedEx" => array("days" => 4, "cost" => 900) ); $shippers["leg3"] = array( "UPS" => array("days" => 1, "cost" => 3500), "Conway" => array("days" => 2, "cost" => 2800), "FedEx" => array("days" => 4, "cost" => 900) ); $times = 0; $totalDays = 9999999; print "<h1>Shippers to Choose From:</h1><pre>"; print_r($shippers); print "</pre><br />"; while($totalDays > $maxDays && $times < 500){ $totalDays = 0; $times++; $worstShipper = null; $longestShippers = null; $cheapestShippers = null; foreach($shippers as $legName => $leg){ //find longest shipment for each leg (in terms of days) unset($longestShippers[$legName]); $longestDays = null; if(count($leg) > 1){ foreach($leg as $shipperName => $shipper){ if(empty($longestDays) || $shipper["days"] > $longestDays){ $longestShippers[$legName]["days"] = $shipper["days"]; $longestShippers[$legName]["cost"] = $shipper["cost"]; $longestShippers[$legName]["name"] = $shipperName; $longestDays = $shipper["days"]; } } } } foreach($longestShippers as $leg => $shipper){ $shipper["totalCost"] = $shipper["days"] * $shipper["cost"]; //print $shipper["totalCost"] . " &lt;?&gt; " . $worstShipper["totalCost"] . ";"; if(empty($worstShipper) || $shipper["totalCost"] > $worstShipper["totalCost"]){ $worstShipper = $shipper; $worstShipperLeg = $leg; } } //print "worst shipper is: shippers[$worstShipperLeg][{$worstShipper['name']}]" . 
$shippers[$worstShipperLeg][$worstShipper["name"]]["days"]; unset($shippers[$worstShipperLeg][$worstShipper["name"]]); print "<h1>Next:</h1><pre>"; print_r($shippers); print "</pre><br />"; foreach($shippers as $legName => $leg){ //find cheapest shipment for each leg (in terms of cost) unset($cheapestShippers[$legName]); $lowestCost = null; foreach($leg as $shipperName => $shipper){ if(empty($lowestCost) || $shipper["cost"] < $lowestCost){ $cheapestShippers[$legName]["days"] = $shipper["days"]; $cheapestShippers[$legName]["cost"] = $shipper["cost"]; $cheapestShippers[$legName]["name"] = $shipperName; $lowestCost = $shipper["cost"]; } } //recalculate days and see if we are under max days... $totalDays += $cheapestShippers[$legName]['days']; } //print "<h2>totalDays: $totalDays</h2>"; } print "<h1>Chosen Shippers:</h1><pre>"; print_r($cheapestShippers); print "</pre>"; I think I may have to actually do some sort of thing where I literally make each combination one by one (with a series of loops) and add up the total "score" of each, and find the best one.... EDIT: To clarify, this isn't a "homework" assignment (I'm not in school). It is part of my current project at work. The requirements (as always) have been constantly changing. If I were given the current constraints at the time I began working on this problem, I would be using some variant of the A* algorithm (or Dijkstra's or shortest path or simplex or something). But everything has been morphing and changing, and that brings me to where I'm at right now. So I guess that means I need to forget about all the crap I've done to this point and just go with what I know I should go with, which is a path finding algorithm.
[ "Could alter some of the shortest path algorithms, like Dijkstra's, to weight each path by cost but also keep track of time and stop going along a certain path if the time exceeds your threshold. Should find the cheapest that gets you in under your threshold that way\n", "Sounds like what you have is called a \"linear programming problem\". It also sounds like a homework problem, no offense. \nThe classical solution to a LP problem is called the \"Simplex Method\". Google it.\nHowever, to use that method, you must have the problem correctly formulated to describe your requirements. \nStill, it may be possible to enumerate all possible paths, since you have such a small set. Such a thing won't scale, though. \n", "Sounds like a job for Dijkstra's algorithm:\n\nDijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1959, 1 is a graph search algorithm that solves the single-source shortest path problem for a graph with non negative edge path costs, outputting a shortest path tree. This algorithm is often used in routing.\n\nThere are also implementation details in the Wikipedia article.\n", "If I knew I only had to deal with 5 cities, in a predetermined order, and that there were only 3 routes between adjacent cities, I'd brute force it. No point in being elegant.\nIf, on the other hand, this were a homework assignment and I were supposed to produce an algorithm that could actually scale, I'd probably take a different approach.\n", "This is a knapsack problem. The weights are the days in transit, and the profit should be $5000 - cost of leg. Eliminate all negative costs and go from there!\n", "As Baltimark said, this is basically a Linear programming problem. If only the coefficients for the shippers (1 for included, 0 for not included) were not (binary) integers for each leg, this would be more easily solveable. Now you need to find some (binary) integer linear programming (ILP) heuristics as the problem is NP-hard.\nSee Wikipedia on integer linear programming for links; on my linear programming course we used at least Branch and bound.\nActually now that I think of it, this special case is solveable without actual ILP as the amount of days does not matter as long as it is <= 5. Now start by choosing the cheapest carrier for first choice (Conway 5:1000). Next you choose yet again the cheapest, resulting 8 days and 4000 currency units which is too much so we abort that. By trying others too we see that they all results days > 5 so we back to first choice and try the second cheapest (FedEx 2:3000) and then ups in the second and fedex in the last. This gives us total of 4 days and 9000 currency units.\nWe then could use this cost to prune other searches in the tree that would by some subtree-stage result costs larger that the one we've found already and leave that subtree unsearched from that point on.\nThis only works as long as we can know that searching in the subtree will not produce a better results, as we do here when costs cannot be negative.\nHope this rambling helped a bit :). \n" ]
[ 8, 5, 5, 3, 2, 2 ]
[ "I think that Dijkstra's algorithm is for finding a shortest path. \ncmcculloh is looking for the minimal cost subject to the constraint that he gets it there in 5 days. \nSo, merely finding the quickest way won't get him there cheapest, and getting there for the cheapest, won't get it there in the required amount of time. \n" ]
[ -1 ]
[ "algorithm", "combinations", "np_complete", "php", "puzzle" ]
stackoverflow_0000014884_algorithm_combinations_np_complete_php_puzzle.txt
Q: Reduce startup time of .NET windows form app running off of a networked drive I have a simple .NET 2.0 windows form app that runs off of a networked drive (e.g. \MyServer\MyShare\app.exe). It's very basic, and only loads the bare minimum .NET libraries. However, it still takes ~6-10 seconds to load. People think something must be wrong when an app so small takes so long to load. Are there any suggestions for improving the startup speed? A: Try out Sysinternals Process Explorer. It has a column of "% time in JIT". If that number is large you could run ngen on your application. If it's not it's likely to be a slow network connection. CodeGuru has a tutorial on usage of ngen. A: To speed up load time, you can compile a tiny start application and let that application do the loading of assemblies at runtime from a library outside the bin folder. http://support.microsoft.com/kb/837908 A: Determining JIT time for weighing NGEN feasibility is certainly a good starting point. I also would agree with those who look to fudge the load time by using another entry point to then load the assemblies. Often it's the appearance of speed versus actual speed that improves the user experience. A: Set up ClickOnce for the app so it's deployed to the local machine. A: You could cheat like Microsoft Office (and Adobe I think) and add an app in the Startup group that tells the app to load and then immediately unload. That way the DLLs are pre-cached in memory for when the user tries to start the app. Only catch: I'm not completely sure if it works this way with networked files -- and if it doesn't, this might be the cause of the slow start (i.e. you're always doing a cold start vs a possible warm start if running from the local machine).
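A sketch of the "tiny start application" idea from the second answer: a small local stub EXE that hands off to the real executable on the share. The UNC path is illustrative, and note that under .NET 2.0 CAS policy an assembly loaded from a network share runs with reduced (Intranet) trust unless policy is adjusted:

    // Hypothetical stub loader: a tiny local EXE that loads the real
    // application assembly from the share at runtime and invokes its Main.
    using System;
    using System.Reflection;

    static class Launcher
    {
        [STAThread]
        static void Main()
        {
            Assembly app = Assembly.LoadFrom(@"\\MyServer\MyShare\app.exe");
            MethodInfo entry = app.EntryPoint;

            // Pass an empty string[] only if the real Main expects arguments.
            object[] args = entry.GetParameters().Length == 0
                ? null
                : new object[] { new string[0] };
            entry.Invoke(null, args);
        }
    }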
Reduce startup time of .NET windows form app running off of a networked drive
I have a simple .NET 2.0 windows form app that runs off of a networked drive (e.g. \MyServer\MyShare\app.exe). It's very basic, and only loads the bare minimum .NET libraries. However, it still takes ~6-10 seconds to load. People think something must be wrong when an app so small takes so long to load. Are there any suggestions for improving the startup speed?
[ "Try out Sysinternals Process Explorer. It has an column of \"% time in JIT\". If that number is large you could run ngen on your application. If it's not it's likely to be a slow network connection. CodeGuru has a tutorial on usage of ngen.\n", "To speed up load time, you can compile a tiny start application and let that application do the loading of assemblies in runtime from a library outside bin folder.\nhttp://support.microsoft.com/kb/837908\n", "Determining JIT time for weighing NGEN feasibility is certainly a good starting point. I also would agree with those who look to fudge the load time by using another entry point to then load the assemblies. Often it's the appearance of speed versus actual speed that improves the user experience.\n", "Setup Clickonce for the app so it's deployed to the local machine.\n", "You could cheat like Microsoft Office (and Adobe I think) and add an app in the Startup group that tells the app to load and then immediately unload. That way the DLL's are pre-cached in memory for when the user tries to start the app. Only catch: I'm not completely sure if it works this way with networked files -- and if it doesn't, this might be the cause of the slow start (ie you're always doing a cold start vs a possible warm start if running from the local machine).\n" ]
[ 5, 3, 3, 1, 0 ]
[]
[]
[ ".net", "performance", "startup" ]
stackoverflow_0000104815_.net_performance_startup.txt
Q: Rails Sessions over servers I'd like to have some rails apps over different servers sharing the same session. I can do it within the same server but don't know if it is possible to share over different servers. Has anyone done this already, or does anyone know how? Thanks A: Use the Database Session store. The short of it is this: To generate the table, at the console, run rake db:sessions:create in your environment.rb, include this line config.action_controller.session_store = :active_record_store A: Depending on how your app is set up, you can easily share cookies from sites in the same domain (foo.domain, bar.domain, domain) by setting your apps up to use the same secret: http://www.russellquinn.com/2008/01/30/multiple-rails-applications/ Now, if you have disparate sites, such as sdfsf.com, dsfsadfsdafdsaf.com, etc. you'll have to do a lot more tricks because the very nature of cookies restricts them to the specific domain. Essentially what you're trying to do is use cross-site scripting to, instead of hijacking your session, read it from the other ones. In that case, using a combination of the same cookie secret and some cross-site scripting, you can manually extract the session info and re-create it on each site (or if you use ActiveRecord session {or NFS session dir}, link up with the existing one). It's not easy, but it can be done. Or, the low-tech way (which I've done before) is simply to have the login page visit a specially crafted login page on each site that sets an app cookie on it and bounces you to the next one. It isn't pretty. A: Try using database-backed sessions. A: In Rails 2.0 there is now a CookieStore that stores all session data in an encrypted cookie on the client's machine. http://izumi.plan99.net/blog/index.php/2007/11/25/rails-20-cookie-session-store-and-security/
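As an illustration of the shared-secret approach in the second answer, each Rails 2.x app would carry the same session cookie name and secret in config/environment.rb. The key and secret values here are placeholders, and the apps must live under a common domain for the browser to send the cookie to all of them:

    # Inside the Rails::Initializer.run do |config| block of
    # config/environment.rb, identical in every application (Rails 2.x):
    config.action_controller.session = {
      :session_key => '_shared_session',   # same cookie name everywhere
      :secret      => 'one-long-random-string-shared-by-all-apps'
    }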
Rails Sessions over servers
I'd like to have some rails apps over different servers sharing the same session. I can do it within the same server but don't know if it is possible to share over different servers. Has anyone done this already, or does anyone know how? Thanks
[ "Use the Database Session store. The short of it is this:\n\nTo generate the table, at the console, run\nrake db:sessions:create\n\nin your environment.rb, include this line\nconfig.action_controller.session_store = :active_record_store\n\n\n", "Depending on how your app is set up, you can easily share cookies from sites in the same domain (foo.domain, bar.domain, domain) by setting your apps up to use the same secret:\nhttp://www.russellquinn.com/2008/01/30/multiple-rails-applications/\nNow, if you have disparate sites, such as sdfsf.com, dsfsadfsdafdsaf.com, etc. you'll have to do a lot more tricks because the very nature of cookies restricts them to the specific domain. Essentially what you're trying to do is use cross-site scripting to, instead of hijack your session, read it from the other ones.\nIn that case, a combination of using the same cookie secret etc and then some cross-site scripting you can manually extract the session info and re-create it on each site (or if you use ActiveRecord session {or NFS session dir}, link up with the existing one). It's not easy, but it can be done.\nOr, the low-tech way (which I've done before) is simply have the login page visit a specially crafted login page on each site that sets an app cookie on it and bounces you to the next one. It isn't pretty.\n", "Try using database-backed sessions.\n", "In Rails 2.0 there is now a CookieStore that stores all session data in an encrypted cookie on the client's machine.\nhttp://izumi.plan99.net/blog/index.php/2007/11/25/rails-20-cookie-session-store-and-security/\n" ]
[ 6, 3, 0, 0 ]
[]
[]
[ "cross_server", "ruby_on_rails", "session" ]
stackoverflow_0000104837_cross_server_ruby_on_rails_session.txt
Q: Is there a .NET performance counter to show the rate of p/invoke calls being made? Is there a .NET performance counter to show the rate of p/invoke calls made? I've just noticed that the application I'm debugging was making a call into native code from managed land within a tight loop. The intended implementation was for a p/invoke call to be made once and then cached. I'm wondering if I could have noticed this mistake via a CLR Interop or Remoting .NET performance counter. Any ideas? A: Try the "# of marshalling" counter in the ".NET CLR Interop" category. See this article for more: http://msdn.microsoft.com/en-us/library/ms998551.aspx.
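A hedged sketch of reading that counter from code with System.Diagnostics; the instance name ("MyApp" here) is a placeholder that must match the process being measured, and since the counter is a running total, sampling it before and after the suspect code path gives you the rate:

    using System;
    using System.Diagnostics;

    class InteropCounterCheck
    {
        static void Main()
        {
            // Category and counter names as given in the answer above.
            PerformanceCounter counter = new PerformanceCounter(
                ".NET CLR Interop", "# of marshalling", "MyApp");

            long before = counter.RawValue;
            // ... exercise the suspect code path here ...
            long after = counter.RawValue;

            Console.WriteLine("Marshalling calls made: " + (after - before));
        }
    }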
Is there a .NET performance counter to show the rate of p/invoke calls being made?
Is there a .NET performance counter to show the rate of p/invoke calls made? I've just noticed that the application I'm debugging was making a call into native code from managed land within a tight loop. The intended implementation was for a p/invoke call to be made once and then cached. I'm wondering if I could have noticed this mistake via a CLR Interop or Remoting .NET performance counter. Any ideas?
[ "Try the \".NET CLR Interop\" for \"# of marshalling\" performance counter.\nSee this article for more http://msdn.microsoft.com/en-us/library/ms998551.aspx.\n" ]
[ 2 ]
[]
[]
[ ".net", "analysis", "interop", "performance", "pinvoke" ]
stackoverflow_0000104920_.net_analysis_interop_performance_pinvoke.txt
Q: Dealing with the rate of change in software development I am primarily a .NET developer, and in that sphere alone there are at any given time probably close to a dozen fascinating emerging technologies, some of them real game-changers, that I would love to delve into. Sadly, this appears to be beyond the limits of human capacity. I read an article by Rocky Lhotka (.NET legend, inventor of CSLA, etc) where he mentioned, almost in passing, that last year he felt very terribly overwhelmed by the rate of change. He made it sound like maybe it wasn't possible to stay on the bleeding edge anymore, that maybe he wasn't going to try so hard because it was futile. It was a surprise to me that true geniuses like Lhotka (who are probably expected to devote a great deal of their time to playing with the latest technology and should be able to pick things up quickly) also feel the burn! So, how do you guys deal with this? Do you just chalk it up to the fact that development is vast, and it's more important to be able to find things quickly than to learn it all? Or do you have a continuing education strategy that actually allows you to stay close to the cutting edge? A: I have been in IT for 30 years now, so perhaps I can offer some perspective. Yes, there is an increasing amount of material to keep abreast of. But the rate of change (as in "progress") is not increasing - if anything, it is decreasing. What we are seeing is a widening of the field. Take a simple example: Once upon a time there was HTML/1. Then came HTML/2 and that was progress. Now we have HTML/4, HTML/5, XHTML/1, Flash, Silverlight, and on and on. Any one of these is progress, but each is progress in a different direction and all are in active use. Stay on top of this? Forget it - it's not possible. On the other hand, good IT folks can pick up a new language or a new technology in a few weeks at most - no big deal. Try to pick out the genuinely new ideas and learn about them. Ignore all the specific technologies (IIS 7, SQL Server 2008, etc.) unless and until you need them. Continuing the Internet as an example, the last real innovation was the ideas behind Web 2.0. I took the opportunity to learn Ruby at the same time - did a couple of small, throw-away projects in Ruby on Rails. If a project in this area comes along, the ideas will be the same in whatever environment. One does occasionally get frustrated. It's not always easy to pick out the truly new ideas amidst all the marketing hype. All the best... Brad A: Attend conferences and local user group meetings, get on twitter and start following a bunch of folks. Join or start up a mailing list (google groups is my favorite provider, Yahoo groups aren't half bad either) in your area to discuss issues. Propose a talk at your local DNUG to have someone do a quick overview of all these new technologies or maybe have an open discussion/lightning talk where people stand up and give 5-10 minutes on their favorite new technology. In short: Get out there and talk and share with people. It's the only way you'll stay on top of everything. You can't do it by yourself unless you don't sleep and don't work. A: I find myself worrying about missing the boat on something from time to time, but when I actually sit down and learn some hot new technology I find that it's primarily a new combination of fundamental technologies I've already seen. My approach is to make sure I have a good grasp of algorithms, data structures, communication protocols, some hardware knowledge and general engineering skills.
A: It is tough not to be tempted to want to learn it all, but I try not to jump into anything that is 'too new'. I seem to end up with a lot of frustration and not a lot of sources out there to help. While someone does have to take the dive head first, and I respect those people (I guess that's the life of a beta tester), I just do not think that responsibility falls on everyone. But if you have the time and patience, then diving into something new can be a lot of fun. I guess it's not a direct answer to your question but I hope it gives you something to think about. A: I say just pick a facet of the development landscape that fascinates you and delve into that. For example, if you enjoy dealing with distributed systems, start reading up on WCF and becoming an expert on it. I don't think it's possible to be familiar with everything aside from a casual understanding of the technology. Far better to specialize instead of becoming a jack of all trades, but master of none. A: Since I can never find time to go and dabble or play with new technologies, typically I choose one based on some small amount of information - maybe an article, maybe a recommendation of a friend - and then I force myself to use the new technology in a project that I'm working on. That's how I got into the current process I'm in of learning SCSF and CAB. It can be painful, and even slow at the start since you have to run up the curve, but in the end it typically works in your favor (provided the technology you chose gives benefits). That's how I learned LINQ, Generics and just about everything else. Choose a technology that purports to solve the problem you have better than the way you know and then force yourself to implement it that way.
Dealing with the rate of change in software development
I am primarily a .NET developer, and in that sphere alone there are at any given time probably close to a dozen fascinating emerging technologies, some of them real game-changers, that I would love to delve into. Sadly, this appears to be beyond the limits of human capacity. I read an article by Rocky Lhotka (.NET legend, inventor of CSLA, etc) where he mentioned, almost in passing, that last year he felt very terribly overwhelmed by the rate of change. He made it sound like maybe it wasn't possible to stay on the bleeding edge anymore, that maybe he wasn't going to try so hard because it was futile. It was a surprise to me that true geniuses like Lhotka (who are probably expected to devote a great deal of their time to playing with the latest technology and should be able to pick things up quickly) also feel the burn! So, how do you guys deal with this? Do you just chalk it up to the fact that development is vast, and it's more important to be able to find things quickly than to learn it all? Or do you have a continuing education strategy that actually allows you to stay close to the cutting edge?
[ "I have been in IT for 30 years now, so perhaps I can offer some perspective. Yes, there is an increasing amount of material to keep abreast of. But the rate of change (as in \"progress\") is not increasing - if anything, it is decreasing. What we are seeing is a widening of the field.\nTake a simple example: Once upon a time there was HTML/1. Then came HTML/2 and that was progress. Now we have HTML/4, HTML/5, XHTML/1, Flash, Silverlight, and on and on. Any one of these is progress, but each is progress in a different direction and all are in active use.\nStay on top of this? Forget it - it's not possible. On the other hand, good IT folks can pick up a new language or a new technology in a few weeks at most - no big deal. Try to pick out the genuinely new ideas and learn about them. Ignore all the specific technologies (IIS 7, SQL Server 2008, etc.) unless and until you need them.\nContinuing the Internet as an example, the last real innovation were the ideas behind Web 2.0. I took the opportunity to learn Ruby at the same time - did a couple of small, throw-away projects in Ruby on Rails. If a project in this area comes along, the ideas will be the same in whatever environment.\nOne does occasionally get frustrated. It's not always easy to pick out the truly new ideas amidst all the marketing hype.\nAll the best...\nBrad\n", "Attend conferences and local user group meetings, get on twitter and start following a bunch of folks. Join or start up a mailing list (google groups is my favorite provider, Yahoo groups aren't half bad either) in your area to discuss issues.\nPropose a talk at your local DNUG to have someone do a quick overview of all these new technologies or maybe have an open discussion/lightning talk where people stand up and give 5-10 minutes on their favorite new technology.\nIn short: Get out there and talk and share with people. It's the only way you'll stay on top of everything. You can't do it by yourself unless you don't sleep and don't work.\n", "I find myself worrying about missing the boat on something from time to time but when I actually sit down and learn some hot new technology I find that it's primarily a new combination of fundamental technologies I've already seen. \nMy appoach is to make sure I have a good grasp of algorithms, data structures, communication protocols, some hardware knowledge and general engineering skills. \n", "It is tough not to be tempted to want to learn it all, but I try not to jump into anything that is 'too new' I seem to end up with a lot of frustration with not a lot of sources out there to help. While someone does have to take the dive head first and I respect those people (I guess that's the life of a beta tester) I just do not think that responsibility falls on everyone. But if you have the time, and patience then diving into something new can be a lot of fun. I guess its not a direct answer to your question but I hope it gives you something to think about.\n", "I say just pick a facet of the development landscape that fascinates you and delve into that. For example, if you enjoy dealing with distributed systems, start reading up on WCF and becoming an expert on it.\nI don't think it's possible to be familiar with everything aside from a casual understanding of the technology. 
Far better to specialize instead of becoming a jack of all trades, but master of none.\n", "Since I can never find time to go and dabble or play with new technologies, typically I choose one based on some small amount of information - maybe an article, maybe a recommendation of a friend - and then I force myself to use the new technology in a project that I'm working on. That how got into the current process I'm in of learning SCSF and CAB. It can be painful, and even slow at the start since you have to run up the curve, in the end it typically works in your favor (provided the technology you chose gives benefits). That's how I learned LINQ, Generics and just about everything else. Choose a technology that purports to solve the problem you have better than the way you know and then force yourself to implement it that way.\n" ]
[ 9, 4, 3, 2, 1, 1 ]
[]
[]
[ "language_agnostic" ]
stackoverflow_0000104890_language_agnostic.txt
Q: MSI installer: Adding multiple properties to SecureCustomProperties I'm looking for a way to add multiple properties to the SecureCustomProperties value in my .msi installer's property table. I've tried comma delimiting, semi-colon delimiting, and even space delimiters. None of the above seem to work. Hints? A: Ok, so I was almost there ... semi-colon delimited with NO SPACES. This appears to do the trick.
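For anyone authoring the package with WiX rather than editing the Property table by hand, the same list can be produced by marking each public property secure; WiX then appends it to the semicolon-delimited SecureCustomProperties value at build time. A sketch with placeholder property names (the elements belong inside your Product or Fragment):

    <!-- Each Secure="yes" property is added to SecureCustomProperties,
         producing a row like: SecureCustomProperties = TARGETHOST;LICENSEKEY -->
    <Property Id="TARGETHOST" Secure="yes" />
    <Property Id="LICENSEKEY" Secure="yes" />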
MSI installer: Adding multiple properties to SecureCustomProperties
I'm looking for a way to add multiple properties to the SecureCustomProperties value in my .msi installer's property table. I've tried comma delimiting, semi-colon delimiting, and even space delimiters. None of the above seem to work. Hints?
[ "Ok, so I was almost there ... semi-colon delimited with NO SPACES. This appears to do the trick.\n" ]
[ 7 ]
[]
[]
[ "delimiter", "installation", "windows", "windows_installer" ]
stackoverflow_0000105024_delimiter_installation_windows_windows_installer.txt
Q: What is the best way to synchronize a collection of objects between various threads in .Net? What is the best way to synchronize a collection of objects between various threads in .Net? I need to have a List or Dictionary accessed from different threads in a thread-safe mode. With Adds, Removes, Foreachs, etc. A: Basically it depends on the pattern you need to use. If you have several threads writing and reading the same place you can use the same data structure that you would have used with a single thread (hashtable, array, etc.) with a lock/monitor or a ReaderWriterLock to prevent race conditions. In case you need to pass data between threads you'll need some kind of queue (synced or lockfree) that thread(s) of group A would insert to and thread(s) of group B would dequeue from. You might want to use a wait event (AutoResetEvent or ManualResetEvent) so that you won't lose CPU when the queue is empty. It really depends on what kind of workflow you want to achieve. A: You could implement a lock-free queue: http://www.boyet.com/Articles/LockfreeQueue.html Or handle the synchronization yourself using locks: http://www.albahari.com/threading/part2.html#_Locking A: Hashtable.Synchronized method returns a synchronized (thread safe) wrapper for the Hashtable. http://msdn.microsoft.com/en-us/library/system.collections.hashtable.synchronized(VS.80).aspx This also exists for other collections. A: A number of the collection classes in .Net have built-in support for synchronizing and making access from multiple threads safe. For example (in C++/CLR): Collections::Queue ^unsafe_queue = gcnew Collections::Queue(); Collections::Queue ^safe_queue = Collections::Queue::Synchronized(unsafe_queue); You can throw away the reference to unsafe_queue, and keep the reference to safe_queue. It can be shared between threads, and you're guaranteed thread safe access. Other collection classes, like ArrayList and Hashtable, also support this, in a similar manner. A: Without knowing specifics, I'd lean towards delegates and events to notify of changes. http://msdn.microsoft.com/en-us/library/17sde2xt(VS.71).aspx And implementing the Observer or Publish Subscribe pattern http://en.wikipedia.org/wiki/Observer_pattern http://msdn.microsoft.com/en-us/library/ms978603.aspx
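A short C# sketch of the Hashtable.Synchronized answer. One documented caveat is worth showing: even the synchronized wrapper does not make enumeration thread-safe, so a foreach still needs an explicit lock on SyncRoot:

    using System;
    using System.Collections;

    class SynchronizedHashtableDemo
    {
        static void Main()
        {
            // The wrapper synchronizes individual reads and writes.
            Hashtable table = Hashtable.Synchronized(new Hashtable());
            table["key"] = "value";

            // Enumeration is not synchronized even on the wrapper, so the
            // MSDN-documented pattern is to lock SyncRoot around the loop.
            lock (table.SyncRoot)
            {
                foreach (DictionaryEntry entry in table)
                {
                    Console.WriteLine(entry.Key + " = " + entry.Value);
                }
            }
        }
    }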
What is the best way to synchronize a collection of objects between various threads in .Net?
What is the best way to synchronize a collection of objects between various threads in .Net? I need to have a List or Dictionary accessed from different threads in a thread-safe mode. With Adds, Removes, Foreachs, etc.
[ "Basically it depends on the pattern you need to use.\nIf you have several threads writing and reading the same place you can use the same data structure that you would have used with a single thread (hastable, array, etc.) with a lock/monitor or a ReaderWriterLock to prevent race conditions.\nIn case you need to pass data between threads you'll need some kind of queue (synced or lockfree) that thread(s) of group A would insert to and thread(s) of group B would deque from. You might want to use WaitEvent (AutoReset or Manual) so that you won't loose CPU when the queue is empty. \nIt really depends on what kind of workflow you want to achieve.\n", "You could implement a lock-free queue:\nhttp://www.boyet.com/Articles/LockfreeQueue.html\nOr handle the synchronization yourself using locks:\nhttp://www.albahari.com/threading/part2.html#_Locking\n", "Hashtable.Synchronized method returns a synchronized (thread safe) wrapper for the Hashtable.\nhttp://msdn.microsoft.com/en-us/library/system.collections.hashtable.synchronized(VS.80).aspx\nThis also exists for other collections.\n", "A number of the collection classes in .Net have built in support for synchronizing and making access from multiple threads safe. For example (in C++/CLR):\n\n Collections::Queue ^unsafe_queue = gcnew Collections::Queue();\n Collections::Queue ^safe_queue = Collections::Queue::Synchronized(unsafe_queue);\n\nYou can throw away the reference to unsafe_queue, and keep the reference to safe_queue. It can be shared between threads, and you're guaranteed thread safe access. Other collection classes, like ArrayList and Hashtable, also support this, in a similar manner.\n", "Without knowing specifics, I'd lean towards delegates and events to notify of changes. \nhttp://msdn.microsoft.com/en-us/library/17sde2xt(VS.71).aspx\nAnd implementing the Observer or Publish Subscribe pattern \nhttp://en.wikipedia.org/wiki/Observer_pattern\nhttp://msdn.microsoft.com/en-us/library/ms978603.aspx\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "multithreading" ]
stackoverflow_0000094204_multithreading.txt
Q: ResultSet not closed when connection closed? I've been doing code review (mostly using tools like FindBugs) of one of our pet projects and FindBugs marked the following code as erroneous (pseudocode): Connection conn = dataSource.getConnection(); try{ PreparedStatement stmt = conn.prepareStatement(); //initialize the statement stmt.execute(); ResultSet rs = stmt.getResultSet(); //get data }finally{ conn.close(); } The error was that this code might not release resources. I figured out that the ResultSet and Statement were not closed, so I closed them in finally: finally{ try{ rs.close() }catch(SqlException se){ //log it } try{ stmt.close(); }catch(SqlException se){ //log it } conn.close(); } But I encountered the above pattern in many projects (from quite a few companies), and no one was closing ResultSets or Statements. Did you have trouble with ResultSets and Statements not being closed when the Connection is closed? I found only this and it refers to Oracle having problems with closing ResultSets when closing Connections (we use an Oracle db, hence my corrections). The java.sql API says nothing in the Connection.close() javadoc. A: One problem with ONLY closing the connection and not the result set, is that if your connection management code is using connection pooling, the connection.close() would just put the connection back in the pool. Additionally, some databases have a cursor resource on the server that will not be freed properly unless it is explicitly closed. A: I've had problems with unclosed ResultSets in Oracle, even though the connection was closed. The error I got was "ORA-01000: maximum open cursors exceeded" So: Always close your ResultSet! A: You should always close all JDBC resources explicitly. As Aaron and John already said, closing a connection will often only return it to a pool and not all JDBC drivers are implemented exactly the same way. Here is a utility method that can be used from a finally block: public static void closeEverything(ResultSet rs, Statement stmt, Connection con) { if (rs != null) { try { rs.close(); } catch (SQLException e) { } } if (stmt != null) { try { stmt.close(); } catch (SQLException e) { } } if (con != null) { try { con.close(); } catch (SQLException e) { } } } A: Oracle will give you errors about open cursors in this case. According to: http://java.sun.com/javase/6/docs/api/java/sql/Statement.html it looks like reusing a statement will close any open resultsets, and closing a statement will close any resultsets, but I don't see anything saying that closing a connection will close any of the resources it created. All of those details are left to the JDBC driver provider. It's always safest to close everything explicitly. We wrote a util class that wraps everything with try { xxx } catch (Throwable t) {} so that you can just call Utils.close(rs) and Utils.close(stmt), etc. without having to worry about exceptions that close can supposedly throw. A: The ODBC Bridge can produce a memory leak with some ODBC drivers. If you use a good JDBC driver then you should not have any problems with closing the connection. But there are 2 problems: Do you know if you have a good driver? Will you use other JDBC drivers in the future? So the best practice is to close it all. A: I work in a large J2EE web environment. We have several databases that may be connected to in a single request. We began getting logical deadlocks in some of our applications. 
The issue was as follows: User would request page Server connects to DB 1 Server Selects on DB 1 Server "closes" connection to DB 1 Server connects to DB 2 Deadlocked! This occurred for 2 reasons: we were experiencing far higher volume of traffic than normal, and the J2EE Spec by default does not actually close your connection until the thread finishes execution. So, in the above example step 4 never actually closed the connection even though it was closed properly in finally. To fix this, you have to use resource references in the web.xml for your Database Connections and you have to set the res-sharing-scope to Unshareable. Example: <resource-ref> <description>My Database</description> <res-ref-name>jdbc/jndi/pathtodatasource</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> <res-sharing-scope>Unshareable</res-sharing-scope> </resource-ref> A: I've definitely seen problems with unclosed ResultSets, and what can it hurt to close them all the time, right? The unreliability of needing to remember to do this is one of the best reasons to move to frameworks that manage these details for you. It might not be feasible in your development environment, but I've had great luck using Spring to manage JPA transactions. The messy details of opening connections, statements, result sets, and writing over-complicated try/catch/finally blocks (with try/catch blocks in the finally block!) to close them again just disappear, leaving you to actually get some work done. I'd highly recommend migrating to that kind of a solution. A: In Java, Statements (not Resultsets) correlate to Cursors in Oracle. It is best to close the resources that you open, as unexpected behavior can occur in regards to the JVM and system resources. Additionally, some JDBC pooling frameworks pool Statements and Connections, so not closing them might not mark those objects as free in the pool, and cause performance issues in the framework. In general, if there is a close() or destroy() method on an object, there's a reason to call it, and to ignore it is done at your own peril.
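Several answers in this thread stress closing the ResultSet, Statement, and Connection explicitly and in that order. As a hedged aside for readers on Java 7 or later, the same closing discipline can be expressed with try-with-resources, which closes the declared resources in reverse order even when an exception is thrown; the DataSource and query below are assumed placeholders, not anything from the thread.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class JdbcCloseSketch {
        // ds is assumed to be configured elsewhere (e.g. a connection pool).
        static void readData(DataSource ds) throws SQLException {
            // rs, stmt, and conn are closed in reverse order of declaration
            // at the end of the block, whether or not an exception is thrown.
            try (Connection conn = ds.getConnection();
                 PreparedStatement stmt = conn.prepareStatement("SELECT 1");
                 ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    // get data
                }
            }
        }
    }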
ResultSet not closed when connection closed?
I've been doing code review (mostly using tools like FindBugs) of one of our pet projects and FindBugs marked the following code as erroneous (pseudocode): Connection conn = dataSource.getConnection(); try{ PreparedStatement stmt = conn.prepareStatement(); //initialize the statement stmt.execute(); ResultSet rs = stmt.getResultSet(); //get data }finally{ conn.close(); } The error was that this code might not release resources. I figured out that the ResultSet and Statement were not closed, so I closed them in finally: finally{ try{ rs.close() }catch(SqlException se){ //log it } try{ stmt.close(); }catch(SqlException se){ //log it } conn.close(); } But I encountered the above pattern in many projects (from quite a few companies), and no one was closing ResultSets or Statements. Did you have trouble with ResultSets and Statements not being closed when the Connection is closed? I found only this and it refers to Oracle having problems with closing ResultSets when closing Connections (we use an Oracle db, hence my corrections). The java.sql API says nothing in the Connection.close() javadoc.
[ "One problem with ONLY closing the connection and not the result set, is that if your connection management code is using connection pooling, the connection.close() would just put the connection back in the pool. Additionally, some database have a cursor resource on the server that will not be freed properly unless it is explicitly closed.\n", "I've had problems with unclosed ResultSets in Oracle, even though the connection was closed. The error I got was \n\"ORA-01000: maximum open cursors exceeded\"\n\nSo: Always close your ResultSet!\n", "You should always close all JDBC resources explicitly. As Aaron and John already said, closing a connection will often only return it to a pool and not all JDBC drivers are implemented exact the same way.\nHere is a utility method that can be used from a finally block:\npublic static void closeEverything(ResultSet rs, Statement stmt,\n Connection con) {\n if (rs != null) {\n try {\n rs.close();\n } catch (SQLException e) {\n }\n }\n if (stmt != null) {\n try {\n stmt.close();\n } catch (SQLException e) {\n }\n }\n if (con != null) {\n try {\n con.close();\n } catch (SQLException e) {\n }\n }\n}\n\n", "Oracle will give you errors about open cursors in this case.\nAccording to: http://java.sun.com/javase/6/docs/api/java/sql/Statement.html\nit looks like reusing a statement will close any open resultsets, and closing a statement will close any resultsets, but i don't see anything about closing a connection will close any of the resources it created.\nAll of those details are left to the JDBC driver provider. \nIts always safest to close everything explicitly. We wrote a util class that wraps everything with try{ xxx } catch (Throwable {} so that you can just call Utils.close(rs) and Utils.close(stmt), etc without having to worry about exceptions that close scan supposedly throw.\n", "The ODBC Bridge can produce a memory leak with some ODBC drivers.\nIf you use a good JDBC driver then you should does not have any problems with closing the connection. But there are 2 problems:\n\nDoes you know if you have a good driver?\nWill you use other JDBC drivers in the future?\n\nThat the best practice is to close it all.\n", "I work in a large J2EE web environment. We have several databases that may be connected to in a single request. We began getting logical deadlocks in some of our applications. The issue was that as follows:\n\nUser would request page\nServer connects to DB 1\nServer Selects on DB 1\nServer \"closes\" connection to DB 1\nServer connects to DB 2\nDeadlocked!\n\nThis occurred for 2 reasons, we were experiencing far higher volume of traffic than normal and the J2EE Spec by default does not actually close your connection until the thread finishes execution. So, in the above example step 4 never actually closed the connection even though they were closed properly in finally .\nTo fix this, you you have to use resource references in the web.xml for your Database Connections and you have to set the res-sharing-scope to unsharable. \nExample:\n<resource-ref>\n <description>My Database</description>\n <res-ref-name>jdbc/jndi/pathtodatasource</res-ref-name>\n <res-type>javax.sql.DataSource</res-type>\n <res-auth>Container</res-auth>\n <res-sharing-scope>Unshareable</res-sharing-scope>\n</resource-ref>\n\n", "I've definitely seen problems with unclosed ResultSets, and what can it hurt to close them all the time, right? 
The unreliability of needing to remembering to do this is one of the best reasons to move to frameworks that manage these details for you. It might not be feasible in your development environment, but I've had great luck using Spring to manage JPA transactions. The messy details of opening connections, statements, result sets, and writing over-complicated try/catch/finally blocks (with try/catch blocks in the finally block!) to close them again just disappears, leaving you to actually get some work done. I'd highly recommend migrating to that kind of a solution.\n", "In Java, Statements (not Resultsets) correlate to Cursors in Oracle. It is best to close the resources that you open as unexpected behavior can occur in regards to the JVM and system resources.\nAdditionally, some JDBC pooling frameworks pool Statements and Connections, so not closing them might not mark those objects as free in the pool, and cause performance issues in the framework.\nIn general, if there is a close() or destroy() method on an object, there's a reason to call it, and to ignore it is done so at your own peril.\n" ]
[ 52, 29, 19, 9, 8, 8, 4, 4 ]
[]
[]
[ "findbugs", "java", "jdbc" ]
stackoverflow_0000103938_findbugs_java_jdbc.txt
Q: SQL Query Help: Transforming Dates In A Non-Trivial Way I have a table with a "Date" column, and I would like to do a query that does the following: If the date is a Monday, Tuesday, Wednesday, or Thursday, the displayed date should be shifted up by 1 day, as in DATEADD(day, 1, [Date]) On the other hand, if it is a Friday, the displayed date should be incremented by 3 days (i.e. so it becomes the following Monday). How do I do this in my SELECT statement? As in, SELECT somewayofdoingthis([Date]) FROM myTable (This is SQL Server 2000.) A: Here is how I would do it. I do recommend a function like above if you will be using this in other places. CASE WHEN DATEPART(dw, [Date]) IN (2,3,4,5) THEN DATEADD(d, 1, [Date]) WHEN DATEPART(dw, [Date]) = 6 THEN DATEADD(d, 3, [Date]) ELSE [Date] END AS [ConvertedDate] A: CREATE FUNCTION dbo.GetNextWDay(@Day datetime) RETURNS DATETIME AS BEGIN DECLARE @ReturnDate DateTime set @ReturnDate = dateadd(dd, 1, @Day) if (select datename(@ReturnDate))) = 'Saturday' set @ReturnDate = dateadd(dd, 2, @ReturnDate) if (select datename(@ReturnDate) = 'Sunday' set @ReturnDate = dateadd(dd, 1, @ReturnDate) RETURN @ReturnDate END A: Try select case when datepart(dw,[Date]) between 2 and 5 then DATEADD(dd, 1, [Date]) when datepart(dw,[Date]) = 6 then DATEADD(dd, 3, [Date]) else [Date] end as [Date] A: I'm assuming that you also want Saturday and Sunday to shift forward to the following Monday. If that is not the case, take the 1 out of (1,2,3,4,5) and remove the last when clause. case --Sunday thru Thursday are shifted forward 1 day when datepart(weekday, [Date]) in (1,2,3,4,5) then dateadd(day, 1, [Date]) --Friday is shifted forward to Monday when datepart(weekday, [Date]) = 6 then dateadd(day, 3, [Date]) --Saturday is shifted forward to Monday when datepart(weekday, [Date]) = 7 then dateadd(day, 2, [Date]) end You can also do it in one line: select dateadd(day, 1 + (datepart(weekday, [Date])/6) * (8-datepart(weekday, [Date])), [Date]) A: Sounds like a CASE expression. I don't know the proper data manipulations for SQL Server, but basically it would look like this: CASE WHEN [Date] is a Friday THEN DATEADD( day, 3, [Date] ) ELSE DATEADD( day, 1, [Date] ) END If you wanted to check for weekend days you could add additional WHEN clauses before the ELSE. A: This is off the top of my head and can be clearly cleaned up but use it as a starting point: select case when DATENAME(dw, [date]) = 'Monday' then DATEADD(dw, 1, [Date]) when DATENAME(dw, [date]) = 'Tuesday' then DATEADD(dw, 1, [Date]) when DATENAME(dw, [date]) = 'Wednesday' then DATEADD(dw, 1, [Date]) when DATENAME(dw, [date]) = 'Thursday' then DATEADD(dw, 1, [Date]) when DATENAME(dw, [date]) = 'Friday' then DATEADD(dw, 3, [Date]) end as nextDay ... A: you could use this: select dayname,newdayname = CASE dayname WHEN 'Monday' THEN 'Tuesday' WHEN 'Tuesday' THEN 'Wednesday' WHEN 'Wednesday' THEN 'Thursday' WHEN 'Thursday' THEN 'Friday' WHEN 'Friday' THEN 'Monday' WHEN 'Saturday' THEN 'Monday' WHEN 'Sunday' THEN 'Monday' END FROM UDO_DAYS results: Monday Tuesday Tuesday Wednesday Wednesday Thursday Thursday Friday Friday Monday Saturday Monday Sunday Monday table data: Monday Tuesday Wednesday Thursday Friday Saturday Sunday A: Look up the CASE statement and the DATEPART statement. You will want to use the dw argument with DATEPART to get back an integer that represents the day of week. A: How about taking a page from the Data Warehouse guys and make a table. In DW terms, this would be a date dimension. 
A standard date dimension would have things like various names for a date ("MON", "Monday", "August 22, 1998"), or indicators like end-of-month and start-of-month. However, you can also have columns that only make sense in your environment. For instance, based on the question, yours might have a next-work-day column that would point to the key for the day in question. That way you can customize it further to take into account holidays or other non-working days. The DW folks are adamant about using meaningless keys (that is, don't just use a truncated date as the key, use a generated key), but you can decide that for yourself. The Date Dimension Toolkit has code to generate your own tables in various DBMS and it has CSV data for several years worth of dates. A: you need to create a SQL Function that does this transformation for you. A: This is mostly like Brian's except it didn't compile due to mismatched parens and I changed the IF to not have the select in it. It is important to note that we use DateNAME here rather than datePART because datePART is dependent on the value set by SET DATEFIRST, which sets the first day of the week. CREATE FUNCTION dbo.GetNextWDay(@Day datetime) RETURNS DATETIME AS BEGIN DECLARE @ReturnDate DateTime set @ReturnDate = dateadd(dd, 1, @Day) if datename(dw, @ReturnDate) = 'Saturday' set @ReturnDate = dateadd(dd, 2, @ReturnDate) if datename(dw, @ReturnDate) = 'Sunday' set @ReturnDate = dateadd(dd, 1, @ReturnDate) RETURN @ReturnDate END
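To make the shifting rule in the thread above easy to sanity-check outside SQL Server, here is the same Monday-to-Thursday-plus-one / Friday-plus-three logic as a small Java sketch (java.time, so Java 8+). Rolling Saturday and Sunday forward to Monday is an assumption borrowed from the answers that chose to handle weekends; the question itself leaves those days unspecified.

    import java.time.DayOfWeek;
    import java.time.LocalDate;

    public class NextWorkingDaySketch {
        static LocalDate shift(LocalDate d) {
            switch (d.getDayOfWeek()) {
                case FRIDAY:   return d.plusDays(3); // Friday -> following Monday
                case SATURDAY: return d.plusDays(2); // assumption: roll to Monday
                case SUNDAY:   return d.plusDays(1); // assumption: roll to Monday
                default:       return d.plusDays(1); // Monday-Thursday -> next day
            }
        }

        public static void main(String[] args) {
            // 2008-09-19 was a Friday, so this prints the following Monday.
            System.out.println(shift(LocalDate.of(2008, 9, 19))); // 2008-09-22
        }
    }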
SQL Query Help: Transforming Dates In A Non-Trivial Way
I have a table with a "Date" column, and I would like to do a query that does the following: If the date is a Monday, Tuesday, Wednesday, or Thursday, the displayed date should be shifted up by 1 day, as in DATEADD(day, 1, [Date]) On the other hand, if it is a Friday, the displayed date should be incremented by 3 days (i.e. so it becomes the following Monday). How do I do this in my SELECT statement? As in, SELECT somewayofdoingthis([Date]) FROM myTable (This is SQL Server 2000.)
[ "Here is how I would do it. I do recommend a function like above if you will be using this in other places.\nCASE\nWHEN\n DATEPART(dw, [Date]) IN (2,3,4,5)\nTHEN\n DATEADD(d, 1, [Date])\nWHEN\n DATEPART(dw, [Date]) = 6\nTHEN\n DATEADD(d, 3, [Date])\nELSE\n [Date]\nEND AS [ConvertedDate]\n\n", "CREATE FUNCTION dbo.GetNextWDay(@Day datetime)\nRETURNS DATETIME\nAS\nBEGIN \n DECLARE @ReturnDate DateTime\n\n set @ReturnDate = dateadd(dd, 1, @Day)\n\n if (select datename(@ReturnDate))) = 'Saturday'\n set @ReturnDate = dateadd(dd, 2, @ReturnDate)\n\n if (select datename(@ReturnDate) = 'Sunday'\n set @ReturnDate = dateadd(dd, 1, @ReturnDate)\n\n RETURN @ReturnDate\nEND\n\n", "Try\nselect case when datepart(dw,[Date]) between 2 and 5 then DATEADD(dd, 1, [Date])\nwhen datepart(dw,[Date]) = 6 then DATEADD(dd, 3, [Date]) else [Date] end as [Date] \n\n", "I'm assuming that you also want Saturday and Sunday to shift forward to the following Monday. If that is not the case, take the 1 out of (1,2,3,4,5) and remove the last when clause.\ncase\n --Sunday thru Thursday are shifted forward 1 day\n when datepart(weekday, [Date]) in (1,2,3,4,5) then dateadd(day, 1, [Date]) \n --Friday is shifted forward to Monday\n when datepart(weekday, [Date]) = 6 then dateadd(day, 3, [Date])\n --Saturday is shifted forward to Monday\n when datepart(weekday, [Date]) = 7 then dateadd(day, 2, [Date])\nend\n\nYou can also do it in one line:\nselect dateadd(day, 1 + (datepart(weekday, [Date])/6) * (8-datepart(weekday, [Date])), [Date])\n\n", "Sounds like a CASE expression. I don't know the proper data manipulations for SQL Server, but basically it would look like this:\nCASE\n WHEN [Date] is a Friday THEN DATEADD( day, 3, [Date] )\n ELSE DATEADD( day, 1, [Date] )\nEND\n\nIf you wanted to check for weekend days you could add additional WHEN clauses before the ELSE.\n", "This is off the top of my head and can be clearly cleaned up but use it as a starting point:\nselect case when DATENAME(dw, [date]) = 'Monday' then DATEADD(dw, 1, [Date])\n when DATENAME(dw, [date]) = 'Tuesday' then DATEADD(dw, 1, [Date])\n when DATENAME(dw, [date]) = 'Wednesday' then DATEADD(dw, 1, [Date])\n when DATENAME(dw, [date]) = 'Thursday' then DATEADD(dw, 1, [Date])\n when DATENAME(dw, [date]) = 'Friday' then DATEADD(dw, 3, [Date])\n end as nextDay\n ...\n\n", "you could use this:\nselect dayname,newdayname =\n CASE dayname\n WHEN 'Monday' THEN 'Tuesday'\n WHEN 'Tuesday' THEN 'Wednesday'\n WHEN 'Wednesday' THEN 'Thursday'\n WHEN 'Thursday' THEN 'Friday'\n WHEN 'Friday' THEN 'Monday'\n WHEN 'Saturday' THEN 'Monday'\n WHEN 'Sunday' THEN 'Monday'\nEND\nFROM UDO_DAYS\n\n\nresults:\nMonday Tuesday\nTuesday Wednesday\nWednesday Thursday\nThursday Friday\nFriday Monday\nSaturday Monday\nSunday Monday\n\ntable data:\nMonday\nTuesday\nWednesday\nThursday\nFriday\nSaturday\nSunday\n\n", "Look up the CASE statement and the DATEPART statement. You will want to use the dw argument with DATEPART to get back an integer that represents the day of week.\n", "How about taking a page from the Data Warehouse guys and make a table. In DW terms, this would be a date dimension. A standard date dimension would have things like various names for a date (\"MON\", \"Monday\", \"August 22, 1998\"), or indicators like end-of-month and start-of-month. However, you can also have columns that only make sense in your environment.\nFor instance, based on the question, yours might have a next-work-day column that would point to the key for the day in question. 
That way you can customize it further to take into account holidays or other non-working days. \nThe DW folks are adamant about using meaningless keys (that is, don't just use a truncated date as the key, use a generated key), but you can decide that for yourself. \nThe Date Dimension Toolkit has code to generate your own tables in various DBMS and it has CSV data for several years worth of dates.\n", "you need to create a SQL Function that does this transformation for you. \n", "This is mostly like Brian's except it didn't compile due to mismatched parens and I changed the IF to not have the select in it. It is important to note that we use DateNAME here rather than datePART because datePART is dependent on the value set by SET DATEFIRST, which sets the first day of the week.\nCREATE FUNCTION dbo.GetNextWDay(@Day datetime)\nRETURNS DATETIME\nAS\nBEGIN\n DECLARE @ReturnDate DateTime\n\n set @ReturnDate = dateadd(dd, 1, @Day)\n if datename(dw, @ReturnDate) = 'Saturday'\n set @ReturnDate = dateadd(dd, 2, @ReturnDate)\n if datename(dw, @ReturnDate) = 'Sunday'\n set @ReturnDate = dateadd(dd, 1, @ReturnDate)\n RETURN @ReturnDate\nEND\n\n" ]
[ 5, 4, 2, 2, 1, 1, 1, 1, 1, 0, 0 ]
[ "create table #dates (dt datetime)\ninsert into #dates (dt) values ('1/1/2001')\ninsert into #dates (dt) values ('1/2/2001')\ninsert into #dates (dt) values ('1/3/2001')\ninsert into #dates (dt) values ('1/4/2001')\ninsert into #dates (dt) values ('1/5/2001')\n\n select\n dt, day(dt), dateadd(dd,1,dt)\n from\n #dates\n where\n day(dt) between 1 and 4\n\n union all\n\n select\n dt, day(dt), dateadd(dd,3,dt)\n from\n #dates\n where\n day(dt) = 5\n\n drop table #dates\n\n" ]
[ -2 ]
[ "date", "dateadd", "sql", "sql_server", "sql_server_2000" ]
stackoverflow_0000104330_date_dateadd_sql_sql_server_sql_server_2000.txt
Q: Cannot store load test results in a TFS 2005 results store I've set up a results store and when I publish results of a load test, I can't view the published test details. From the test run section of the build report I click on the published build and when I choose View Test Results Details from the Test Runs shortcut menu I get an error that the test results details cannot be viewed because the results were not stored in a results store. I've looked for the data in the results store database and I don't see anything there, so the error makes sense. I've set up a connection string to the results store in the Administer Test Controller dialog. Is this the only thing that needs to be done to get test results into the store? A: The reports aren't going to be available until after the data has been copied to the data warehouse. This can typically take up to one hour to do. See http://msdn.microsoft.com/en-us/library/ms404692(VS.80).aspx
Cannot store load test results in a TFS 2005 results store
I've set up a results store and when I publish results of a load test, I can't view the published test details. From the test run section of the build report I click on the published build and when I choose View Test Results Details from the Test Runs shortcut menu I get an error that the test results details cannot be viewed because the results were not stored in a results store. I've looked for the data in the results store database and I don't see anything there, so the error makes sense. I've set up a connection string to the results store in the Administer Test Controller dialog. Is this the only thing that needs to be done to get test results into the store?
[ "The reports aren't going to be available until after the data has been copied to the data warehouse. This can typically take up to one hour to do.\nSee http://msdn.microsoft.com/en-us/library/ms404692(VS.80).aspx\n" ]
[ 0 ]
[]
[]
[ "load_testing", "tfs", "visual_studio_2005" ]
stackoverflow_0000104494_load_testing_tfs_visual_studio_2005.txt
Q: Why do I get "java.net.BindException: Only one usage of each socket address" if netstat says something else? I start up my application which uses a Jetty server, using port 9000. I then shut down my application with Ctrl-C I check with "netstat -a" and see that the port 9000 is no longer being used. I restart my application and get: [ERROR,9/19 15:31:08] java.net.BindException: Only one usage of each socket address (protocol/network address/port) is normally permitted [TRACE,9/19 15:31:08] java.net.BindException: Only one usage of each socket address (protocol/network address/port) is normally permitted [TRACE,9/19 15:31:08] at java.net.PlainSocketImpl.convertSocketExceptionToIOException(PlainSocketImpl.java:75) [TRACE,9/19 15:31:08] at sun.nio.ch.Net.bind(Net.java:101) [TRACE,9/19 15:31:08] at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126) [TRACE,9/19 15:31:08] at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77) [TRACE,9/19 15:31:08] at org.mortbay.jetty.nio.BlockingChannelConnector.open(BlockingChannelConnector.java:73) [TRACE,9/19 15:31:08] at org.mortbay.jetty.AbstractConnector.doStart(AbstractConnector.java:285) [TRACE,9/19 15:31:08] at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) [TRACE,9/19 15:31:08] at org.mortbay.jetty.Server.doStart(Server.java:233) [TRACE,9/19 15:31:08] at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) [TRACE,9/19 15:31:08] at ... Is this a Java bug? Can I avoid it somehow before starting the Jetty server? Edit #1 Here is our code for creating our BlockingChannelConnector, note the "setReuseAddress(true)": connector.setReuseAddress( true ); connector.setPort( port ); connector.setStatsOn( true ); connector.setMaxIdleTime( 30000 ); connector.setLowResourceMaxIdleTime( 30000 ); connector.setAcceptQueueSize( maxRequests ); connector.setName( "Blocking-IO Connector, bound to host " + connector.getHost() ); Could it have something to do with the idle time? Edit #2 Next piece of the puzzle that may or may not help: when running the application in Debug Mode (Eclipse) the server starts up without a problem!!! But the problem described above occurs reproducibly when running the application in Run Mode or as a built jar file. Whiskey Tango Foxtrot? Edit #3 (4 days later) - still have the issue. Any thoughts? A: During your first invocation of your program, did it accept at least one incoming connection? If so then what you are most likely seeing is the socket linger in effect. For the best explanation dig up a copy of TCP/IP Illustrated by Stevens (source: kohala.com) But, as I understand it, because the application did not properly close the connection (that is BOTH client and server sent their FIN/ACK sequences) the socket you were listening on cannot be reused until the connection is considered dead, the so called 2MSL timeout. The value of 1 MSL can vary by operating system, but its usually a least a minute, and usually more like 5. The best advice I have heard to avoid this condition (apart from always closing all sockets properly on exit) is to set the SO_LINGER tcp option to 0 on your server socket during the listen() phase. As freespace pointed out, in java this is the setReuseAddress(true) method. A: You might want call setReuseAddress(true) before calling bind() on your socket object. This is caused by a TCP connection persisting even after the socket is closed. 
A: I'm not sure about Jetty, but I have noticed that sometimes Tomcat will not shut down cleanly on some of our Linux servers. In cases like that, Tomcat will restart but not be able to use the port in question because the previous instance is still bound to it. In such cases, we have to find the rogue process and explicitly kill -9 it before we restart Tomcat. I'm not sure if this is a java bug or specific to Tomcat or the JVM we're using. A: I must say I also thought that it's the usual issue solved by setReuseAddress(true). However, the error message in that case is usually something along the lines that the JVM can't bind to the port. I've never seen the posted error message before. Googling for it seems to suggest that another process is listening on one or more (but not all) network interfaces, and you request your process to bind to all interfaces, whereas it can bind to some (those that the other process isn't listening to) but not all of them. Just guessing here though...
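The advice in this thread - enable address reuse before binding - hinges on the fact that the one-argument ServerSocket constructor binds immediately, so the option must be set on an unbound socket first. A minimal, hedged sketch of that sequence in plain Java (port 9000 as in the question; this helps with the TIME_WAIT rebind case, not with a port genuinely held by another process):

    import java.net.InetSocketAddress;
    import java.net.ServerSocket;

    public class ReuseAddressSketch {
        public static void main(String[] args) throws Exception {
            ServerSocket server = new ServerSocket(); // created unbound on purpose
            server.setReuseAddress(true);             // must happen before bind()
            server.bind(new InetSocketAddress(9000)); // now bind to the port
            System.out.println("listening on " + server.getLocalPort());
            server.close();
        }
    }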
Why do I get "java.net.BindException: Only one usage of each socket address" if netstat says something else?
I start up my application which uses a Jetty server, using port 9000. I then shut down my application with Ctrl-C I check with "netstat -a" and see that the port 9000 is no longer being used. I restart my application and get: [ERROR,9/19 15:31:08] java.net.BindException: Only one usage of each socket address (protocol/network address/port) is normally permitted [TRACE,9/19 15:31:08] java.net.BindException: Only one usage of each socket address (protocol/network address/port) is normally permitted [TRACE,9/19 15:31:08] at java.net.PlainSocketImpl.convertSocketExceptionToIOException(PlainSocketImpl.java:75) [TRACE,9/19 15:31:08] at sun.nio.ch.Net.bind(Net.java:101) [TRACE,9/19 15:31:08] at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126) [TRACE,9/19 15:31:08] at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77) [TRACE,9/19 15:31:08] at org.mortbay.jetty.nio.BlockingChannelConnector.open(BlockingChannelConnector.java:73) [TRACE,9/19 15:31:08] at org.mortbay.jetty.AbstractConnector.doStart(AbstractConnector.java:285) [TRACE,9/19 15:31:08] at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) [TRACE,9/19 15:31:08] at org.mortbay.jetty.Server.doStart(Server.java:233) [TRACE,9/19 15:31:08] at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) [TRACE,9/19 15:31:08] at ... Is this a Java bug? Can I avoid it somehow before starting the Jetty server? Edit #1 Here is our code for creating our BlockingChannelConnector, note the "setReuseAddress(true)": connector.setReuseAddress( true ); connector.setPort( port ); connector.setStatsOn( true ); connector.setMaxIdleTime( 30000 ); connector.setLowResourceMaxIdleTime( 30000 ); connector.setAcceptQueueSize( maxRequests ); connector.setName( "Blocking-IO Connector, bound to host " + connector.getHost() ); Could it have something to do with the idle time? Edit #2 Next piece of the puzzle that may or may not help: when running the application in Debug Mode (Eclipse) the server starts up without a problem!!! But the problem described above occurs reproducibly when running the application in Run Mode or as a built jar file. Whiskey Tango Foxtrot? Edit #3 (4 days later) - still have the issue. Any thoughts?
[ "During your first invocation of your program, did it accept at least one incoming connection? If so then what you are most likely seeing is the socket linger in effect. \nFor the best explanation dig up a copy of TCP/IP Illustrated by Stevens \n\n(source: kohala.com) \nBut, as I understand it, because the application did not properly close the connection (that is BOTH client and server sent their FIN/ACK sequences) the socket you were listening on cannot be reused until the connection is considered dead, the so called 2MSL timeout. The value of 1 MSL can vary by operating system, but its usually a least a minute, and usually more like 5. \nThe best advice I have heard to avoid this condition (apart from always closing all sockets properly on exit) is to set the SO_LINGER tcp option to 0 on your server socket during the listen() phase. As freespace pointed out, in java this is the setReuseAddress(true) method.\n", "You might want call setReuseAddress(true) before calling bind() on your socket object. This is caused by a TCP connection persisting even after the socket is closed.\n", "I'm not sure about Jetty, but I have noticed that sometimes Tomcat will not shut down cleanly on some of our Linux servers. In cases like that, Tomcat will restart but not be able to use the port in question because the previous instance is still bound to it. In such cases, we have to find the rogue process and explicitly kill -9 it before we restart Tomcat. I'm not sure if this is a java bug or specific to Tomcat or the JVM we're using.\n", "I must say I also thought that it's the usual issue solved by setReuseAddress(true). However, the error message in that case is usually something along the lines that the JVM can't bind to the port. I've never seen the posted error message before. Googling for it seems to suggest that another process is listening on one or more (but not all) network interfaces, and you request your process to bind to all interfaces, whereas it can bind to some (those that the other process isn't listening to) but not all of them. Just guessing here though...\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "java", "jetty", "sockets" ]
stackoverflow_0000101880_java_jetty_sockets.txt
Q: Testing Abstract Class Concrete Methods How would I design and organize tests for the concrete methods of an abstract class? Specifically in .NET. A: You have to create a subclass that implements the abstract methods (with empty methods), but none of the concrete ones. This subclass should be for testing only (it should never go into your production code). Just ignore the overridden abstract methods in your unit tests and concentrate on the concrete methods. A: Use Rhino Mocks; it can generate implementations of the abstract class at runtime and you can call the non-abstract methods. A: The first thing that comes to mind is to test those methods in a concrete child class. A: Any reason not to just include that in the testing of one of the instances? If that doesn't work, you could probably create a subclass just for testing with no unique functionality of its own. A: I always use a Stub/Mock object. A: You have to define and create a concrete test class that inherits from the abstract. Typically it will be a light shim that does nothing but pass through calls.
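The "test-only subclass" approach from the answers in this thread can be sketched concretely. Since the question is about .NET, treat the following Java/JUnit version purely as an illustration of the shape - the same pattern carries over to C# with NUnit or MSTest - and note that every class and method name in it is invented:

    // Invented example: an abstract class with one concrete method under test.
    abstract class Report {
        abstract String body();                    // abstract: stubbed in the test
        String header() { return "== report =="; } // concrete: what we test
    }

    // Test-only subclass: empty implementation of the abstract member,
    // none of the concrete ones. Never ships in production code.
    class StubReport extends Report {
        @Override String body() { return ""; }
    }

    public class ReportTest {
        @org.junit.Test
        public void concreteMethodWorksThroughStubSubclass() {
            org.junit.Assert.assertEquals("== report ==", new StubReport().header());
        }
    }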
Testing Abstract Class Concrete Methods
How would I design and organize tests for the concrete methods of an abstract class? Specifically in .NET.
[ "You have to create a subclass that implements the abstract methods (with empty methods), but none of the concrete ones. This subclass should be for testing only (it should never go into your production code). Just ignore the overridden abstract methods in your unit tests and concentrate on the concrete methods.\n", "Use Rhino Mocks, it can generate implementations of the abstract class at runtime and you can call the non-abstract methods.\n", "The first thing that comes to mind is to test those methods in a concrete child class.\n", "Any reason not to just include that in the testing of one of the instances?\nIf that doesn't work, you could probably create a subclass just for testing with no unique functionality of its own.\n", "I Always use Stub/Mock object\n", "You have to define and create a concrete test class that inhereits from the abstract. Typically it will be a light shim that does nothing but pass through calls.\n" ]
[ 9, 3, 1, 0, 0, 0 ]
[]
[]
[ ".net", "c#", "unit_testing" ]
stackoverflow_0000104958_.net_c#_unit_testing.txt
Q: How to determine order for new item? I have a members table in MySQL CREATE TABLE `members` ( `id` int(10) unsigned NOT NULL auto_increment, `name` varchar(65) collate utf8_unicode_ci NOT NULL, `order` tinyint(3) unsigned NOT NULL default '0', PRIMARY KEY (`id`) ) ENGINE=InnoDB; And I would like to let users order the members how they like. I'm storing the order in the order column. I'm wondering how to insert a new user so that they are added to the bottom of the list. This is what I have today: $db->query('insert into members VALUES (0, "new member", 0)'); $lastId = $db->lastInsertId(); $maxOrder = $db->fetchAll('select MAX(`order`) max_order FROM members'); $db->query('update members SET `order` = ? WHERE id = ?', array( $maxOrder[0]['max_order'] + 1, $lastId )); But that's not really safe: when there are several users adding new members at the same time, it might happen that MAX(order) will return the same values. How do you handle such cases? A: You can do the SELECT as part of the INSERT, such as: INSERT INTO members SELECT 0, "new member", max(`order`)+1 FROM members; Keep in mind that you are going to want to have an index on the order column to make the SELECT part optimized. In addition, you might want to reconsider the tinyint for order, unless you only expect to have 255 orders ever. Also order is a reserved word and you will always need to write it as `order`, so you might consider renaming that column as well. A: Since you already automatically increment the id for each new member, you can order by id. A: I am not sure I understand. If each user wants a different order, how will you store individual user preferences in one single field in the "members" table? Usually you just let users order based on the natural order of the fields. What is the purpose of the order field? A: Usually I make all my select statements order by "order, name"; Then I always insert the same value for Order (either 0 or 9999999 depending on if I want them first or last). Then the user can reorder however they like. A: InnoDB supports transactions. Before the insert do a 'begin' statement and when you're finished do a commit. See this article for an explanation of transactions in MySQL. A: What you could do is create a table with keys (member_id,position) that maps to another member_id. Then you can store the ordering in that table separate from the member list itself. (Each member retains their own list ordering, which is what I assume you want...?) Supposing that you have a member table like this: +-----------+--------------+ | member_id | name | +-----------+--------------+ | 1 | John Smith | | 2 | John Doe | | 3 | John Johnson | | 4 | Sue Someone | +-----------+--------------+ Then, you could have an ordering table like this: +---------------+----------+-----------------+ | member_id_key | position | member_id_value | +---------------+----------+-----------------+ | 1 | 1 | 4 | | 1 | 2 | 1 | | 1 | 3 | 3 | | 1 | 4 | 2 | | 2 | 2 | 1 | | 2 | 3 | 2 | +---------------+----------+-----------------+ You can select the member list given the stored order by using an inner join. 
For example: SELECT name FROM members inner join orderings ON members.member_id = orderings.member_id_value WHERE orderings.member_id_key = <ID for member you want to look up> ORDER BY position; As an example, the result of running this query for John Smith's list (i.e., WHERE member_id_key = 1) would be: +--------------+ | name | +--------------+ | Sue Someone | | John Smith | | John Johnson | | John Doe | +--------------+ You can calculate position for adding to the bottom of the list by adding one to the max position value for a given id.
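Combining two of the answers in this thread - the single INSERT ... SELECT statement and the InnoDB transaction - the append can be done atomically from application code. The following JDBC sketch is hedged: the DataSource is assumed to be configured elsewhere, and the COALESCE call is an addition to cover the empty-table case.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class AppendMemberSketch {
        static void addMember(DataSource ds, String name) throws SQLException {
            Connection con = ds.getConnection();
            try {
                con.setAutoCommit(false); // one transaction: MAX and INSERT together
                PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO members (id, name, `order`) " +
                    "SELECT 0, ?, COALESCE(MAX(`order`), 0) + 1 FROM members");
                ps.setString(1, name);
                ps.executeUpdate();
                ps.close();
                con.commit(); // both steps become visible atomically
            } catch (SQLException e) {
                con.rollback(); // undo on failure so no half-done append remains
                throw e;
            } finally {
                con.close();
            }
        }
    }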
How to determine order for new item?
I have a members table in MySQL CREATE TABLE `members` ( `id` int(10) unsigned NOT NULL auto_increment, `name` varchar(65) collate utf8_unicode_ci NOT NULL, `order` tinyint(3) unsigned NOT NULL default '0', PRIMARY KEY (`id`) ) ENGINE=InnoDB; And I would like to let users order the members how they like. I'm storing the order in the order column. I'm wondering how to insert a new user so that they are added to the bottom of the list. This is what I have today: $db->query('insert into members VALUES (0, "new member", 0)'); $lastId = $db->lastInsertId(); $maxOrder = $db->fetchAll('select MAX(`order`) max_order FROM members'); $db->query('update members SET `order` = ? WHERE id = ?', array( $maxOrder[0]['max_order'] + 1, $lastId )); But that's not really safe: when there are several users adding new members at the same time, it might happen that MAX(order) will return the same values. How do you handle such cases?
[ "You can do the SELECT as part of the INSERT, such as:\nINSERT INTO members SELECT 0, \"new member\", max(`order`)+1 FROM members;\nKeep in mind that you are going to want to have an index on the order column to make the SELECT part optimized. \nIn addition, you might want to reconsider the tinyint for order, unless you only expect to only have 255 orders ever.\nAlso order is a reserved word and you will always need to write it as `order`, so you might consider renaming that column as well.\n", "Since you already automatically increment the id for each new member, you can order by id.\n", "I am not sure I understand. If each user wants a different order how will you store individual user preferences in one single field in the \"members\" table?\nUsually you just let users to order based on the natural order of the fields. What is the purpose of the order field?\n", "Usually I make all my select statements order by \"order, name\"; Then I always insert the same value for Order (either 0 or 9999999 depending on if I want them first or last). Then the user can reorder however they like.\n", "InnoDB supports transactions. Before the insert do a 'begin' statement and when your finished do a commit. See this article for an explanation of transactions in mySql.\n", "What you could do is create a table with keys (member_id,position) that maps to another member_id. Then you can store the ordering in that table separate from the member list itself. (Each member retains their own list ordering, which is what I assume you want...?)\nSupposing that you have a member table like this:\n+-----------+--------------+\n| member_id | name |\n+-----------+--------------+\n| 1 | John Smith |\n| 2 | John Doe |\n| 3 | John Johnson |\n| 4 | Sue Someone |\n+-----------+--------------+\n\nThen, you could have an ordering table like this:\n+---------------+----------+-----------------+\n| member_id_key | position | member_id_value |\n+---------------+----------+-----------------+\n| 1 | 1 | 4 |\n| 1 | 2 | 1 |\n| 1 | 3 | 3 |\n| 1 | 4 | 2 |\n| 2 | 2 | 1 |\n| 2 | 3 | 2 |\n+---------------+----------+-----------------+\n\nYou can select the member list given the stored order by using an inner join. For example:\nSELECT name\nFROM members inner join orderings \n ON members.member_id = orderings.member_id_value\nWHERE orderings.member_id_key = <ID for member you want to lookup>\nORDER BY position;\n\nAs an example, the result of running this query for John Smith's list (ie, WHERE member_id_key = 1) would be:\n+--------------+\n| name |\n+--------------+\n| Sue Someone |\n| John Smith |\n| John Johnson |\n| John Doe |\n+--------------+\n\nYou can calculate position for adding to the bottom of the list by adding one to the max position value for a given id.\n" ]
[ 4, 2, 0, 0, 0, 0 ]
[]
[]
[ "mysql", "php" ]
stackoverflow_0000104747_mysql_php.txt
Q: WebDAV doesn't include BCC headers when retrieving mail This seems to be intended behaviour as stated here, but I can't believe the only method of getting the BCCs is to parse Outlook Web Access' HTML code. Has anybody encountered the same limitation and found a workaround? I'd also be fine with getting the BCCs from somewhere via WebDAV and adding the header fields myself. A: BCC, by definition, doesn't create nor use any of the headers in an e-mail message. It wouldn't be "blind" otherwise, no?
WebDAV doesn't include BCC headers when retrieving mail
This seems to be intended behaviour as stated here, but I can't believe the only method of getting the BCCs is to parse Outlook Web Access' HTML code. Has anybody encountered the same limitation and found a workaround? I'd also be fine with getting the BCCs from somewhere via WebDAV and adding the header fields myself.
[ "BCC, by definition, doesn't create nor use any of the headers in an e-mail message. It wouldn't be \"blind\" otherwise, no?\n" ]
[ 1 ]
[]
[]
[ "exchange_server", "webdav" ]
stackoverflow_0000092801_exchange_server_webdav.txt
Q: Kerberos Delegation for Clients Outside the Firewall I am trying to run a SQL Server Reporting Services report where the data for the report is on a SQL Server database that's on a different server. Integrated Authentication is turned on for both the Report Server and the report. I have confirmed that Kerberos delegation is working fine by using Internet Explorer to run the report from inside the network. However, when I open the report server through the firewall, I cannot run the report. I get the following error: An error has occurred during report processing. Cannot create a connection to data source 'frattoxppro2'. Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Does Kerberos authentication not work outside a firewall? A: Kerberos requires a port 88 connection to the KDC, in this case, most likely your DC. What you probably want to look at is HTTPS + Basic Authentication + Protocol Transition to take the Basic Authentication and translate it into a DC-based Kerberos ticket for delegation and back-end authentication. Protocol Transition with Constrained Delegation Technical Supplement How To: Use Protocol Transition and Constrained Delegation in ASP.NET Not exactly the easiest to set up, but when it's working, it works amazingly well. A: I'm not really in a position to tell you why Kerberos isn't working for you, but I did have an alternative suggestion for your configuration. You can use ISA services to expose the reporting server rather than simply poking a hole in your firewall. This is something our company has done successfully - it republishes the reporting services site so the browsers are talking to ISA, not directly to the server. ISA Services is quite happy to pass through your credentials as well.
Kerberos Delegation for Clients Outside the Firewall
I am trying to run a SQL Server Reporting Services report where the data for the report is on a SQL Server database that's on a different server. Integrated Authentication is turned on for both the Report Server and the report. I have confirmed that Kerberos delegation is working fine by using Internet Explorer to run the report from inside the network. However, when I open the report server through the firewall, I cannot run the report. I get the following error: An error has occurred during report processing. Cannot create a connection to data source 'frattoxppro2'. Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Does Kerberos authentication not work outside a firewall?
[ "Kerberos requires a port 88 connection to the KDC, in this case, most likely your DC.\nWhat you probably want to look at is HTTPS + Basic Authentication + Protocol Transition to take the Basic Authentication and translate it into a DC based Kerberos Ticket for delegation and back end authentication.\n\nProtocol Transition with\nConstrained Delegation Technical\nSupplement\nHow To: Use Protocol Transition and\nConstrained Delegation in\nASP.NET\n\nNot exactly the easiest to set up, but when its working, it works amazingly well.\n", "I'm not really in a position to tell you why kerberos isn't working for you, but did have a alternative suggestion for your configuration. You can use ISA services to expose the reporting server rather than simply poking a hole in your firewall. This is something our company has done successfully - it republishes the reporting services site so the browsers are talking to ISA, not directly to the server. ISA Services is quite happy to pass through your credentials as well.\n" ]
[ 5, 0 ]
[]
[]
[ "delegation", "iis", "internet_explorer", "kerberos", "sql_server" ]
stackoverflow_0000088646_delegation_iis_internet_explorer_kerberos_sql_server.txt
Q: Is it possible to increase the 256 character limit in excel validation drop down boxes? I am creating the validation dynamically and have hit a 256 character limit. My validation looks something like this: Level 1, Level 2, Level 3, Level 4..... Is there any way to get around the character limit other than pointing at a range? The validation is already being produced in VBA. Increasing the limit is the easiest way to avoid any impact on how the sheet currently works. A: I'm pretty sure there is no way around the 256 character limit, Joel Spolsky explains why here: http://www.joelonsoftware.com/printerFriendly/articles/fog0000000319.html. You could however use VBA to get close to replicating the functionality of the built-in validation by coding the Worksheet_Change event. Here's a mock-up to give you the idea. You will probably want to refactor it to cache the ValidValues, handle changes to ranges of cells, etc... Private Sub Worksheet_Change(ByVal Target As Range) Dim ValidationRange As Excel.Range Dim ValidValues(1 To 100) As String Dim Index As Integer Dim Valid As Boolean Dim Msg As String Dim WhatToDo As VbMsgBoxResult 'Initialise ValidationRange Set ValidationRange = Sheet1.Range("A:A") ' Check if change is in a cell we need to validate If Not Intersect(Target, ValidationRange) Is Nothing Then ' Populate ValidValues array For Index = 1 To 100 ValidValues(Index) = "Level " & Index Next ' do the validation, permit blank values If IsEmpty(Target) Then Valid = True Else Valid = False For Index = 1 To 100 If Target.Value = ValidValues(Index) Then ' found match to valid value Valid = True Exit For End If Next End If If Not Valid Then Target.Select ' tell user value isn't valid Msg = _ "The value you entered is not valid" & vbCrLf & vbCrLf & _ "A user has restricted values that can be entered into this cell." WhatToDo = MsgBox(Msg, vbRetryCancel + vbCritical, "Microsoft Excel") Target.Value = "" If WhatToDo = vbRetry Then Application.SendKeys "{F2}" End If End If End If End Sub
Is it possible to increase the 256 character limit in excel validation drop down boxes?
I am creating the validation dynamically and have hit a 256 character limit. My validation looks something like this: Level 1, Level 2, Level 3, Level 4..... Is there any way to get around the character limit other than pointing at a range? The validation is already being produced in VBA. Increasing the limit is the easiest way to avoid any impact on how the sheet currently works.
[ "I'm pretty sure there is no way around the 256 character limit, Joel Spolsky explains why here: http://www.joelonsoftware.com/printerFriendly/articles/fog0000000319.html.\nYou could however use VBA to get close to replicating the functionality of the built in validation by coding the Worksheet_Change event. Here's a mock up to give you the idea. You will probably want to refactor it to cache the ValidValues, handle changes to ranges of cells, etc...\nPrivate Sub Worksheet_Change(ByVal Target As Range)\nDim ValidationRange As Excel.Range\nDim ValidValues(1 To 100) As String\nDim Index As Integer\nDim Valid As Boolean\nDim Msg As String\nDim WhatToDo As VbMsgBoxResult\n\n 'Initialise ValidationRange\n Set ValidationRange = Sheet1.Range(\"A:A\")\n\n ' Check if change is in a cell we need to validate\n If Not Intersect(Target, ValidationRange) Is Nothing Then\n\n ' Populate ValidValues array\n For Index = 1 To 100\n ValidValues(Index) = \"Level \" & Index\n Next\n\n ' do the validation, permit blank values\n If IsEmpty(Target) Then\n Valid = True\n Else\n Valid = False\n For Index = 1 To 100\n If Target.Value = ValidValues(Index) Then\n ' found match to valid value\n Valid = True\n Exit For\n End If\n Next\n End If\n\n If Not Valid Then\n\n Target.Select\n\n ' tell user value isn't valid\n Msg = _\n \"The value you entered is not valid\" & vbCrLf & vbCrLf & _\n \"A user has restricted values that can be entered into this cell.\"\n\n WhatToDo = MsgBox(Msg, vbRetryCancel + vbCritical, \"Microsoft Excel\")\n\n Target.Value = \"\"\n\n If WhatToDo = vbRetry Then\n Application.SendKeys \"{F2}\"\n End If\n\n End If\n\n End If\n\nEnd Sub\n\n" ]
[ 6 ]
[]
[]
[ "excel", "vba" ]
stackoverflow_0000090365_excel_vba.txt
Q: Win32 TreeCtrl TVN_ENDLABELEDIT memory allocation I have a Win32 TreeCtrl where the user can rename the tree labels. I process the TVN_ENDLABELEDIT message to do this. In certain cases I need to change the text that the user entered. Basically the user can enter a short name during edit and I want to replace it with a longer text. To do this I change the pszText member of the TVITEM struct I received during TVN_ENDLABELEDIT. I do a pointer replace here, as the original memory may be too small to do a simple strcpy-like operation. However I do not know how to deallocate the original pszText member. Basically because it's unknown if that was created with malloc() or new ... therefore I cannot call the appropriate deallocator. Obviously Win32 won't call the deallocator for the old pszText because the pointer has been replaced. So if I don't deallocate, there will be a memory leak. Any idea how Win32 allocates these structs and what is the proper way to handle the above situation? A: Unless you're using LPSTR_TEXTCALLBACK, the tree-view control is responsible for allocating the memory, not your code, so you shouldn't change the value of the pszText pointer. To change the item's text in your TVN_ENDLABELEDIT handler, you can use TreeView_SetItem, then return 0 from the handler. A: You don't want to directly edit the text in the TVITEM struct; the results are undefined. Instead, use the TVM_SETITEM message, or equivalently, use the TreeView_SetItem() macro defined in commctrl.h.
Win32 TreeCtrl TVN_ENDLABELEDIT memory allocation
I have a Win32 TreeCtrl where the user can rename the tree labels. I process the TVN_ENDLABELEDIT message to do this. In certain cases I need to change the text that the user entered. Basically the user can enter a short name during edit and I want to replace it with a longer text. To do this I change the pszText member of the TVITEM struct I received during TVN_ENDLABELEDIT. I do a pointer replace here, as the original memory may be too small to do a simple strcpy-like operation. However I do not know how to deallocate the original pszText member. Basically because it's unknown if that was created with malloc() or new ... therefore I cannot call the appropriate deallocator. Obviously Win32 won't call the deallocator for the old pszText because the pointer has been replaced. So if I don't deallocate, there will be a memory leak. Any idea how Win32 allocates these structs and what is the proper way to handle the above situation?
[ "Unless you're using LPSTR_TEXTCALLBACK, the tree-view control is responsible for allocating the memory, not your code, so you shouldn't change the value of the pszText pointer.\nTo change the item's text in your TVN_ENDLABELEDIT handler, you can use TreeView_SetItem, then return 0 from the handler.\n", "You don't want to directly edit the text in the TVITEM struct, the results are undefined. Instead, use the TVM_SETITEM message, or equivalently, use the TreeView_SetItem() macro defined in windowsx.h.\n" ]
[ 2, 0 ]
[]
[]
[ "treecontrol", "winapi" ]
stackoverflow_0000101038_treecontrol_winapi.txt
Q: Home key go to start of line in Visual Studio? Where is the option in Visual Studio to make the Home key go to the start of the line? Right now you have to do Home, Home or Home, Ctrl+Left Arrow. I'd prefer that Home goes to the start of the line. I saw it before, but now I cannot find it. A: In Tools/Customize/Keyboard, reassign the "Home" key from "Edit.LineStart" to "Edit.LineFirstColumn" Edit by OP: You must change Scope to Text Editor before this will work. Visual Studio 2010 Visual Studio 2010 removed the "scope" option. Instead you want the "Use new shortcut in" option: A: From asking the same question on MSDN forums: TaylorMichaelL said: The command you are interested in is Edit.LineFirstColumn. You'll want to change the scope to be the Text Editor. You should remove any existing shortcut key associated with the command first. If you don't change the scope then the Home key won't work. Then try using the Home key. It should work. Michael Taylor - 9/18/08 http://p3net.mvps.org Changing the Scope to Text Editor was the missing piece in the puzzle. Go to Tools/Customize/Keyboard Change Scope to "Text Editor". Reassign the "Home" key from Edit.LineStart to Edit.LineFirstColumn
Home key go to start of line in Visual Studio?
Where is the option in Visual Studio to make the Home key go to the start of the line? Right now you have to do Home, Home or Home, Ctrl+Left Arrow. I'd prefer that Home goes to the start of the line. I saw it before, but now I cannot find it.
[ "In Tools/Customize/Keyboard, Reassign the \"Home\" key from Edit.LineStart\" to \"Edit.LineFirstColumn\"\nEdit by OP: You must change Scope to Text Editor before this will work.\n\nVisual Studio 2010\nVisual Studio 2010 removed the \"scope\" option. Instead you want the \"Use new shortcut in\" option:\n\n", "From asking the same question on MSDN forums:\nTaylorMichaelL said:\n\nThe command you are interested in is\n Edit.LineFirstColumn. You'll want to\n change the scope to be the Text\n Editor. You should remove any\n existing shortcut key associated with\n the command first. If you don't\n change the scope then the Home key\n won't work. Then try using the Home\n key. It should work.\nMichael Taylor - 9/18/08\n http://p3net.mvps.org\n\nChanging the Scope to Text Editor was the missing piece in the puzzle.\n\nGo to Tools/Customize/Keyboard\nChange Scope to \"Text Editor\".\nReassign the \"Home\" key from Edit.LineStart to Edit.LineFirstColumn\n\n" ]
[ 28, 6 ]
[]
[]
[ "key", "keyboard_shortcuts", "visual_studio" ]
stackoverflow_0000083808_key_keyboard_shortcuts_visual_studio.txt
Q: Where'd my generic ActionLink go? Moved from preview 2 to preview 5 and now my Html.ActionLink calls are all failing. It appears that the generic version has been replaced with a non-type safe version. // used to work <li> <%= Html.ActionLink<HomeController>(c => c.Index(), "Home")%> </li> // what appears I can only do now <li> <%= Html.ActionLink<HomeController>("Index", "Home")%> </li> Why did The Gu do this? Has it been moved to Microsoft.Web.Mvc or somewhere else as a "future"? Is there a replacement that is generic? Halp! A: Don't blame the GU, it's my fault. That method has been moved to MvcFutures. Here's a blog post that provides the foundation for why this change was made.
Where'd my generic ActionLink go?
Moved from preview 2 to preview 5 and now my Html.ActionLink calls are all failing. It appears that the generic version has been replaced with a non-type safe version. // used to work <li> <%= Html.ActionLink<HomeController>(c => c.Index(), "Home")%> </li> // what appears I can only do now <li> <%= Html.ActionLink<HomeController>("Index", "Home")%> </li> Why did The Gu do this? Has it been moved to Microsoft.Web.Mvc or somewhere else as a "future"? Is there a replacement that is generic? Halp!
[ "Don't blame the GU, it's my fault. That method has been moved to MvcFutures. Here's a blog post that provides the foundation for why this change was made.\n" ]
[ 7 ]
[]
[]
[ "asp.net", "asp.net_mvc" ]
stackoverflow_0000105310_asp.net_asp.net_mvc.txt
Q: Asp.net c# and logging ip access on every page and frequency Are there any prebuilt modules for this? Is there an event that's called every time a page is loaded? I'm just trying to secure one of my more important admin sections. A: As blowdart said, simple IP Address logging is handled by IIS already. Simply right-click on the Website in the Internet Information Services (IIS) Manager tool, go to the Web Site tab, and check the Enable Logging box. You can customize what information is logged also. If you want to restrict the site or even a folder of the site to specific IPs, just go to the IIS Site or Folder properties that you want to protect in the IIS Manager, right click and select Properties. Choose the Directory Security tab. In the middle you should see the "IP Addresses and domain name restrictions" section. This will be where you can set up which IPs to block or allow. If you want to do this programmatically in the code-behind of ASP.Net, you could use the page PreInit event. A: A little more information please; do you want to log IPs or lock access via IP? Both those functions are built into IIS rather than ASP.NET; so are you looking for how to limit access via IP programmatically? A: You can use the following to get a user's IP address: Request.ServerVariables["REMOTE_ADDR"] Once you have the IP you will have to write something custom to log it or block by IP. There isn't something built into ASP.NET to do this for you. A: Is there an event that's called every time a page is loaded? Page_Load might be what you're looking for. However, and I'm really not trying to be mean here, if you don't know that, you probably shouldn't be trying to secure the app. You're just not experienced enough in .Net. I'm sure you're great at what you do, in whatever platform you're experienced in. But .Net WebForms isn't your forte. This is one of those times when you should back off and let someone else handle it.
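The answers in this thread point at IIS logging, IP restrictions, and the ASP.NET page lifecycle. Purely as a cross-stack illustration of the "run a hook on every request and record the client IP" idea, here is a minimal Java servlet-filter sketch (all names invented; in ASP.NET the analogous place would be an HttpModule or the page PreInit event mentioned in the answers):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    public class IpLogFilter implements Filter {
        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            // Runs on every request mapped to this filter: record the client
            // IP, then hand the request on to the rest of the pipeline.
            System.out.println(req.getRemoteAddr() + " requested a page");
            chain.doFilter(req, res);
        }

        public void destroy() { }
    }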
Asp.net c# and logging ip access on every page and frequency
Are there any prebuilt modules for this? Is there an event that's called every time a page is loaded? I'm just trying to secure one of my more important admin sections.
[ "As blowdart said, simple IP Address logging is handled by IIS already. Simply right-click on the Website in Internet Information Services (IIS) Manager tool, go to the Web Site tab, and check the Enable Logging box. You can customize what information is logged also. \nIf you want to restrict the site or even a folder of the site to specific IP's, just go to the IIS Site or Folder properties that you want to protect in the IIS Manager, right click and select Properties. Choose the Directory Security tab. In the middle you should see the \"IP Addresses and domain name restrictions. This will be where you can setup which IP's to block or allow.\nIf you want to do this programatically in the code-behind of ASP.Net, you could use the page preinit event.\n", "A little more information please; do you want to log IPs or lock access via IP? Both those functions are built into IIS rather than ASP.NET; so are you looking for how to limit access via IP programatically?\n", "You can use the following to get a user's IP address:\n Request.ServerVariables[\"REMOTE_ADDR\"]\n\nOnce you have the IP you will have to write something custom to log it or block by IP. There isn't something built in to asp.net to do this for you.\n", "\nIs there an event that's called everytime a page is loaded?\n\nPage_Load might be what you're looking for.\nHowever, and I'm really not trying to be mean here, if you don't know that, you probably shouldn't be trying to secure the app. You're just not experienced enough in .Net\nI'm sure you're great at what you do, in whatever platform you're experienced in. But .Net WebForms isn't your forte. This is one of those times when you should back off and let someone else handle it.\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ "c#", "ip_address", "logging" ]
stackoverflow_0000104764_c#_ip_address_logging.txt
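To illustrate the code-behind route mentioned above, a minimal sketch that logs each request's IP from a page's PreInit event. The log location and line format here are assumptions for illustration; a production version would need locking or a proper logging library for concurrent writes:

    protected void Page_PreInit(object sender, EventArgs e)
    {
        // REMOTE_ADDR is the client address as IIS saw it (may be a proxy)
        string ip = Request.ServerVariables["REMOTE_ADDR"];
        string line = DateTime.Now.ToString("u") + "\t" + ip + "\t" + Request.RawUrl;
        // Hypothetical log file under App_Data; not safe under heavy concurrency
        System.IO.File.AppendAllText(Server.MapPath("~/App_Data/access.log"),
            line + Environment.NewLine);
    }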
Q: What are the key considerations when creating a web crawler? I just started thinking about creating/customizing a web crawler today, and know very little about web crawler/robot etiquette. A majority of the writings on etiquette I've found seem old and awkward, so I'd like to get some current (and practical) insights from the web developer community. I want to use a crawler to walk over "the web" for a super simple purpose - "does the markup of site XYZ meet condition ABC?". This raises a lot of questions for me, but I think the two main questions I need to get out of the way first are: It feels a little "iffy" from the get go -- is this sort of thing acceptable? What specific considerations should the crawler take to not upset people? A: Obey robots.txt (and don't be too aggressive, as has been said already). You might want to think about your user-agent string - they're a good place to be up-front about what you're doing and how you can be contacted. A: Besides WillDean's and Einar's good answers, I would really recommend you take the time to read about the meaning of the HTTP response codes, and what your crawler should do when encountering each one, since it will make a big difference on your performance, and on whether or not you are banned from some sites. Some useful links: HTTP/1.1: Status Code Definitions Aggregator client HTTP tests Wikipedia A: Please be sure to include a URL in your user-agent string that explains who/what/why your robot is crawling. A: Also do not forget to obey the bot meta tags: http://www.w3.org/TR/html4/appendix/notes.html#h-B.4.1.2 Another thing to think about - when spidering pages, don't be too hasty deciding things don't exist or have errors. Some pages are offline due to maintenance work or errors that are corrected within a short period. A: All good points, the ones made here. You will also have to deal with dynamically-generated Java and JavaScript links, parameters and session IDs, escaping single and double quotes, failed attempts at relative links (using ../../ to go past the root directory), case sensitivity, frames, redirects, cookies.... I could go on for days, and kinda have. I have a Robots Checklist that covers most of this, and I'm happy to answer what I can. You should also think about using open-source robot crawler code, because it gives you a huge leg up on all these issues. I have a page on that as well: open source robot code. Hope that helps! A: I'd say that it is very important to consider how much load you are causing. For instance, if your crawler requests every object of a single site, more or less at once, it might cause load problems for that particular site. In other words, make sure your crawler is not too aggressive. A: It's perfectly acceptable to do - just make sure it only visits each page once for each session. As you're technically creating a searchbot you must obey robots.txt and no-cache rules. People can still block your bot specifically if needed by blocking IPs. You're only looking for source code as far as I can tell so you'll want to build something to follow <link>s for stylesheets and <script src="..."></script> for JavaScripts. A: Load is a big consideration. Put limits on how often you crawl a particular site and what is the most basic info you need to accomplish your goal. If you are looking for text do not download all images, stuff like that. Of course obey robots.txt but also make sure your user agent string includes accurate contact info and maybe a link to a web page describing what you are doing and how you do it. If a web admin is seeing a lot of requests from you and is curious you might be able to answer a lot of questions with an informative web page. A: You will need to add some capability to blacklist sites / domains or other things (IP ranges, ASN, etc) to avoid your spider getting bogged down with spam sites. You'll need to have an HTTP implementation with a lot of control over timeout and behaviour. Expect a lot of sites to send back invalid responses, huge responses, rubbish headers, or just leave the connection open indefinitely with no response etc. Also don't trust a 200 status to mean "the page exists". Quite a large proportion of sites send back 200 for "Not found" or other errors, in my experience (along with a large HTML document).
What are the key considerations when creating a web crawler?
I just started thinking about creating/customizing a web crawler today, and know very little about web crawler/robot etiquette. A majority of the writings on etiquette I've found seem old and awkward, so I'd like to get some current (and practical) insights from the web developer community. I want to use a crawler to walk over "the web" for a super simple purpose - "does the markup of site XYZ meet condition ABC?". This raises a lot of questions for me, but I think the two main questions I need to get out of the way first are: It feels a little "iffy" from the get go -- is this sort of thing acceptable? What specific considerations should the crawler take to not upset people?
[ "Obey robots.txt (and not too aggressive like has been said already).\nYou might want to think about your user-agent string - they're a good place to be up-front about what you're doing and how you can be contacted.\n", "Besides WillDean's and Einar's good answers, I would really recommend you take a time to read about the meaning of the HTTP response codes, and what your crawler should do when encountering each one, since it will make a big a difference on your performance, and on wether or not you are banned from some sites. \nSome useful links:\nHTTP/1.1: Status Code Definitions\nAggregator client HTTP tests\nWikipedia\n", "Please be sure to include a URL in your user-agent string that explains who/what/why your robot is crawling.\n", "Also do not forget to obey the bot meta tags: http://www.w3.org/TR/html4/appendix/notes.html#h-B.4.1.2\nAnother thing to think about - when spider pages, don't be too hasty deciding things don't exist or have errors. Some pages are offline due to maintenance work or errors that are corrected within a short period.\n", "All good points, the ones made here. You will also have to deal with dynamically-generated Java and JavaScript links, parameters and session IDs, escaping single and double quotes, failed attempts at relative links (using ../../ to go past the root directory), case sensitivity, frames, redirects, cookies....\nI could go on for days, and kinda have. I have a Robots Checklist that covers most of this, and I'm happy answer what I can.\nYou should also think about using open-source robot crawler code, because it gives you a huge leg up on all these issues. I have a page on that as well: open source robot code. Hope that helps!\n", "I'd say that it is very important to consider how much load you are causing. For instance, if your crawler requests every object of a single site, more or less at once, it might cause load problems for that particular site.\nIn other words, make sure your crawler is not too aggressive.\n", "It's perfectly accetable to do - just make sure it only visits each page once for each session. As you're technically creating a searchbot you must obey robots.txt and no-cache rules. People can still block your bot specifically if needed by blocking IPs.\nYou're only looking for source code as far as I can tell so you'll want to build something to follow <link>s for stylesheets and <script src=\"...\"></script> for JavaScripts.\n", "Load is a big consideration. Put limits on how often you crawl a particular site and what is the most basic info you need to accomplish your goal. If you are looking for text do not download all images, stuff like that.\nOf course obey robots.txt but also make sure your user agent string includes accurate contact info and maybe a link to a web page describing what you are doing and how you do it. If a web admin is seeing a lot of requests from you and is curious you might be able to answer a lot of questions with an informative web page.\n", "You will need to add some capability to blacklist sites / domains or other things (IP ranges, ASN, etc) to avoid your spider getting bogged down with spam sites.\nYou'll need to have a HTTP implementation with a lot of control over timeout and behaviour. Expect a lot of sites to send back invalid responses, huge responses, rubbish headers, or just leave the connection open indefinitely with no response etc.\nAlso don't trust a 200 status to mean \"the page exists\". 
Quite a large proportion of sites send back 200 for \"Not found\" or other errors, in my experience (Along with a large HTML document).\n" ]
[ 9, 3, 3, 3, 3, 2, 2, 2, 2 ]
[]
[]
[ "web_crawler" ]
stackoverflow_0000032366_web_crawler.txt
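A minimal sketch, in Python, of the etiquette points raised above — obey robots.txt, identify yourself in the User-Agent string, and rate-limit requests. The bot name and contact URL are placeholders:

    import time
    import urllib.robotparser
    from urllib.request import Request, urlopen

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("http://example.com/robots.txt")
    rp.read()

    def polite_fetch(url, delay=2.0):
        # Skip anything robots.txt disallows for our user agent
        if not rp.can_fetch("MyMarkupBot", url):
            return None
        time.sleep(delay)  # crude politeness delay between requests
        req = Request(url, headers={
            "User-Agent": "MyMarkupBot/0.1 (+http://example.com/bot-info)"})
        return urlopen(req, timeout=10).read()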
Q: Memory footprint issues with JAVA, JNI, and C application I have a piece of an application that is written in C, it spawns a JVM and uses JNI to interact with a Java application. My memory footprint via Process Explorer gets up to 1GB and runs out of memory. Now as far as I know it should be able to get up to 2GB. One thing I believe is that the memory the JVM is using isn't visible in the Process Explorer. My Xmx is set to 256, I added some statements to watch the Java-side memory and it is peaking at 256 and GC is doing its job and it is all good on that side. So my question is, where is the other 700+ MB being consumed? Anyone out there a Java/JNI/C Memory expert? A: There could be a leak in the JNI code. Remember to use (*jni)->DeleteLocalRef() for any object references you get once you are done with them. If you use any native C buffers to create new Java objects, make sure you free them off once the object is created. Check the JNI Specification for further guidelines. Depending on the VM you are using you might be able to turn on JNI checking. For example, on the IBM JDK you can specify "-Xcheck:jni". A: Try a test app in C that doesn't spawn the JVM but instead tries to allocate more and more memory. See whether the test app can reach the 2 GB barrier. A: Write a C test harness and use valgrind/alleyoop to check for leakage in your C code, and similarly use the java jvisualvm tool. A: The C and JNI code can allocate memory as well (malloc/free/new/etc), which is outside of the VM's 256m. The Xmx only restricts what the VM will allocate itself. Depending on what you're allocating in the C code, and what other things are loaded in memory you may or may not be able to get up to 2GB. A: If you say that it's the Windows process that runs out of memory as opposed to the JVM, then my initial guess is that you probably invoke some (your own) native methods from the JVM and those native methods leak memory. So, I concur with @John Gardner here. A: Well thanks to all of your help especially @alexander I have discovered that all the extra memory that isn't visible via Process Explorer is being used by the Java Heap. In fact via other tests that I have run the JVM's memory consumption is included in what I see from the Process Explorer. So the heap is taking large amounts of memory, I will have to do some more research about that and maybe ask a separate question.
Memory footprint issues with JAVA, JNI, and C application
I have a piece of an application that is written in C, it spawns a JVM and uses JNI to interact with a Java application. My memory footprint via Process Explorer gets up to 1GB and runs out of memory. Now as far as I know it should be able to get up to 2GB. One thing I believe is that the memory the JVM is using isn't visible in the Process Explorer. My Xmx is set to 256, I added some statements to watch the Java-side memory and it is peaking at 256 and GC is doing its job and it is all good on that side. So my question is, where is the other 700+ MB being consumed? Anyone out there a Java/JNI/C Memory expert?
[ "There could be a leak in the JNI code.\nRemember to use (*jni)->DeleteLocalRef() for any object references you get once you are done with them. If you use any native C buffers to create new Java objects, make sure you free them off once the object is created. Check the JNI Specification for further guidelines.\nDepending on the VM you are using you might be able to turn on JNI checking. For example, on the IBM JDK you can specify \"-Xcheck:jni\".\n", "Try a test app in C that doesn't spawn the JVM but instead tries to allocate more and more memory. See whether the test app can reach the 2 GB barrier.\n", "Write a C test harness and use valgrind/alleyoop to check for leakage in your C code, and similarly use the java jvisualvm tool.\n", "The C and JNI code can allocate memory as well (malloc/free/new/etc), which is outside of the VM's 256m. The xMX only restricts what the VM will allocate itself. Depending on what you're allocating in the C code, and what other things are loaded in memory you may or may not be able to get up to 2GB.\n", "If you say that it's the Windows process that runs out of memory as opposed to the JVM, then my initial guess is that you probably invoke some (your own) native methods from the JVM and those native methods leak memory. So, I concur with @John Gardner here.\n", "Well thanks to all of your help especially @alexander I have discovered that all the extra memory that isn't visible via Process Explorer is being used by the Java Heap. In fact via other tests that I have run the JVM's memory consumption is included in what I see from the Process Explorer. So the heap is taking large amounts of memory, I will have to do some more research about that and maybe ask a separate question.\n" ]
[ 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "c", "java", "java_native_interface", "memory" ]
stackoverflow_0000104442_c_java_java_native_interface_memory.txt
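A minimal sketch in C of the DeleteLocalRef discipline recommended in the first answer — env, obj, addMethod, items and n are assumptions standing in for whatever the real native code uses:

    /* Release each local reference as soon as it is no longer needed,
       so the JVM's local-reference table doesn't grow inside the loop. */
    for (int i = 0; i < n; i++) {
        jstring s = (*env)->NewStringUTF(env, items[i]);
        (*env)->CallVoidMethod(env, obj, addMethod, s);
        (*env)->DeleteLocalRef(env, s);
    }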
Q: Code generators vs. ORMs vs. Stored Procedures In what domains do each of these software architectures shine or fail? Which key requirements would prompt you to choose one over the other? Please assume that you have developers available who can do good object oriented code as well as good database development. Also, please avoid holy wars :) all three technologies have pros and cons, I'm interested in where is most appropriate to use which. A: Every one of these tools provides differing layers of abstraction, along with differing points to override behavior. These are architecture choices, and all architectural choices depend on trade-offs between technology, control, and organization, both of the application itself and the environment where it will be deployed. If you're dealing with a culture where DBAs 'rule the roost', then a stored-procedure-based architecture will be easier to deploy. On the other hand, it can be very difficult to manage and version stored procedures. Code generators shine when you use statically-typed languages, because you can catch errors at compile-time instead of at run-time. ORMs are ideal for integration tools, where you may need to deal with different RDBMSes and schemas on an installation-to-installation basis. Change one map and your application goes from working with PeopleSoft on Oracle to working with Microsoft Dynamics on SQL Server. I've seen applications where Generated Code is used to interface with Stored Procedures, because the stored procedures could be tweaked to get around limitations in the code generator. Ultimately the only correct answer will depend upon the problem you're trying to solve and the environment where the solution needs to execute. Anything else is arguing the correct pronunciation of 'potato'. A: I'll add my two cents: Stored procedures Can be easily optimized Abstract fundamental business rules, enhancing data integrity Provide a good security model (no need to grant read or write permissions to a front facing db user) Shine when you have many applications accessing the same data ORMs Let you concentrate only on the domain and have a more "pure" object oriented approach to development Shine when your application must be cross db compatible Shine when your application is mostly driven by behaviour instead of data Code Generators Provide you similar benefits as ORMs, with higher maintenance costs, but with better customizability. Are generally superior to ORMs in that ORMs tend to trade compile-time errors for runtime errors, which is generally to be avoided A: I agree that there are pros and cons to everything and a lot depends on your architecture. That being said, I try to use ORM's where it makes sense. A lot of the functionality is already there and usually they help prevent SQL Injection (plus it helps avoid re-inventing the wheel). Please see these other two posts on the topic (dynamic SQL vs stored procedures vs ORM) for more information Dynamic SQL vs. stored procedures Which is better: Ad hoc queries, or stored procedures? ORMs vs. stored procedures Why is parameterized SQL generated by NHibernate just as fast as a stored procedure? A: ORMs and code generators are kind of on one side of the field, and stored procedures are on another. Typically, it's easier to use ORMs and code generators in greenfield projects, because you can tailor your database schema to match the domain model you create. It's much more difficult to use them with legacy projects, because once software is written with a "data-first" mindset, it's difficult to wrap it with a domain model. That being said, all three of the approaches have value. Stored procedures can be easier to optimize, but it can be tempting to put business logic in them that may be repeated in the application itself. ORMs work well if your schema matches the concept of the ORM, but can be difficult to customize if not. Code generators can be a nice middle ground, because they provide some of the benefits of an ORM but allow customization of the generated code -- however, if you get into the habit of altering the generated code, you then have two problems, because you will have to alter it each time you re-generate it. There is no one true answer, but I tend more towards the ORM side because I believe it makes more sense to think with an object-first mindset. A: Stored Procedures Pros: Encapsulates data access code and is application-independent Cons: Can be RDBMS-specific and increase development time ORM At least some ORMs allow mapping to stored procedures Pros: Abstracts data access code and allows entity objects to be written in domain-specific way Cons: Possible performance overhead and limited mapping capability Code generation Pros: Can be used to generate stored-proc based code or an ORM or a mix of both Cons: Code generator layer may have to be maintained in addition to understanding generated code A: You forgot a significant option that deserves a category of its own: a hybrid data mapping framework such as iBatis. I have been pleased with iBatis because it lets your OO code remain OO in nature, and your database remain relational in nature, and solves the impedance mismatch by adding a third abstraction (the mapping layer between the objects and the relations) that is responsible for mapping the two, rather than trying to force fit one paradigm into the other.
Code generators vs. ORMs vs. Stored Procedures
In what domains do each of these software architectures shine or fail? Which key requirements would prompt you to choose one over the other? Please assume that you have developers available who can do good object oriented code as well as good database development. Also, please avoid holy wars :) all three technologies have pros and cons, I'm interested in where is most appropriate to use which.
[ "Every one of these tools provides differing layers of abstraction, along with differing points to override behavior. These are architecture choices, and all architectural choices depend on trade-offs between technology, control, and organization, both of the application itself and the environment where it will be deployed.\n\nIf you're dealing with a culture where DBAs 'rule the roost', then a stored-procedure-based architecture will be easier to deploy. On the other hand, it can be very difficult to manage and version stored procedures.\nCode generators shine when you use statically-typed languages, because you can catch errors at compile-time instead of at run-time.\nORMs are ideal for integration tools, where you may need to deal with different RDBMSes and schemas on an installation-to-installation basis. Change one map and your application goes from working with PeopleSoft on Oracle to working with Microsoft Dynamics on SQL Server.\n\nI've seen applications where Generated Code is used to interface with Stored Procedures, because the stored procedures could be tweaked to get around limitations in the code generator.\nUltimately the only correct answer will depend upon the problem you're trying to solve and the environment where the solution needs to execute. Anything else is arguing the correct pronunciation of 'potato'.\n", "I'll add my two cents:\nStored procedures\n\nCan be easily optimized\nAbstract fundamental business rules, enhancing data integrity\nProvide a good security model (no need to grant read or write permissions to a front facing db user)\nShine when you have many applications accessing the same data\n\nORMs\n\nLet you concentrate only on the domain and have a more \"pure\" object oriented approach to development\nShine when your application must be cross db compatible\nShine when your application is mostly driven by behaviour instead of data\n\nCode Generators\n\nProvide you similar benefits as ORMs, with higher maintenance costs, but with better customizability.\nAre generally superior to ORMs in that ORMs tend to trade compile-time errors for runtime errors, which is generally to be avoided\n\n", "I agree that there are pros and cons to everything and a lot depends on your architecture. That being said, I try to use ORM's where it makes sense. A lot of the functionality is already there and usually they help prevent SQL Injection (plus it helps avoid re-inventing the wheel).\nPlease see these other two posts on the topic (dynamic SQL vs \nstored procedures vs ORM) for more information\nDynamic SQL vs. stored procedures\nWhich is better: Ad hoc queries, or stored procedures?\nORMs vs. stored procedures\nWhy is parameterized SQL generated by NHibernate just as fast as a stored procedure?\n", "ORMs and code generators are kind of on one side of the field, and stored procedures are on another. Typically, it's easier to use ORMs and code generators in greenfield projects, because you can tailor your database schema to match the domain model you create. It's much more difficult to use them with legacy projects, because once software is written with a \"data-first\" mindset, it's difficult to wrap it with a domain model.\nThat being said, all three of the approaches have value. Stored procedures can be easier to optimize, but it can be tempting to put business logic in them that may be repeated in the application itself. ORMs work well if your schema matches the concept of the ORM, but can be difficult to customize if not. 
Code generators can be a nice middle ground, because they provide some of the benefits of an ORM but allow customization of the generated code -- however, if you get into the habit of altering the generated code, you then have two problems, because you will have to alter it each time you re-generate it.\nThere is no one true answer, but I tend more towards the ORM side because I believe it makes more sense to think with an object-first mindset.\n", "Stored Procedures\n\nPros: Encapsulates data access code and is application-independent\nCons: Can be RDBMS-specific and increase development time\n\nORM\nAt least some ORMs allow mapping to stored procedures\n\nPros: Abstracts data access code and allows entity objects to be written in domain-specific way\nCons: Possible performance overhead and limited mapping capability\n\nCode generation\n\nPros: Can be used to generate stored-proc based code or an ORM or a mix of both\nCons: Code generator layer may have to be maintained in addition to understanding generated code\n\n", "You forgot a significant option that deserves a category of its own: a hybrid data mapping framework such as iBatis.\nI have been pleased with iBatis because it lets your OO code remain OO in nature, and your database remain relational in nature, and solves the impedance mismatch by adding a third abstraction (the mapping layer between the objects and the relations) that is responsible for mapping the two, rather than trying to force fit one paradigm into the other.\n" ]
[ 13, 10, 5, 3, 2, 2 ]
[]
[]
[ "architecture", "code_generation", "language_agnostic", "orm", "stored_procedures" ]
stackoverflow_0000076395_architecture_code_generation_language_agnostic_orm_stored_procedures.txt
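For the hybrid mapping approach mentioned in the last answer, a minimal sketch of an iBatis 2-style SQL map — the statement id, classes, columns and query are illustrative assumptions only:

    <sqlMap namespace="Person">
        <!-- The mapping layer owns the SQL; objects and tables stay unchanged -->
        <select id="getPerson" parameterClass="int" resultClass="Person">
            SELECT PER_ID AS id, PER_FIRST_NAME AS firstName
            FROM PERSON WHERE PER_ID = #value#
        </select>
    </sqlMap>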
Q: Can I compose a Spring Configuration File from smaller ones? I have a handful of projects that all use one project for the data model. Each of these projects has its own applicationContext.xml file with a bunch of repetitive data stuff within it. I'd like to have a modelContext.xml file and another for my ui.xml, etc. Can I do this? A: From the Spring Docs (v 2.5.5 Section 3.2.2.1.): It can often be useful to split up container definitions into multiple XML files. One way to then load an application context which is configured from all these XML fragments is to use the application context constructor which takes multiple Resource locations. With a bean factory, a bean definition reader can be used multiple times to read definitions from each file in turn. Generally, the Spring team prefers the above approach, since it keeps container configuration files unaware of the fact that they are being combined with others. An alternate approach is to use one or more occurrences of the <import/> element to load bean definitions from another file (or files). Let's look at a sample: <import resource="services.xml"/> <import resource="resources/messageSource.xml"/> <import resource="/resources/themeSource.xml"/> <bean id="bean1" class="..."/> <bean id="bean2" class="..."/> In this example, external bean definitions are being loaded from 3 files, services.xml, messageSource.xml, and themeSource.xml. All location paths are considered relative to the definition file doing the importing, so services.xml in this case must be in the same directory or classpath location as the file doing the importing, while messageSource.xml and themeSource.xml must be in a resources location below the location of the importing file. As you can see, a leading slash is actually ignored, but given that these are considered relative paths, it is probably better form not to use the slash at all. The contents of the files being imported must be valid XML bean definition files according to the Spring Schema or DTD, including the top level <beans/> element. A: We do this in our projects at work, using the classpath* resource loader in Spring. For a certain app, all appcontext files containing the application id will be loaded: classpath*:springconfig/spring-appname-*.xml A: Yes, you can do this via the import element. <import resource="services.xml"/> Each element's resource attribute is a valid path (e.g. classpath:foo.xml) A: Given what Nicholas pointed me to I found this in the docs. It allows me to pick at runtime the bean contexts I'm interested in. GenericApplicationContext ctx = new GenericApplicationContext(); XmlBeanDefinitionReader xmlReader = new XmlBeanDefinitionReader(ctx); xmlReader.loadBeanDefinitions(new ClassPathResource("modelContext.xml")); xmlReader.loadBeanDefinitions(new ClassPathResource("uiContext.xml")); ctx.refresh(); A: Here's what I've done for one of my projects. In your web.xml file, you can define the Spring bean files you want your application to use: <context-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/applicationContext.xml /WEB-INF/modelContext.xml /WEB-INF/ui.xml </param-value> </context-param> If this isn't defined in your web.xml, it automatically looks for /WEB-INF/applicationContext.xml A: Another thing to note is that although you can do this, if you aren't a big fan of XML you can do a lot of stuff in Spring 2.5 with annotations. A: Yes, you can, using the <import> tag inside the "Master" bean file. But what about the why? Why not list the files in the contextConfigLocation context param of the web.xml or as the locations array of the bean factory? I think multiple files are much easier to handle. You may choose only some of them for a test, simply add, rename or remove a part of the application and you may bundle different applications with the same config files (a webapp and a commandline version with some overlapping bean definitions).
Can I compose a Spring Configuration File from smaller ones?
I have a handful of projects that all use one project for the data model. Each of these projects has its own applicationContext.xml file with a bunch of repetitive data stuff within it. I'd like to have a modelContext.xml file and another for my ui.xml, etc. Can I do this?
[ "From the Spring Docs (v 2.5.5 Section 3.2.2.1.):\n\nIt can often be useful to split up\n container definitions into multiple\n XML files. One way to then load an\n application context which is\n configured from all these XML\n fragments is to use the application\n context constructor which takes\n multiple Resource locations. With a\n bean factory, a bean definition reader\n can be used multiple times to read\n definitions from each file in turn.\nGenerally, the Spring team prefers the\n above approach, since it keeps\n container configuration files unaware\n of the fact that they are being\n combined with others. An alternate\n approach is to use one or more\n occurrences of the element\n to load bean definitions from another\n file (or files). Let's look at a\n sample:\n\n<import resource=\"services.xml\"/>\n<import resource=\"resources/messageSource.xml\"/>\n<import resource=\"/resources/themeSource.xml\"/>\n\n<bean id=\"bean1\" class=\"...\"/>\n<bean id=\"bean2\" class=\"...\"/>\n\n\nIn this example, external bean\n definitions are being loaded from 3\n files, services.xml,\n messageSource.xml, and\n themeSource.xml. All location paths\n are considered relative to the\n definition file doing the importing,\n so services.xml in this case must be\n in the same directory or classpath\n location as the file doing the\n importing, while messageSource.xml and\n themeSource.xml must be in a resources\n location below the location of the\n importing file. As you can see, a\n leading slash is actually ignored, but\n given that these are considered\n relative paths, it is probably better\n form not to use the slash at all. The\n contents of the files being imported\n must be valid XML bean definition\n files according to the Spring Schema\n or DTD, including the top level\n element.\n\n", "We do this in our projects at work, using the classpath* resource loader in Spring. For a certain app, all appcontext files containing the application id will be loaded:\nclasspath*:springconfig/spring-appname-*.xml\n\n", "Yes, you can do this via the import element.\n<import resource=\"services.xml\"/>\n\nEach element's resource attribute is a valid path (e.g. classpath:foo.xml)\n", "Given what Nicholas pointed me to I found this in the docs. It allows me to pick at runtime the bean contexts I'm interested in.\nGenericApplicationContext ctx = new GenericApplicationContext();\nXmlBeanDefinitionReader xmlReader = new XmlBeanDefinitionReader(ctx);\nxmlReader.loadBeanDefinitions(new ClassPathResource(\"modelContext.xml\"));\nxmlReader.loadBeanDefinitions(new ClassPathResource(\"uiContext.xml\"));\nctx.refresh();\n\n", "Here's what I've done for one of my projects. In your web.xml file, you can define the Spring bean files you want your application to use:\n <context-param>\n <param-name>contextConfigLocation</param-name>\n <param-value>\n /WEB-INF/applicationContext.xml\n /WEB-INF/modelContext.xml\n /WEB-INF/ui.xml\n </param-value>\n </context-param>\n\nIf this isn't defined in your web.xml, it automatically looks for /WEB-INF/applicationContext.xml\n", "Another thing to note is that although you can do this, if you aren't a big fan of XML you can do a lot of stuff in Spring 2.5 with annotations. \n", "Yes, you can using the tag inside the \"Master\" bean file. But what about the why? Why not listing the files in the contextConfigLocation context param of the wab.xml or als locations array of the bean factory?\nI think mutliple files are much easier to handle. 
You may choose only some of them for a test, simply add rename or remove a part of the application and you may boundle different applications with the same config files (a webapp and a commandline version with some overlapping bean definitions).\n" ]
[ 19, 3, 2, 2, 1, 0, 0 ]
[]
[]
[ "java", "spring" ]
stackoverflow_0000094542_java_spring.txt
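The docs quoted above mention the application context constructor that takes multiple resource locations without showing it; a minimal sketch in Java, with the file names assumed from the question:

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    // Both files are combined into a single application context at startup.
    ApplicationContext ctx = new ClassPathXmlApplicationContext(
            new String[] { "modelContext.xml", "uiContext.xml" });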
Q: What are the major vulnerabilities of Windows 2003 + Apache? I am searching for a host for a new commercial website. Among other things, I'd like to know what the various OS - Webserver combinations have in terms of vulnerabilities. What are the vulnerabilities of Windows 2003 + Apache? A: You could look here: http://httpd.apache.org/security/vulnerabilities_20.html As for the windows side, it's windows. There are going to be vulnerabilities. Just stay up to date with service packs and patches, and you'll be fine. A: As suggested here, I could check out the CERT Database.
What are the major vulnerabilities of Windows 2003 + Apache?
I am searching for a host for a new commercial website. Among other things, I'd like to know what the various OS - Webserver combinations have in terms of vulnerabilities. What are the vulnerabilities of Windows 2003 + Apache?
[ "You could look here: http://httpd.apache.org/security/vulnerabilities_20.html\nAs for the windows side, it's windows. There are going to be vulnerabilities. Just stay up to date with service packs and patches, and you'll be fine.\n", "As suggested here, I could check out the CERT Database.\n" ]
[ 1, 0 ]
[]
[]
[ "security" ]
stackoverflow_0000087371_security.txt
Q: What do you think of using properties as object initializers in C#; I was wondering what people thought of using properties as object initializers in C#. For some reason it seems to break the fundamentals of what constructors are used for. An example... public class Person { string firstName; string lastName; public string FirstName { get { return firstName; } set { firstName = value; } } public string LastName { get { return lastName; } set { lastName = value; } } } Then doing object initialization with..... Person p = new Person{ FirstName = "Joe", LastName = "Smith" }; Person p = new Person{ FirstName = "Joe" }; A: What you see here is some syntactic sugar provided by the compiler. Under the hood what it really does is something like: Person p = new Person( FirstName = "Joe", LastName = "Smith" ); Person _p$1 = new Person(); _p$1.FirstName = "Joe"; _p$1.LastName = "Smith"; Person p = _p$1; So IMHO you are not really breaking any constructor fundamentals but using a nice language artifact in order to ease readability and maintainability. A: Object initializers do in no way replace constructors. The constructor defines the contract that you have to adhere to in order to create an instance of a class. The main motivation for object initializers in the C# language is to support Anonymous Types. var v = new { Foo = 1, Bar = "Hi" }; Console.WriteLine(v.Bar); A: IMHO it's sweet. Most objects are newed up with the default constructor, and must have some properties set before they are ready to run; so the object initializers make it easier to code against most objects out there. A: Since you're already using the new C# syntax, might as well use automatic properties as well, just to sweeten up your code a drop more: instead of this: string firstName; public string FirstName { get { return firstName; } set { firstName = value; } } use this: public string FirstName { get; set; } A: Constructors should only really have arguments that are required to construct the object. Object initialisers are just a convenient way to assign values to properties. I use object initialisers whenever I can as I think it's a tidier syntax. A: I think overall it is useful, especially when used with automatic properties. It can be confusing when properties are doing more than get/set. Hopefully this will lead to more methods, and reduce the abuse of properties. A: Not your original question, but still... Your class declaration can be written as: public class Person { public string FirstName { get; set; } public string LastName { get; set; } } and if it were my code, I'd probably have an object for Name with fields First and Last. A: It's also quite necessary for projected classes returned from a language integrated query (linq) var qry = from something in listofsomething select new { Firstname = something.FirstName, Lastname = something.Surname } A: Object Initializers help to reduce coding complexity in that you don't need to create a half dozen different constructors in order to provide initial values for properties. Anything that reduces redundant code is a positive, in my book. I believe the primary reason the feature was added to the language is to support anonymous types for LINQ. A: Adding to Nescio's thoughts - I'd suggest in code reviews actively hunting down expensive transparent operations in property accessors e.g. DB round tripping. A: If you want to enforce the use of a constructor, you could set your object's default parameterless constructor to private, and leave public only some enforced constructors: public class SomeObject { private SomeObject() {} public SomeObject(string someString) //enforced constructor {} public string MyProperty { get; set; } } Using the above definition, this throws an error: var myObject = new SomeObject { MyProperty = "foo" } //no method accepts zero arguments for constructor Of course this can't be done for all cases. Serialization, for example, requires that you have a non-private default constructor. A: I for one am not happy with them. I don't think they have a place in the constructor, or MS should go back and refactor them to allow you to use them in a private fashion. If I construct an object I want to pass in some PRIVATE data. I want it set from the outside world once and that's it. With Object Initializers you allow the values passed into the constructor to be modifiable. Maybe in the future they will change this.
What do you think of using properties as object initializers in C#;
I was wondering what people thought of using properties as object initializers in C#. For some reason it seems to break the fundamentals of what constructors are used for. An example... public class Person { string firstName; string lastName; public string FirstName { get { return firstName; } set { firstName = value; } } public string LastName { get { return lastName; } set { lastName= value; } } } Then doing object intialization with..... Person p = new Person{ FirstName = "Joe", LastName = "Smith" }; Person p = new Person{ FirstName = "Joe" };
[ "What you see here is some syntatic sugar provided by the compiler. Under the hood what it really does is something like:\nPerson p = new Person( FirstName = \"Joe\", LastName = \"Smith\" );\nPerson _p$1 = new Person();\n_p$1.FirstName = \"Joe\";\n_p$1.LastName = \"Smith\";\nPerson p = _p$1;\n\nSo IMHO you are not really breaking any constructor fundamentals but using a nice language artifact in order to ease readability and maintainability.\n", "Object initializers does in no way replace constructors. The constructor defines the contract that you have to adhere to in order to create a instance of a class.\nThe main motivation for object initializers in the C# language is to support Anonymous Types.\nvar v = new { Foo = 1, Bar = \"Hi\" };\nConsole.WriteLine(v.Bar);\n\n", "IMHO its sweet. Most objects are newed up with the default constructor, and must have some properties set before they are ready to run; so the object initializers make it easier to code against most objects out there.\n", "Since you're already using the new C# syntax, might as well use automatic properties as well, just to sweeten up your code a drop more:\ninstead of this:\nstring firstName;\n\npublic string FirstName\n{\n get { return firstName; }\n set { firstName = value; }\n}\n\nuse this:\npublic string FirstName { get; set; }\n\n", "Constructors should only really have arguments that are required to construct the object. Object initialisers are just a convenient way to assign values to properties. I use object initialisers whenever I can as I think it's a tidier syntax.\n", "I think overall it is useful, especially when used with automatic properties.\nIt can be confusing when properties are doing more than get/set.\nHopefully this will lead to more methods, and reduce the abuse of properties.\n", "Not your original question, but still...\nYour class declaration can be written as:\npublic class Person\n{\n public string FirstName { get; set; }\n public string LastName {get; set; }\n}\n\nand if it were my code, I'd probably have an object for Name with fields First and Last.\n", "It's also quite necessary for projected classes returned from a language integrated query (linq) \nvar qry = from something in listofsomething\n select new {\n Firstname = something.FirstName,\n Lastname = something.Surname\n }\n\n", "Object Initializers help to reduce coding complexity in that you don't need to create a half dozen different constructors in order to provide initial values for properties. Anything that reduces redundant code is a positive, in my book.\nI believe the primary reason the feature was added to the language is to support anonymous types for LINQ.\n", "Adding to Nescio's thoughts - I'd suggest in code reviews actively hunting down expensive transparent operations in property accessors e.g. DB round tripping.\n", "If you want to enforce the use of a constructor, you could set your object's default parameterless constructor to private, and leave public only some enforced constructors:\npublic class SomeObject\n{\n private SomeObject()\n {}\n\n public SomeObject(string someString) //enforced constructor\n {}\n\n public string MyProperty { get; set; }\n }\n\nUsing the above definition, this throws an error:\nvar myObject = new SomeObject { MyProperty = \"foo\" } //no method accepts zero arguments for constructor\n\nOf course this can't be done for all cases. Serialization, for example, requires that you have a non-private default constructor.\n", "I for one am not happy with them. 
I don't think they have a place in the constructor, or MS should got back and refactor them to allow you to use them in a private fasion. If I construct an object I want to pass in some PRIVATE data. I want it set from the outside world once and that's it. With Object Initializers you allow the values passed into the constructor to be modifiable. \nMaybe in the future they will change this.\n" ]
[ 10, 6, 4, 2, 2, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "c#" ]
stackoverflow_0000088791_c#.txt
Q: How to configure ResourceBundleViewResolver in Spring Framework 2.0 Everywhere I look, the same explanation pops up. Configure the view resolver. <bean id="viewMappings" class="org.springframework.web.servlet.view.ResourceBundleViewResolver"> <property name="basename" value="views" /> </bean> And then put a file in the classpath named views.properties with some key-value pairs (don't mind the names). logout.class=org.springframework.web.servlet.view.JstlView logout.url=WEB-INF/jsp/logout.jsp What do logout.class and logout.url mean? How does ResourceBundleViewResolver use the key-value pairs in the file? My goal is that when someone enters the URI myserver/myapp/logout.htm the file logout.jsp gets served. A: ResourceBundleViewResolver uses the key/vals in views.properties to create view beans (actually created in an internal application context). The name of the view bean in your example will be "logout" and it will be a bean of type JstlView. JstlView has an attribute called URL which will be set to "WEB-INF/jsp/logout.jsp". You can set any attribute on the view class in a similar way. What you appear to be missing is your controller/handler layer. If you want /myapp/logout.htm to serve logout.jsp, you must map a Controller into /myapp/logout.htm and that Controller needs to return the view name "logout". The ResourceBundleViewResolver will then be consulted for a bean of that name, and return your instance of JstlView. A: To answer your question, logout is the view name obtained from the ModelAndView object returned by the controller. If you are having problems you may need the following additional configuration. You need to add a servlet mapping for *.htm in your web.xml: <web-app> <servlet> <servlet-name>htm</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>htm</servlet-name> <url-pattern>*.htm</url-pattern> </servlet-mapping> </web-app> And if you want to map directly to the *.jsp without creating a custom controller then you need to add the following bean to your Spring context: <bean id="urlFilenameController" class="org.springframework.web.servlet.mvc.UrlFilenameViewController" />
How to configure ResourceBundleViewResolver in Spring Framework 2.0
Everywhere I look, the same explanation pops up. Configure the view resolver. <bean id="viewMappings" class="org.springframework.web.servlet.view.ResourceBundleViewResolver"> <property name="basename" value="views" /> </bean> And then put a file in the classpath named views.properties with some key-value pairs (don't mind the names). logout.class=org.springframework.web.servlet.view.JstlView logout.url=WEB-INF/jsp/logout.jsp What do logout.class and logout.url mean? How does ResourceBundleViewResolver use the key-value pairs in the file? My goal is that when someone enters the URI myserver/myapp/logout.htm the file logout.jsp gets served.
[ "ResourceBundleViewResolver uses the key/vals in views.properties to create view beans (actually created in an internal application context). The name of the view bean in your example will be \"logout\" and it will be a bean of type JstlView. JstlView has an attribute called URL which will be set to \"WEB-INF/jsp/logout.jsp\". You can set any attribute on the view class in a similar way.\nWhat you appear to be missing is your controller/handler layer. If you want /myapp/logout.htm to serve logout.jsp, you must map a Controller into /myapp/logout.htm and that Controller needs to return the view name \"logout\". The ResourceBundleViewResolver will then be consulted for a bean of that name, and return your instance of JstlView.\n", "To answer your question logout is the view name obtained from the ModelAndView object returned by the controller. If your are having problems you many need the following additional configuration.\nYou need to add a servlet mapping for *.htm in your web.xml:\n\n <web-app>\n <servlet>\n <servlet-name>htm</servlet-name>\n <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>\n <oad-on-startup>1</load-on-startup>\n </servlet>\n <servlet-mapping>\n <servlet-name>htm</servlet-name>\n <url-pattern>*.htm</url-pattern>\n </servlet-mapping>\n </web-app>\n\nAnd if you want to map directly to the *.jsp without creating a custom controller then you need to add the following bean to your Spring context:\n\n <bean id=\"urlFilenameController\"\n class=\"org.springframework.web.servlet.mvc.UrlFilenameViewController\" />\n\n" ]
[ 5, 0 ]
[]
[]
[ "frameworks", "java", "spring" ]
stackoverflow_0000104587_frameworks_java_spring.txt
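The controller layer described in the first answer could look like this minimal Java sketch — a Spring 2.0 AbstractController whose returned view name "logout" is resolved via views.properties to the JstlView bean:

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.springframework.web.servlet.ModelAndView;
    import org.springframework.web.servlet.mvc.AbstractController;

    public class LogoutController extends AbstractController {
        protected ModelAndView handleRequestInternal(HttpServletRequest request,
                HttpServletResponse response) throws Exception {
            // "logout" is looked up by the ResourceBundleViewResolver
            return new ModelAndView("logout");
        }
    }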
Q: Pros and cons with Jaxer I realize that this question has been asked before, but it has been a month with no decent responses... I'm looking at Aptana's Jaxer and I find the concept to be very exciting. Here is a quick overview for those who are not familiar with it: Jaxer is, in their words, "the world's first true AJAX server". It is based on the Mozilla engine so scripts are written with javascript and you have complete access to the DOM on the server-side. Scripts are placed on your pages with <script> tags and you can specify a runat attribute (ala ASP.NET) to mark scripts for execution on the client, server, both, or as a "server-proxy" which makes the functions available on the client, but they execute on the server via AJAX. This also means that you can use your favorite client-side libraries (jQuery, Prototype) on the server as well as the client. It also can be used to process documents that are generated in another language (e.g. php, ruby) which I imagine is not practical except to help in transitioning existing applications to use Jaxer. What are the pros and cons? How mature/stable is the API? How good is performance compared to other server-side html preprocessors? Has anyone used Jaxer with another technology (php, perl, ruby, etc.) and what were your experiences? EDIT: I've posted another question regarding a drawback I discovered while playing with Jaxer: Defining objects when using Jaxer A: I didn't use Jaxer for very long, but here are some things I found: Pros Write the frontend and backend in the same code. Especially nice for writing validation logic. "Seamless" AJAX communication back to the server - it's just like calling a JS function. The ability to use JavaScript frameworks like jQuery to manipulate the DOM. The ability to generate or manipulate images using the Canvas API. You get to write your server JavaScript using whizzy new JavaScript 1.8 features like Array extras and getters/setters. Cons I found their API to be unstable (they were transitioning to 1.0 when I was trying it so that kinda made sense) and the documentation was confusing, missing, or didn't match with changed functionality. I also found that it was very hard to debug my Jaxer server-side code, and when I got in trouble the error messages weren't very helpful. You don't get real MVC or even MVP (ASP.NET-style) separation between your presentation and your logic. I personally couldn't get E4X (xml in JavaScript) working, which was supposed to be a big draw. There's not a lot of framework built around it for building a whole application. You're starting from some pretty basic building blocks. It's not really providing any help in your view, so forget all the templating or reusable components you might use elsewhere. Not that you can't replicate that, but it's more difficult than having it out of the box. Overall, I think Jaxer has the most promise as a postprocessor in front of another web framework. It would be great to use Jaxer to layer all the spiffy AJAX stuff on top of an existing site. It would make it a lot easier to make a dynamic site with validation / page manipulation logic shared between server and client. I don't think I would want to write an application using only Jaxer. Also, it's young (and immature) - I'll be interested to see where it ends up. A: I did come across this set of performance benchmarks. It looks as though Jaxer performs better than Rails, but not as well as php... A: @BRH: Great insight. I would echo all of the "Pros" and "Cons" 2, 4, & 5 and your final overview. I kind of get the sense that they didn't intend to displace any of the market for upstream frameworks ... but if they could do so and keep it as tight and comprehensible as it is, I hope they do! I like the way they think! P.S. I don't know if it is new, but there is a <jaxer:include> tag that injects fragments into the page prior to server-side script execution that might be a help in some code-reuse scenarios. There may be more for me to discover along those lines.
Pros and cons with Jaxer
I realize that this question has been asked before, but it has been a month with no decent responses... I'm looking at Aptana's Jaxer and I find the concept to be very exciting. Here is a quick overview for those who are not familiar with it: Jaxer is, in their words, "the world's first true AJAX server". It is based on the Mozilla engine so scripts are written with javascript and you have complete access to the DOM on the server-side. Scripts are placed on your pages with <script> tags and you can specify a runat attribute (ala ASP.NET) to mark scripts for execution on the client, server, both, or as a "server-proxy" which makes the functions available on the client, but they execute on the server via AJAX. This also means that you can use your favorite client-side libraries (jQuery, Prototype) on the server as well as the client. It also can be used to process documents that are generated in another language (e.g. php, ruby) which I imagine is not practical except to help in transitioning existing applications to use Jaxer. What are the pros and cons? How mature/stable is the API? How good is performance compared to other server-side html preprocessors? Has anyone used Jaxer with another technology (php, perl, ruby, etc.) and what were your experiences? EDIT: I've posted another question regarding a drawback I discovered while playing with Jaxer: Defining objects when using Jaxer
[ "I didn't use Jaxer for very long, but here's some things I found:\nPros\n\nWrite the frontend and backend in the same code. Especially nice for writing validation logic.\n\"Seamless\" AJAX communication back to the server - it's just like calling a JS function.\nThe ability to use JavaScript frameworks like jQuery to manipulate the DOM.\nThe ability to generate or manipulate images using the Canvas API.\nYou get to write your server JavaScript using whizzy new JavaScript 1.8 features like Array extras and getters/setters.\n\nCons\n\nI found their API to be unstable (they were transitioning to 1.0 when I was trying it so that kinda made sense) and the documentation was confusing, missing, or didn't match with changed functionality. I also found that it was very hard to debug my Jaxer server-side code, and when I got in trouble the error messages weren't very helpful.\nYou don't get real MVC or even MVP (ASP.NET-style) separation between your presentation and your logic.\nI personally couldn't get E4X (xml in JavaScript) working, which was supposed to be a big draw.\nThere's not a lot of framework built around it for building a whole application. You're starting from some pretty basic building blocks.\nIt's not really providing any help in your view, so forget all the templating or reusable components you might use elsewhere. Not that you can't replicate that, but it's more difficult than having it out of the box.\n\nOverall, I think Jaxer has the most promise as a postprocessor in front of another web framewok. It would be great to use Jaxer to layer all the spiffy AJAX stuff on top of an existing site. It would make it a lot easier to make a dynamic site with validation / page manipulation logic shared between server and client. I don't think I would want to write an application using only Jaxer. Also, it's young (and immature) - I'll be interested to see where it ends up.\n", "I did come across this set of performance benchmarks.\nIt looks as though Jaxer performs better than Rails, but not as well as php...\n", "@BRH: Great insight. I would echo all of the \"Pros\" and \"Cons\" 2, 4, & 5 and your final overview. I kind of get the sense that they didn't intend to displace any of the market for upstream frameworks ... but if they could do so and keep it as tight and comprehensible as it is, I hope they do! I like the way they think!\nP.S. I don't know if it is new, but there is a <jaxer:include tag that injects fragments into the page prior to server-side script execution that might be a help in some code-reuse scenarios. There may be more for me to discover along those lines.\n" ]
[ 12, 1, 0 ]
[]
[]
[ "ajax", "aptana", "javascript", "jaxer" ]
stackoverflow_0000098915_ajax_aptana_javascript_jaxer.txt
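A rough illustration of the runat mechanism the question above describes (the page fragment and function are hypothetical, and the exact attribute values supported may vary by Jaxer version):

    <script runat="server-proxy">
      // Declared on the server; per the question, "server-proxy" exposes a
      // client-side stub that invokes this function on the server via AJAX.
      function getDecimal(key) {
        // server-side work (database access, file I/O, ...) would go here
        return parseInt(key, 16);
      }
    </script>
    <script runat="client">
      // From the client this reads like an ordinary local call.
      document.getElementById('decimal').value = getDecimal('ff');
    </script>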
Q: How do you create automated tests of a Maven plugin using JUnit? I've got a (mostly) working plugin developed, but since its function is directly related to the project it processes, how do you develop unit and integration tests for the plugin. The best idea I've had is to create an integration test project for the plugin that uses the plugin during its lifecycle and has tests that report on the plugins success or failure in processing the data. Anyone with better ideas? A: You need to use the maven-plugin-testing-harness, <dependency> <groupId>org.apache.maven.shared</groupId> <artifactId>maven-plugin-testing-harness</artifactId> <version>1.1</version> <scope>test</scope> </dependency> You derive your unit test classes from AbstractMojoTestCase. You need to create a bare bones POM, usually in the src/test/resources folder. <project> <build> <plugins> <plugin> <groupId>com.mydomain,mytools</groupId> <artifactId>mytool-maven-plugin</artifactId> <configuration> <!-- Insert configuration settings here --> </configuration> <executions> <execution> <goals> <goal>mygoal</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> Use the AbstractMojoTest.lookupMojo(String,File) (or one of the other variations) to load the Mojo for a specific goal and execute it. final File testPom = new File(PlexusTestCase.getBasedir(), "/target/test-classes/mytools-plugin-config.xml"); Mojo mojo = this.lookupMojo("mygoal", testPom); // Insert assertions to validate that your plugin was initialised correctly mojo.execute(); // Insert assertions to validate that your plugin behaved as expected I created my a plugin of my own that you can refer to for clarification http://ldap-plugin.btmatthews.com, A: If you'd like to see some real-world examples, the Terracotta Maven plugin (tc-maven-plugin) has some tests with it that you can peruse in the open source forge. The plugin is at: http://forge.terracotta.org/releases/projects/tc-maven-plugin/ And the source is in svn at: http://svn.terracotta.org/svn/forge/projects/tc-maven-plugin/trunk/ And in that source you can find some actual Maven plugin tests at: src/test/java/org/terracotta/maven/plugins/tc/
How do you create automated tests of a Maven plugin using JUnit?
I've got a (mostly) working plugin developed, but since its function is directly related to the project it processes, how do you develop unit and integration tests for the plugin. The best idea I've had is to create an integration test project for the plugin that uses the plugin during its lifecycle and has tests that report on the plugins success or failure in processing the data. Anyone with better ideas?
[ "You need to use the maven-plugin-testing-harness, \n\n <dependency>\n <groupId>org.apache.maven.shared</groupId>\n <artifactId>maven-plugin-testing-harness</artifactId>\n <version>1.1</version>\n <scope>test</scope>\n </dependency>\n\nYou derive your unit test classes from AbstractMojoTestCase. \nYou need to create a bare bones POM, usually in the src/test/resources folder. \n\n <project>\n <build>\n <plugins>\n <plugin>\n <groupId>com.mydomain,mytools</groupId>\n <artifactId>mytool-maven-plugin</artifactId>\n <configuration>\n <!-- Insert configuration settings here -->\n </configuration>\n <executions>\n <execution>\n <goals>\n <goal>mygoal</goal>\n </goals>\n </execution>\n </executions>\n </plugin>\n </plugins>\n </build>\n </project>\n\nUse the AbstractMojoTest.lookupMojo(String,File) (or one of the other variations) to load the Mojo for a specific goal and execute it.\n\n final File testPom = new File(PlexusTestCase.getBasedir(), \"/target/test-classes/mytools-plugin-config.xml\");\n Mojo mojo = this.lookupMojo(\"mygoal\", testPom);\n // Insert assertions to validate that your plugin was initialised correctly\n mojo.execute();\n // Insert assertions to validate that your plugin behaved as expected\n\nI created my a plugin of my own that you can refer to for clarification http://ldap-plugin.btmatthews.com,\n", "If you'd like to see some real-world examples, the Terracotta Maven plugin (tc-maven-plugin) has some tests with it that you can peruse in the open source forge.\nThe plugin is at: http://forge.terracotta.org/releases/projects/tc-maven-plugin/\nAnd the source is in svn at: http://svn.terracotta.org/svn/forge/projects/tc-maven-plugin/trunk/\nAnd in that source you can find some actual Maven plugin tests at: src/test/java/org/terracotta/maven/plugins/tc/\n" ]
[ 6, 1 ]
[]
[]
[ "automated_tests", "java", "maven_2" ]
stackoverflow_0000104803_automated_tests_java_maven_2.txt
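Pulling the accepted answer's fragments together, a complete harness-based test class might look like this sketch (the goal name and POM path are the hypothetical ones from the answer):

    import java.io.File;
    import org.apache.maven.plugin.Mojo;
    import org.apache.maven.plugin.testing.AbstractMojoTestCase;
    import org.codehaus.plexus.PlexusTestCase;

    public class MyToolMojoTest extends AbstractMojoTestCase {

        protected void setUp() throws Exception {
            super.setUp(); // required so the harness can initialise itself
        }

        public void testMyGoal() throws Exception {
            File testPom = new File(PlexusTestCase.getBasedir(),
                    "/target/test-classes/mytools-plugin-config.xml");
            Mojo mojo = lookupMojo("mygoal", testPom);
            assertNotNull(mojo);  // the mojo was found and configured
            mojo.execute();       // then assert on its side effects
        }
    }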
Q: Malloc inside a function call appears to be getting freed on return? I think I've got it down to the most basic case: int main(int argc, char ** argv) { int * arr; foo(arr); printf("car[3]=%d\n",arr[3]); free (arr); return 1; } void foo(int * arr) { arr = (int*) malloc( sizeof(int)*25 ); arr[3] = 69; } The output is this: > ./a.out car[3]=-1869558540 a.out(4100) malloc: *** error for object 0x8fe01037: Non-aligned pointer being freed *** set a breakpoint in malloc_error_break to debug > If anyone can shed light on where my understanding is failing, it'd be greatly appreciated. A: You pass the pointer by value, not by reference, so whatever you do with arr inside foo will not make a difference outside the foo-function. As m_pGladiator wrote one way is to declare a reference to pointer like this (only possible in C++ btw. C does not know about references): int main(int argc, char ** argv) { int * arr; foo(arr); printf("car[3]=%d\n",arr[3]); free (arr); return 1; } void foo(int * &arr ) { arr = (int*) malloc( sizeof(int)*25 ); arr[3] = 69; } Another (better imho) way is to not pass the pointer as an argument but to return a pointer: int main(int argc, char ** argv) { int * arr; arr = foo(); printf("car[3]=%d\n",arr[3]); free (arr); return 1; } int * foo(void ) { int * arr; arr = (int*) malloc( sizeof(int)*25 ); arr[3] = 69; return arr; } And you can pass a pointer to a pointer. That's the C way to pass by reference. Complicates the syntax a bit but well - that's how C is... int main(int argc, char ** argv) { int * arr; foo(&arr); printf("car[3]=%d\n",arr[3]); free (arr); return 1; } void foo(int ** arr ) { (*arr) = (int*) malloc( sizeof(int)*25 ); (*arr)[3] = 69; } A: You've allocated arr in foo, but that pointers value is stored in the call stack. If you want to do this, do it like this: void foo( int ** arr) { *arr = (int *)malloc( sizeof(int) * 25 ); (*arr)[3] = 69; } And in main, simply pass a pointer to foo (like foo(&arr)) A: foo receives a local copy of the int pointer, alloactes memory to it and leaks that memory when it goes out of scope. One way to fix this to get foo to return the pointer: int * foo() { return (int*) malloc( sizeof(int)*25 ); } int main() { int* arr = foo(); } Another is to pass foo a pointer to a pointer void foo(int ** arr) { *arr = malloc(...); } int main() { foo(&arr); } In C++ it is simpler to modify foo to accept a reference to a pointer. The only change you need in C++ is to change foo to void foo(int * & arr) A: Since your are passing the pointer by value, the arr pointer inside main isn't pointing to the allocated memory. This means two thing: you've got yourself a memory leak (NO, the memory isn't freed after the function foo completes), and when you access the arr pointer inside main you are accessing some arbitrary range of memory, hence you don't get 3 printed out and hence free() refuses to work. You're lucky you didn't get a segmentation fault when accessing arr[3] inside main. A: You cannot change the value of your argument (arr) if it's not passed in by reference (&). In general, you would want to return the pointer, so your method should be: arr=foo(); It's bad juju to try to reassign arguments; I don't recommend the (&) solution.
Malloc inside a function call appears to be getting freed on return?
I think I've got it down to the most basic case: int main(int argc, char ** argv) { int * arr; foo(arr); printf("car[3]=%d\n",arr[3]); free (arr); return 1; } void foo(int * arr) { arr = (int*) malloc( sizeof(int)*25 ); arr[3] = 69; } The output is this: > ./a.out car[3]=-1869558540 a.out(4100) malloc: *** error for object 0x8fe01037: Non-aligned pointer being freed *** set a breakpoint in malloc_error_break to debug > If anyone can shed light on where my understanding is failing, it'd be greatly appreciated.
[ "You pass the pointer by value, not by reference, so whatever you do with arr inside foo will not make a difference outside the foo-function. \nAs m_pGladiator wrote one way is to declare a reference to pointer like this (only possible in C++ btw. C does not know about references):\nint main(int argc, char ** argv) {\n int * arr;\n\n foo(arr);\n printf(\"car[3]=%d\\n\",arr[3]);\n free (arr);\n return 1;\n}\n\nvoid foo(int * &arr ) {\n arr = (int*) malloc( sizeof(int)*25 );\n arr[3] = 69;\n}\n\nAnother (better imho) way is to not pass the pointer as an argument but to return a pointer:\nint main(int argc, char ** argv) {\n int * arr;\n\n arr = foo();\n printf(\"car[3]=%d\\n\",arr[3]);\n free (arr);\n return 1;\n}\n\nint * foo(void ) {\n int * arr;\n arr = (int*) malloc( sizeof(int)*25 );\n arr[3] = 69;\n return arr;\n}\n\nAnd you can pass a pointer to a pointer. That's the C way to pass by reference. Complicates the syntax a bit but well - that's how C is...\nint main(int argc, char ** argv) {\n int * arr;\n\n foo(&arr);\n printf(\"car[3]=%d\\n\",arr[3]);\n free (arr);\n return 1;\n}\n\nvoid foo(int ** arr ) {\n (*arr) = (int*) malloc( sizeof(int)*25 );\n (*arr)[3] = 69;\n}\n\n", "You've allocated arr in foo, but that pointers value is stored in the call stack. If you want to do this, do it like this:\nvoid foo( int ** arr) {\n *arr = (int *)malloc( sizeof(int) * 25 );\n (*arr)[3] = 69;\n}\n\nAnd in main, simply pass a pointer to foo (like foo(&arr))\n", "foo receives a local copy of the int pointer, alloactes memory to it and leaks that memory when it goes out of scope.\nOne way to fix this to get foo to return the pointer:\nint * foo() {\n return (int*) malloc( sizeof(int)*25 );\n}\n\nint main() {\n int* arr = foo();\n}\n\nAnother is to pass foo a pointer to a pointer\nvoid foo(int ** arr) {\n *arr = malloc(...);\n}\n\nint main() {\n foo(&arr);\n}\n\nIn C++ it is simpler to modify foo to accept a reference to a pointer. The only change you need in C++ is to change foo to\nvoid foo(int * & arr)\n\n", "Since your are passing the pointer by value, the arr pointer inside main isn't pointing to the allocated memory. This means two thing: you've got yourself a memory leak (NO, the memory isn't freed after the function foo completes), and when you access the arr pointer inside main you are accessing some arbitrary range of memory, hence you don't get 3 printed out and hence free() refuses to work. You're lucky you didn't get a segmentation fault when accessing arr[3] inside main.\n", "You cannot change the value of your argument (arr) if it's not passed in by reference (&). In general, you would want to return the pointer, so your method should be:\narr=foo();\nIt's bad juju to try to reassign arguments; I don't recommend the (&) solution.\n" ]
[ 47, 6, 3, 1, 0 ]
[]
[]
[ "c", "malloc", "pointers" ]
stackoverflow_0000105477_c_malloc_pointers.txt
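For completeness, here is the pointer-to-pointer fix from the answers above as a full, compilable program (headers and a malloc failure check added):

    #include <stdio.h>
    #include <stdlib.h>

    static void foo(int **arr) {
        /* Write through the extra level of indirection so the caller's
           pointer is updated, not a local copy that dies on return. */
        *arr = malloc(sizeof(int) * 25);
        if (*arr != NULL)
            (*arr)[3] = 69;
    }

    int main(void) {
        int *arr = NULL;

        foo(&arr);            /* pass the address of the pointer */
        if (arr == NULL)
            return 1;         /* allocation failed */

        printf("arr[3]=%d\n", arr[3]);
        free(arr);            /* frees the block allocated in foo */
        return 0;
    }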
Q: WPF Datatrigger not firing when expected I have the following XAML: <TextBlock Text="{Binding ElementName=EditListBox, Path=SelectedItems.Count}" Margin="0,0,5,0"/> <TextBlock Text="items selected"> <TextBlock.Style> <Style TargetType="{x:Type TextBlock}"> <Style.Triggers> <DataTrigger Binding="{Binding ElementName=EditListBox, Path=SelectedItems.Count}" Value="1"> <Setter Property="TextBlock.Text" Value="item selected"></Setter> </DataTrigger> </Style.Triggers> </Style> </TextBlock.Style> </TextBlock> The first text block happily changes with SelectedItems.Count, showing 0,1,2, etc. The datatrigger on the second block never seems to fire to change the text. Any thoughts? A: Alternatively, you could replace your XAML with this: <TextBlock Margin="0,0,5,0" Text="{Binding ElementName=EditListBox, Path=SelectedItems.Count}"/> <TextBlock> <TextBlock.Style> <Style TargetType="{x:Type TextBlock}"> <Setter Property="Text" Value="items selected"/> <Style.Triggers> <DataTrigger Binding="{Binding ElementName=EditListBox, Path=SelectedItems.Count}" Value="1"> <Setter Property="Text" Value="item selected"/> </DataTrigger> </Style.Triggers> </Style> </TextBlock.Style> </TextBlock> Converters can solve a lot of binding problems but having a lot of specialized converters gets very messy. A: The DataTrigger is firing but the Text field for your second TextBlock is hard-coded as "items selected" so it won't be able to change. To see it firing, you can remove Text="items selected". Your problem is a good candidate for using a ValueConverter instead of DataTrigger. Here's how to create and use the ValueConverter to get it to set the Text to what you want. Create this ValueConverter: public class CountToSelectedTextConverter : IValueConverter { #region IValueConverter Members public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { if ((int)value == 1) return "item selected"; else return "items selected"; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } #endregion } Add the namespace reference to your the assembly the converter is located: xmlns:local="clr-namespace:ValueConverterExample" Add the converter to your resources: <Window.Resources> <local:CountToSelectedTextConverter x:Key="CountToSelectedTextConverter"/> </Window.Resources> Change your second textblock to: <TextBlock Text="{Binding ElementName=EditListBox, Path=SelectedItems.Count, Converter={StaticResource CountToSelectedTextConverter}}"/>
WPF Datatrigger not firing when expected
I have the following XAML: <TextBlock Text="{Binding ElementName=EditListBox, Path=SelectedItems.Count}" Margin="0,0,5,0"/> <TextBlock Text="items selected"> <TextBlock.Style> <Style TargetType="{x:Type TextBlock}"> <Style.Triggers> <DataTrigger Binding="{Binding ElementName=EditListBox, Path=SelectedItems.Count}" Value="1"> <Setter Property="TextBlock.Text" Value="item selected"></Setter> </DataTrigger> </Style.Triggers> </Style> </TextBlock.Style> </TextBlock> The first text block happily changes with SelectedItems.Count, showing 0,1,2, etc. The datatrigger on the second block never seems to fire to change the text. Any thoughts?
[ "Alternatively, you could replace your XAML with this:\n<TextBlock Margin=\"0,0,5,0\" Text=\"{Binding ElementName=EditListBox, Path=SelectedItems.Count}\"/>\n<TextBlock>\n <TextBlock.Style>\n <Style TargetType=\"{x:Type TextBlock}\">\n <Setter Property=\"Text\" Value=\"items selected\"/>\n <Style.Triggers>\n <DataTrigger Binding=\"{Binding ElementName=EditListBox, Path=SelectedItems.Count}\" Value=\"1\">\n <Setter Property=\"Text\" Value=\"item selected\"/>\n </DataTrigger>\n </Style.Triggers>\n </Style>\n </TextBlock.Style>\n</TextBlock>\n\nConverters can solve a lot of binding problems but having a lot of specialized converters gets very messy.\n", "The DataTrigger is firing but the Text field for your second TextBlock is hard-coded as \"items selected\" so it won't be able to change. To see it firing, you can remove Text=\"items selected\".\nYour problem is a good candidate for using a ValueConverter instead of DataTrigger. Here's how to create and use the ValueConverter to get it to set the Text to what you want. \nCreate this ValueConverter:\npublic class CountToSelectedTextConverter : IValueConverter\n{\n #region IValueConverter Members\n\n public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)\n {\n if ((int)value == 1)\n return \"item selected\";\n else\n return \"items selected\";\n }\n\n public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)\n {\n throw new NotImplementedException();\n }\n\n #endregion\n}\n\nAdd the namespace reference to your the assembly the converter is located:\nxmlns:local=\"clr-namespace:ValueConverterExample\"\n\nAdd the converter to your resources:\n<Window.Resources>\n <local:CountToSelectedTextConverter x:Key=\"CountToSelectedTextConverter\"/>\n</Window.Resources>\n\nChange your second textblock to:\n <TextBlock Text=\"{Binding ElementName=EditListBox, Path=SelectedItems.Count, Converter={StaticResource CountToSelectedTextConverter}}\"/>\n\n" ]
[ 29, 13 ]
[]
[]
[ "datatrigger", "wpf" ]
stackoverflow_0000094177_datatrigger_wpf.txt
Q: Advice on handling large data volumes So I have a "large" number of "very large" ASCII files of numerical data (gigabytes altogether), and my program will need to process the entirety of it sequentially at least once. Any advice on storing/loading the data? I've thought of converting the files to binary to make them smaller and for faster loading. Should I load everything into memory all at once? If not, what's a good way of loading the data partially? What are some Java-relevant efficiency tips? A: So then what if the processing requires jumping around in the data for multiple files and multiple buffers? Is constant opening and closing of binary files going to become expensive? I'm a big fan of 'memory mapped i/o', aka 'direct byte buffers'. In Java they are called Mapped Byte Buffers and are part of java.nio. (Basically, this mechanism uses the OS's virtual memory paging system to 'map' your files and present them programmatically as byte buffers. The OS will manage moving the bytes to/from disk and memory auto-magically and very quickly. I suggest this approach because a) it works for me, and b) it will let you focus on your algorithm and let the JVM, OS and hardware deal with the performance optimization. All too frequently, they know what is best more so than us lowly programmers. ;) How would you use MBBs in your context? Just create an MBB for each of your files and read them as you see fit. You will only need to store your results. BTW: How much data are you dealing with, in GB? If it is more than 3-4GB, then this won't work for you on a 32-bit machine as the MBB implementation is dependent on the addressable memory space by the platform architecture. A 64-bit machine & OS will take you to 1TB or 128TB of mappable data. If you are thinking about performance, then know Kirk Pepperdine (a somewhat famous Java performance guru.) He is involved with a website, www.JavaPerformanceTuning.com, that has some more MBB details: NIO Performance Tips and other Java performance related things. A: You might want to have a look at the entries in the Wide Finder Project (do a google search for "wide finder" java). The Wide finder involves reading over lots of lines in log files, so look at the Java implementations and see what worked and didn't work there. A: You could convert to binary, but then you have 1+ something copies of the data, if you need to keep the original around. It may be practical to build some kind of index on top of your original ascii data, so that if you need to go through the data again you can do it faster in subsequent times. To answer your questions in order: Should I load everything into memory all at once? Not if you don't have to. For some files, you may be able to, but if you're just processing sequentially, just do some kind of buffered read through the things one by one, storing whatever you need along the way. If not, what's a good way of loading the data partially? BufferedReaders/etc is simplest, although you could look deeper into FileChannel/etc to use memory-mapped I/O to go through windows of the data at a time. What are some Java-relevant efficiency tips? That really depends on what you're doing with the data itself! A: Without any additional insight into what kind of processing is going on, here are some general thoughts from when I have done similar work. Write a prototype of your application (maybe even "one to throw away") that performs some arbitrary operation on your data set. See how fast it goes. If the simplest, most naive thing you can think of is acceptably fast, no worries! If the naive approach does not work, consider pre-processing the data so that subsequent runs will run in an acceptable length of time. You mention having to "jump around" in the data set quite a bit. Is there any way to pre-process that out? Or, one pre-processing step can be to generate even more data - index data - that provides byte-accurate location information about critical, necessary sections of your data set. Then, your main processing run can utilize this information to jump straight to the necessary data. So, to summarize, my approach would be to try something simple right now and see what the performance looks like. Maybe it will be fine. Otherwise, look into processing the data in multiple steps, saving the most expensive operations for infrequent pre-processing. Don't "load everything into memory". Just perform file accesses and let the operating system's disk page cache decide when you get to actually pull things directly out of memory. A: This depends a lot on the data in the file. Big mainframes have been doing sequential data processing for a long time but they don't normally use random access for the data. They just pull it in a line at a time and process that much before continuing. For random access it is often best to build objects with caching wrappers which know where in the file the data they need to construct is. When needed they read that data in and construct themselves. This way when memory is tight you can just start killing stuff off without worrying too much about not being able to get it back later. A: You really haven't given us enough info to help you. Do you need to load each file in its entirety in order to process it? Or can you process it line by line? Loading an entire file at a time is likely to result in poor performance even for files that aren't terribly large. Your best bet is to define a buffer size that works for you and read/process the data a buffer at a time. A: I've found Informatica to be an exceptionally useful data processing tool. The good news is that the more recent versions even allow Java transformations. If you're dealing with terabytes of data, it might be time to pony up for the best-of-breed ETL tools. I'm assuming you want to do something with the results of the processing here, like store it somewhere. A: If your numerical data is regularly sampled and you need to do random access consider storing them in a quadtree. A: I recommend strongly leveraging Regular Expressions and looking into the "new" IO nio package for faster input. Then it should go as quickly as you can realistically expect Gigabytes of data to go. A: If at all possible, get the data into a database. Then you can leverage all the indexing, caching, memory pinning, and other functionality available to you there. A: If you need to access the data more than once, load it into a database. Most databases have some sort of bulk loading utility. If the data can all fit in memory, and you don't need to keep it around or access it that often, you can probably write something simple in Perl or your favorite scripting language.
Advice on handling large data volumes
So I have a "large" number of "very large" ASCII files of numerical data (gigabytes altogether), and my program will need to process the entirety of it sequentially at least once. Any advice on storing/loading the data? I've thought of converting the files to binary to make them smaller and for faster loading. Should I load everything into memory all at once? If not, what's a good way of loading the data partially? What are some Java-relevant efficiency tips?
[ "\nSo then what if the processing requires jumping around in the data for multiple files and multiple buffers? Is constant opening and closing of binary files going to become expensive?\n\nI'm a big fan of 'memory mapped i/o', aka 'direct byte buffers'. In Java they are called Mapped Byte Buffers and are part of java.nio. (Basically, this mechanism uses the OS's virtual memory paging system to 'map' your files and present them programmatically as byte buffers. The OS will manage moving the bytes to/from disk and memory auto-magically and very quickly. \nI suggest this approach because a) it works for me, and b) it will let you focus on your algorithm and let the JVM, OS and hardware deal with the performance optimization. All too frequently, they know what is best more so than us lowly programmers. ;)\nHow would you use MBBs in your context? Just create an MBB for each of your files and read them as you see fit. You will only need to store your results.\nBTW: How much data are you dealing with, in GB? If it is more than 3-4GB, then this won't work for you on a 32-bit machine as the MBB implementation is dependent on the addressable memory space by the platform architecture. A 64-bit machine & OS will take you to 1TB or 128TB of mappable data.\nIf you are thinking about performance, then know Kirk Pepperdine (a somewhat famous Java performance guru.) He is involved with a website, www.JavaPerformanceTuning.com, that has some more MBB details: NIO Performance Tips and other Java performance related things.\n", "You might want to have a look at the entries in the Wide Finder Project (do a google search for \"wide finder\" java).\nThe Wide finder involves reading over lots of lines in log files, so look at the Java implementations and see what worked and didn't work there.\n", "You could convert to binary, but then you have 1+ something copies of the data, if you need to keep the original around. \nIt may be practical to build some kind of index on top of your original ascii data, so that if you need to go through the data again you can do it faster in subsequent times.\nTo answer your questions in order:\n\nShould I load everything into memory all at once?\n\nNot if you don't have to. For some files, you may be able to, but if you're just processing sequentially, just do some kind of buffered read through the things one by one, storing whatever you need along the way.\n\nIf not, what's a good way of loading the data partially?\n\nBufferedReaders/etc is simplest, although you could look deeper into FileChannel/etc to use memory-mapped I/O to go through windows of the data at a time. \n\nWhat are some Java-relevant efficiency tips?\n\nThat really depends on what you're doing with the data itself!\n", "Without any additional insight into what kind of processing is going on, here are some general thoughts from when I have done similar work.\n\nWrite a prototype of your application (maybe even \"one to throw away\") that performs some arbitrary operation on your data set. See how fast it goes. If the simplest, most naive thing you can think of is acceptably fast, no worries!\nIf the naive approach does not work, consider pre-processing the data so that subsequent runs will run in an acceptable length of time. You mention having to \"jump around\" in the data set quite a bit. Is there any way to pre-process that out? Or, one pre-processing step can be to generate even more data - index data - that provides byte-accurate location information about critical, necessary sections of your data set. Then, your main processing run can utilize this information to jump straight to the necessary data.\n\nSo, to summarize, my approach would be to try something simple right now and see what the performance looks like. Maybe it will be fine. Otherwise, look into processing the data in multiple steps, saving the most expensive operations for infrequent pre-processing.\nDon't \"load everything into memory\". Just perform file accesses and let the operating system's disk page cache decide when you get to actually pull things directly out of memory.\n", "This depends a lot on the data in the file. Big mainframes have been doing sequential data processing for a long time but they don't normally use random access for the data. They just pull it in a line at a time and process that much before continuing. \nFor random access it is often best to build objects with caching wrappers which know where in the file the data they need to construct is. When needed they read that data in and construct themselves. This way when memory is tight you can just start killing stuff off without worrying too much about not being able to get it back later.\n", "You really haven't given us enough info to help you. Do you need to load each file in its entirety in order to process it? Or can you process it line by line?\nLoading an entire file at a time is likely to result in poor performance even for files that aren't terribly large. Your best bet is to define a buffer size that works for you and read/process the data a buffer at a time.\n", "I've found Informatica to be an exceptionally useful data processing tool. The good news is that the more recent versions even allow Java transformations. If you're dealing with terabytes of data, it might be time to pony up for the best-of-breed ETL tools.\nI'm assuming you want to do something with the results of the processing here, like store it somewhere.\n", "If your numerical data is regularly sampled and you need to do random access consider storing them in a quadtree.\n", "I recommend strongly leveraging Regular Expressions and looking into the \"new\" IO nio package for faster input. Then it should go as quickly as you can realistically expect Gigabytes of data to go.\n", "If at all possible, get the data into a database. Then you can leverage all the indexing, caching, memory pinning, and other functionality available to you there.\n", "If you need to access the data more than once, load it into a database. Most databases have some sort of bulk loading utility. If the data can all fit in memory, and you don't need to keep it around or access it that often, you can probably write something simple in Perl or your favorite scripting language.\n" ]
[ 8, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "java", "large_data_volumes", "large_files", "loading" ]
stackoverflow_0000087679_java_large_data_volumes_large_files_loading.txt
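A minimal sketch of the memory-mapped approach the top answer recommends (the file name is an assumption, and since a single map() call is capped at Integer.MAX_VALUE bytes, real code would map larger files in windows):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedScan {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile("data.txt", "r");
                 FileChannel ch = raf.getChannel()) {
                // The OS pages the file in and out behind this buffer.
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY,
                        0, Math.min(ch.size(), Integer.MAX_VALUE));
                long sum = 0;
                while (buf.hasRemaining())
                    sum += buf.get();   // sequential, byte-at-a-time pass
                System.out.println("checksum: " + sum);
            }
        }
    }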
Q: How to get notified of Windows process maximizing CPU? Is there a tool for Windows XP and Vista (built-in or otherwise ideally freeware/OSS) that can notify the user when the CPU is above a (configurable) threshold for some (configurable) duration? I am particularly interested in a minimalist tool that fits the following bill and in order of importance (which a lot of the built-in Windows facilities like Performance/Resource Monitor do not): Does not require administrative privileges Has a low working set so it has no observable cost if left running forever Monitors silently in the system tray Uses a subtle (not in-your-face) notification method like showing a balloon tip with the name of the offending process that has been maximizing the CPU Can be configured to start automatically when a user logs on interactively A: Maybe ProcessTamer could be helpfull. It does not exactly what you are look for. But it might be a quick and dirty solution. Process Tamer is a tiny (140k) and super efficient utility for Microsoft Windows XP/2K/NT that runs in your system tray and constantly monitors the cpu usage of other processes. When it sees a process that is overloading your cpu, it reduces the priority of that process temporarily, until its cpu usage returns to a reasonable level. (source: donationcoder.com) A: You could write your own utility. Here a sample as starter: http://gist.github.com/11658 Create a CpuMeter instance ResetCounter Wait for an intervall Check Cpu utilisation Start again
How to get notified of Windows process maximizing CPU?
Is there a tool for Windows XP and Vista (built-in or otherwise ideally freeware/OSS) that can notify the user when the CPU is above a (configurable) threshold for some (configurable) duration? I am particularly interested in a minimalist tool that fits the following bill and in order of importance (which a lot of the built-in Windows facilities like Performance/Resource Monitor do not): Does not require administrative privileges Has a low working set so it has no observable cost if left running forever Monitors silently in the system tray Uses a subtle (not in-your-face) notification method like showing a balloon tip with the name of the offending process that has been maximizing the CPU Can be configured to start automatically when a user logs on interactively
[ "Maybe ProcessTamer could be helpfull. It does not exactly what you are look for. But it might be a quick and dirty solution.\n\nProcess Tamer is a tiny (140k) and super efficient utility for Microsoft Windows XP/2K/NT that runs in your system tray and constantly monitors the cpu usage of other processes. When it sees a process that is overloading your cpu, it reduces the priority of that process temporarily, until its cpu usage returns to a reasonable level.\n\n(source: donationcoder.com) \n\n", "You could write your own utility.\nHere a sample as starter:\nhttp://gist.github.com/11658\n\nCreate a CpuMeter instance\nResetCounter\nWait for an intervall\nCheck Cpu utilisation\nStart again\n\n" ]
[ 1, 0 ]
[]
[]
[ "performance", "utilities", "windows" ]
stackoverflow_0000101021_performance_utilities_windows.txt
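Following the "write your own utility" answer, a bare-bones C# polling loop might look like the sketch below (the 90% threshold and one-second interval are arbitrary choices; a real tray app would raise a balloon tip via NotifyIcon instead of printing):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CpuWatcher
    {
        static void Main()
        {
            // "_Total" aggregates all cores; this counter is normally
            // readable without administrative privileges.
            var cpu = new PerformanceCounter(
                "Processor", "% Processor Time", "_Total");
            cpu.NextValue();            // the first sample always reads 0
            int busySeconds = 0;

            while (true)
            {
                Thread.Sleep(1000);     // sample once per second
                float load = cpu.NextValue();
                busySeconds = load > 90f ? busySeconds + 1 : 0;
                if (busySeconds >= 10)  // above threshold for ~10 seconds
                    Console.WriteLine("CPU pegged at {0:F0}%", load);
            }
        }
    }

Identifying the offending process, as the question asks, would mean enumerating the per-process instances of the Process counter category in the same way.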
Q: How do you convert your office to build automation? The title should say it all, then I can solidify 2 more ticks on the Joel test. I've implemented build automation using a makefile and a python script already and I understand the basics and the options. But how can I, the new guy who reads the blogs, convince my cohort of its inherent efficacy? A: Ask for forgiveness, instead of permission. Get it working in private (which it looks like you have) and then demonstrate its advantages. One thing that always gets people is using CruiseControl's Tray utility - people love it when they can see, through their system tray, that the build succeeded. (this is assuming you're in a Windows environment, that CruiseControl will work with your existing systems, etc.) NOTE: If asking for forgiveness instead of permission will result in instant termination, you might not want to do the above. You might also want to look for work somewhere else. Your mileage may vary. A: Implement build lights ... we did something similar with lava lamps and it was a huge hit. For added bonus marks give every developer a red light over their desk and have the right light come on when the build breaks. A: Grab an old spare computer & put it in the corner of your office. Set it up to build your project. Write a small script that does: Get latest version of all files. If there was a file change, build Notify you if there's a failure. When you catch a break, compassionately get it fixed. Consider adding a step to run unit tests, too. If you can avoid scolding people for their mistakes, pretty soon people will be impressed with how reliable the build has been since you arrived. Build from there. The trick is to spend very little of your time to generate a lot of value for your team, without pissing anyone off. A: Set up an autobuilder. Once you have it building and running the tests automatically, it won't matter if you convince other people to save their own time :) If you're using git for version control, here's an autobuilder that automatically finds the exact checkin that started causing the tests to fail: http://github.com/apenwarr/gitbuilder/ A: I would take a spare box, install a continuous integration server (Hudson or CruiseControl in the Java world) and set up a job that builds your application each time someone checks in some code. You can either try to convince your coworker or just wait until someone breaks the build. In the latter case, just send the following email: to: all developers Guys, I've just noticed that I can't build our software using the latest version because of the following error: ... If you want to be notified by our continuous build system (attached is the mail I received when it failed to build our application), just let me know. Usually it doesn't take that long until everyone is on the list A: I would set up the automated build as a nightly process such that every night it grabs the most recent code revision, builds it, and generates a report. Now you will know first thing every morning whether or not the build is broken, and if it is, you can notify the team. If broken builds are much of a problem on your project, people will probably start coming to you first to find out if it is safe to sync to the latest code, since you will be the person who tends to know on any given day whether or not the build is broken (by the way, an automated suite of unit tests helps a great deal with this as well). With any luck, people will start to realize that your nightly build is a useful thing to have, and you'll be able to just set up your daily build report as an email that goes out. A: James Shore has two great links: For hardware http://jamesshore.com/Blog/Continuous-Integration-on-a-Dollar-a-Day.html For "Humanware" http://jamesshore.com/Change-Diary/ ( The history of how he did it. The read is long but changing an organization is harder ) A: When the build is needed by the team on a regular basis, it's pretty easy. You appoint a team member (rotated periodically) to do the build. If the build process is complicated enough, the team will on its own come up with a way of at least partially automating the build. In the worst case, you'll have to automate the build yourself, but no-one will be against the automation. A: Demonstration is the best, and really the only way to change anyone's mind who is resistant to doing things differently. Here we showed how useful automated builds are by having the ability for QA to grab a green light build straight from the build server and install it and test without any direction from the developers. They are able to continue working, they know that it at least passes its unit tests. It helped integrate testing and development reducing time bugs were in the system.
How do you convert your office to build automation?
The title should say it all, then I can solidify 2 more ticks on the Joel test. I've implemented build automation using a makefile and a python script already and I understand the basics and the options. But how can I, the new guy who reads the blogs, convince my cohort of its inherent efficacy?
[ "Ask for forgiveness, instead of permission.\nGet it working in private (which it looks like you have) and then demonstrate its advantages.\nOne thing that always gets people is using CruiseControl's Tray utility - people love it when they can see, through their system tray, that the build succeeded. (this is assuming you're in a Windows environment, that CruiseControl will work with your existing systems, etc.)\nNOTE: If asking for forgiveness instead of permission will result in instant termination, you might not want to do the above. You might also want to look for work somewhere else. Your mileage may vary.\n", "Implement build lights ... we did something similar with lava lamps and it was a huge hit. For added bonus marks give every developer a red light over their desk and have the right light come on when the build breaks.\n", "Grab an old spare computer & put it in the corner of your office. Set it up to build your project. Write a small script that does:\n\nGet latest version of all files. \nIf there was a file change, build\nNotify you if there's a failure.\n\nWhen you catch a break, compassionately get it fixed. \nConsider adding a step to run unit tests, too.\nIf you can avoid scolding people for their mistakes, pretty soon people will be impressed with how reliable the build has been since you arrived. Build from there.\nThe trick is to spend very little of your time to generate a lot of value for your team, without pissing anyone off.\n", "Set up an autobuilder. Once you have it building and running the tests automatically, it won't matter if you convince other people to save their own time :)\nIf you're using git for version control, here's an autobuilder that automatically finds the exact checkin that started causing the tests to fail: http://github.com/apenwarr/gitbuilder/\n", "I would take a spare box, install a continuous integration server (Hudson or CruiseControl in the Java world) and set up a job that builds your application each time someone checks in some code.\nYou can either try to convince your coworker or just wait until someone breaks the build. In the latter case, just send the following email:\nto: all developers\n\nGuys,\n\nI've just noticed that I can't build our software using the \nlatest version because of the following error:\n\n ...\n\nIf you want to be notified by our continuous \nbuild system (attached is the mail I received when\nit failed to build our application), just let me know.\n\nUsually it doesn't take that long until everyone is on the list\n", "I would set up the automated build as a nightly process such that every night it grabs the most recent code revision, builds it, and generates a report. Now you will know first thing every morning whether or not the build is broken, and if it is, you can notify the team. If broken builds are much of a problem on your project, people will probably start coming to you first to find out if it is safe to sync to the latest code, since you will be the person who tends to know on any given day whether or not the build is broken (by the way, an automated suite of unit tests helps a great deal with this as well). With any luck, people will start to realize that your nightly build is a useful thing to have, and you'll be able to just set up your daily build report as an email that goes out.\n", "James Shore has two great links:\nFor hardware\nhttp://jamesshore.com/Blog/Continuous-Integration-on-a-Dollar-a-Day.html\nFor \"Humanware\"\nhttp://jamesshore.com/Change-Diary/\n( The history of how he did it. The read is long but changing an organization is harder )\n", "When the build is needed by the team on a regular basis, it's pretty easy. You appoint a team member (rotated periodically) to do the build. If the build process is complicated enough, the team will on its own come up with a way of at least partially automating the build. In the worst case, you'll have to automate the build yourself, but no-one will be against the automation.\n", "Demonstration is the best, and really the only way to change anyone's mind who is resistant to doing things differently.\nHere we showed how useful automated builds are by having the ability for QA to grab a green light build straight from the build server and install it and test without any direction from the developers. They are able to continue working, they know that it at least passes its unit tests. It helped integrate testing and development reducing time bugs were in the system.\n" ]
[ 6, 5, 3, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "build_automation", "version_control", "visual_sourcesafe" ]
stackoverflow_0000088211_build_automation_version_control_visual_sourcesafe.txt
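In the spirit of the "spare computer in the corner" answer, and since the question already mentions a makefile plus a Python script, a tiny polling builder could be sketched like this (the checkout and notification commands are placeholders for whatever your shop actually uses):

    import subprocess
    import time

    def run(cmd):
        """Run a shell command; return (exit_code, combined_output)."""
        p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return p.returncode, p.stdout + p.stderr

    while True:
        code, out = run("vcs update .")            # placeholder: get latest
        if code == 0 and "up to date" not in out:  # something changed
            code, out = run("make all test")       # build and run the tests
            if code != 0:
                run("notify-team 'build broke'")   # placeholder notifier
        time.sleep(300)                            # poll every five minutes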
Q: Use jQuery to replace XMLHttpRequest I am quite new to JavaScript libraries. I wanted to replace my current code with jQuery. My current code looks like this: var req; function createRequest() { var key = document.getElementById("key"); var keypressed = document.getElementById("keypressed"); keypressed.value = key.value; var url = "/My_Servlet/response?key=" + escape(key.value); if (window.XMLHttpRequest) { req = new XMLHttpRequest(); } else if (window.ActiveXObject) { req = new ActiveXObject("Microsoft.XMLHTTP"); } req.open("Get", url, true); req.onreadystatechange = callback; req.send(null); } function callback() { if (req.readyState == 4) { if (req.status == 200) { var decimal = document.getElementById('decimal'); decimal.value = req.responseText; } } clear(); } I wanted to replace my code with something a little friendlier like jQuery's $.get(url, callback); However it doesn't call my callback function. Also I would like to call a function called createRequest continuously. Does jQuery have a nice way of doing that? ­­­­­­­­­­­­­­­­­­­­­­­ A: $.get(url, {}, callback); should do the trick. Your callback could be simplified like this: function callback(content){ $('#decimal').val(content); } Or even shorter: $.get(url, {}, function(content){ $('#decimal').val(content); }); And all in all I think this should work: function createRequest() { var keyValue = $('#key').val(); $('#keypressed').val(keyValue); var url = "/My_Servlet/response"; $.get(url, {key: keyValue}, function(content){ $('#decimal').val(content); }); } A: Take out the readyState and status checks. jQuery only calls your callback upon success. Your callback is supplied the arguments (data, textStatus), so you should use data instead of req.responseText. window.setTimeout() as suggested by another answer won't do what you want - that only waits and then calls your function once. You need to use window.setInterval() instead, which will call your function periodically until you cancel it. So, in summary: var interval = 500; /* Milliseconds between requests. */ window.setInterval(function() { var val = $("#key").val(); $("#keypressed").val(val); $.get("/My_Servlet/response", { "key": val }, function(data, textStatus) { $("#decimal").val(data); }); }), interval); A: I don't think jQuery implements a timeout function, but plain old javascript does it rather nicely :) A: According to the docs, jQuery.get's arguments are url, data, callback, not url, callback. A call to JavaScript's setTimeout function at the end of your callback function should suffice to get this to continually execute. A: There's no need to set the GET parameters on the URL, jQuery will set them automatically. Try this code: var key = document.getElementById("key"); [...] var url = "/My_Servlet/response"; $.get (url, {'key': key}, function (responseText) { var decimal = document.getElementById ('decimal'); decimal.value = responseText; }); A: In the end I guess it was added the type. This seems to work for me. function convertToDecimal(){ var key = document.getElementById("key"); var keypressed = document.getElementById("keypressed"); keypressed.value = key.value; var url = "/My_Servlet/response?key="+ escape(key.value); jQuery.get(url, {}, function(data){ callback(data);} , "text" ); } function callback(data){ var decimal = document.getElementById('decimal'); decimal.value = data; clear(); } Thanks Everyone for the help. I'll vote you up.
Use jQuery to replace XMLHttpRequest
I am quite new to JavaScript libraries. I wanted to replace my current code with jQuery. My current code looks like this: var req; function createRequest() { var key = document.getElementById("key"); var keypressed = document.getElementById("keypressed"); keypressed.value = key.value; var url = "/My_Servlet/response?key=" + escape(key.value); if (window.XMLHttpRequest) { req = new XMLHttpRequest(); } else if (window.ActiveXObject) { req = new ActiveXObject("Microsoft.XMLHTTP"); } req.open("Get", url, true); req.onreadystatechange = callback; req.send(null); } function callback() { if (req.readyState == 4) { if (req.status == 200) { var decimal = document.getElementById('decimal'); decimal.value = req.responseText; } } clear(); } I wanted to replace my code with something a little friendlier like jQuery's $.get(url, callback); However it doesn't call my callback function. Also I would like to call a function called createRequest continuously. Does jQuery have a nice way of doing that? ­­­­­­­­­­­­­­­­­­­­­­­
[ "$.get(url, {}, callback);\n\nshould do the trick. Your callback could be simplified like this:\nfunction callback(content){\n $('#decimal').val(content);\n}\n\nOr even shorter:\n$.get(url, {}, function(content){\n $('#decimal').val(content);\n});\n\nAnd all in all I think this should work:\nfunction createRequest() {\n var keyValue = $('#key').val();\n $('#keypressed').val(keyValue);\n var url = \"/My_Servlet/response\";\n $.get(url, {key: keyValue}, function(content){\n $('#decimal').val(content);\n });\n}\n\n", "Take out the readyState and status checks. jQuery only calls your callback upon success. Your callback is supplied the arguments (data, textStatus), so you should use data instead of req.responseText.\nwindow.setTimeout() as suggested by another answer won't do what you want - that only waits and then calls your function once. You need to use window.setInterval() instead, which will call your function periodically until you cancel it.\nSo, in summary:\nvar interval = 500; /* Milliseconds between requests. */\nwindow.setInterval(function() {\n var val = $(\"#key\").val();\n $(\"#keypressed\").val(val);\n $.get(\"/My_Servlet/response\", { \"key\": val }, function(data, textStatus) {\n $(\"#decimal\").val(data);\n });\n}), interval);\n\n", "I don't think jQuery implements a timeout function, but plain old javascript does it rather nicely :)\n", "According to the docs, jQuery.get's arguments are url, data, callback, not url, callback.\nA call to JavaScript's setTimeout function at the end of your callback function should suffice to get this to continually execute.\n", "There's no need to set the GET parameters on the URL, jQuery will set them automatically. Try this code:\nvar key = document.getElementById(\"key\");\n[...]\nvar url = \"/My_Servlet/response\";\n$.get (url, {'key': key}, function (responseText)\n{\n var decimal = document.getElementById ('decimal'); \n decimal.value = responseText;\n});\n\n", "In the end I guess it was added the type. This seems to work for me.\n function convertToDecimal(){ \n var key = document.getElementById(\"key\"); \n var keypressed = document.getElementById(\"keypressed\"); \n keypressed.value = key.value; \n var url = \"/My_Servlet/response?key=\"+ escape(key.value);\n jQuery.get(url, {}, function(data){\n callback(data);}\n , \"text\" );\n }\n\n function callback(data){\n var decimal = document.getElementById('decimal');\n decimal.value = data;\n clear();\n }\n\nThanks Everyone for the help. I'll vote you up.\n" ]
[ 19, 3, 2, 2, 2, 0 ]
[]
[]
[ "ajax", "javascript", "jquery" ]
stackoverflow_0000104323_ajax_javascript_jquery.txt
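Combining the answers above into one working sketch: $.get for the request, window.setInterval for the continuous calling the question asks about (URL and element IDs are the ones from the question):

    function createRequest() {
        var key = $("#key").val();
        $("#keypressed").val(key);
        // jQuery URL-encodes the data object and appends it as ?key=...
        $.get("/My_Servlet/response", { key: key }, function (data) {
            $("#decimal").val(data); // only invoked on a successful response
        });
    }

    // Poll every 500 ms; call clearInterval(timer) to stop.
    var timer = window.setInterval(createRequest, 500);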
Q: How do I reference a diagram in a DSL T4 template? Google's not coming to my rescue, here, and I just know this is the perfect place to ask. I'm writing a custom DirectiveProcessor for a DSL and I want to be able to access a diagram from within my T4 template. Within my DirectiveProcessor, I've loaded the domain model and my diagram using (wait for it...) LoadModelAndDiagram(...). So, now the diagram's loaded into the default partition in the Store, but I can't for the life of me work out how to resolve a reference to it later. Can anyone guide the way? A: Well, after lots of further work, I decided I didn't need to access my diagram **from within** a custom DirectiveProcessor. I've still got a custom DirectiveProcessor because the standard generated one doesn't load the existing diagram when it loads the domain model. Getting a custom DirectiveProcessor to load the diagram and model at the same time is trivially easy. You subclass the standard generated DirectiveProcessor base class and override: protected override bool LoadDiagramData { get { return true; } } Now, we have the diagram loaded, so to get back to the original question, how do we access it? Like this: using (Transaction t = partition.Store.TransactionManager .BeginTransaction("MyTxn", true)) { MyDslDiagram diagram = partition.ElementDirectory .FindElements<MyDslDiagram>(true).SingleOrDefault(); /* * Now, do stuff with your diagram. * */ } Now, this code works just fine if you have a diagram loaded. If you don't, diagram will come back as null, in which case, we either have to load the diagram explicitly or create one dynamically. I won't go into that, here. Maybe on my blog when I've had some sleep!
How do I reference a diagram in a DSL T4 template?
Google's not coming to my rescue, here, and I just know this is the perfect place to ask. I'm writing a custom DirectiveProcessor for a DSL and I want to be able to access a diagram from within my T4 template. Within my DirectiveProcessor, I've loaded the domain model and my diagram using (wait for it...) LoadModelAndDiagram(...). So, now the diagram's loaded into the default partition in the Store, but I can't for the life of me work out how to resolve a reference to it later. Can anyone guide the way?
[ "Well, after lots of further work, I decided I didn't need to access my diagram **from within** a custom DirectiveProcessor.\nI've still got a custom DirectiveProcessor because the standard generated one doesn't load the existing diagram when it loads the domain model.\nGetting a custom DirectiveProcessor to load the diagram and model at the same time is trivially easy. You subclass the standard generated DirectiveProcessor base class and override:\nprotected override bool LoadDiagramData\n{\n get\n {\n return true;\n }\n}\n\nNow, we have the diagram loaded, so to get back to the original question, how do we access it? Like this:\nusing (Transaction t = partition.Store.TransactionManager\n .BeginTransaction(\"MyTxn\", true))\n{\n MyDslDiagram diagram = partition.ElementDirectory\n .FindElements<MyDslDiagram>(true).SingleOrDefault();\n\n /*\n * Now, do stuff with your diagram.\n *\n */\n}\n\nNow, this code works just fine if you have a diagram loaded. If you don't, diagram will come back as null, in which case, we either have to load the diagram explicitly or create one dynamically.\nI won't go into that, here. Maybe on my blog when I've had some sleep!\n" ]
[ 2 ]
[]
[]
[ "diagram", "dsl", "visual_studio" ]
stackoverflow_0000082776_diagram_dsl_visual_studio.txt
Q: Locking in C# I'm still a little unclear and when to wrap a lock around some code. My general rule-of-thumb is to wrap an operation in a lock when it reads or writes to a static variable. But when a static variable is ONLY read (e.g. it's a readonly that is set during type initialization), accessing it doesn't need to be wrapped in a lock statement, right? I recently saw some code that looked like the following example, and it made me think there may be some gaps in my multithreading knowledge: class Foo { private static readonly string bar = "O_o"; private bool TrySomething() { string bar; lock(Foo.objectToLockOn) { bar = Foo.bar; } // Do something with bar } } That just doesn't make sense to me--why would there by concurrency issues with READING a register? Also, this example brings up another question. Is one of these better than the other? (E.g. example two holds the lock for less time?) I suppose I could disassemble the MSIL... class Foo { private static string joke = "yo momma"; private string GetJoke() { lock(Foo.objectToLockOn) { return Foo.joke; } } } vs. class Foo { private static string joke = "yo momma"; private string GetJoke() { string joke; lock(Foo.objectToLockOn) { joke = Foo.joke; } return joke; } } A: Since none of the code you've written modifies the static field after initialization, there is no need for any locking. Just replacing the string with a new value won't need synchronization either, unless the new value depends on the results of a read of the old value. Static fields aren't the only things that need synchronization, any shared reference which could be modified is vulnerable to synchronization issues. class Foo { private int count = 0; public void TrySomething() { count++; } } You might suppose that two threads executing the TrySomething method would be fine. But its not. Thread A reads the value of count (0) into a register so it can be incremented. Context switch! The thread scheduler decides thread A has had enough execution time. Next in line is Thread B. Thread B reads the value of count (0) into a register. Thread B increments the register. Thread B saves the result (1) to count. Context switch back to A. Thread A reloads the register with the value of count (0) saved on its stack. Thread A increments the register. Thread A saves the result (1) to count. So, even though we called count++ twice, the value of count has just gone from 0 to 1. Lets make the code thread-safe: class Foo { private int count = 0; private readonly object sync = new object(); public void TrySomething() { lock(sync) count++; } } Now when Thread A gets interrupted Thread B cannot mess with count because it will hit the lock statement and then block until Thread A has released sync. By the way, there is an alternative way to make incrementing Int32s and Int64s thread-safe: class Foo { private int count = 0; public void TrySomething() { System.Threading.Interlocked.Increment(ref count); } } Regarding the second part of your question, I think I would just go with whichever is easier to read, any performance difference there will be negligible. Early optimisation is the root of all evil, etc. Why threading is hard A: Reading or writing a 32-bit or smaller field is an atomic operation in C#. There's no need for a lock in the code you presented, as far as I can see. A: It looks to me that the lock is unnecessary in your first case. Using the static initializer to initialize bar is guaranteed to be thread safe. Since you only ever read the value, there's no need to lock it. 
If the value's never going to change, there will never be any contention, why lock at all? A: If you're just writing a value to a pointer, you don't need to lock, since that action is atomic. Generally, you should lock any time you need to do a transaction involving at least two atomic actions (reads or writes) that depend on the state not changing between the beginning and end. That said – I come from Java land, where all reads and writes of variables are atomic actions. Other answers here suggest that .NET is different. A: Dirty reads? A: In my opinion, you should try very hard to not put static variables in a position where they need to be read/written to from different threads. They are essentially free-for-all global variables in that case, and globals are almost always a Bad Thing. That being said, if you do put a static variable in such a position, you may want to lock during a read, just in case - remember, another thread may have swooped in and changed the value during the read, and if it does, you may end up with corrupt data. Reads are not necessarily atomic operations unless you ensure they are by locking. Same with writes - they are not always atomic operations either. Edit: As Mark pointed out, for certain primitives in C# reads are always atomic. But be careful with other data types. A: As for your "which is better" question, they're the same since the function scope isn't used for anything else.
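The "32-bit or smaller" caveat in the answers above is worth making concrete. Here is a minimal C# sketch (illustrative only; the class and field names are hypothetical, not from the question): reads and writes of an int are atomic, but a long is 64 bits wide, and a plain read of it can tear on a 32-bit runtime, which is exactly what Interlocked.Read is for.

using System.Threading;

class Stats
{
    // 64-bit field: on a 32-bit CLR a plain read or write can be split
    // into two 32-bit halves, so another thread may observe a torn value.
    private long totalBytes = 0;

    public void Add(long n)
    {
        // Atomic read-modify-write, no lock object required.
        Interlocked.Add(ref totalBytes, n);
    }

    public long Read()
    {
        // Guaranteed-atomic 64-bit read, even on 32-bit platforms.
        return Interlocked.Read(ref totalBytes);
    }
}

For a plain int counter, Interlocked.Increment (shown in the first answer) is the equivalent tool; the lock-based version is only needed when several reads and writes must happen together as one transaction.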
Locking in C#
I'm still a little unclear on when to wrap a lock around some code. My general rule-of-thumb is to wrap an operation in a lock when it reads or writes to a static variable. But when a static variable is ONLY read (e.g. it's a readonly that is set during type initialization), accessing it doesn't need to be wrapped in a lock statement, right? I recently saw some code that looked like the following example, and it made me think there may be some gaps in my multithreading knowledge: class Foo { private static readonly string bar = "O_o"; private bool TrySomething() { string bar; lock(Foo.objectToLockOn) { bar = Foo.bar; } // Do something with bar } } That just doesn't make sense to me--why would there be concurrency issues with READING a register? Also, this example brings up another question. Is one of these better than the other? (E.g. example two holds the lock for less time?) I suppose I could disassemble the MSIL... class Foo { private static string joke = "yo momma"; private string GetJoke() { lock(Foo.objectToLockOn) { return Foo.joke; } } } vs. class Foo { private static string joke = "yo momma"; private string GetJoke() { string joke; lock(Foo.objectToLockOn) { joke = Foo.joke; } return joke; } }
[ "Since none of the code you've written modifies the static field after initialization, there is no need for any locking. Just replacing the string with a new value won't need synchronization either, unless the new value depends on the results of a read of the old value.\nStatic fields aren't the only things that need synchronization, any shared reference which could be modified is vulnerable to synchronization issues.\nclass Foo\n{\n private int count = 0;\n public void TrySomething() \n {\n count++;\n }\n}\n\nYou might suppose that two threads executing the TrySomething method would be fine. But its not.\n\nThread A reads the value of count (0) into a register so it can be incremented.\nContext switch! The thread scheduler decides thread A has had enough execution time. Next in line is Thread B.\nThread B reads the value of count (0) into a register.\nThread B increments the register.\nThread B saves the result (1) to count.\nContext switch back to A.\nThread A reloads the register with the value of count (0) saved on its stack.\nThread A increments the register.\nThread A saves the result (1) to count.\n\nSo, even though we called count++ twice, the value of count has just gone from 0 to 1. Lets make the code thread-safe:\nclass Foo\n{\n private int count = 0;\n private readonly object sync = new object();\n public void TrySomething() \n {\n lock(sync)\n count++;\n }\n}\n\nNow when Thread A gets interrupted Thread B cannot mess with count because it will hit the lock statement and then block until Thread A has released sync.\nBy the way, there is an alternative way to make incrementing Int32s and Int64s thread-safe:\nclass Foo\n{\n private int count = 0;\n public void TrySomething() \n {\n System.Threading.Interlocked.Increment(ref count);\n }\n}\n\nRegarding the second part of your question, I think I would just go with whichever is easier to read, any performance difference there will be negligible. Early optimisation is the root of all evil, etc.\nWhy threading is hard\n", "Reading or writing a 32-bit or smaller field is an atomic operation in C#. There's no need for a lock in the code you presented, as far as I can see.\n", "It looks to me that the lock is unnecessary in your first case. Using the static initializer to initialize bar is guaranteed to be thread safe. Since you only ever read the value, there's no need to lock it. If the value's never going to change, there will never be any contention, why lock at all?\n", "If you're just writing a value to a pointer, you don't need to lock, since that action is atomic. Generally, you should lock any time you need to do a transaction involving at least two atomic actions (reads or writes) that depend on the state not changing between the beginning and end.\nThat said – I come from Java land, where all reads and writes of variables are atomic actions. Other answers here suggest that .NET is different.\n", "Dirty reads?\n", "In my opinion, you should try very hard to not put static variables in a position where they need to be read/written to from different threads. They are essentially free-for-all global variables in that case, and globals are almost always a Bad Thing.\nThat being said, if you do put a static variable in such a position, you may want to lock during a read, just in case - remember, another thread may have swooped in and changed the value during the read, and if it does, you may end up with corrupt data. Reads are not necessarily atomic operations unless you ensure they are by locking. 
Same with writes - they are not always atomic operations either.\nEdit:\nAs Mark pointed out, for certain primitives in C# reads are always atomic. But be careful with other data types.\n", "As for your \"which is better\" question, they're the same since the function scope isn't used for anything else.\n" ]
[ 23, 8, 3, 1, 1, 1, 0 ]
[]
[]
[ "c#", "concurrency", "multithreading", "synchronization" ]
stackoverflow_0000105198_c#_concurrency_multithreading_synchronization.txt
Q: Writing post data from one Java servlet to another I am trying to write a servlet that will send an XML file (XML-formatted string) to another servlet via a POST. (Non-essential XML-generating code replaced with "Hello there") StringBuilder sb= new StringBuilder(); sb.append("Hello there"); URL url = new URL("theservlet's URL"); HttpURLConnection connection = (HttpURLConnection)url.openConnection(); connection.setRequestMethod("POST"); connection.setRequestProperty("Content-Length", "" + sb.length()); OutputStreamWriter outputWriter = new OutputStreamWriter(connection.getOutputStream()); outputWriter.write(sb.toString()); outputWriter.flush(); outputWriter.close(); This is causing a server error, and the second servlet is never invoked. A: This kind of thing is much easier using a library like HttpClient. There's even a post XML code example: PostMethod post = new PostMethod(url); RequestEntity entity = new FileRequestEntity(inputFile, "text/xml; charset=ISO-8859-1"); post.setRequestEntity(entity); HttpClient httpclient = new HttpClient(); int result = httpclient.executeMethod(post); A: I recommend using Apache HTTPClient instead, because it's a nicer API. But to solve this current problem: try calling connection.setDoOutput(true); after you open the connection. StringBuilder sb= new StringBuilder(); sb.append("Hello there"); URL url = new URL("theservlet's URL"); HttpURLConnection connection = (HttpURLConnection)url.openConnection(); connection.setDoOutput(true); connection.setRequestMethod("POST"); connection.setRequestProperty("Content-Length", "" + sb.length()); OutputStreamWriter outputWriter = new OutputStreamWriter(connection.getOutputStream()); outputWriter.write(sb.toString()); outputWriter.flush(); outputWriter.close(); A: Don't forget to use: connection.setDoOutput( true) if you intend on sending output. A: The contents of an HTTP post upload stream and the mechanics of it don't seem to be what you are expecting them to be. You cannot just write a file as the post content, because POST has very specific RFC standards on how the data included in a POST request is supposed to be sent. It is not just the format of the content itself, but also the mechanics of how it is "written" to the output stream. A lot of the time POST is now written in chunks. If you look at the source code of Apache's HTTPClient you will see how it writes the chunks. There are quirks with the content length as a result, because the content length is increased by a small number identifying the chunk and a random small sequence of characters that delimits each chunk as it is written over the stream. Look at some of the other methods described in newer Java versions of the HTTPURLConnection. http://java.sun.com/javase/6/docs/api/java/net/HttpURLConnection.html#setChunkedStreamingMode(int) If you don't know what you are doing and don't want to learn it, dealing with adding a dependency like Apache HTTPClient really does end up being much easier because it abstracts all the complexity and just works.
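Pulling the accepted fix together, here is a minimal sketch of the corrected client; the servlet URL is a hypothetical placeholder. The essential change is calling setDoOutput(true) before writing. The sketch also encodes the string explicitly so the declared length matches the bytes actually sent, and uses setFixedLengthStreamingMode instead of hand-setting the Content-Length header, which the JDK treats as restricted. The try-with-resources form assumes Java 7 or later; on older JDKs, close the stream in a finally block.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PostXmlClient {
    public static void main(String[] args) throws Exception {
        String xml = "<greeting>Hello there</greeting>";
        byte[] body = xml.getBytes(StandardCharsets.UTF_8);

        URL url = new URL("http://localhost:8080/app/receiver"); // hypothetical URL
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setDoOutput(true); // must be set before writing the body
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        connection.setFixedLengthStreamingMode(body.length); // sets Content-Length

        try (OutputStream out = connection.getOutputStream()) {
            out.write(body); // send the XML payload
        }
        // Reading the response code forces the request to complete.
        System.out.println("HTTP status: " + connection.getResponseCode());
    }
}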
Writing post data from one Java servlet to another
I am trying to write a servlet that will send an XML file (XML-formatted string) to another servlet via a POST. (Non-essential XML-generating code replaced with "Hello there") StringBuilder sb= new StringBuilder(); sb.append("Hello there"); URL url = new URL("theservlet's URL"); HttpURLConnection connection = (HttpURLConnection)url.openConnection(); connection.setRequestMethod("POST"); connection.setRequestProperty("Content-Length", "" + sb.length()); OutputStreamWriter outputWriter = new OutputStreamWriter(connection.getOutputStream()); outputWriter.write(sb.toString()); outputWriter.flush(); outputWriter.close(); This is causing a server error, and the second servlet is never invoked.
[ "This kind of thing is much easier using a library like HttpClient. There's even a post XML code example:\nPostMethod post = new PostMethod(url);\nRequestEntity entity = new FileRequestEntity(inputFile, \"text/xml; charset=ISO-8859-1\");\npost.setRequestEntity(entity);\nHttpClient httpclient = new HttpClient();\nint result = httpclient.executeMethod(post);\n\n", "I recommend using Apache HTTPClient instead, because it's a nicer API. \nBut to solve this current problem: try calling connection.setDoOutput(true); after you open the connection.\nStringBuilder sb= new StringBuilder();\nsb.append(\"Hello there\");\n\nURL url = new URL(\"theservlet's URL\");\nHttpURLConnection connection = (HttpURLConnection)url.openConnection(); \nconnection.setDoOutput(true);\nconnection.setRequestMethod(\"POST\");\nconnection.setRequestProperty(\"Content-Length\", \"\" + sb.length());\n\nOutputStreamWriter outputWriter = new OutputStreamWriter(connection.getOutputStream());\noutputWriter.write(sb.toString());\noutputWriter.flush();\noutputWriter.close();\n\n", "Don't forget to use: \nconnection.setDoOutput( true)\n\nif you intend on sending output.\n", "The contents of an HTTP post upload stream and the mechanics of it don't seem to be what you are expecting them to be. You cannot just write a file as the post content, because POST has very specific RFC standards on how the data included in a POST request is supposed to be sent. It is not just the formatted of the content itself, but it is also the mechanic of how it is \"written\" to the outputstream. Alot of the time POST is now written in chunks. If you look at the source code of Apache's HTTPClient you will see how it writes the chunks.\nThere are quirks with the content length as result, because the content length is increased by a small number identifying the chunk and a random small sequence of characters that delimits each chunk as it is written over the stream. Look at some of the other methods described in newer Java versions of the HTTPURLConnection.\nhttp://java.sun.com/javase/6/docs/api/java/net/HttpURLConnection.html#setChunkedStreamingMode(int)\nIf you don't know what you are doing and don't want to learn it, dealing with adding a dependency like Apache HTTPClient really does end up being much easier because it abstracts all the complexity and just works.\n" ]
[ 13, 8, 2, 2 ]
[]
[]
[ "java", "servlets" ]
stackoverflow_0000096360_java_servlets.txt
Q: Web designer for VS.NET ReportViewer Is there any designer for RDL files (Visual Studio .NET reports) that can be used in a web browser? A: The best that I've seen is RsInteract: http://www.rsinteract.com/
Web designer for VS.NET ReportViewer
Is there any designer for RDL files (Visual Studio .NET reports) that can be used in a web browser?
[ "The best that I've seen is RsInteract: http://www.rsinteract.com/\n" ]
[ 1 ]
[]
[]
[ "designer", "rdl", "report", "reportviewer" ]
stackoverflow_0000104951_designer_rdl_report_reportviewer.txt
Q: int matrix with pointers in C - memory allocation confusion I'm having some issues with producing an int matrix without creating memory leaks. I want to be able to make a given (global) matrix into any size dynamically via read_matrix(). But then I want to be able to free the memory later on. So in my main method the second printf should result in a bus error since it should not have any memory allocated to it. How would I go about creating this? int** first_matrix; int** second_matrix; int** result_matrix; int** read_matrix(int size_x, int size_y) { int** matrix; matrix = calloc(size_x, sizeof(int*)); for(int i = 0;i<size_x;i++) { matrix[i] = calloc(size_y, sizeof(int)); } for(int i = 0;i<size_x;i++) { for(int j = 0;j<size_y;j++) { matrix[i][j] = i*10+j; } } return matrix; } int main(int stackc, char** stack) { first_matrix = read_matrix(10,10); printf("9:3 %d - 4:6 %d \n", first_matrix[9][3], first_matrix[4][6]); free(*first_matrix); free(first_matrix); printf("9:3 %d - 4:6 %d \n", first_matrix[9][3], first_matrix[4][6]); } A: Just because the memory has been free'd doesn't mean you can't access it! Of course, it's a very bad idea to access it after it's been free'd, but that's why it works in your example. Note that free( *first_matrix ) only free's first_matrix[0], not the other arrays. You probably want some kind of marker to signify the last array (unless you will always know when you free the outer array how many inner arrays you allocated). Something like: int** read_matrix(int size_x, int size_y) { int** matrix; matrix = calloc(size_x + 1, sizeof(int*)); // alloc one extra ptr for(int i = 0;i<size_x;i++) { matrix[i] = calloc(size_y, sizeof(int)); } matrix[size_x] = NULL; // set the extra ptr to NULL for(int i = 0;i<size_x;i++) { for(int j = 0;j<size_y;j++) { matrix[i][j] = i*10+j; } } return matrix; } Then when you're freeing them: // keep looping until you find the NULL one for( int i=0; first_matrix[i] != NULL; i++ ) { free( first_matrix[i] ); } free( first_matrix ); A: You need to free each row individually: void free_matrix(int **matrix, int size_x) { for(int i = 0; i < size_x; i++) free(matrix[i]); free(matrix); } A: Freeing the memory doesn't make it go away, it just means that another allocation might grab that same chunk of memory. Whatever you put in it will still be there until something else overwrites it. Also, you're not freeing everything you allocated. You're only freeing the array of pointers and the first row. But even if you free everything correctly, you would still have the same effect. If you want to create a "bus error" you need to point to memory that doesn't belong to your process. Why do you want to do that anyway? A: You only freed the first row (or column) of first_matrix. Write another function like this: void free_matrix(int **matrix, int rows) { int i; for(i=0; i<rows; i++) { free(matrix[i]); } free(matrix); } You might want to make the matrix into a struct to store its row and column count. A: I recommend using valgrind to track down unfree'd memory, as opposed to trying to make a bus error occur. It rocks for lots of other stuff as well. Sam A: You're getting memory leaks because you're freeing the first row of the matrix and the list of rows, but none of the 1 to nth rows. You need to call free in a loop. There are a couple of alternatives, however: - Allocate sizeof(int*)*rows + rows*cols*sizeof(int) bytes and use the first bytes for the row pointers. 
That way, you only have a single chunk of memory to free (and it's easier on the allocator, too) - Use a struct that contains the number of rows. Then you can avoid the row list altogether (saving memory). The only downside is that you have to use a function, a macro, or some messy notation to address the matrix. If you go with the second option, you can use a struct like this in any C99 compiler, and again only have to allocate a single block of memory (of size numints*sizeof(int)+sizeof(int)): struct matrix { int rows; int data[0]; } A: The concept you are missing here, is that for every calloc, there must be a free. and that free must be applied to the pointer passed back from calloc. I recommend you create a function (named delete_matrix) that uses a loop to free all of the pointers that you allocate in here for(int i = 0;i < size_x;i++) { matrix[i] = calloc(size_y, sizeof(int)); } then, once that is done, free the pointer allocated by this. matrix = calloc(size_x, sizeof(int*)); The way you are doing it now, free(*first_matrix); free(first_matrix); won't do what you want it to do.
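The struct-based, single-allocation alternative sketched in the answers can be written with the standard C99 flexible array member (int data[]) rather than the non-standard zero-length array. Here is a minimal compilable version; all names are invented for illustration:

#include <stdio.h>
#include <stdlib.h>

struct matrix {
    int rows;
    int cols;
    int data[];                    /* C99 flexible array member */
};

/* Address of element (r, c) in row-major order. */
static int *at(struct matrix *m, int r, int c)
{
    return &m->data[r * m->cols + c];
}

static struct matrix *matrix_new(int rows, int cols)
{
    /* One allocation covers the header and all the elements. */
    struct matrix *m = malloc(sizeof *m + (size_t)rows * cols * sizeof(int));
    if (m == NULL)
        return NULL;
    m->rows = rows;
    m->cols = cols;
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            *at(m, r, c) = r * 10 + c;
    return m;
}

int main(void)
{
    struct matrix *m = matrix_new(10, 10);
    if (m == NULL)
        return 1;
    printf("9:3 %d - 4:6 %d\n", *at(m, 9, 3), *at(m, 4, 6));
    free(m);                       /* one free matches the one malloc */
    return 0;
}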
int matrix with pointers in C - memory allocation confusion
I'm having some issues with producing an int matrix without creating memory leaks. I want to be able to make a given (global) matrix into any size dynamically via read_matrix(). But then I want to be able to free the memory later on. So in my main method the second printf should result in a bus error since it should not have any memory allocated to it. How would I go about creating this? int** first_matrix; int** second_matrix; int** result_matrix; int** read_matrix(int size_x, int size_y) { int** matrix; matrix = calloc(size_x, sizeof(int*)); for(int i = 0;i<size_x;i++) { matrix[i] = calloc(size_y, sizeof(int)); } for(int i = 0;i<size_x;i++) { for(int j = 0;j<size_y;j++) { matrix[i][j] = i*10+j; } } return matrix; } int main(int stackc, char** stack) { first_matrix = read_matrix(10,10); printf("9:3 %d - 4:6 %d \n", first_matrix[9][3], first_matrix[4][6]); free(*first_matrix); free(first_matrix); printf("9:3 %d - 4:6 %d \n", first_matrix[9][3], first_matrix[4][6]); }
[ "Just because the memory has been free'd doesn't mean you can't access it! Of course, it's a very bad idea to access it after it's been free'd, but that's why it works in your example.\nNote that free( *first_matrix ) only free's first_matrix[0], not the other arrays. You probably want some kind of marker to signify the last array (unless you will always know when you free the outer array how many inner arrays you allocated). Something like:\nint** read_matrix(int size_x, int size_y)\n{\n int** matrix;\n matrix = calloc(size_x, 1+sizeof(int*)); // alloc one extra ptr\n for(int i = 0;i<size_x;i++) {\n matrix[i] = calloc(size_y, sizeof(int));\n }\n matrix[size_x] = NULL; // set the extra ptr to NULL\n for(int i = 0;i<size_x;i++) {\n for(int j = 0;j<size_y;j++) {\n matrix[i][j] = i*10+j;\n }\n }\n return matrix;\n}\n\nThen when you're freeing them:\n// keep looping until you find the NULL one\nfor( int i=0; first_matrix[i] != NULL; i++ ) {\n free( first_matrix[i] );\n}\nfree( first_matrix );\n\n", "You need to free each row individually:\n\nvoid free_matrix(int **matrix, int size_x)\n{\n for(int i = 0; i < size_x; i++)\n free(matrix[i]);\n free(matrix);\n}\n\n", "Freeing the memory doesn't make it go away, it just means that another allocation might grab that same chunk of memory. Whatever you put in it will still be there until something else overwrites it.\nAlso, you're not freeing everything you allocated. You're only freeing the array of pointers and the first row. But even if you free everything correctly, you would still have the same effect.\nIf you want to create a \"bus error\" you need to point to memory that doesn't belong to your process. Why do you want to do that anyway?\n", "You only freed the first row (or column) of first_matrix. Write another function like this:\nvoid free_matrix(int **matrix, int rows)\n{\n int i;\n for(i=0; i<rows; i++)\n {\n free(matrix[i]);\n }\n free(matrix);\n}\n\nYou might want to make the matrix into a struct to store it's row and column count.\n", "I recommend using valgrind to track down unfree'd memory, as opposed to trying to make a bus error occur. It rocks for lots of other stuff as well.\nSam\n", "You're getting memory leaks because you're freeing the first row of the matrix and the list of rows, but none of the 1 to nth rows. You need to call free in a loop.\nThere are a couple of alternatives, however:\n- Allocate sizeof(int*)rows + rowscols*sizeof(int) bytes and use the first bytes for the row pointers. That way, you only have a single chunk of memory to free (and it's easier on the allocator, too)\n- Use a struct that contains the number of rows. Then you can avoid the row list altogether (saving memory). 
The only downside is that you have to use a function, a macro, or some messy notation to address the matrix.\nIf you go with the second option, you can use a struct like this in any C99 compiler, and again only have to allocate a single block of memory (of size numints*sizeof(int)+sizeof(int)):\nstruct matrix {\n int rows;\n int data[0];\n}\n\n", "The concept you are missing here, is that for every calloc, there must be a free.\nand that free must be applied to the pointer passed back from calloc.\nI recommend you create a function (named delete_matrix)\nthat uses a loop to free all of the pointers that you allocate in here\nfor(int i = 0;i < size_x;i++) {\n matrix[i] = calloc(size_y, sizeof(int));\n }\nthen, once that is done, free the pointer allocated by this.\nmatrix = calloc(size_x, sizeof(int*));\nThe way you are doing it now, \nfree(*first_matrix);\n free(first_matrix);\nwon't do what you want it to do.\n" ]
[ 9, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "c", "matrix", "memory_management", "pointers" ]
stackoverflow_0000105653_c_matrix_memory_management_pointers.txt
Q: 3rd party controls for MS SQL 2005 Reporting Services Can anyone recommend a good 3rd-party control (or controls) for MS SQL 2005 Reporting Services? If you know of an open library or implementation of such controls, that could be very useful too. A: Dundas do great RS add-ins if you have the budget: http://www.dundas.com/index.aspx A: I agree, Dundas has great controls. I used it in one of my projects. This was the sample which I used to test out the CRI: http://www.codeplex.com/MSFTRSProdSamples/Wiki/View.aspx?title=SS2005%21Custom%20Report%20Item%20Sample&referringTitle=Home
3rd party controls for MS SQL 2005 Reporting Services
Can anyone recommend a good 3rd-party control (or controls) for MS SQL 2005 Reporting Services? If you know of an open library or implementation of such controls, that could be very useful too.
[ "Dundas do great RS add-ins if you have the budget:\nhttp://www.dundas.com/index.aspx\n", "I agree, Dundas has great controls. I used it in one of my projects.\nThis was the sample which I used to test out the CRI:\nhttp://www.codeplex.com/MSFTRSProdSamples/Wiki/View.aspx?title=SS2005%21Custom%20Report%20Item%20Sample&referringTitle=Home\n" ]
[ 3, 1 ]
[]
[]
[ "reporting_services", "reportingservices_2005" ]
stackoverflow_0000105408_reporting_services_reportingservices_2005.txt
Q: What XML do I send for a field that's declared as nillable? I have an application with a REST style interface that takes XML documents via POST from clients. This application is written in Java and uses XML beans to process the posted message. The XML schema definition for a field in the message looks like this: <xs:element name="value" type="xs:string" nillable="true" /> How do I send a null value that meets this spec? I sent <value xsi:nil="true" /> but this caused the XML parser to barf. A: What about <value xsi:nil="true"></value>? That's what's in the spec. A: In the past when I've had XML elements that were null I could either not include them or send them empty so, in your case it'd be: <value /> Have you tried that? A: That's the right way of sending a nil value (assuming that the default namespace and the xsi namespace are set to the correct values, namely "http://www.w3.org/2001/XMLSchema-instance" for xsi.) so it looks like you might have come up against a bug in the XML parser you're using. What's the error message? You might try using xsi:nil="1" or using separate open and close tags (<value xsi:nil="true"></value>) to try working around the bug.
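A likely reason the parser barfed: the xsi prefix must be declared before it can be used. A minimal well-formed document matching the schema fragment above looks like this (assuming value is the root element here; in a larger message the xmlns:xsi declaration can sit on any ancestor element instead):

<?xml version="1.0" encoding="UTF-8"?>
<value xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:nil="true"/>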
What XML do I send for a field that's declared as nillable?
I have an application with a REST style interface that takes XML documents via POST from clients. This application is written in Java and uses XML beans to process the posted message. The XML schema definition for a field in the message looks like this: <xs:element name="value" type="xs:string" nillable="true" /> How do I send a null value that meets this spec? I sent <value xsi:nil="true" /> but this caused the XML parser to barf.
[ "What about <value xsi:nil=\"true\"></value>? That's what's in the spec.\n", "In the past when I've had XML elements that were null I could either not include them or send them empty so, in your case it'd be:\n<value />\nHave you tried that?\n", "That's the right way of sending a nil value (assuming that the default namespace and the xsi namespace are set to the correct values, namely \"http://www.w3.org/2001/XMLSchema-instance\" for xsi.) so it looks like you might have come up against a bug in the CML parser you're using. What's the error message?\nYou might try using xsi:nil=\"1\" or using separate open and close tags (<value xsi:nil=\"true\"></value>) to try working around the bug.\n" ]
[ 15, 1, 0 ]
[]
[]
[ "xml", "xml_nil", "xsd" ]
stackoverflow_0000105688_xml_xml_nil_xsd.txt