Dataset fields:
    content            string (86 to 88.9k chars)
    title              string (0 to 150 chars)
    question           string (1 to 35.8k chars)
    answers            list
    answers_scores     list
    non_answers        list
    non_answers_scores list
    tags               list
    name               string (30 to 130 chars)
Q: What is the current state of art in Linux virtualization technology? What VM technologies exist for Linux, their pros and cons, and which is recommended for which application? Since this kind of question can be asked for X other than "VM technologies for Linux", and since the answer changes with progress, I suggest to define a template for this kind of pages. Those pages will have the tag 'stateoftheart' and they will be revisited each month and each month there will be up-to-date list of technologies, up-to-date reviews and up-to-date recommendations. A: This is a job for ... Wikipedia! Types of Virtualization Platform Virtualization Comparison of Virtual Machines Now that the obvious stuff is out of the way... Linux runs fine as a guest on every VM host I've used, so I'm going to assume that you're referring to Linux as the host operating system. I'm also going to assume x86 or amd64 hardware. Platform virtualization breaks down into two major forms: Desktop virtualization and Server virtualization. Both types will allow you to load and run multiple OS instances as guests that virtualize their I/O through the host OS. Desktop virtualization concentrates on providing a highly interactive console experience for each of the guest VMs, while Server virtualization concentrates on maximizing computing performance, generally while sacrificing console services and more exotic devices (Sound cards, USB, etc.) Server virtualization implementations typically include either RDP or VNC for remote access to a virtual console. On Linux, your choices for Desktop Virtualization include: VMware Workstation -- it's commercial, somewhat expensive, mature, and provides the most hardware, device, and guest OS support of any solution. VMware Player -- it's commercial (freeware) and only supports VMs that were created elsewhere. Available with Ubuntu. Parallels Workstation -- it's commercial, somewhat expensive, and not up to par with VMware. Doesn't support 64-bit guests. VirtualBox -- available in commercial (freeware) and community versions (GPL). Fedora's preferred solution. On Linux, your choices for Server Virtualization include: VMware Server -- it's commercial (freeware), mature, and provides the most hardware, device, and guest OS support of any solution. Available with Ubuntu. Xen -- it's open source. A para-virtualization solution, it has only recently added hardware-virtualization, so Windows guest support depends upon specific CPU support. Virtual Iron -- a commercialized version of Xen that adds native virtualization. KVM -- it's open source. It depends upon QEMU for the last mile. Ubuntu's preferred solution. Linux-VServer -- it's open source. It provides virtual jails based on the host OS kernel, so no Windows guests. For myself, I stick with VMware Workstation (7+ years) and VMware Server for my Linux-hosted virtualization needs. At work, it's VMware Workstation (on Windows), VMware Server (on Windows), and VMware ESX (on bare metal). I'll probably have another look at Xen, KVM, and VirtualBox at some point, but for right now compatibility between work and home is paramount. A: 2008 Oct To be filled in at October to reflect the market status then. 2008 Sept Products/services/technologies currently existing VMware Xen VirtualBox VServer ??? Comparisons ??? Recommendations for particular application areas Home multi-boot replacement Small business which has MS-Windows legacy applications Datacenter of multinational corporation ??? 
A: W Craig Trader's answer is great, but just to add, there is also User-mode Linux (UML), which has been around for a while - it has been in the mainline kernel tree since 2.6.0. Note that I haven't used it myself. Ubuntu prefers KVM, and I believe Red Hat is moving to it over Xen now as well. Both KVM and Xen can be managed by libvirt, optionally through the virtual machine manager GUI. The virtual machine manager can manage remote instances through ssh connections. In addition, a good comparison can be found here (pdf). Lots of performance tests done. The short version is that Xen and Linux-VServer were generally the best on performance grounds.
What is the current state of art in Linux virtualization technology?
What VM technologies exist for Linux, what are their pros and cons, and which is recommended for which application? Since this kind of question can be asked for X other than "VM technologies for Linux", and since the answer changes with progress, I suggest defining a template for this kind of page. Those pages will have the tag 'stateoftheart' and will be revisited each month, so that each month there is an up-to-date list of technologies, up-to-date reviews and up-to-date recommendations.
[ "This is a job for ... Wikipedia!\n\nTypes of Virtualization\nPlatform Virtualization\nComparison of Virtual Machines\n\nNow that the obvious stuff is out of the way...\nLinux runs fine as a guest on every VM host I've used, so I'm going to assume that you're referring to Linux as the host operating system. I'm also going to assume x86 or amd64 hardware.\nPlatform virtualization breaks down into two major forms: Desktop virtualization and Server virtualization. Both types will allow you to load and run multiple OS instances as guests that virtualize their I/O through the host OS. Desktop virtualization concentrates on providing a highly interactive console experience for each of the guest VMs, while Server virtualization concentrates on maximizing computing performance, generally while sacrificing console services and more exotic devices (Sound cards, USB, etc.) Server virtualization implementations typically include either RDP or VNC for remote access to a virtual console.\nOn Linux, your choices for Desktop Virtualization include:\n\nVMware Workstation -- it's commercial, somewhat expensive, mature, and provides the most hardware, device, and guest OS support of any solution.\nVMware Player -- it's commercial (freeware) and only supports VMs that were created elsewhere. Available with Ubuntu.\nParallels Workstation -- it's commercial, somewhat expensive, and not up to par with VMware. Doesn't support 64-bit guests.\nVirtualBox -- available in commercial (freeware) and community versions (GPL). Fedora's preferred solution.\n\nOn Linux, your choices for Server Virtualization include:\n\nVMware Server -- it's commercial (freeware), mature, and provides the most hardware, device, and guest OS support of any solution. Available with Ubuntu.\nXen -- it's open source. A para-virtualization solution, it has only recently added hardware-virtualization, so Windows guest support depends upon specific CPU support.\nVirtual Iron -- a commercialized version of Xen that adds native virtualization.\nKVM -- it's open source. It depends upon QEMU for the last mile. Ubuntu's preferred solution.\nLinux-VServer -- it's open source. It provides virtual jails based on the host OS kernel, so no Windows guests.\n\nFor myself, I stick with VMware Workstation (7+ years) and VMware Server for my Linux-hosted virtualization needs. At work, it's VMware Workstation (on Windows), VMware Server (on Windows), and VMware ESX (on bare metal). I'll probably have another look at Xen, KVM, and VirtualBox at some point, but for right now compatibility between work and home is paramount.\n", "2008 Oct\nTo be filled in at October to reflect the market status then.\n2008 Sept\nProducts/services/technologies currently existing\n\nVMware\nXen\nVirtualBox\nVServer\n???\n\nComparisons\n???\nRecommendations for particular application areas\n\nHome multi-boot replacement\nSmall business which has MS-Windows legacy applications\nDatacenter of multinational corporation\n???\n\n", "W Craig Trader answer is great, but just to add there is also User-mode Linux (UML) which has been around for a while - it has been in the mainline kernel tree since 2.6.0 . Note that I haven't used it myself.\nUbuntu prefers KVM, and I believe Red Hat is moving to it over Xen now as well. Both KVM and Xen can be managed by libvirt, optionally through the virtual machine manager GUI. The virtual machine manager can manage remote instances through ssh connections.\nIn addition, a good comparison can be found here (pdf). Lots of performance tests done. 
The short version is that xen and linux-vserver were generally the best on performance grounds.\n" ]
[ 4, 0, 0 ]
[]
[]
[ "linux", "virtualization" ]
stackoverflow_0000109797_linux_virtualization.txt
Q: Setting result for IAuthorizationFilter I am looking to set the result action from a failed IAuthorizationFilter. However I am unsure how to create an ActionResult from inside the Filter. The controller doesn't seem to be accible from inside the filter so my usual View("SomeView") isn't working. Is there a way to get the controler or else another way of creating an actionresult as it doesn't appear to be instantiable? Doesn't work: [AttributeUsage(AttributeTargets.Method)] public sealed class RequiresAuthenticationAttribute : ActionFilterAttribute, IAuthorizationFilter { public void OnAuthorization(AuthorizationContext context) { if (!context.HttpContext.User.Identity.IsAuthenticated) { context.Result = View("User/Login"); } } } A: You should look at the implementation of IAuthorizationFilter that comes with the MVC framework, AuthorizeAttribute. If you are using forms authentication, there's no need for you to set the result to User/Login. You can raise a 401 HTTP status response and ASP.NET Will redirect to the login page for you. The one issue with setting the result to user/login is that the user's address bar is not updated, so they will be on the login page, but the URL won't match. For some people, this is not an issue. But some people want their site's URL to correspond to what the user sees in their browser. A: You can instantiate the appropriate ActionResult directly, then set it on the context. For example: public void OnAuthorization(AuthorizationContext context) { if (!context.HttpContext.User.Identity.IsAuthenticated) { context.Result = new ViewResult { ViewName = "Whatever" }; } }
Setting result for IAuthorizationFilter
I am looking to set the result action from a failed IAuthorizationFilter. However I am unsure how to create an ActionResult from inside the Filter. The controller doesn't seem to be accessible from inside the filter so my usual View("SomeView") isn't working. Is there a way to get the controller, or else another way of creating an ActionResult, as it doesn't appear to be instantiable? Doesn't work: [AttributeUsage(AttributeTargets.Method)] public sealed class RequiresAuthenticationAttribute : ActionFilterAttribute, IAuthorizationFilter { public void OnAuthorization(AuthorizationContext context) { if (!context.HttpContext.User.Identity.IsAuthenticated) { context.Result = View("User/Login"); } } }
[ "You should look at the implementation of IAuthorizationFilter that comes with the MVC framework, AuthorizeAttribute. If you are using forms authentication, there's no need for you to set the result to User/Login. You can raise a 401 HTTP status response and ASP.NET Will redirect to the login page for you.\nThe one issue with setting the result to user/login is that the user's address bar is not updated, so they will be on the login page, but the URL won't match. For some people, this is not an issue. But some people want their site's URL to correspond to what the user sees in their browser.\n", "You can instantiate the appropriate ActionResult directly, then set it on the context. For example:\npublic void OnAuthorization(AuthorizationContext context)\n{\n if (!context.HttpContext.User.Identity.IsAuthenticated)\n {\n context.Result = new ViewResult { ViewName = \"Whatever\" };\n }\n}\n\n" ]
[ 2, 1 ]
[]
[]
[ "asp.net_mvc" ]
stackoverflow_0000110384_asp.net_mvc.txt
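A minimal sketch of the 401 approach described in the first answer above, assuming ASP.NET MVC's built-in HttpUnauthorizedResult and a forms-authentication loginUrl configured in web.config; the attribute name simply mirrors the one from the question:

using System;
using System.Web.Mvc;

[AttributeUsage(AttributeTargets.Method)]
public sealed class RequiresAuthenticationAttribute : FilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationContext context)
    {
        if (!context.HttpContext.User.Identity.IsAuthenticated)
        {
            // Returning a 401 lets forms authentication redirect to the
            // configured login page, so the browser's address bar stays in sync.
            context.Result = new HttpUnauthorizedResult();
        }
    }
}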
Q: .Net 8-bit Encoding I'm working on serial port, transmitting and receiving data to some hardware at 8bit data. I would like to store it as string to facilitate comparison, and preset data are stored as string or hex format in xml file. I found out that only when using Encoding.Default which is ANSI encoding then the 8bit data is converted properly and easily reversible. ASCII encoding will only works for 7bit data, and UTF8 or UTF7 doesn't works well too, since I'm using some character from 1-255. Encoding.Default would be just fine, but I read on MSDN that it's dependent on OS codepage setting, which means it might behave differently on different codepage configured. I use GetBytes() and GetString extensively using the Encoding, but would like a failsafe and portable method that works all the time at any configuration. Any idea or better suggestion for this? A: Latin-1 aka ISO-8859-1 aka codepage 28591 is a useful codepage for this scenario, as it maps values in the range 128-255 unchanged. The following are interchangeable: Encoding.GetEncoding(28591) Encoding.GetEncoding("Latin1") Encoding.GetEncoding("iso-8859-1") The following code illustrates the fact that for Latin1, unlike Encoding.Default, all characters in the range 0-255 are mapped unchanged: static void Main(string[] args) { Console.WriteLine("Test Default Encoding returned {0}", TestEncoding(Encoding.Default)); Console.WriteLine("Test Latin1 Encoding returned {0}", TestEncoding(Encoding.GetEncoding("Latin1"))); Console.ReadLine(); return; } private static bool CompareBytes(char[] chars, byte[] bytes) { bool result = true; if (chars.Length != bytes.Length) { Console.WriteLine("Length mismatch {0} bytes and {1} chars" + bytes.Length, chars.Length); return false; } for (int i = 0; i < chars.Length; i++) { int charValue = (int)chars[i]; if (charValue != (int)bytes[i]) { Console.WriteLine("Byte at index {0} value {1:X4} does not match char {2:X4}", i, (int) bytes[i], charValue); result = false; } } return result; } private static bool TestEncoding(Encoding encoding) { byte[] inputBytes = new byte[256]; for (int i = 0; i < 256; i++) { inputBytes[i] = (byte) i; } char[] outputChars = encoding.GetChars(inputBytes); Console.WriteLine("Comparing input bytes and output chars"); if (!CompareBytes(outputChars, inputBytes)) return false; byte[] outputBytes = encoding.GetBytes(outputChars); Console.WriteLine("Comparing output bytes and output chars"); if (!CompareBytes(outputChars, outputBytes)) return false; return true; } A: Why not just use an array of bytes instead? It would have none of the encoding problems you're likely to suffer with the text approach. A: I think you should use a byte array instead. For comparison you can use some method like this: static bool CompareRange(byte[] a, byte[] b, int index, int count) { bool res = true; for(int i = index; i < index + count; i++) { res &= a[i] == b[i]; } return res; } A: Use the Hebrew codepage for Windows-1255. Its 8 bit. Encoding enc = Encoding.GetEncoding("windows-1255"); I missunderstod you when you wrote "1-255", thought you where refereing to characters in codepage 1255.
.Net 8-bit Encoding
I'm working on a serial port, transmitting and receiving 8-bit data to some hardware. I would like to store it as a string to facilitate comparison, and preset data are stored in string or hex format in an XML file. I found that only Encoding.Default, which is ANSI encoding, converts the 8-bit data properly and reversibly. ASCII encoding only works for 7-bit data, and UTF-8 or UTF-7 don't work well either, since I'm using characters from 1-255. Encoding.Default would be just fine, but I read on MSDN that it depends on the OS codepage setting, which means it might behave differently under a different configured codepage. I use GetBytes() and GetString() extensively with the Encoding, but would like a failsafe and portable method that works the same under any configuration. Any idea or better suggestion for this?
[ "Latin-1 aka ISO-8859-1 aka codepage 28591 is a useful codepage for this scenario, as it maps values in the range 128-255 unchanged. The following are interchangeable:\nEncoding.GetEncoding(28591)\nEncoding.GetEncoding(\"Latin1\")\nEncoding.GetEncoding(\"iso-8859-1\")\n\nThe following code illustrates the fact that for Latin1, unlike Encoding.Default, all characters in the range 0-255 are mapped unchanged:\nstatic void Main(string[] args)\n{\n\n Console.WriteLine(\"Test Default Encoding returned {0}\", TestEncoding(Encoding.Default));\n Console.WriteLine(\"Test Latin1 Encoding returned {0}\", TestEncoding(Encoding.GetEncoding(\"Latin1\")));\n Console.ReadLine();\n return;\n}\n\nprivate static bool CompareBytes(char[] chars, byte[] bytes)\n{\n bool result = true;\n if (chars.Length != bytes.Length)\n {\n Console.WriteLine(\"Length mismatch {0} bytes and {1} chars\" + bytes.Length, chars.Length);\n return false;\n }\n for (int i = 0; i < chars.Length; i++)\n {\n int charValue = (int)chars[i];\n if (charValue != (int)bytes[i])\n {\n Console.WriteLine(\"Byte at index {0} value {1:X4} does not match char {2:X4}\", i, (int) bytes[i], charValue);\n result = false;\n }\n }\n return result;\n}\nprivate static bool TestEncoding(Encoding encoding)\n{\n byte[] inputBytes = new byte[256];\n for (int i = 0; i < 256; i++)\n {\n inputBytes[i] = (byte) i;\n }\n\n char[] outputChars = encoding.GetChars(inputBytes);\n Console.WriteLine(\"Comparing input bytes and output chars\");\n if (!CompareBytes(outputChars, inputBytes)) return false;\n\n byte[] outputBytes = encoding.GetBytes(outputChars);\n Console.WriteLine(\"Comparing output bytes and output chars\");\n if (!CompareBytes(outputChars, outputBytes)) return false;\n\n return true;\n}\n\n", "Why not just use an array of bytes instead? It would have none of the encoding problems you're likely to suffer with the text approach.\n", "I think you should use a byte array instead. For comparison you can use some method like this:\nstatic bool CompareRange(byte[] a, byte[] b, int index, int count)\n{\n bool res = true;\n for(int i = index; i < index + count; i++)\n {\n res &= a[i] == b[i];\n }\n return res;\n}\n\n", "Use the Hebrew codepage for Windows-1255. Its 8 bit.\nEncoding enc = Encoding.GetEncoding(\"windows-1255\");\nI missunderstod you when you wrote \"1-255\", thought you where refereing to characters in codepage 1255.\n" ]
[ 19, 9, 2, 1 ]
[ "You could use base64 encoding to convert from byte to string and back. No problems with code pages or weird characters that way, and it'll be more space-efficient than hex.\nbyte[] toEncode; \nstring encoded = System.Convert.ToBase64String(toEncode);\n\n" ]
[ -2 ]
[ ".net", "encoding" ]
stackoverflow_0000111460_.net_encoding.txt
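A usage sketch of the Latin-1 suggestion above (codepage 28591); the byte values here are only an illustrative serial frame:

using System;
using System.Text;

class Latin1RoundTrip
{
    static void Main()
    {
        // Latin-1 (ISO-8859-1) maps every value 0-255 to the same code point,
        // so GetString/GetBytes round-trips losslessly on any OS codepage.
        Encoding latin1 = Encoding.GetEncoding(28591);

        byte[] received = { 0x02, 0x41, 0x80, 0xFF };   // example 8-bit serial data
        string asText = latin1.GetString(received);     // store or compare as a string
        byte[] roundTrip = latin1.GetBytes(asText);     // byte-identical to 'received'

        Console.WriteLine(BitConverter.ToString(roundTrip));   // 02-41-80-FF
    }
}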
Q: Redirect to different controller I have some code in an IAuthorizationFilter which redirects the user to a login page but I'm having trouble changing the controller which is used. So I might do public void OnAuthorization(AuthorizationContext context) { UserController u = new UserController(); context.Result = u.Login(); context.Cancel = true; } But this results in The view 'Login' or its master could not be found. The following locations were searched: ~/Views/Product/Login.aspx ~/Views/Product/Login.ascx ~/Views/Shared/Login.aspx ~/Views/Shared/Login.ascx I am running this from a product controler. How do I get the view engine to use the user controler rather than the product controler? Edit: I got it working with RedirectResult r = new RedirectResult("../User.aspx/Login"); context.Result = r; context.Cancel = true; But this is a cludge, I'm sure there is a better way. There is frustratingly little exposed in the ActionFilterAttribute. Seems like it might be useful if the controller exposed in AuthorizationContext had RedirectToAction exposed this would be easy. A: Agree with ddc0660, you should be redirecting. Don't run u.Login(), but rather set context.Result to a RedirectResult.
Redirect to different controller
I have some code in an IAuthorizationFilter which redirects the user to a login page but I'm having trouble changing the controller which is used. So I might do public void OnAuthorization(AuthorizationContext context) { UserController u = new UserController(); context.Result = u.Login(); context.Cancel = true; } But this results in The view 'Login' or its master could not be found. The following locations were searched: ~/Views/Product/Login.aspx ~/Views/Product/Login.ascx ~/Views/Shared/Login.aspx ~/Views/Shared/Login.ascx I am running this from a product controller. How do I get the view engine to use the user controller rather than the product controller? Edit: I got it working with RedirectResult r = new RedirectResult("../User.aspx/Login"); context.Result = r; context.Cancel = true; But this is a kludge; I'm sure there is a better way. There is frustratingly little exposed in the ActionFilterAttribute. It seems like if the controller exposed in AuthorizationContext had RedirectToAction exposed, this would be easy.
[ "Agree with ddc0660, you should be redirecting. Don't run u.Login(), but rather set context.Result to a RedirectResult.\n" ]
[ 2 ]
[]
[]
[ "asp.net_mvc" ]
stackoverflow_0000111363_asp.net_mvc.txt
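A sketch of the redirect the answer recommends, using a RedirectToRouteResult instead of the relative-URL workaround from the question's edit; the controller and action names are illustrative, and this method body slots into an authorization filter like the one shown in the question:

using System.Web.Mvc;
using System.Web.Routing;

public void OnAuthorization(AuthorizationContext context)
{
    if (!context.HttpContext.User.Identity.IsAuthenticated)
    {
        // Redirect by route values rather than a hand-built relative URL,
        // so the target does not depend on which controller ran the filter.
        context.Result = new RedirectToRouteResult(
            new RouteValueDictionary(new { controller = "User", action = "Login" }));
    }
}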
Q: How can I draw a curve that varies in thickness along its path? I'm capturing data from a tablet using Java (JPen library rocks) and would like to be able to paint a penstroke in a more natural way. Currently I'm drawing the pen stroke as straight line segments each with a different Stroke thickness. There has to be something in Java's Graphics Library that lets me to this more efficiently. Right? A: I've never done this, but here are a couple things you could try. First, you could implement a custom Stroke that creates skinny trapezoids. The width of the end caps would be a function of the pressure at the end points. If that works, you could try to make the line segments look more natural by using Bezier curves to form "curvy trapezoids". You might be able to use QuadCurve2D to help. A: There's a more general solution available at least. The feature was added to Inkscape based on a recent algorithm. You can see it applied directly to your problem in some screenshots. It can extrude any shape brush along the path to mimic a paintbrush for example, but you'd have to port it to Java from the algorithm in the first link or from the Inkscape sources. Also, it's covered by patents so you'd have to release your code under the GPL (the author gives explicit permission) or buy a patent license. A: PostScript RIPs often convert circles to curves and curves to a series of straight line segments. The number of segments depends on the flatness setting which defaults to one suitable for the raster display resolution. A thick line or thick line segments can be converted to a skinny filled polygon.
How can I draw a curve that varies in thickness along its path?
I'm capturing data from a tablet using Java (JPen library rocks) and would like to be able to paint a pen stroke in a more natural way. Currently I'm drawing the pen stroke as straight line segments, each with a different Stroke thickness. There has to be something in Java's Graphics Library that lets me do this more efficiently. Right?
[ "I've never done this, but here are a couple things you could try. First, you could implement a custom Stroke that creates skinny trapezoids. The width of the end caps would be a function of the pressure at the end points. If that works, you could try to make the line segments look more natural by using Bezier curves to form \"curvy trapezoids\". You might be able to use QuadCurve2D to help.\n", "There's a more general solution available at least. The feature was added to Inkscape based on a recent algorithm. You can see it applied directly to your problem in some screenshots. It can extrude any shape brush along the path to mimic a paintbrush for example, but you'd have to port it to Java from the algorithm in the first link or from the Inkscape sources. Also, it's covered by patents so you'd have to release your code under the GPL (the author gives explicit permission) or buy a patent license.\n", "PostScript RIPs often convert circles to curves and curves to a series of straight line segments. The number of segments depends on the flatness setting which defaults to one suitable for the raster display resolution.\nA thick line or thick line segments can be converted to a skinny filled polygon.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "graphics", "java" ]
stackoverflow_0000042546_graphics_java.txt
Q: Kerberos and T125 protocol Why does Kerberos authentication use T125 protocol? I believe Kerberos authentication behaves this way: Client asks for a ticket to the Kerberos authority The Kerberos authority provides a Ticket to the client The Client tries to authenticate towards a Server and sends to the server this Ticket. The Server verifies the Ticket is OK with the Kerberos Authority, and authenticates the Client. Now, where, in this process, is used T125 and why? And does the Client send the Ticket any time it tries to access (e.g: for each HTTP GET page) to the Server and the Server checks this Ticket any time, or is it just once at the beginning of the "conversation"? Thank you! A: Not familiar with T125, but your Kerberos flow is off a little. Roughly: User authenticates to KDC (Kerberos authority) KDC grants user a TGT (ticket granting ticket) user tries to access server Server demands server ticket, sends user some info (to identify the server) user asks KDC for ticket for server, sends TGT and server info KDC issues server ticket to user User submits server ticket to user on every access. I know I didnt directly answer your T125 question, but I hope this helped anyway.
Kerberos and T125 protocol
Why does Kerberos authentication use the T125 protocol? I believe Kerberos authentication behaves this way: The Client asks the Kerberos authority for a ticket. The Kerberos authority provides a Ticket to the Client. The Client tries to authenticate to a Server and sends this Ticket to the Server. The Server verifies with the Kerberos Authority that the Ticket is OK, and authenticates the Client. Now, where in this process is T125 used, and why? And does the Client send the Ticket every time it tries to access the Server (e.g. for each HTTP GET), with the Server checking the Ticket each time, or is it just once at the beginning of the "conversation"? Thank you!
[ "Not familiar with T125, but your Kerberos flow is off a little.\nRoughly:\n\nUser authenticates to KDC (Kerberos authority) \nKDC grants user a TGT (ticket granting ticket) \nuser tries to access server\nServer demands server ticket, sends user some info (to identify the server)\nuser asks KDC for ticket for server, sends TGT and server info\nKDC issues server ticket to user\nUser submits server ticket to user on every access.\n\nI know I didnt directly answer your T125 question, but I hope this helped anyway.\n" ]
[ 2 ]
[]
[]
[ "authentication", "http_authentication", "kerberos", "security" ]
stackoverflow_0000111451_authentication_http_authentication_kerberos_security.txt
Q: How much business logic should Value objects contain? One mentor I respect suggests that a simple bean is a waste of time - that value objects 'MUST' contain some business logic to be useful. Another says such code is difficult to maintain and that all business logic must be externalized. I realize this question is subjective. Asking anyway - want to know answers from more perspectives. A: The idea of putting data and business logic together is to promote encapsulation, and to expose as little internal state as possible to other objects. That way, clients can rely on an interface rather than on an implementation. See the "Tell, Don't Ask" principle and the Law of Demeter. Encapsulation makes it easier to understand the states data can be in, easier to read code, easier to decouple classes and generally easier to unit test. Externalising business logic (generally into "Service" or "Manager" classes) makes questions like "where is this data used?" and "What states can it be in?" a lot more difficult to answer. It's also a procedural way of thinking, wrapped up in an object. This can lead to an anemic domain model. Externalising behaviour isn't always bad. For example, a service layer might orchestrate domain objects, but without taking over their state-manipulating responsibilities. Or, when you are mostly doing reads/writes to a DB that map nicely to input forms, maybe you don't need a domain model - or the painful object/relational mapping overhead it entails - at all. Transfer Objects often serve to decouple architectural layers from each other (or from an external system) by providing the minimum state information the calling layer needs, without exposing any business logic. This can be useful, for example when preparing information for the view: just give the view the information it needs, and nothing else, so that it can concentrate on how to display the information, rather than what information to display. For example, the TO might be an aggregation of several sources of data. One advantage is that your views and your domain objects are decoupled. Using your domain objects in JSPs can make your domain harder to refactor and promotes the indiscriminate use of getters and setters (hence breaking encapsulation). However, there's also an overhead associated with having a lot of Transfer Objects and often a lot of duplication, too. Some projects I've been on end up with TO's that basically mirror other domain objects (which I consider an anti-pattern). A: You should better call them Transfer Objects or Data transfer objects (DTO). Earlier this same j2ee pattern was called 'Value object' but they changed the name because it was confused with this http://dddcommunity.org/discussion/messageboardarchive/ValueObjects.html To answer your question, I would only put minimal logic to my DTOs, logic that is required for display reasons. Even better, if we are talking about a database based web application, I would go beyond the core j2ee patterns and use Hibernate or the Java Persistence API to create a domain model that supports lazy loading of relations and use this in the view. See the Open session in view. In this way, you don't have to program a set of DTOs and you have all the business logic available to use in your views/controllers etc. A: It depends. oops, did I just blurt out a cliche? The basic question to ask for designing an object is: will the logic governing the object's data be different or the same when used/consumed by other objects? 
If different areas of usage call for different logic, externalise it. If it is the same no matter where the object travels to, place it together with the class. A: My personal preference is to put all business logic in the domain model itself, that is in the "true" domain objects. So when Data Transfer Objects are created they are mostly just a (immutable) state representation of domain objects and hence contain no business logic. They can contain methods for cloning and comparing though, but the meat of the business logic code stays in the domain objects. A: What Korros said. Value Object := A small simple object, like money or a date range, whose equality isn't based on identity. DTO := An object that carries data between processes in order to reduce the number of method calls. These are the defintions proposed by Martin Fowler and I'd like to popularize them. A: I agree with Panagiotis: the open session in view pattern is much better than using DTOs. Put otherwise, I've found that an application is much much simpler if you traffic in your domain objects(or some composite thereof) from your view layer all the way down. That said, it's hard to pull off, because you will need to make your HttpSession coincident with your persistence layer's unit of work. Then you will need to ensure that all database modifications (i.e. create, updates and deletes) are intentional. In other words, you do not want it be the case that the view layer has a domain object, a field gets modified and the modification gets persisted without the application code intentionally saving the change. Another problem that is important to deal with is to ensure that your transactional semantics are satisfactory. Usually fetching and modifying one domain object will take place in one transactional context and it's not difficult to make your ORM layer require a new transaction. What is challenging is is a nested transaction, where you want to include a second transactional context within the first one opened. If you don't mind investigating how a non-Java API handles these problems, it's worth looking at Rails' Active Record, which allows Ruby server pages to work directly with the domain model and traverse its associations.
How much business logic should Value objects contain?
One mentor I respect suggests that a simple bean is a waste of time - that value objects 'MUST' contain some business logic to be useful. Another says such code is difficult to maintain and that all business logic must be externalized. I realize this question is subjective. Asking anyway - want to know answers from more perspectives.
[ "The idea of putting data and business logic together is to promote encapsulation, and to expose as little internal state as possible to other objects. That way, clients can rely on an interface rather than on an implementation. See the \"Tell, Don't Ask\" principle and the Law of Demeter. Encapsulation makes it easier to understand the states data can be in, easier to read code, easier to decouple classes and generally easier to unit test.\nExternalising business logic (generally into \"Service\" or \"Manager\" classes) makes questions like \"where is this data used?\" and \"What states can it be in?\" a lot more difficult to answer. It's also a procedural way of thinking, wrapped up in an object. This can lead to an anemic domain model.\nExternalising behaviour isn't always bad. For example, a service layer might orchestrate domain objects, but without taking over their state-manipulating responsibilities. Or, when you are mostly doing reads/writes to a DB that map nicely to input forms, maybe you don't need a domain model - or the painful object/relational mapping overhead it entails - at all.\nTransfer Objects often serve to decouple architectural layers from each other (or from an external system) by providing the minimum state information the calling layer needs, without exposing any business logic.\nThis can be useful, for example when preparing information for the view: just give the view the information it needs, and nothing else, so that it can concentrate on how to display the information, rather than what information to display. For example, the TO might be an aggregation of several sources of data.\nOne advantage is that your views and your domain objects are decoupled. Using your domain objects in JSPs can make your domain harder to refactor and promotes the indiscriminate use of getters and setters (hence breaking encapsulation).\nHowever, there's also an overhead associated with having a lot of Transfer Objects and often a lot of duplication, too. Some projects I've been on end up with TO's that basically mirror other domain objects (which I consider an anti-pattern).\n", "You should better call them Transfer Objects or Data transfer objects (DTO).\nEarlier this same j2ee pattern was called 'Value object' but they changed the name because it was confused with this\nhttp://dddcommunity.org/discussion/messageboardarchive/ValueObjects.html\nTo answer your question, I would only put minimal logic to my DTOs, logic that is required for display reasons.\nEven better, if we are talking about a database based web application, I would go beyond the core j2ee patterns and use Hibernate or the Java Persistence API to create a domain model that supports lazy loading of relations and use this in the view.\nSee the Open session in view.\nIn this way, you don't have to program a set of DTOs and you have all the business logic available to use in your views/controllers etc.\n", "It depends.\noops, did I just blurt out a cliche?\nThe basic question to ask for designing an object is: will the logic governing the object's data be different or the same when used/consumed by other objects?\nIf different areas of usage call for different logic, externalise it. If it is the same no matter where the object travels to, place it together with the class.\n", "My personal preference is to put all business logic in the domain model itself, that is in the \"true\" domain objects. 
So when Data Transfer Objects are created they are mostly just an (immutable) state representation of domain objects and hence contain no business logic. They can contain methods for cloning and comparing though, but the meat of the business logic code stays in the domain objects.\n", "What Korros said.\nValue Object := A small simple object, like money or a date range, whose equality isn't based on identity.\nDTO := An object that carries data between processes in order to reduce the number of method calls.\nThese are the definitions proposed by Martin Fowler and I'd like to popularize them.\n", "I agree with Panagiotis: the open session in view pattern is much better than using DTOs. Put otherwise, I've found that an application is much much simpler if you traffic in your domain objects (or some composite thereof) from your view layer all the way down.\nThat said, it's hard to pull off, because you will need to make your HttpSession coincident with your persistence layer's unit of work. Then you will need to ensure that all database modifications (i.e. creates, updates and deletes) are intentional. In other words, you do not want it to be the case that the view layer has a domain object, a field gets modified and the modification gets persisted without the application code intentionally saving the change. Another problem that is important to deal with is to ensure that your transactional semantics are satisfactory. Usually fetching and modifying one domain object will take place in one transactional context and it's not difficult to make your ORM layer require a new transaction. What is challenging is a nested transaction, where you want to include a second transactional context within the first one opened. \nIf you don't mind investigating how a non-Java API handles these problems, it's worth looking at Rails' Active Record, which allows Ruby server pages to work directly with the domain model and traverse its associations.\n" ]
[ 31, 7, 6, 4, 3, 2 ]
[]
[]
[ "data_transfer_objects", "java", "oop" ]
stackoverflow_0000110328_data_transfer_objects_java_oop.txt
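A small illustration of the value-object definition quoted above (equality by value, not identity), sketched in C#; the DateRange name is just an example:

using System;

// A value object: immutable, and equal whenever its component values are equal.
public sealed class DateRange
{
    public DateTime Start { get; private set; }
    public DateTime End { get; private set; }

    public DateRange(DateTime start, DateTime end)
    {
        Start = start;
        End = end;
    }

    public override bool Equals(object obj)
    {
        DateRange other = obj as DateRange;
        return other != null && Start == other.Start && End == other.End;
    }

    public override int GetHashCode()
    {
        return Start.GetHashCode() ^ End.GetHashCode();
    }
}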
Q: Which gcc switch disables "left-hand operand of comma has no effect" warning? It's a part of larger code base, which forces -Werror on gcc. This warning is generated in a third party code that shouldn't be changed (and I actually know how to fix it), but I can disable specific warnings. This time man gcc failed me, so please, let some gcc master enlighten me. TIA. A: It is the -Wno-unused-value option, see the documentation A: If you use -fdiagnostics-show-option, GCC will tell you how to disable a warning (if possible). A: Have you tried using a diagnostic pragma directive? These are available in gcc 4.2.1+, I believe.
Which gcc switch disables "left-hand operand of comma has no effect" warning?
It's part of a larger code base which forces -Werror on gcc. The warning is generated in third-party code that shouldn't be changed (and I actually know how to fix it), but I can disable specific warnings. This time man gcc failed me, so please, let some gcc master enlighten me. TIA.
[ "It is the -Wno-unused-value option, see the documentation\n", "If you use -fdiagnostics-show-option, GCC will tell you how to disable a warning (if possible).\n", "Have you tried using a diagnostic pragma directive? These are available in gcc 4.2.1+, I believe.\n" ]
[ 6, 5, 0 ]
[]
[]
[ "gcc", "warnings" ]
stackoverflow_0000111432_gcc_warnings.txt
Q: What happens on deserialization with this? I'm currently convering my ASP.NET v2 application to serialize/deserialize it's objects because I want to shift from inproc session state to stateserver. This is because my host, webhost4life, has a nasty tendency to recycle the worker process frequently thus causing session timeouts. Anyway... the question... I'm trying to not serialize things I don't need to, i.e. variables that are re-initialised each page, don't need to be serialised. Here's one of them: Private RollbackQueue As New Queue(Of DataServer.Rollback) On deserialisation, will RollbackQueue be a) nothing or b) an empty queue? My guess is that when .NET deserialises, it creates the parent object as normal and then fills in the fields one by one. Therefore, the NEW bit will fire. But that is a guess. Thanks, Rob. A: It will be nothing. The CLR serialization logic will create the object uninitialized by way of FormatterServices.GetSafeUnitializedObject without running any construction logic. If you need to ensure the field has a value I would recommend moving such initialization into an Initialize() method that is called both from your constructor and from a method marked with the OnDeserialized attribute. A: Why not write a simple test application to find out? Here's one I wrote (excuse the C# instead of VB, but I have the C# Express version of VS2008 open at the moment). [Serializable] class TestClass { [NonSerialized] public Queue<string> queue = new Queue<string>(); } class Program { static void Main(string[] args) { var obj = new TestClass(); Console.WriteLine("Original is null? {0}", obj.queue == null); var stream = new MemoryStream(); var formatter = new BinaryFormatter(); formatter.Serialize(stream, obj); stream.Position = 0L; var copy = (TestClass)formatter.Deserialize(stream); Console.WriteLine("Copy is null? {0}", copy.queue == null); Console.ReadLine(); } } The output from this is Original is null? False Copy is null? True Now you know for sure, that it will be null when deserialized. Kent has already explained in another post why this is the case, and what you can do about it, so I won't re-state it.
What happens on deserialization with this?
I'm currently converting my ASP.NET v2 application to serialize/deserialize its objects because I want to shift from inproc session state to stateserver. This is because my host, webhost4life, has a nasty tendency to recycle the worker process frequently, thus causing session timeouts. Anyway... the question... I'm trying not to serialize things I don't need to, i.e. variables that are re-initialised on each page don't need to be serialised. Here's one of them: Private RollbackQueue As New Queue(Of DataServer.Rollback) On deserialisation, will RollbackQueue be a) nothing or b) an empty queue? My guess is that when .NET deserialises, it creates the parent object as normal and then fills in the fields one by one. Therefore, the NEW bit will fire. But that is a guess. Thanks, Rob.
[ "It will be nothing. The CLR serialization logic will create the object uninitialized by way of FormatterServices.GetSafeUnitializedObject without running any construction logic. If you need to ensure the field has a value I would recommend moving such initialization into an Initialize() method that is called both from your constructor and from a method marked with the OnDeserialized attribute.\n", "Why not write a simple test application to find out? Here's one I wrote (excuse the C# instead of VB, but I have the C# Express version of VS2008 open at the moment).\n[Serializable]\nclass TestClass\n{\n [NonSerialized]\n public Queue<string> queue = new Queue<string>();\n}\n\nclass Program\n{\n static void Main(string[] args)\n {\n var obj = new TestClass();\n Console.WriteLine(\"Original is null? {0}\", obj.queue == null);\n var stream = new MemoryStream();\n var formatter = new BinaryFormatter();\n formatter.Serialize(stream, obj);\n stream.Position = 0L;\n var copy = (TestClass)formatter.Deserialize(stream);\n Console.WriteLine(\"Copy is null? {0}\", copy.queue == null);\n Console.ReadLine();\n }\n}\n\nThe output from this is\nOriginal is null? False \nCopy is null? True\n\nNow you know for sure, that it will be null when deserialized. Kent has already explained in another post why this is the case, and what you can do about it, so I won't re-state it.\n" ]
[ 3, 0 ]
[]
[]
[ ".net", "serialization" ]
stackoverflow_0000111643_.net_serialization.txt
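A sketch of the initialization pattern the first answer describes, shown in C# to match the answer's test code; the class and field names are illustrative stand-ins for the question's RollbackQueue:

using System;
using System.Collections.Generic;
using System.Runtime.Serialization;

[Serializable]
public class PageState
{
    [NonSerialized]
    private Queue<string> rollbackQueue;    // rebuilt rather than serialized

    public PageState()
    {
        Initialize();
    }

    [OnDeserialized]
    private void OnDeserializedHandler(StreamingContext context)
    {
        // The formatter bypasses constructors, so this runs after the
        // serialized fields are restored and rebuilds the transient state.
        Initialize();
    }

    private void Initialize()
    {
        rollbackQueue = new Queue<string>();
    }
}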
Q: C++ Thread question - setting a value to indicate the thread has finished Is the following safe? I am new to threading and I want to delegate a time consuming process to a separate thread in my C++ program. Using the boost libraries I have written code something like this: thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag); Where finished_flag is a boolean member of my class. When the thread is finished it sets the value and the main loop of my program checks for a change in that value. I assume that this is okay because I only ever start one thread, and that thread is the only thing that changes the value (except for when it is initialised before I start the thread) So is this okay, or am I missing something, and need to use locks and mutexes, etc A: You never mentioned the type of finished_flag... If it's a straight bool, then it might work, but it's certainly bad practice, for several reasons. First, some compilers will cache the reads of the finished_flag variable, since the compiler doesn't always pick up the fact that it's being written to by another thread. You can get around this by declaring the bool volatile, but that's taking us in the wrong direction. Even if reads and writes are happening as you'd expect, there's nothing to stop the OS scheduler from interleaving the two threads half way through a read / write. That might not be such a problem here where you have one read and one write op in separate threads, but it's a good idea to start as you mean to carry on. If, on the other hand it's a thread-safe type, like a CEvent in MFC (or equivilent in boost) then you should be fine. This is the best approach: use thread-safe synchronization objects for inter-thread communication, even for simple flags. A: Instead of using a member variable to signal that the thread is done, why not use a condition? You are already are using the boost libraries, and condition is part of the thread library. Check it out. It allows the worker thread to 'signal' that is has finished, and the main thread can check during execution if the condition has been signaled and then do whatever it needs to do with the completed work. There are examples in the link. As a general case I would neve make the assumption that a resource will only be modified by the thread. You might know what it is for, however someone else might not - causing no ends of grief as the main thread thinks that the work is done and tries to access data that is not correct! It might even delete it while the worker thread is still using it, and causing the app to crash. Using a condition will help this. Looking at the thread documentation, you could also call thread.timed_join in the main thread. timed_join will wait for a specified amount for the thread to 'join' (join means that the thread has finsihed) A: If you really want to get into the details of communication between threads via shared memory, even declaring a variable volatile won't be enough, even if the compiler does use appropriate access semantics to ensure that it won't get a stale version of data after checking the flag. The CPU can issue reads and writes out of order as long (x86 usually doesn't, but PPC definitely does) and there is nothing in C++9x that allows the compiler to generate code to order memory accesses appropriately. Herb Sutter's Effective Concurrency series has an extremely in depth look at how the C++ world intersects the multicore/multiprocessor world. 
A: I don't mean to be presumptive, but it seems like the purpose of your finished_flag variable is to pause the main thread (at some point) until the thread thrd has completed. The easiest way to do this is to use boost::thread::join // launch the thread... thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag); // ... do other things maybe ... // wait for the thread to complete thrd.join(); A: Having the thread set a flag (or signal an event) before it exits is a race condition. The thread has not necessarily returned to the OS yet, and may still be executing. For example, consider a program that loads a dynamic library (pseudocode): lib = loadLibrary("someLibrary"); fun = getFunction("someFunction"); fun(); unloadLibrary(lib); And let's suppose that this library uses your thread: void someFunction() { volatile bool finished_flag = false; thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag); while(!finished_flag) { // ignore the polling loop, it's besides the point sleep(); } delete thrd; } void myclass::mymethod() { // do stuff finished_flag = true; } When myclass::mymethod() sets finished_flag to true, myclass::mymethod() hasn't returned yet. At the very least, it still has to execute a "return" instruction of some sort (if not much more: destructors, exception handler management, etc.). If the thread executing myclass::mymethod() gets pre-empted before that point, someFunction() will return to the calling program, and the calling program will unload the library. When the thread executing myclass::mymethod() gets scheduled to run again, the address containing the "return" instruction is no longer valid, and the program crashes. The solution would be for someFunction() to call thrd->join() before returning. This would ensure that the thread has returned to the OS and is no longer executing.
C++ Thread question - setting a value to indicate the thread has finished
Is the following safe? I am new to threading and I want to delegate a time-consuming process to a separate thread in my C++ program. Using the boost libraries I have written code something like this: thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag)); Where finished_flag is a boolean member of my class. When the thread is finished it sets the value, and the main loop of my program checks for a change in that value. I assume that this is okay because I only ever start one thread, and that thread is the only thing that changes the value (except for when it is initialised before I start the thread). So is this okay, or am I missing something and need to use locks and mutexes, etc.?
[ "You never mentioned the type of finished_flag...\nIf it's a straight bool, then it might work, but it's certainly bad practice, for several reasons. First, some compilers will cache the reads of the finished_flag variable, since the compiler doesn't always pick up the fact that it's being written to by another thread. You can get around this by declaring the bool volatile, but that's taking us in the wrong direction. Even if reads and writes are happening as you'd expect, there's nothing to stop the OS scheduler from interleaving the two threads half way through a read / write. That might not be such a problem here where you have one read and one write op in separate threads, but it's a good idea to start as you mean to carry on.\nIf, on the other hand it's a thread-safe type, like a CEvent in MFC (or equivilent in boost) then you should be fine. This is the best approach: use thread-safe synchronization objects for inter-thread communication, even for simple flags.\n", "Instead of using a member variable to signal that the thread is done, why not use a condition? You are already are using the boost libraries, and condition is part of the thread library.\nCheck it out. It allows the worker thread to 'signal' that is has finished, and the main thread can check during execution if the condition has been signaled and then do whatever it needs to do with the completed work. There are examples in the link.\nAs a general case I would neve make the assumption that a resource will only be modified by the thread. You might know what it is for, however someone else might not - causing no ends of grief as the main thread thinks that the work is done and tries to access data that is not correct! It might even delete it while the worker thread is still using it, and causing the app to crash. Using a condition will help this.\nLooking at the thread documentation, you could also call thread.timed_join in the main thread. timed_join will wait for a specified amount for the thread to 'join' (join means that the thread has finsihed)\n", "If you really want to get into the details of communication between threads via shared memory, even declaring a variable volatile won't be enough, even if the compiler does use appropriate access semantics to ensure that it won't get a stale version of data after checking the flag. The CPU can issue reads and writes out of order as long (x86 usually doesn't, but PPC definitely does) and there is nothing in C++9x that allows the compiler to generate code to order memory accesses appropriately.\nHerb Sutter's Effective Concurrency series has an extremely in depth look at how the C++ world intersects the multicore/multiprocessor world.\n", "I don't mean to be presumptive, but it seems like the purpose of your finished_flag variable is to pause the main thread (at some point) until the thread thrd has completed. \nThe easiest way to do this is to use boost::thread::join\n// launch the thread...\nthrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag);\n\n// ... do other things maybe ... \n\n// wait for the thread to complete\nthrd.join();\n\n", "Having the thread set a flag (or signal an event) before it exits is a race condition. The thread has not necessarily returned to the OS yet, and may still be executing. 
\nFor example, consider a program that loads a dynamic library (pseudocode):\nlib = loadLibrary(\"someLibrary\");\nfun = getFunction(\"someFunction\");\nfun();\nunloadLibrary(lib);\n\nAnd let's suppose that this library uses your thread:\nvoid someFunction() {\n volatile bool finished_flag = false;\n thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag);\n while(!finished_flag) { // ignore the polling loop, it's besides the point\n sleep();\n }\n delete thrd;\n}\n\nvoid myclass::mymethod() {\n // do stuff\n finished_flag = true;\n}\n\nWhen myclass::mymethod() sets finished_flag to true, myclass::mymethod() hasn't returned yet. At the very least, it still has to execute a \"return\" instruction of some sort (if not much more: destructors, exception handler management, etc.). If the thread executing myclass::mymethod() gets pre-empted before that point, someFunction() will return to the calling program, and the calling program will unload the library. When the thread executing myclass::mymethod() gets scheduled to run again, the address containing the \"return\" instruction is no longer valid, and the program crashes.\nThe solution would be for someFunction() to call thrd->join() before returning. This would ensure that the thread has returned to the OS and is no longer executing.\n" ]
[ 11, 7, 5, 5, 2 ]
[]
[]
[ "boost_thread", "c++", "multithreading" ]
stackoverflow_0000034151_boost_thread_c++_multithreading.txt
Q: Algorithm for finding characters in the same positions in a list of strings? Suppose I have: Toby Tiny Tory Tily Is there an algorithm that can easily create a list of common characters in the same positions in all these strings? (in this case the common characters are 'T' at position 0 and 'y' at position 3) I tried looking at some of the algorithms used for DNA sequence matching but it seems most of them are just used for finding common substrings regardless of their positions. A: Finding a list of characters that are common in ALL strings at a certain position is trivially simple. Just iterate on each string for each character position 1 character position at a time. If any string's character is not the match of it's closest neighbor string's character, then the position does not contain a common character. For any i = 0 to length -1... Once you find Si[x] != Si+1[x] you can skip to the next position x+1. Where Si is the ith string in the list. And [x] is the character at position x. A: Some generic code that has pretty poor performance O(n^2) str[] = { "Toby", "Tiny", "Tory", "Tily" }; result = null; largestString = str.getLargestString(); // Made up function str.remove(largestString) for (i = 0; i < largestString.length; i++) { hits = 0; foreach (str as value) { if (i < value.length) { if (value.charAt(i) == largestString.charAt(i)) hits++; } } if (hits == str.length) result += largestString.charAt(i); } print(str.items); A: I can't think of anything especially optimized. You can do something like this, which shouldn't be too hard: //c# -- assuming your strings are in a List<string> named Names int shortestLength = Names[0].Length, j; char[] CommonCharacters; char single; for (int i = 1; i < Names.Count; i++) { if (Names[i].Length < shortestLength) shortestLength = Names[i].Length; } CommonCharacters = new char[shortestLength]; for (int i = 0; i < shortestLength; i++) { j = 1; single = Names[0][i]; CommonCharacters[i] = single; while (j < shortestLength) { if (single != Names[j][i]) { CommonCharacters[i] = " "[0]; break; } j++; } } This would give you an array of characters that are the same across everything in the list. A: What about something like this? strings = %w(Tony Tiny Tory Tily) positions = Hash.new { |h,k| h[k] = Hash.new { |h,k| h[k] = 0 } } strings.each { |str| 0.upto(str.length-1) { |i| positions[i][str[i,1]]+=1 } } At the end of execution, the result will be: positions = { 0=>{"T"=>4}, 1=>{"o"=>2, "i"=>2}, 2=>{"l"=>1, "n"=>2, "r"=>1}, 3=>{"y"=>4} } A: Here's an algorithm in 5 lines of ruby: #!/usr/bin/env ruby chars = STDIN.gets.chomp.split("") STDIN.each do |string| chars = string.chomp.split("").zip(chars).map {|x,y| x == y ? x : nil } end chars.each_index {|i| puts "#{chars[i]} #{i}" if chars[i] } Put this in commonletters.rb. Sample usage: $ commonletters.rb < input.txt T 0 y 3 Assuming that input.txt contains: Toby Tiny Tory Tily This should work with whatever inputs you throw at it. It will break if the input file is empty, but you can probably fix that yourself. This is O(n) (n is total number of chars in the input). 
A: And here's a trivial version in Python: items = ['Toby', 'Tiny', 'Tory', 'Tily'] tuples = sorted(x for item in items for x in enumerate(item)) print [x[0] for x in itertools.groupby(tuples) if len(list(x[1])) == len(items)] Which prints: [(0, 'T'), (3, 'y')] Edit: Here's a better version that doesn't require creating a (potentially) huge list of tuples: items = ['Toby', 'Tiny', 'Tory', 'Tily'] minlen = min(len(x) for x in items) print [(i, items[0][i]) for i in range(minlen) if all(x[i] == items[0][i] for x in items)] A: #include <iostream> int main(void) { char words[4][5] = { "Toby", "Tiny", "Tory", "Tily" }; int wordsCount = 4; int lettersPerWord = 4; int z; for (z = 1; z < wordsCount; z++) { int y; for (y = 0; y < lettersPerWord; y++) { if (words[0][y] != words[z][y]) { words[0][y] = ' '; } } } std::cout << words[0] << std::endl; return 0; } A: In lisp: CL-USER> (defun common-chars (&rest strings) (apply #'map 'list #'char= strings)) COMMON-CHARS Just pass in the strings: CL-USER> (common-chars "Toby" "Tiny" "Tory" "Tily") (T NIL NIL T) If you want the characters themselves: CL-USER> (defun common-chars2 (&rest strings) (apply #'map 'list #'(lambda (&rest chars) (when (apply #'char= chars) (first chars))) ; return the char instead of T strings)) COMMON-CHARS2 CL-USER> (common-chars2 "Toby" "Tiny" "Tory" "Tily") (#\T NIL NIL #\y) If you don't care about posiitons, and just want a list of the common characters: CL-USER> (format t "~{~@[~A ~]~}" (common-chars2 "Toby" "Tiny" "Tory" "Tily")) T y NIL I admit this wasn't an algorithm... just a way to do it in lisp using existing functionality If you wanted to do it manually, as has been said, you loop comparing all the characters at a given index to each other. If they all match, save the matching character.
Algorithm for finding characters in the same positions in a list of strings?
Suppose I have: Toby Tiny Tory Tily Is there an algorithm that can easily create a list of common characters in the same positions in all these strings? (in this case the common characters are 'T' at position 0 and 'y' at position 3) I tried looking at some of the algorithms used for DNA sequence matching but it seems most of them are just used for finding common substrings regardless of their positions.
[ "Finding a list of characters that are common in ALL strings at a certain position is trivially simple. Just iterate on each string for each character position 1 character position at a time. If any string's character is not the match of it's closest neighbor string's character, then the position does not contain a common character.\nFor any i = 0 to length -1... Once you find Si[x] != Si+1[x] you can skip to the next position x+1.\nWhere Si is the ith string in the list. And [x] is the character at position x.\n", "Some generic code that has pretty poor performance O(n^2)\nstr[] = { \"Toby\", \"Tiny\", \"Tory\", \"Tily\" };\nresult = null;\nlargestString = str.getLargestString(); // Made up function\nstr.remove(largestString)\nfor (i = 0; i < largestString.length; i++) {\n hits = 0;\n foreach (str as value) {\n if (i < value.length) {\n if (value.charAt(i) == largestString.charAt(i))\n hits++;\n }\n }\n if (hits == str.length)\n result += largestString.charAt(i);\n}\nprint(str.items);\n\n", "I can't think of anything especially optimized.\nYou can do something like this, which shouldn't be too hard:\n //c# -- assuming your strings are in a List<string> named Names\n int shortestLength = Names[0].Length, j;\n char[] CommonCharacters;\n char single;\n\n for (int i = 1; i < Names.Count; i++)\n {\n if (Names[i].Length < shortestLength) shortestLength = Names[i].Length;\n }\n\n CommonCharacters = new char[shortestLength];\n for (int i = 0; i < shortestLength; i++)\n {\n j = 1;\n single = Names[0][i];\n CommonCharacters[i] = single;\n while (j < shortestLength)\n {\n if (single != Names[j][i])\n {\n CommonCharacters[i] = \" \"[0];\n break;\n }\n j++;\n }\n }\n\nThis would give you an array of characters that are the same across everything in the list.\n", "What about something like this?\nstrings = %w(Tony Tiny Tory Tily)\npositions = Hash.new { |h,k| h[k] = Hash.new { |h,k| h[k] = 0 } }\nstrings.each { |str| \n 0.upto(str.length-1) { |i| \n positions[i][str[i,1]]+=1 \n }\n}\n\nAt the end of execution, the result will be:\npositions = {\n 0=>{\"T\"=>4},\n 1=>{\"o\"=>2, \"i\"=>2}, \n 2=>{\"l\"=>1, \"n\"=>2, \"r\"=>1},\n 3=>{\"y\"=>4}\n}\n\n", "Here's an algorithm in 5 lines of ruby:\n#!/usr/bin/env ruby\nchars = STDIN.gets.chomp.split(\"\")\nSTDIN.each do |string|\n chars = string.chomp.split(\"\").zip(chars).map {|x,y| x == y ? x : nil }\nend\nchars.each_index {|i| puts \"#{chars[i]} #{i}\" if chars[i] }\n\nPut this in commonletters.rb. Sample usage:\n$ commonletters.rb < input.txt\nT 0\ny 3\n\nAssuming that input.txt contains:\nToby\nTiny\nTory\nTily\n\nThis should work with whatever inputs you throw at it. It will break if the input file is empty, but you can probably fix that yourself. 
This is O(n) (n is total number of chars in the input).\n", "And here's a trivial version in Python:\nitems = ['Toby', 'Tiny', 'Tory', 'Tily']\ntuples = sorted(x for item in items for x in enumerate(item))\nprint [x[0] for x in itertools.groupby(tuples) if len(list(x[1])) == len(items)]\n\nWhich prints:\n[(0, 'T'), (3, 'y')]\n\nEdit: Here's a better version that doesn't require creating a (potentially) huge list of tuples:\nitems = ['Toby', 'Tiny', 'Tory', 'Tily']\nminlen = min(len(x) for x in items)\nprint [(i, items[0][i]) for i in range(minlen) if all(x[i] == items[0][i] for x in items)]\n\n", "#include <iostream>\n\nint main(void)\n{\n char words[4][5] = \n {\n \"Toby\",\n \"Tiny\",\n \"Tory\",\n \"Tily\"\n };\n\n int wordsCount = 4;\n int lettersPerWord = 4;\n\n int z;\n for (z = 1; z < wordsCount; z++)\n {\n int y;\n for (y = 0; y < lettersPerWord; y++)\n {\n if (words[0][y] != words[z][y])\n {\n words[0][y] = ' ';\n }\n }\n }\n\n std::cout << words[0] << std::endl;\n\n return 0;\n}\n\n", "In lisp:\nCL-USER> (defun common-chars (&rest strings)\n (apply #'map 'list #'char= strings))\nCOMMON-CHARS\n\nJust pass in the strings:\nCL-USER> (common-chars \"Toby\" \"Tiny\" \"Tory\" \"Tily\")\n(T NIL NIL T)\n\nIf you want the characters themselves:\nCL-USER> (defun common-chars2 (&rest strings)\n (apply #'map\n 'list\n #'(lambda (&rest chars)\n (when (apply #'char= chars)\n (first chars))) ; return the char instead of T\n strings))\nCOMMON-CHARS2\n\nCL-USER> (common-chars2 \"Toby\" \"Tiny\" \"Tory\" \"Tily\")\n(#\\T NIL NIL #\\y)\n\nIf you don't care about posiitons, and just want a list of the common characters:\nCL-USER> (format t \"~{~@[~A ~]~}\" (common-chars2 \"Toby\" \"Tiny\" \"Tory\" \"Tily\"))\nT y \nNIL\n\nI admit this wasn't an algorithm... just a way to do it in lisp using existing functionality\nIf you wanted to do it manually, as has been said, you loop comparing all the characters at a given index to each other. If they all match, save the matching character.\n" ]
[ 3, 1, 1, 1, 1, 1, 1, 0 ]
[]
[]
[ "algorithm", "string" ]
stackoverflow_0000068664_algorithm_string.txt
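To make the column-by-column idea from the answers above concrete, here is a small Python sketch of the running-intersection variant (the approach the Ruby zip answer takes): keep one candidate character per position and knock it out as soon as any string disagrees. It runs in O(total number of characters) and works on any iterable of strings.
def common_positions(strings):
    it = iter(strings)
    try:
        candidates = list(next(it))      # one candidate character per position
    except StopIteration:
        return []
    for s in it:
        for i, c in enumerate(candidates):
            # a position past the end of a shorter string can never be common
            if c is not None and (i >= len(s) or s[i] != c):
                candidates[i] = None
    return [(i, c) for i, c in enumerate(candidates) if c is not None]

if __name__ == "__main__":
    print(common_positions(["Toby", "Tiny", "Tory", "Tily"]))
    # [(0, 'T'), (3, 'y')]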
Q: How to convert legacy Interbase DB to SQL Server? I have an Interbase DB. How can I convert it to SQL Server? A: You could use SQL Server built in Data Transformation Services (DTS) in SQL Server 2000 or SQL Server Integration Services (SSIS) in SQL Server 2005. Try setting up an ODBC DSN for Interbase. Then in DTS / SSIS use the Other (ODBC Data Source) and the DSN. If that does not work then see if Interbase has a utility to export to text files and then use DTS / SSIS to import the text files. A: If you want to spend some money, this will do it: http://www.spectralcore.com/fullconvert/tutorials/convert-interbase-firebird-to-mssql-sql-server.php A: The Interbase DB Wikipedia page says that it supports OBDC and ADO.NET, so I would think that SQL Server can probably import this database on its own. I don't have access to an Interbase DB installation to try, but you might find these pages helpful. MSDN on import data wizard MSDN on bulk import command (if Interbase DB can dump a text file) Article on bulk importing from an ADO.NET supporting source Hopefully somebody will have direct experience with this database and can help. Good luck! A: If you only need to convert tables and data, that's rather simple. Just use ODBC driver for InterBase, connect to it and pump the data. However, if you need business logic as well, you cannot covert it just like that. You can convert regular tables and views without too much problems. Domain info would be lost but you don't need it in MSSQL anyway. The only problem with tables can be array fields, which you need to convert to separate tables, but that isn't too hard either. The problem is the conversion of triggers and stored procedures, since InterBase uses its own, custom PSQL language. It has some concepts that are different from MSSQL. For example, you have procedures that can return resultsets, and you would need to convert those to MSSQL functions. In any case, it shouldn't be too hard, since you're going from low to high complexity, but there are no tools to do it automatically.
How to convert legacy Interbase DB to SQL Server?
I have an Interbase DB. How can I convert it to SQL Server?
[ "You could use SQL Server built in Data Transformation Services (DTS) in SQL Server 2000 or SQL Server Integration Services (SSIS) in SQL Server 2005.\nTry setting up an ODBC DSN for Interbase. Then in DTS / SSIS use the Other (ODBC Data Source) and the DSN.\nIf that does not work then see if Interbase has a utility to export to text files and then use DTS / SSIS to import the text files.\n", "If you want to spend some money, this will do it:\nhttp://www.spectralcore.com/fullconvert/tutorials/convert-interbase-firebird-to-mssql-sql-server.php\n", "The Interbase DB Wikipedia page says that it supports OBDC and ADO.NET, so I would think that SQL Server can probably import this database on its own. I don't have access to an Interbase DB installation to try, but you might find these pages helpful.\nMSDN on import data wizard\nMSDN on bulk import command (if Interbase DB can dump a text file)\nArticle on bulk importing from an ADO.NET supporting source \nHopefully somebody will have direct experience with this database and can help. Good luck!\n", "If you only need to convert tables and data, that's rather simple. Just use ODBC driver for InterBase, connect to it and pump the data.\nHowever, if you need business logic as well, you cannot covert it just like that. You can convert regular tables and views without too much problems. Domain info would be lost but you don't need it in MSSQL anyway. The only problem with tables can be array fields, which you need to convert to separate tables, but that isn't too hard either.\nThe problem is the conversion of triggers and stored procedures, since InterBase uses its own, custom PSQL language. It has some concepts that are different from MSSQL. For example, you have procedures that can return resultsets, and you would need to convert those to MSSQL functions. \nIn any case, it shouldn't be too hard, since you're going from low to high complexity, but there are no tools to do it automatically.\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "database", "interbase", "sql_server" ]
stackoverflow_0000111318_database_interbase_sql_server.txt
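For the "set up an ODBC DSN and pump the data" route suggested above, a rough Python sketch using pyodbc might look like the following. The DSN names, table, and column names are placeholders, not anything prescribed by either database; triggers, stored procedures, and other business logic still have to be ported by hand, as the last answer notes.
import pyodbc

src = pyodbc.connect("DSN=InterbaseSrc")      # ODBC DSN pointing at InterBase
dst = pyodbc.connect("DSN=MssqlDest")         # ODBC DSN pointing at SQL Server

rows = src.cursor().execute("SELECT id, name, created FROM customers")
ins = dst.cursor()
for row in rows:
    ins.execute(
        "INSERT INTO customers (id, name, created) VALUES (?, ?, ?)",
        row.id, row.name, row.created)
dst.commit()
src.close()
dst.close()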
Q: 2d game physics? Can anyone point me to a library for 2D game physics, etc for programming gravity, jumping actions, etc for a 2d platform/sidescrolling game ? Or could you suggest some algorithms for side scroller like mario, sonic etc? A: It sounds like Chipmunk might meet your needs. A: Your best bet is most likely Box2D. It does 2D physics, has tons of options, and is very easy to integrate into an existing project. It does CCD by default for fixed bodies, but any rigid body can be selectively included in the CCD calculation. A: If all you need is gravity, you can program that yourself in 5 minutes. Free-falling objects accelerate down at 9.8 meters per second per second - that is, an object's downward velocity increases by 9.8 meters per second of free-fall. For a game, you'll want to divide that 9.8 by whatever your frame rate is. For jumping, just pick a significant negative vertical velocity, apply that to the character at the instant they jump, and decrement it by your per-frame gravity increment. That's really all you need for something like Mario, unless you're looking for a 3d background for your 2d side scroller. If you want to get fancier, you can try to take an object's impact force into account, making falling objects hurt people or crack pavement or something. For this, use the formula for Kinetic Energy: KE = 1/2 * M * V^2, where M is mass and V is velocity. A: What platform are you looking for? What library you use will depend on this. For the XNA framework, Farseer is pretty nice. A: To answer the second part of your question, if you want to get a handle on how a simple 2D platformer works, take a read through the tutorials for N. Yes, N is a flash-based game but that doesn't mean it isn't constructed like a "real" game, so the collision detection (and response) tutorials are very much applicable. They're a straightforward read with some intuitive demos embedded in the page to show off the geometric concepts. A: You could look at the Havok engine. I believe they released a free version for non-commerical use. There is a constraint kit for it that will allow you to constrain the physics to 2 planes, in your case, x and y. A: The physics in most 2D side-scrolling platform games are so simple that you could easily implement them yourself. What kind of effects are you looking for? A: If you got the time you could use PhysX but its likely an over kill for 2D. Besides that if you plan on having your game work on a PC and want some cool physics, try googling for "verlet integration" I know there are quite a few verlet implementations around (nice for particles and 2D rag-dolls). A: I've used Box2D in personal projects. It is a 2D physic simulation API. But, it might be overkill if what you want is more a game/graphic API. A: This guy has done a lot of work with Javascript games: http://blog.nihilogic.dk/ A: You can do 2d physics with opende as well
2d game physics?
Can anyone point me to a library for 2D game physics, etc. for programming gravity, jumping actions, etc. for a 2D platform/side-scrolling game? Or could you suggest some algorithms for side scrollers like Mario, Sonic, etc.?
[ "It sounds like Chipmunk might meet your needs.\n", "Your best bet is most likely Box2D. It does 2D physics, has tons of options, and is very easy to integrate into an existing project. It does CCD by default for fixed bodies, but any rigid body can be selectively included in the CCD calculation.\n", "If all you need is gravity, you can program that yourself in 5 minutes. Free-falling objects accelerate down at 9.8 meters per second per second - that is, an object's downward velocity increases by 9.8 meters per second of free-fall. For a game, you'll want to divide that 9.8 by whatever your frame rate is. For jumping, just pick a significant negative vertical velocity, apply that to the character at the instant they jump, and decrement it by your per-frame gravity increment. That's really all you need for something like Mario, unless you're looking for a 3d background for your 2d side scroller.\nIf you want to get fancier, you can try to take an object's impact force into account, making falling objects hurt people or crack pavement or something. For this, use the formula for Kinetic Energy: KE = 1/2 * M * V^2, where M is mass and V is velocity.\n", "What platform are you looking for? What library you use will depend on this.\nFor the XNA framework, Farseer is pretty nice.\n", "To answer the second part of your question, if you want to get a handle on how a simple 2D platformer works, take a read through the tutorials for N. Yes, N is a flash-based game but that doesn't mean it isn't constructed like a \"real\" game, so the collision detection (and response) tutorials are very much applicable. They're a straightforward read with some intuitive demos embedded in the page to show off the geometric concepts.\n", "You could look at the Havok engine. I believe they released a free version for non-commerical use. There is a constraint kit for it that will allow you to constrain the physics to 2 planes, in your case, x and y.\n", "The physics in most 2D side-scrolling platform games are so simple that you could easily implement them yourself. What kind of effects are you looking for?\n", "If you got the time you could use PhysX but its likely an over kill for 2D.\nBesides that if you plan on having your game work on a PC and want some cool physics, try googling for \"verlet integration\" I know there are quite a few verlet implementations around (nice for particles and 2D rag-dolls).\n", "I've used Box2D in personal projects. It is a 2D physic simulation API. But, it might be overkill if what you want is more a game/graphic API.\n", "This guy has done a lot of work with Javascript games:\nhttp://blog.nihilogic.dk/\n", "You can do 2d physics with opende as well\n" ]
[ 22, 12, 9, 5, 4, 2, 2, 2, 2, 2, 2 ]
[]
[]
[ "c++", "physics" ]
stackoverflow_0000098628_c++_physics.txt
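The gravity answer above translates almost directly into code. A minimal Python sketch, using screen coordinates (y grows downward, so gravity adds to the velocity and a jump is a negative velocity); the frame rate, jump speed, and units are illustrative only:
GRAVITY_PER_FRAME = 9.8 / 60.0     # the 9.8 m/s^2 divided by the frame rate
JUMP_VELOCITY = -4.0               # negative = upward in screen coordinates

class Player:
    def __init__(self, ground_y):
        self.ground_y = ground_y
        self.y = ground_y             # standing on the ground
        self.vy = 0.0

    def jump(self):
        if self.y >= self.ground_y:   # only allow jumping from the ground
            self.vy = JUMP_VELOCITY

    def update(self):                 # call once per frame
        self.vy += GRAVITY_PER_FRAME  # downward velocity grows every frame
        self.y += self.vy
        if self.y >= self.ground_y:   # landed
            self.y = self.ground_y
            self.vy = 0.0

if __name__ == "__main__":
    p = Player(ground_y=100.0)
    p.jump()
    for frame in range(60):
        p.update()
    print(round(p.y, 2), round(p.vy, 2))   # back on the ground: 100.0 0.0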
Q: How do you store your code and files for use across machines I am interested to know what strategies people have to keep their code AND work versioned across multiple machines. For example I have a desktop PC running XP, a macbook running OSX and VMWare running XP as well as a sales laptop for running product demos. I want to know how I can always have these in sync. Subversion is a possibility for this but i find it less useful for dealing with binary files - maybe I have overlooked something here. What do other people use as they must have similar issues? Do they keep all files on a USB drive and never on the local file system. I am not always online so remote storage is not really an option. A: Like others have said, subversion is your best bet for code. For binary files/non-code, I find DropBox to be very convenient. It stores revisions, has undelete, easy sharing, etc. basically an automagic, web-friendly SVN. Not having to think about it is the biggest plus for me. A: I use mercurial for keeping my workfiles in sync. It's not great for big binaries either, but it lets me commit without being online and makes it easy to branche/merge different versions. A: Ah the old VCS Debate. The simplest way to share/sync Source Code is to use some sort of VCS (Version Control System) - this gives you plenty of benefits over being able to keep things synced. There are many VCSs out there, I personally use Bazaar-NG and Subversion - though I'd suggest you trial a few and see how you feel using them. For syncing general files, espescially if it's only for yourself, I'd reccomend using "DropBox" (http://www.getdropbox.com/) - I've been using this for the last week or so, and it makes syncing up my multiple machines with a certain set of files so much more easy. It also has some extra features that'd probably be useful for collaboration too, but I haven't tried those out yet. A: Subversion works just great in our office for sales, project management, design and code files. A: I store my dotfiles (.zshrc, etc) in a Git repository that is checked out into my homedir. I also do the same for the LaTeX files comprising my classwork. A: I put important builds in Source Control -- it's fine for binary files. A: For most files including source code we do use Subversion. It's really great. If there are larger files oder Project management related documents which are used by people who have no access to the source control system, we use Microsoft SharePoint. This is especially usefull if you are working with people outside your company. A: I keep all my work encrypted on a USB stick. It also has a bootable Linux partition so I can get into a sensible working development environment from any machine, such as a borrowed work laptop with some software to carry to a conference that I can't move to my own machine. When you have more people working on the same code, I'd put it in a central Subversion repository and set up scripts (in Windows you could use the autorun feature for the USB stick) to synchronize things between the repo and a USB stick always carried along. A: FolderShare (http://foldershare.com) is also nice for syncing files. I use it to keep documents, etc. in sync between my laptop and my desktop, for example. Of course, for code especially this doesn't obviate the need for source control. A: The main point I see reg. using SVN as central repository for binary files, is that if those files are of any reasonable size, they will take some time to be synced over the net. 
So if you don't want to spend time waiting for your files to come in over the net, here are the building blocks for another mirroring solution: MirrorFolder - no better tool to be found when it comes to syncing a data tank with several other "local" copies. TrueCrypt - use this to encrypt your USB tank just in case you drop it somewhere.
How do you store your code and files for use across machines
I am interested to know what strategies people have to keep their code AND work versioned across multiple machines. For example, I have a desktop PC running XP, a MacBook running OS X, and VMware running XP, as well as a sales laptop for running product demos. I want to know how I can always keep these in sync. Subversion is a possibility for this, but I find it less useful for dealing with binary files - maybe I have overlooked something here. What do other people use, as they must have similar issues? Do they keep all files on a USB drive and never on the local file system? I am not always online, so remote storage is not really an option.
[ "Like others have said, subversion is your best bet for code. For binary files/non-code, I find DropBox to be very convenient. It stores revisions, has undelete, easy sharing, etc. basically an automagic, web-friendly SVN. Not having to think about it is the biggest plus for me.\n", "I use mercurial for keeping my workfiles in sync. It's not great for big binaries either, but it lets me commit without being online and makes it easy to branche/merge different versions.\n", "Ah the old VCS Debate.\nThe simplest way to share/sync Source Code is to use some sort of VCS (Version Control System) - this gives you plenty of benefits over being able to keep things synced. There are many VCSs out there, I personally use Bazaar-NG and Subversion - though I'd suggest you trial a few and see how you feel using them.\nFor syncing general files, espescially if it's only for yourself, I'd reccomend using \"DropBox\" (http://www.getdropbox.com/) - I've been using this for the last week or so, and it makes syncing up my multiple machines with a certain set of files so much more easy.\nIt also has some extra features that'd probably be useful for collaboration too, but I haven't tried those out yet.\n", "Subversion works just great in our office for sales, project management, design and code files.\n", "I store my dotfiles (.zshrc, etc) in a Git repository that is checked out into my homedir. I also do the same for the LaTeX files comprising my classwork.\n", "I put important builds in Source Control -- it's fine for binary files.\n", "For most files including source code we do use Subversion. It's really great. \nIf there are larger files oder Project management related documents which are used by people who have no access to the source control system, we use Microsoft SharePoint. \nThis is especially usefull if you are working with people outside your company.\n", "I keep all my work encrypted on a USB stick. It also has a bootable Linux partition so I can get into a sensible working development environment from any machine, such as a borrowed work laptop with some software to carry to a conference that I can't move to my own machine.\nWhen you have more people working on the same code, I'd put it in a central Subversion repository and set up scripts (in Windows you could use the autorun feature for the USB stick) to synchronize things between the repo and a USB stick always carried along.\n", "FolderShare (http://foldershare.com) is also nice for syncing files. I use it to keep documents, etc. in sync between my laptop and my desktop, for example.\nOf course, for code especially this doesn't obviate the need for source control.\n", "The main point I see reg. using SVN as central repository for binary files, is that if those files are of any reasonable size, they will take some time to be synced over the net.\nSo if you don't want to spend time waiting for your files coming in over the net, here the building blocks for an other mirroring solution:\n\nMirrorFolder\n\nNo better tool to be found when it comes to syncing a Data-Tanks with \nseveral other \"local\" copies. \n\nTrueCrypt\n\nUse this to encrypt your USB-Tank just in case you drop it somewhere.\n" ]
[ 8, 2, 2, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "operating_system", "versioning" ]
stackoverflow_0000111629_operating_system_versioning.txt
Q: Ruby - Ensure Syslog Gets Closed Is it absolutely critical that I always close Syslog when I'm done using it? Is there a huge negative impact from not doing so? If it turns out that I definitely need to, what's a good way to do it? I'm opening Syslog in my class constructor and I don't see a way to do class destructors in Ruby, and currently have something resembling this: class Foo def initialize @@log = Syslog.open("foo") end end I don't immediately see the place where the Syslog.close call should be, but what do you recommend? A: The open method accepts a block. Do something like this: class Foo def do_something Syslog.open do # work with the syslog here end end end A: It looks like you're opening it as a class variable... so the proper way would be to do... class Foo def initialize @@log = Syslog.open("foo") end def Foo.finalize(id) @@log.close if @@log end end Though this is not necesssarily predictable or supported. It's the way to do it if you're going to keep the code the way you do.
Ruby - Ensure Syslog Gets Closed
Is it absolutely critical that I always close Syslog when I'm done using it? Is there a huge negative impact from not doing so? If it turns out that I definitely need to, what's a good way to do it? I'm opening Syslog in my class constructor and I don't see a way to do class destructors in Ruby, and currently have something resembling this: class Foo def initialize @@log = Syslog.open("foo") end end I don't immediately see the place where the Syslog.close call should be, but what do you recommend?
[ "The open method accepts a block. Do something like this:\nclass Foo\n def do_something\n Syslog.open do\n # work with the syslog here\n end\n end\nend\n\n", "It looks like you're opening it as a class variable... so the proper way would be to do...\nclass Foo\n def initialize\n @@log = Syslog.open(\"foo\")\n end\n\n def Foo.finalize(id)\n @@log.close if @@log\n end\nend\n\nThough this is not necesssarily predictable or supported. It's the way to do it if you're going to keep the code the way you do.\n" ]
[ 2, 1 ]
[]
[]
[ "ruby", "syslog" ]
stackoverflow_0000111687_ruby_syslog.txt
Q: Datagridview virtual model combobox how can you dynamically add items to a combobox using the datagridview virtual mode? A: Well, I assume you are working with a very large set of data, and thus are using virtual mode to implement your own data binding. If that is the case here is a link that demonstrates the process: http://msdn.microsoft.com/en-us/library/2b177d6d.aspx It primarily involves implementing an event handler for the CellValueNeeded event.
DataGridView virtual mode combobox
How can you dynamically add items to a combobox using the DataGridView virtual mode?
[ "Well, I assume you are working with a very large set of data, and thus are using virtual mode to implement your own data binding.\nIf that is the case here is a link that demonstrates the process:\nhttp://msdn.microsoft.com/en-us/library/2b177d6d.aspx\nIt primarily involves implementing an event handler for the CellValueNeeded event.\n" ]
[ 1 ]
[]
[]
[ "c#", "combobox", "datagridview" ]
stackoverflow_0000111785_c#_combobox_datagridview.txt
Q: How would you format multiple properties when using Property Initialization? (.Net) For example: root.Nodes.Add(new TNode() { Foo1 = bar1, Foo2 = bar2, Foo3 = bar3 }); or: root.Nodes.Add(new TNode() { Foo1 = bar1, Foo2 = bar2, Foo3 = bar3 }); A: I've done it both ways.. IMO it depends on the complexity of the initialization. If it is simple 2 or 3 properties I will initialize on one line generally, but if i'm setting up an object with values for insertion into a database or something that has alot of properties i'll break it out like your second example. Income income = new Income { Initials = something, CheckNumber = something, CheckDate = something, BranchNumber = something }; or return new Report.ReportData { ReportName = something, Formulas = something}; A: Both notations are fine. I would simply suggest to use the first (1-line) notation whenever your line stay within 100 characters, and switch to the second (multi-line) notation whenever the expression is longer. A: For longer stuff I do it this way: root.Nodes.Add(new TNode() { Foo1 = bar1, Foo2 = bar2, Foo3 = bar3 });
How would you format multiple properties when using Property Initialization? (.Net)
For example:
root.Nodes.Add(new TNode() { Foo1 = bar1, Foo2 = bar2, Foo3 = bar3 });
or:
root.Nodes.Add(new TNode() {
    Foo1 = bar1,
    Foo2 = bar2,
    Foo3 = bar3
});
[ "I've done it both ways.. IMO it depends on the complexity of the initialization.\nIf it is simple 2 or 3 properties I will initialize on one line generally, but if i'm setting up an object with values for insertion into a database or something that has alot of properties i'll break it out like your second example.\nIncome income = new Income\n{\n Initials = something,\n CheckNumber = something,\n CheckDate = something,\n BranchNumber = something\n};\n\nor\nreturn new Report.ReportData { ReportName = something, Formulas = something};\n\n", "Both notations are fine. I would simply suggest to use the first (1-line) notation whenever your line stay within 100 characters, and switch to the second (multi-line) notation whenever the expression is longer.\n", "For longer stuff I do it this way:\nroot.Nodes.Add(new TNode() {\n Foo1 = bar1, \n Foo2 = bar2, \n Foo3 = bar3\n});\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ ".net", "coding_style", "convention", "formatting" ]
stackoverflow_0000111792_.net_coding_style_convention_formatting.txt
Q: How do you automate some routine actions for improving productivity? Every morning, after logging into your machine, you do a variety of routine stuffs. The list can include stuffs like opening/checking your email clients, rss readers, launching visual studio, running some business apps, typing some replies, getting latest version from Source Control, compiling, connecting to a different domain etc. To a big extend, we can automate using scripting solutions like AutoIt, nightly jobs etc. I would love to hear from you geeks out there for the list of stuffs you found doing repeatedly and how you solved it by automating it. Any cool tips? A: I use Linux. I have a bunch of scripts that do anything I want. Typically I write a script whenever a "block" of work can be reused in the future. For example, simple refactorings, deployments, etc... Over time I started to combine these blocks, hence getting ever more efficient. Regarding the "load stuff at startup", under Linux that comes out of the box (you can "save your current session" when you log out or turn off the computer). On windows, my suggestion is to use programs that can be automated via command line. A: A favorite way is to leave the computer on at night or better, if it's a laptop, put it to sleep. Running a web browsing virtual machine in VMware or similar works also, you can set the VM start along with the machine and save its state on shutdown, so your web pages and email client stay open. This works for development also if you're doing scripting or something similar where the performance hit of the VM on large compiles won't negate the benefits. A: SlickRun is very handy for this, just a few keys to navigate to anything common and a very small footprint. With input variables and file path recognition all part of it I can quick remote desktop to any machine, search anything, pull up whatever's needed. A: On OS X, I have an Applescript that I run at the beginning of the day. It sets an away message on IM, hides or quits programs that would distract me, gets new mail, and so forth. I also plug in my USB backup disk, so when I'm going home, another script ejects it and quits some programs. When the script is done, so am I. I invoke these scripts with key combos using Quicksilver. If you don't have a Mac, by the way, Quicksilver and Applescript are probably the #1 and #2 reasons to switch. Between the two of them, you can tell your computer to do practically anything you want in very short order. A: Use a good app launcher such as Quicksilver or Launchy to cut down on the time it takes to perform simple tasks. They're usually not scriptable, but they do let you do each step faster. A: Writing shell scripts (Applescript, Bash, PowerShell, etc..) is a great way to automate most mundane tasks, assuming your apps are scriptable, as well as pick up a new language. As you venture further into this practice, you'll find yourself more and more annoyed at the apps you use that aren't scriptable, to the point where it starts to affect your choice of apps ;-) Also, consider a cron job, Windows scheduled task, or similar OS X analog to automatically run certain tasks at certain times of day/week/month/year. You can use this for anything from the "workday morning" scripts mentioned previously, to reminding you of your wife's birthday and anniversary every year. There's some more info here for *NIX systems, or here for Windows boxes. Happy automation! 
A: I have a hard time wrapping my head around Applescript, but since Apple runs BASH scripts just fine, I just use those instead. I've got a development server on my mac, so I've got a script that I can run to create a new site directory, create a new virtual host in apache, add a new domain to my /etc/hosts file, etc. It's especially cool to integrate Bash (or probably applescript, although I don't know how) with Growl. That way, you can put a nice message up on the screen, complete with a png icon. This is more useful for things that your scripts do during the day though. A: I do most of my programming work on a development server at work, so in the evening I simply detach my screen session and re-attach it in the morning, so it takes just a few seconds until I'm exactly where I left the day before. I have some macros defined in mutt to clean up my inbox (archive commit mails etc.), I have a script that mounts some directories on the development server on my notebook via sshfs (works without interaction using public keys), and after that all I have to do is start up a browser and get a coffee. :)
How do you automate some routine actions for improving productivity?
Every morning, after logging into your machine, you do a variety of routine tasks. The list can include things like opening/checking your email clients, RSS readers, launching Visual Studio, running some business apps, typing some replies, getting the latest version from source control, compiling, connecting to a different domain, etc. To a large extent, we can automate this using scripting solutions like AutoIt, nightly jobs, etc. I would love to hear from you geeks out there about the tasks you find yourself doing repeatedly and how you solved it by automating them. Any cool tips?
[ "I use Linux. I have a bunch of scripts that do anything I want. Typically I write a script whenever a \"block\" of work can be reused in the future. For example, simple refactorings, deployments, etc...\nOver time I started to combine these blocks, hence getting ever more efficient.\nRegarding the \"load stuff at startup\", under Linux that comes out of the box (you can \"save your current session\" when you log out or turn off the computer).\nOn windows, my suggestion is to use programs that can be automated via command line.\n", "A favorite way is to leave the computer on at night or better, if it's a laptop, put it to sleep. Running a web browsing virtual machine in VMware or similar works also, you can set the VM start along with the machine and save its state on shutdown, so your web pages and email client stay open. This works for development also if you're doing scripting or something similar where the performance hit of the VM on large compiles won't negate the benefits.\n", "SlickRun is very handy for this, just a few keys to navigate to anything common and a very small footprint. With input variables and file path recognition all part of it I can quick remote desktop to any machine, search anything, pull up whatever's needed.\n", "On OS X, I have an Applescript that I run at the beginning of the day. It sets an away message on IM, hides or quits programs that would distract me, gets new mail, and so forth. I also plug in my USB backup disk, so when I'm going home, another script ejects it and quits some programs. When the script is done, so am I.\nI invoke these scripts with key combos using Quicksilver.\nIf you don't have a Mac, by the way, Quicksilver and Applescript are probably the #1 and #2 reasons to switch. Between the two of them, you can tell your computer to do practically anything you want in very short order.\n", "Use a good app launcher such as Quicksilver or Launchy to cut down on the time it takes to perform simple tasks. They're usually not scriptable, but they do let you do each step faster.\n", "Writing shell scripts (Applescript, Bash, PowerShell, etc..) is a great way to automate most mundane tasks, assuming your apps are scriptable, as well as pick up a new language. As you venture further into this practice, you'll find yourself more and more annoyed at the apps you use that aren't scriptable, to the point where it starts to affect your choice of apps ;-)\nAlso, consider a cron job, Windows scheduled task, or similar OS X analog to automatically run certain tasks at certain times of day/week/month/year. You can use this for anything from the \"workday morning\" scripts mentioned previously, to reminding you of your wife's birthday and anniversary every year. There's some more info here for *NIX systems, or here for Windows boxes.\nHappy automation!\n", "I have a hard time wrapping my head around Applescript, but since Apple runs BASH scripts just fine, I just use those instead. I've got a development server on my mac, so I've got a script that I can run to create a new site directory, create a new virtual host in apache, add a new domain to my /etc/hosts file, etc. \nIt's especially cool to integrate Bash (or probably applescript, although I don't know how) with Growl. That way, you can put a nice message up on the screen, complete with a png icon. 
This is more useful for things that your scripts do during the day though.\n", "I do most of my programming work on a development server at work, so in the evening I simply detach my screen session and re-attach it in the morning, so it takes just a few seconds until I'm exactly where I left the day before.\nI have some macros defined in mutt to clean up my inbox (archive commit mails etc.), I have a script that mounts some directories on the development server on my notebook via sshfs (works without interaction using public keys), and after that all I have to do is start up a browser and get a coffee. :)\n" ]
[ 4, 3, 1, 1, 1, 1, 1, 1 ]
[]
[]
[ "automation" ]
stackoverflow_0000111683_automation.txt
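As a concrete illustration of the scripting suggestions above, here is a small Python sketch of a "morning routine" launcher. The URLs and commands are placeholders for whatever you actually open every day; scheduling it with cron or the Windows Task Scheduler, as mentioned in the answers, makes it fire automatically.
import subprocess
import webbrowser

URLS = [
    "https://mail.example.com",            # email client / webmail
    "https://reader.example.com/feeds",    # RSS reader
]

COMMANDS = [
    ["svn", "update", "/home/me/projects/main"],   # get latest from source control
]

def morning():
    for url in URLS:
        webbrowser.open_new_tab(url)
    for cmd in COMMANDS:
        try:
            subprocess.run(cmd, check=True)        # run each step...
        except (OSError, subprocess.CalledProcessError) as err:
            print("skipped:", cmd, err)            # ...but keep going on failure

if __name__ == "__main__":
    morning()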
Q: GWT-EXT - What is the best way to widgets to a specific ContentPanel after an event? first post don't hurt me :) I am using a BorderLayout with the usual North, West, Center, South Panels. On the West ContentPanel, I've got a Tree. If an event (OnClick)occurs I want a particular dialog box displayed on the Center ContentPanel. What is the best way for me to do this? Currently I'm using a function called returnPanel() that returns the center ContentPanel. In the event handler I call this function (MainWindow.returnPanel().add(myDialog)). A: The way you are doing it is intuitive and works, but will start causing hell when the application grows, because different parts of the application are strongly coupled. The solutions to this problems are the MVC design pattern and the observer design pattern. Ideally, using the MVC pattern, you don't want any widget to 'know' of any other widget. There is only class that knows all the widgets, which is the Controller. Anytime one widget needs to message/signal another widget, it tells it to the Controller class, which relays the message in the appropriate way to the appropriate widget. In this way, the two widgets are decpoupled and one can change without breaking the other. You may want to use an enum to enumerate all possible actions to which the controller has to responsd. If your widget has to call only the Controller when an event occurs, you may simply call an aptly named (static) method on it and be done with it. However, as soon as multiple other classes needs to be informed of an event, you are better of using the Observer pattern, which allows you to signal multiple other classes, without changing your class. It simply calls notifyPObservers() in the eventHandler and that's it. How many listeners there are, and what type they are, is irrelevant. This way, you also decouple a class from it's listeners. Even if only the Controller listens, it may be advisable to use the pattern, as it clearly seperated the 'call back' code from the other code in the classes. BTW, this has nothing to do with GWT or even Java in particular.
GWT-EXT - What is the best way to add widgets to a specific ContentPanel after an event?
First post, don't hurt me :) I am using a BorderLayout with the usual North, West, Center, and South panels. On the West ContentPanel, I've got a Tree. If an event (OnClick) occurs, I want a particular dialog box displayed on the Center ContentPanel. What is the best way for me to do this? Currently I'm using a function called returnPanel() that returns the center ContentPanel. In the event handler I call this function (MainWindow.returnPanel().add(myDialog)).
[ "The way you are doing it is intuitive and works, but will start causing hell when the application grows, because different parts of the application are strongly coupled. The solutions to this problems are the MVC design pattern and the observer design pattern.\nIdeally, using the MVC pattern, you don't want any widget to 'know' of any other widget. There is only class that knows all the widgets, which is the Controller. Anytime one widget needs to message/signal another widget, it tells it to the Controller class, which relays the message in the appropriate way to the appropriate widget. In this way, the two widgets are decpoupled and one can change without breaking the other. You may want to use an enum to enumerate all possible actions to which the controller has to responsd. \nIf your widget has to call only the Controller when an event occurs, you may simply call an aptly named (static) method on it and be done with it. However, as soon as multiple other classes needs to be informed of an event, you are better of using the Observer pattern, which allows you to signal multiple other classes, without changing your class. It simply calls notifyPObservers() in the eventHandler and that's it. How many listeners there are, and what type they are, is irrelevant. This way, you also decouple a class from it's listeners. Even if only the Controller listens, it may be advisable to use the pattern, as it clearly seperated the 'call back' code from the other code in the classes.\nBTW, this has nothing to do with GWT or even Java in particular.\n" ]
[ 1 ]
[]
[]
[ "gwt", "gwt_ext" ]
stackoverflow_0000111854_gwt_gwt_ext.txt
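A language-agnostic sketch of the observer decoupling described above, written in Python for brevity: the tree only tells a shared subject/controller that something happened, and any number of listeners (here, the center panel) react without the tree knowing about them.
class Subject:
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def notify(self, event, payload=None):
        for callback in self._observers:
            callback(event, payload)

class Tree:
    def __init__(self, subject):
        self.subject = subject

    def on_click(self, node):
        # the tree has no idea who reacts to this
        self.subject.notify("tree-node-selected", node)

class CenterPanel:
    def show_dialog(self, event, node):
        print("showing dialog for", node)

if __name__ == "__main__":
    bus = Subject()
    center = CenterPanel()
    bus.subscribe(center.show_dialog)
    Tree(bus).on_click("Reports")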
Q: Getting Arduino LilyPad to switch BlueSmirf v2.11 to/from command mode A battery powered (2 x AA) Arduino LilyPad should switch a BlueSmirf v2.11 Bluetooth modem to/from command mode (see source code below). The BlueSmirf has been set to 9600 baud. If the PC connects via Bluetooth (see source code below), the Arduino program runs fine at the beginning (sending multiple "ping\n"). After some time it (LilyPad/BlueSmirf) starts to also send "$$$" and "---\n" over the Bluetooth connection instead of switching to/from command mode. Any ideas? Regards, tamberg // Arduino source code: void setup () { Serial.begin(9600); } void loop () { Serial.print("$$$"); delay(2000); // TODO: Inquiry, etc. Serial.print("---\n"); delay(100); Serial.print("ping\n"); delay(2000); } // C# source code (runs on PC) using System; using System.IO.Ports; class Program { static void Main () { SerialPort p = new SerialPort( "COM20", 9600, Parity.None, 8, StopBits.One); using (p) { p.Open(); while (p.IsOpen) { Console.Write((char) p.ReadChar()); } } } } A: From the datasheet, page 6: NOTE1 : You can enter command mode locally over the serial port at any time when not connected. Once a connection is made, you can only enter command mode if the config timer has not expired. To enable continuous configuration, set the config timer to 255. Also, if the device is in Auto Master mode 3, you will NOT be able to enter command mode when connected over Bluetooth. My guess would be that the config timer is expiring.
Getting Arduino LilyPad to switch BlueSmirf v2.11 to/from command mode
A battery powered (2 x AA) Arduino LilyPad should switch a BlueSmirf v2.11 Bluetooth modem to/from command mode (see source code below). The BlueSmirf has been set to 9600 baud. If the PC connects via Bluetooth (see source code below), the Arduino program runs fine at the beginning (sending multiple "ping\n"). After some time it (LilyPad/BlueSmirf) starts to also send "$$$" and "---\n" over the Bluetooth connection instead of switching to/from command mode. Any ideas? Regards, tamberg // Arduino source code: void setup () { Serial.begin(9600); } void loop () { Serial.print("$$$"); delay(2000); // TODO: Inquiry, etc. Serial.print("---\n"); delay(100); Serial.print("ping\n"); delay(2000); } // C# source code (runs on PC) using System; using System.IO.Ports; class Program { static void Main () { SerialPort p = new SerialPort( "COM20", 9600, Parity.None, 8, StopBits.One); using (p) { p.Open(); while (p.IsOpen) { Console.Write((char) p.ReadChar()); } } } }
[ "From the datasheet, page 6:\n\nNOTE1 : You can enter command mode\n locally over the serial port at any\n time when not connected. Once a\n connection is made, you can only enter\n command mode if the config timer has\n not expired. To enable continuous\n configuration, set the config timer to\n 255. Also, if the device is in Auto Master mode 3, you will NOT be able to\n enter command mode when connected over\n Bluetooth.\n\nMy guess would be that the config timer is expiring.\n" ]
[ 1 ]
[]
[]
[ "arduino", "bluetooth" ]
stackoverflow_0000111331_arduino_bluetooth.txt
Q: Get millions of records from fixed-width flat file to SQL 2000 Obviously I can use BCP but here is the issue. If one of the records in a Batch have an invalid date I want to redirect that to a separate table/file/whatever, but keep the batch processing running. I don't think SSIS can be installed on the server which would have helped. A: Create a trigger that processes on INSERT. This trigger will do a validation check on your date field. If it fails the validation, then do an insert into your separate table, and you can also choose to continue the insert or not allow it to go through. an important note: by default triggers do not fire on bulk inserts (BCP & SSIS included). To get this to work, you'll need to specify that you want the trigger to fire, using something like: BULK INSERT your_database.your_schema.your_table FROM your_file WITH (FIRE_TRIGGERS ) A: Yeah, if you are using DTS, you should just import into a staging table that uses varchar instead of dates and then massage the data into the proper tables afterwords. A: The problem with What Matt said is that you should not use a cursor to manipulate the data afterwards especially if you have millions of records. CUrsoprs are extremely inefficient and should be avoided. Use batch processing instead. But by all means use his idea of a staging table. I wouldn' ever consider importing directly into a production table as too many things can happen over time to change the data in the input file and cause problems. A: You're saying there's a column full of dates in the file, and you want that data to go into a column of type "datetime" in a table in a SQL database? And it'll blow up if one of the values from the file isn't a valid date? I just wanted to make sure I understand this right. You could create another, temporary, table in the SQL database, of the same structure as the table you want the data from the file to end up in, but with every column of type varchar(255) or something. Sucking the data out of the file and into that table shouldn't fail whether any of the dates is valid or not. Then, in SQL, you could massage the data however you want. You could use a cursor to select all of the records from the temporary table and loop through them. For each record, you could use the T-SQL ISDATE function to conditionally insert the values from the current record into one table or another. I'm saying, get the data into the database and then run script like this: // **this is untested, there could be syntax errors** // if we have tables like this: CREATE TABLE tempoary (id VARCHAR(255), theDate VARCHAR(255), somethingElse VARCHAR(255)) CREATE TABLE theGood (id INT, theDate DATETIME, somethingElse VARCHAR(255)) CREATE TABLE theBad (id INT, theDate VARCHAR(255)) // then after getting the data into [tempoary], do this: DECLARE tempCursor CURSOR FOR SELECT id, theDate, somethingElse FROM temporary OPEN tempCursor DECLARE @id VARCHAR(255) DECLARE @theDate VARCHAR(255) DECLARE @somethingElse VARCHAR(255) FETCH NEXT FROM tempCursor INTO @id, @theDate, @somethingElse While (@@FETCH_STATUS <> -1) BEGIN IF ISDATE(@theDate) BEGIN INSERT INTO theGood (id, theDate, somethingElse) VALUES (CONVERT(INT, @id), CONVERT(DATETIME, theDate), somethingElse) END ELSE BEGIN INSERT INTO theBad (id, theDate) VALUES (CONVERT(INT, @id), theDate) END FETCH NEXT FROM tempCursor INTO @id, @theDate, @somethingElse END CLOSE tempCursor DEALLOCATE tempCursor
Get millions of records from fixed-width flat file to SQL 2000
Obviously I can use BCP, but here is the issue: if one of the records in a batch has an invalid date, I want to redirect that record to a separate table/file/whatever, but keep the batch processing running. I don't think SSIS can be installed on the server, which would have helped.
[ "Create a trigger that processes on INSERT. This trigger will do a validation check on your date field. If it fails the validation, then do an insert into your separate table, and you can also choose to continue the insert or not allow it to go through.\nan important note: by default triggers do not fire on bulk inserts (BCP & SSIS included). To get this to work, you'll need to specify that you want the trigger to fire, using something like:\nBULK INSERT your_database.your_schema.your_table FROM your_file WITH (FIRE_TRIGGERS )\n\n", "Yeah, if you are using DTS, you should just import into a staging table that uses varchar instead of dates and then massage the data into the proper tables afterwords.\n", "The problem with What Matt said is that you should not use a cursor to manipulate the data afterwards especially if you have millions of records. CUrsoprs are extremely inefficient and should be avoided.\nUse batch processing instead.\nBut by all means use his idea of a staging table. I wouldn' ever consider importing directly into a production table as too many things can happen over time to change the data in the input file and cause problems.\n", "You're saying there's a column full of dates in the file, and you want that data to go into a column of type \"datetime\" in a table in a SQL database? And it'll blow up if one of the values from the file isn't a valid date? I just wanted to make sure I understand this right.\nYou could create another, temporary, table in the SQL database, of the same structure as the table you want the data from the file to end up in, but with every column of type varchar(255) or something. Sucking the data out of the file and into that table shouldn't fail whether any of the dates is valid or not.\nThen, in SQL, you could massage the data however you want. You could use a cursor to select all of the records from the temporary table and loop through them. For each record, you could use the T-SQL ISDATE function to conditionally insert the values from the current record into one table or another.\nI'm saying, get the data into the database and then run script like this:\n// **this is untested, there could be syntax errors**\n\n// if we have tables like this:\nCREATE TABLE tempoary (id VARCHAR(255), theDate VARCHAR(255), somethingElse VARCHAR(255))\nCREATE TABLE theGood (id INT, theDate DATETIME, somethingElse VARCHAR(255))\nCREATE TABLE theBad (id INT, theDate VARCHAR(255))\n\n// then after getting the data into [tempoary], do this:\nDECLARE tempCursor CURSOR\nFOR SELECT id, theDate, somethingElse FROM temporary\n\nOPEN tempCursor\n\nDECLARE @id VARCHAR(255)\nDECLARE @theDate VARCHAR(255)\nDECLARE @somethingElse VARCHAR(255)\n\nFETCH NEXT FROM tempCursor INTO @id, @theDate, @somethingElse\nWhile (@@FETCH_STATUS <> -1)\nBEGIN\n IF ISDATE(@theDate)\n BEGIN\n INSERT INTO theGood (id, theDate, somethingElse)\n VALUES (CONVERT(INT, @id), CONVERT(DATETIME, theDate), somethingElse)\n END\n ELSE\n BEGIN\n INSERT INTO theBad (id, theDate)\n VALUES (CONVERT(INT, @id), theDate)\n END\n FETCH NEXT FROM tempCursor INTO @id, @theDate, @somethingElse\nEND\nCLOSE tempCursor\nDEALLOCATE tempCursor\n\n" ]
[ 5, 3, 3, 0 ]
[]
[]
[ "c#", "sql_server" ]
stackoverflow_0000110325_c#_sql_server.txt
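Another way to keep the batch running, complementary to the staging-table and trigger suggestions above, is to pre-split the fixed-width file before BCP ever sees it, so rows with invalid dates land in a reject file instead of breaking the load. A rough Python sketch; the column offsets and the YYYYMMDD format are placeholders for the real record layout:
from datetime import datetime

DATE_SLICE = slice(10, 18)     # e.g. columns 11-18 hold the date as YYYYMMDD
DATE_FORMAT = "%Y%m%d"

def split_file(src_path, good_path, bad_path):
    good = bad = 0
    with open(src_path) as fin, \
         open(good_path, "w") as fgood, \
         open(bad_path, "w") as fbad:
        for line in fin:
            try:
                datetime.strptime(line[DATE_SLICE], DATE_FORMAT)
            except ValueError:
                fbad.write(line)      # reject the row, but keep processing
                bad += 1
            else:
                fgood.write(line)     # safe to hand to bcp afterwards
                good += 1
    return good, bad

if __name__ == "__main__":
    print(split_file("batch.txt", "batch_good.txt", "batch_bad.txt"))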
Q: How can I detect, using php, if the machine has oracle (oci8 and/or pdo_oci) installed? How can I detect, using php, if the machine has oracle (oci8 and/or pdo_oci) installed? I'm working on a PHP project where some developers, such as myself, have it installed, but there's little need for the themers to have it. How can I write a quick function to use in the code so that my themers are able to work on the look of the site without having it crash on them? A: if the oci extension isn't installed, then you'll get a fatal error with farside.myopenid.com's answer, you can use function_exists('oci_connect') or extension_loaded('oci8') (or whatever the extension's actually called) A: The folks here have pieces of the solution, but let's roll it all into one solution. For just a single instance of an oracle function, testing with function_exists() is good enough; but if the code is sprinkled throughout to OCI calls, it's going to be a huge pain in the ass to wrap every one in a function_exists() test. Therefore, I think the simplest solution would be to create a file called nodatabase.php that might look something like this: <?php // nodatabase.php // explicitly override database functions with empty stubs. Only include this file // when you want to run the code without an actual database backend. Any database- // related functions used in the codebase must be included below. function oci_connect($user, $password, $db = '', $charset='UTF-8', $session_mode=null) { } function oci_execute($statement, $mode=0) { } // and so on... Then, conditionally include this file if a global (say, THEME_TESTING) is defined just ahead of where the database code is called. Such an include might look like this: // define("THEME_TESTING", true) // uncomment this line to disable database usage if( defined(THEME_TESTING) ) include('nodatabase.php'); // override oracle API with stub functions for the artists. Now, when you hand the project over to the artists, they simply need to make that one modification and they're good to go. A: I dont know if I fully understand your question but a simple way would be to do this: <?php $connection = oci_connect('username', 'password', 'table'); if (!$connection) { // no OCI connection. } ?> A: As mentioned above by Greg, programmatically you can use the function_exists() method. Don't forget you can also use the following to see all the environment specifics with your PHP install using the following: <?php phpinfo(); ?>
How can I detect, using php, if the machine has oracle (oci8 and/or pdo_oci) installed?
How can I detect, using php, if the machine has oracle (oci8 and/or pdo_oci) installed? I'm working on a PHP project where some developers, such as myself, have it installed, but there's little need for the themers to have it. How can I write a quick function to use in the code so that my themers are able to work on the look of the site without having it crash on them?
[ "if the oci extension isn't installed, then you'll get a fatal error with farside.myopenid.com's answer, you can use function_exists('oci_connect') or extension_loaded('oci8') (or whatever the extension's actually called)\n", "The folks here have pieces of the solution, but let's roll it all into one solution.\nFor just a single instance of an oracle function, testing with function_exists() is good enough; but if the code is sprinkled throughout to OCI calls, it's going to be a huge pain in the ass to wrap every one in a function_exists() test.\nTherefore, I think the simplest solution would be to create a file called nodatabase.php that might look something like this:\n<?php\n// nodatabase.php\n// explicitly override database functions with empty stubs. Only include this file\n// when you want to run the code without an actual database backend. Any database-\n// related functions used in the codebase must be included below.\nfunction oci_connect($user, $password, $db = '', $charset='UTF-8', $session_mode=null)\n{\n}\n\nfunction oci_execute($statement, $mode=0)\n{\n}\n// and so on...\n\nThen, conditionally include this file if a global (say, THEME_TESTING) is defined just ahead of where the database code is called. Such an include might look like this:\n// define(\"THEME_TESTING\", true) // uncomment this line to disable database usage\nif( defined(THEME_TESTING) )\n include('nodatabase.php'); // override oracle API with stub functions for the artists.\n\nNow, when you hand the project over to the artists, they simply need to make that one modification and they're good to go.\n", "I dont know if I fully understand your question but a simple way would be to do this:\n<?php\n $connection = oci_connect('username', 'password', 'table');\n if (!$connection) {\n // no OCI connection.\n }\n?>\n\n", "As mentioned above by Greg, programmatically you can use the function_exists() method. Don't forget you can also use the following to see all the environment specifics with your PHP install using the following:\n<?php\nphpinfo();\n?>\n\n" ]
[ 5, 1, 0, 0 ]
[]
[]
[ "oracle", "php" ]
stackoverflow_0000111440_oracle_php.txt
Q: Multiple Tables in a TClientDataset? Is it possible to put the results from more than one query on more than one table into a TClientDataset? Just something like SELECT * from t1; SELECT * from t2; SELECT * from t3; I can't seem to figure out a way to get a data provider (SetProvider) to pull in results from more than one table at a time. A: ClientDatasets can contain fields that are themselves other datasets. So if you want to create three tables in a single dataset, create three ClientDatasets holding the three result sets that you want, and then you can put them into a single ClientDataSet. This article: http://dn.codegear.com/article/29001 shows you how to do it both at runtime and at designtime. Pay particular attention to the section entitled: "Creating a ClientDataSet's Structure at Runtime using TFields" A: The only way would be to join the tables. But then you have to provide the criteria of the join through joined foreign keys. select * from t1, t2, t3 where t1.key = t2.key and t2.key = t3.key; Now suppose you came up with a key (like LineNr) that would allow for such a join. You then could use a full outer join to include all records (important if not all tables have the same number of rows). But this would somehow be a hack. Be sure not to take auto_number for the key, as it does not reuse keys and therefore tends to leave holes in the numbering, resulting in many lines that are only partially filled with values. If you want to populate a clientdataset from multiple tables that have the same set of fields, you can use the UNION operator to do so. This will just use the same columns and combine all rows into one table. A: There is not a way to have multiple table data in the same TClientDataSet like you referenced. The TClientDataSet holds a single cursor for a single dataset.
Multiple Tables in a TClientDataset?
Is it possible to put the results from more than one query on more than one table into a TClientDataset? Just something like SELECT * from t1; SELECT * from t2; SELECT * from t3; I can't seem to figure out a way to get a data provider (SetProvider) to pull in results from more than one table at a time.
[ "ClientDatasets can contain fields that are themselves other datasets. So if you want to create three tables in a single dataset, create three ClientDatasets holding the three result sets that you want, and then you can put them into a single ClientDataSet.\nThis article:\nhttp://dn.codegear.com/article/29001\nshows you how to do it both at runtime and at designtime. Pay particular attention to the section entitled:\n\"Creating a ClientDataSet's Structure at Runtime using TFields\"\n", "The only way would be to join the tables. But then you have to provide the criteria of the join through joined foreign keys.\nselect * from t1, t2, t3 where t1.key = t2.key and t2.key = t3.key;\n\nNow suppose you came up with a key (like LineNr) that would allow for such a join. You then could use a full outer join to include all records (important if not all tables have the same number of rows). But this would somehow be a hack. Be sure not to take auto_number for the key, as it does not reuse keys and therefore tends to leave holes in the numbering, resulting in many lines that are only partially filled with values.\nIf you want to populate a clientdataset from multiple tables that have the same set of fields, you can use the UNION operator to do so. This will just use the same columns and combine all rows into one table.\n", "There is not a way to have multiple table data in the same TClientDataSet like you referenced. The TClientDataSet holds a single cursor for a single dataset.\n" ]
[ 12, 4, 0 ]
[]
[]
[ "database", "delphi", "sqlite", "tclientdataset" ]
stackoverflow_0000111287_database_delphi_sqlite_tclientdataset.txt
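The UNION suggestion in the second answer, for tables that share the same field list, amounts to a single statement whose result set the provider can hand to one ClientDataSet. A rough sketch, using the t1/t2/t3 tables from the question and hypothetical column names:
-- Combine same-shaped tables into one result set for the dataset provider.
SELECT id, name, amount FROM t1
UNION ALL
SELECT id, name, amount FROM t2
UNION ALL
SELECT id, name, amount FROM t3;
UNION ALL keeps duplicate rows and skips the de-duplication pass; plain UNION removes duplicates, as the answer describes.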
Q: How do I get my Twitter feed to integrate with a blog (with individual comment threads)? I would like to create a blog where my Twitter updates essentially create blog posts, with a comment thread. If there isn't blog software that does this right now (I did some searching but couldn't find the commenting aspect) what would be the simplest approach and starting blog software to do this? Potentially an alternate approach to this would be a blog interface that could auto-update my Twitter feed with the title text. Whatever the solution, I'd like it to be fully automated so that it is roughly no more work than currently updating my Twitter feed using the Twitter web interface. Note: I'm also interested in 'normal' blog posting via the default blog web admin interface. A: You could use something like Tumblr or Sweetcron with Disqus comments. You can auto-import your Twitter/Flickr/any RSS feed. You can also post text/audio/video from the site admin. You'll have to manually add Disqus comments, but then each post or Twitter message will have its own threaded comments. A: If you would like to use Wordpress, you can use the Twitter Tools plugin. "Pull your tweets into your blog and create new tweets on blog posts and from within WordPress." Each tweet/blog post would automatically have comments enabled. Good luck man, Brian Gianforcaro
How do I get my Twitter feed to integrate with a blog (with individual comment threads)?
I would like to create a blog where my Twitter updates essentially create blog posts, with a comment thread. If there isn't blog software that does this right now (I did some searching but couldn't find the commenting aspect) what would be the simplest approach and starting blog software to do this? Potentially an alternate approach to this would be a blog interface that could auto-update my Twitter feed with the title text. Whatever the solution, I'd like it to be fully automated so that it is roughly no more work than currently updating my Twitter feed using the Twitter web interface. Note: I'm also interested in 'normal' blog posting via the default blog web admin interface.
[ "You could use something like Tumblr or Sweetcron with Disqus comments. You can auto-import your Twitter/Flickr/any RSS feed. You can also post text/audio/video from the site admin. You'll have to manually add Disqus comments, but then each post or Twitter message will have its own threaded comments.\n", "If you would like to use Wordpress, you can use the Twitter Tools plugin. \n\n\"Pull your tweets into your blog and create new tweets on blog posts and from within WordPress.\"\n\nEach tweet/blog post would automatically have comments enabled. \nGood luck man, \nBrian Gianforcaro\n" ]
[ 2, 2 ]
[]
[]
[ "blogs", "comments", "customization", "twitter" ]
stackoverflow_0000112162_blogs_comments_customization_twitter.txt
Q: Background color stretches accross entire width of ul I have a simple list I am using for a horizontal menu: <ul> <h1>Menu</h1> <li> <a href="/" class="selected">Home</a> </li> <li> <a href="/Home">Forum</a> </li> </ul> When I add a background color to the selected class, only the text gets the color, I want it to stretch the entire distance of the section. Hope this makes sense. A: The a element is an inline element, meaning it only applies to the text it encloses. If you want the background color to stretch across horizontally, apply the selected class to a block level element. Applying the class to the li element should work fine. Alternatively, you could add this to the selected class' CSS: display: block; Which will make the a element display like a block element. A: Everyone is correct that your problem is that anchors are inline elements, but I thought it is also worth mentioning that you have an H1 inside of your list as well. The H1 isn't allowed there and should be pulled out of the UL or placed into an LI tag. A: Would something like this work? .selected { display: block; width: 100%; background: #BEBEBE; } A: Put the selected class on the <li> and not the <a> A: <a> elements are inline by default. This means that they don't establish their own block, they are just part of the text. You want them to establish their own block, so you should use a { display: block; } with an appropriate context. This also enables you to add padding to the <a> elements rather than the <li> elements, making their clickable area larger, and thus easier to use.
Background color stretches across entire width of ul
I have a simple list I am using for a horizontal menu: <ul> <h1>Menu</h1> <li> <a href="/" class="selected">Home</a> </li> <li> <a href="/Home">Forum</a> </li> </ul> When I add a background color to the selected class, only the text gets the color, I want it to stretch the entire distance of the section. Hope this makes sense.
[ "The a element is an inline element, meaning it only applies to the text it encloses. If you want the background color to stretch across horizontally, apply the selected class to a block level element. Applying the class to the li element should work fine.\nAlternatively, you could add this to the selected class' CSS:\ndisplay: block;\n\nWhich will make the a element display like a block element.\n", "Everyone is correct that your problem is that anchors are inline elements, but I thought it is also worth mentioning that you have an H1 inside of your list as well. The H1 isn't allowed there and should be pulled out of the UL or placed into an LI tag.\n", "Would something like this work?\n.selected {\n display: block;\n width: 100%;\n background: #BEBEBE;\n}\n\n", "Put the selected class on the <li> and not the <a>\n", "<a> elements are inline by default. This means that they don't establish their own block, they are just part of the text. You want them to establish their own block, so you should use a { display: block; } with an appropriate context. This also enables you to add padding to the <a> elements rather than the <li> elements, making their clickable area larger, and thus easier to use.\n" ]
[ 11, 3, 2, 1, 1 ]
[]
[]
[ "css" ]
stackoverflow_0000112093_css.txt
Q: Has anyone got an example of aerith style swing mixed with GUI maintainability of SWT editing? My boss loves VB (we work in a Java shop) because he thinks it's easy to learn and maintain. We want to replace some of the VB with java equivalents using the Eclipse SWT editor, because we think it is almost as easy to maintain. To sell this, we'd like to use an aerith style L&F. Can anyone provide an example of an SWT application still being able to edit the GUI in eclipse, but having the Aerith L&F? A: SWT doesn't support look & feels. You can get different L&F's by altering your OS native L&F. The only exception is to using the eclipse forms toolkit. It still has the OS native feel, but strives for a web-browser-like look. It does this mostly by setting everything to SWT.FLAT, and using white backgrounds on everything. Occassionally, they have to manually draw outlines around controls that don't natively support it. If you're looking for custom L&F's that will appear across platforms, you really want Swing. A: Like Heath Borders said, SWT doesn't support L&Fs, so you have to use Swing for that. Aerith however does not base on a look and feel, but on custom painting on the components with a lot of gradients. If you are looking for a Swing GUI Editor that is (nearly) as easy to use as VB, try the Matisse GUI Builder in NetBeans. There is also a version for Eclipse, but it is shipped with the commercial MyEclipse. If you want to learn more about writing apps with cool a cool GUI, have a look at the Filthy Rich Clients book by Chet Haase and Romain Guy. If this does not convince your boss, try to resize the VB GUI and then resize the Swing GUI. ;-) And I would say a VB is really not very good to maintain in the long run...
Has anyone got an example of Aerith-style Swing mixed with GUI maintainability of SWT editing?
My boss loves VB (we work in a Java shop) because he thinks it's easy to learn and maintain. We want to replace some of the VB with Java equivalents using the Eclipse SWT editor, because we think it is almost as easy to maintain. To sell this, we'd like to use an Aerith-style L&F. Can anyone provide an example of an SWT application still being able to edit the GUI in Eclipse, but having the Aerith L&F?
[ "SWT doesn't support look & feels. You can get different L&F's by altering your OS native L&F. The only exception is to using the eclipse forms toolkit. It still has the OS native feel, but strives for a web-browser-like look. It does this mostly by setting everything to SWT.FLAT, and using white backgrounds on everything. Occassionally, they have to manually draw outlines around controls that don't natively support it. If you're looking for custom L&F's that will appear across platforms, you really want Swing.\n", "Like Heath Borders said, SWT doesn't support L&Fs, so you have to use Swing for that. Aerith however does not base on a look and feel, but on custom painting on the components with a lot of gradients.\nIf you are looking for a Swing GUI Editor that is (nearly) as easy to use as VB, try the Matisse GUI Builder in NetBeans. There is also a version for Eclipse, but it is shipped with the commercial MyEclipse. If you want to learn more about writing apps with cool a cool GUI, have a look at the Filthy Rich Clients book by Chet Haase and Romain Guy.\nIf this does not convince your boss, try to resize the VB GUI and then resize the Swing GUI. ;-) And I would say a VB is really not very good to maintain in the long run...\n" ]
[ 1, 1 ]
[]
[]
[ "eclipse", "java", "lf", "swing", "swt" ]
stackoverflow_0000097586_eclipse_java_lf_swing_swt.txt
Q: How to auto-focus RTE editor inside firefox? We have a RTE editor based on htmlarea which consists of content with editmode enabled inside an iframe. The question is how to automatically bring the focus into the editor? A: Where the id of the IFRAME is myRTE: var iframe = document.getElementById("myRTE"); if ( iframe && iframe.contentWindow ) iframe.contentWindow.focus();
How to auto-focus RTE editor inside Firefox?
We have an RTE editor based on htmlarea, which consists of content with editmode enabled inside an iframe. The question is how to automatically bring the focus into the editor.
[ "Where the id of the IFRAME is myRTE:\nvar iframe = document.getElementById(\"myRTE\");\nif ( iframe && iframe.contentWindow )\n iframe.contentWindow.focus();\n\n" ]
[ 2 ]
[]
[]
[ "editmode", "firefox", "focus", "iframe", "javascript" ]
stackoverflow_0000112261_editmode_firefox_focus_iframe_javascript.txt
Q: What can cause an ASP.NET worker process to be recycled? Here is my current question: I'm guessing that my problem (described below) is being caused by ASP.NET worker processes being recycled, per the answers below—I'm using InProc sessions storage and don't see much chance of moving away, due to the restriction for other types of storage that all session objects be serializable. However, I can't figure out what would make the worker process be recycled as often as I'm seeing it—there wasn't any changing of the files in the app directory as far as I know, and the options in IIS seem to imply that the process would only be recycled every 1,740 minutes—which is much less frequent than the actual session loss. So, my question is now, what different cases can cause an ASP.NET worker process to be recycled? Here is my original question: I have a difficult-to-reproduce problem that occurs in my ASP.NET web application. The application has one main .aspx page that is loaded and initializes a number of session variables. This page uses the ASP.NET Ajax Sys.Net.WebRequest class to repeatedly access another .aspx page, which uses the session variables to make database queries and update the main page (the main page is never re-requested). Occasionally, after a period of time using the page, causing successful HTTP requests where the session created in the main page properly carries over to the subpage, one of the requests seems to cause a new ASP.NET session to be created—all the session variables are lost (causing an exception to be thrown in my code), and a new session id is reported in the dynamically requested page. That means that suddenly, the main page is disconnected from the server—as far as the server is concerned, the user is no longer logged in. I'm nearly positive it's not a session timeout—the timeout time is set to something ridiculous, the amount of time it takes to get this to happen is variable but is never long enough to cause the session to time out, and the constant Sys.Net.WebRequests should refresh the session timer. So, what else could be happening that would cause the HTTP requests to lose contact with the ASP.NET session? I unfortunately haven't been sniffing network traffic when this has happened to me, or I would've checked if the ASP.NET session cookie has stuck around or not. A: One solution would be to use a StateServer, rather than InProc session management. Lots of things can cause the session state to be lost: Editing Web.Config IIS resetting etc. If the session state is important to your app then use either SQL state management, or the State Server which ships with ASP. NET. Cheers, RB. A: We had problems of Session when we did migrating the AnkerEx application to the new server. The new server had Microsoft Windows Server 2008 as operation system and Microsoft Internet Information Services 7. Also in the server were installed .NET Framework of versions 1.0.3705, 1.1.4322, 2.0.50727, 3.0 and 3.5. For solving of this problem i have done enabling health monitoring for application's Lifetime related events in ASP.NET 2.0. I had added to the web.config: ... ... <system.web> ... ... <healthMonitoring> <rules> <add name="Application Events" eventName="Application Lifetime Events" provider="EventLogProvider" profile="Default" minInterval="00:01:00" /> </rules> </healthMonitoring> ... ... It is help to us to check the AppDomain recycles. We can see it at our Event Viewer. 
The link to more details is http://blogs.msdn.com/rahulso/archive/2006/04/13/575715.aspx After I have done adding to web.config, the Event Viewer showed me that my application is restarting every time when i do click to almost any link in my application. From the article of http://blogs.msdn.com/toddca/archive/2005/12/01/499144.aspx i found out that ASP.NET has the new behavior - if we will do deleting, for example a sub-directory of the application's root directory, then ASP.NET 2.0 will do the restarting AppDomain. The problem was in that that I had in the web.config the instruction: ... <compilation debug="true" tempDirectory="c:\AnkerEx\Temporary ASP.NET files"> ... I.e. the ASP.NET did compiling of aspx pages in folder of my application root. I think he created folders, may be and did removing some of them also. I removed tempDirectory instruction and the application began work stable. A: The worker process is probably cycling. http://www.lattimore.id.au/2006/06/03/iis-dropping-sessions/ A: It could be caused by an unhandled exception in a background thread. It can cause your ASP.NET worker process to terminate. A new process is started very quickly so you don't actually notice it but all your sessions are lost. Here is an article that explains it much better than I can: ASP.NET 2.0 Unhandled Exception Issues quote: An unhandled exception in a running ASP.NET 2.0 application will usually terminate the W3WP.exe process, and leave you with a very cryptic EventLog entry something like this: "EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.1830, P3 42435be1, P4 app_web_ncsnb2-n, P5 0.0.0.0, P6 440a4082, P7 5, P8 1, P9 system.nullreferenceexception, P10 NIL." Here is a Microsoft KB article that explains the same issue: KB911816 Unhandled exceptions cause ASP.NET-based applications to unexpectedly quit in the .NET Framework 2.0 A: My guess would be memory consumption - but, set up IIS to log recycles and you'll know for sure.
What can cause an ASP.NET worker process to be recycled?
Here is my current question: I'm guessing that my problem (described below) is being caused by ASP.NET worker processes being recycled, per the answers below—I'm using InProc sessions storage and don't see much chance of moving away, due to the restriction for other types of storage that all session objects be serializable. However, I can't figure out what would make the worker process be recycled as often as I'm seeing it—there wasn't any changing of the files in the app directory as far as I know, and the options in IIS seem to imply that the process would only be recycled every 1,740 minutes—which is much less frequent than the actual session loss. So, my question is now, what different cases can cause an ASP.NET worker process to be recycled? Here is my original question: I have a difficult-to-reproduce problem that occurs in my ASP.NET web application. The application has one main .aspx page that is loaded and initializes a number of session variables. This page uses the ASP.NET Ajax Sys.Net.WebRequest class to repeatedly access another .aspx page, which uses the session variables to make database queries and update the main page (the main page is never re-requested). Occasionally, after a period of time using the page, causing successful HTTP requests where the session created in the main page properly carries over to the subpage, one of the requests seems to cause a new ASP.NET session to be created—all the session variables are lost (causing an exception to be thrown in my code), and a new session id is reported in the dynamically requested page. That means that suddenly, the main page is disconnected from the server—as far as the server is concerned, the user is no longer logged in. I'm nearly positive it's not a session timeout—the timeout time is set to something ridiculous, the amount of time it takes to get this to happen is variable but is never long enough to cause the session to time out, and the constant Sys.Net.WebRequests should refresh the session timer. So, what else could be happening that would cause the HTTP requests to lose contact with the ASP.NET session? I unfortunately haven't been sniffing network traffic when this has happened to me, or I would've checked if the ASP.NET session cookie has stuck around or not.
[ "One solution would be to use a StateServer, rather than InProc session management.\nLots of things can cause the session state to be lost:\n\nEditing Web.Config\nIIS resetting\netc.\n\nIf the session state is important to your app then use either SQL state management, or the State Server which ships with ASP. NET.\nCheers,\nRB.\n", "We had problems of Session when we did migrating the AnkerEx application to the \nnew server. The new server had Microsoft Windows Server 2008 as operation system \nand Microsoft Internet Information Services 7. Also in the server were installed \n.NET Framework of versions 1.0.3705, 1.1.4322, 2.0.50727, 3.0 and 3.5.\nFor solving of this problem i have done enabling health monitoring for \napplication's Lifetime related events in ASP.NET 2.0. I had added to the web.config:\n...\n...\n<system.web>\n...\n...\n <healthMonitoring>\n <rules>\n <add name=\"Application Events\"\n eventName=\"Application Lifetime Events\"\n provider=\"EventLogProvider\"\n profile=\"Default\"\n minInterval=\"00:01:00\" />\n </rules>\n </healthMonitoring>\n...\n...\n\nIt is help to us to check the AppDomain recycles. We can see it at our Event Viewer.\nThe link to more details is http://blogs.msdn.com/rahulso/archive/2006/04/13/575715.aspx\nAfter I have done adding to web.config, the Event Viewer showed me that my \napplication is restarting every time when i do click to almost any link in my \napplication.\nFrom the article of http://blogs.msdn.com/toddca/archive/2005/12/01/499144.aspx i \nfound out that ASP.NET has the new behavior - if we will do deleting, for example \na sub-directory of the application's root directory, then ASP.NET 2.0 will do the \nrestarting AppDomain.\nThe problem was in that that I had in the web.config the instruction:\n...\n<compilation debug=\"true\" tempDirectory=\"c:\\AnkerEx\\Temporary ASP.NET files\">\n...\n\nI.e. the ASP.NET did compiling of aspx pages in folder of my application root. \nI think he created folders, may be and did removing some of them also. I removed \ntempDirectory instruction and the application began work stable.\n", "The worker process is probably cycling.\nhttp://www.lattimore.id.au/2006/06/03/iis-dropping-sessions/ \n", "It could be caused by an unhandled exception in a background thread. It can cause your ASP.NET worker process to terminate. A new process is started very quickly so you don't actually notice it but all your sessions are lost.\nHere is an article that explains it much better than I can: ASP.NET 2.0 Unhandled Exception Issues\nquote:\n\nAn unhandled exception in a running ASP.NET 2.0 application will usually terminate the W3WP.exe process, and leave you with a very cryptic EventLog entry something like this:\n\"EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.1830, P3 42435be1, P4 app_web_ncsnb2-n, P5 0.0.0.0, P6 440a4082, P7 5, P8 1, P9 system.nullreferenceexception, P10 NIL.\"\n\nHere is a Microsoft KB article that explains the same issue: KB911816 Unhandled exceptions cause ASP.NET-based applications to unexpectedly quit in the .NET Framework 2.0\n", "My guess would be memory consumption - but, set up IIS to log recycles and you'll know for sure.\n" ]
[ 3, 3, 1, 1, 1 ]
[]
[]
[ "ajax", "asp.net", "session" ]
stackoverflow_0000094042_ajax_asp.net_session.txt
Q: Binding TRANSFORM query in Access to a report Whats the best way to bind variable column names to a report field in Access when using a crosstab query? A: This page has an exhaustive example of setting up a dynamic column ("crosstab"-type) report in Access. http://www.blueclaw-db.com/report_dynamic_crosstab_field.htm (From google search: access transform query report) A: The best article I found for binding columns from a crosstab query to a report is from ewbi.develops's notes. Specifically, PARAMETERS foryear Short; TRANSFORM Sum(mytable.amount) AS total SELECT mytable.project FROM mytable WHERE mytable.year In ([foryear],[foryear]+1) GROUP BY mytable.project PIVOT IIf(mytable.year=[foryear],"thisyear","nextyear") IN ("thisyear", "nextyear"); This only displays two columns that can be bound as needed.
Binding TRANSFORM query in Access to a report
What's the best way to bind variable column names to a report field in Access when using a crosstab query?
[ "This page has an exhaustive example of setting up a dynamic column (\"crosstab\"-type) report in Access.\nhttp://www.blueclaw-db.com/report_dynamic_crosstab_field.htm\n(From google search: access transform query report)\n", "The best article I found for binding columns from a crosstab query to a report is from ewbi.develops's notes.\nSpecifically,\nPARAMETERS foryear Short;\nTRANSFORM Sum(mytable.amount) AS total\nSELECT mytable.project\nFROM mytable\nWHERE mytable.year In ([foryear],[foryear]+1)\nGROUP BY mytable.project\nPIVOT IIf(mytable.year=[foryear],\"thisyear\",\"nextyear\") IN (\"thisyear\", \"nextyear\");\n\nThis only displays two columns that can be bound as needed. \n" ]
[ 1, 1 ]
[]
[]
[ "ms_access", "report", "sql", "transform" ]
stackoverflow_0000109251_ms_access_report_sql_transform.txt
Q: SQL Server 2005 implementation of MySQL REPLACE INTO? MySQL has this incredibly useful yet proprietary REPLACE INTO SQL Command. Can this easily be emulated in SQL Server 2005? Starting a new Transaction, doing a Select() and then either UPDATE or INSERT and COMMIT is always a little bit of a pain, especially when doing it in the application and therefore always keeping 2 versions of the statement. I wonder if there is an easy and universal way to implement such a function into SQL Server 2005? A: This is something that annoys me about MSSQL (rant on my blog). I wish MSSQL supported upsert. @Dillie-O's code is a good way in older SQL versions (+1 vote), but it still is basically two IO operations (the exists and then the update or insert) There's a slightly better way on this post, basically: --try an update update tablename set field1 = 'new value', field2 = 'different value', ... where idfield = 7 --insert if failed if @@rowcount = 0 and @@error = 0 insert into tablename ( idfield, field1, field2, ... ) values ( 7, 'value one', 'another value', ... ) This reduces it to one IO operations if it's an update, or two if an insert. MS Sql2008 introduces merge from the SQL:2003 standard: merge tablename as target using (values ('new value', 'different value')) as source (field1, field2) on target.idfield = 7 when matched then update set field1 = source.field1, field2 = source.field2, ... when not matched then insert ( idfield, field1, field2, ... ) values ( 7, source.field1, source.field2, ... ) Now it's really just one IO operation, but awful code :-( A: The functionality you're looking for is traditionally called an UPSERT. Atleast knowing what it's called might help you find what you're looking for. I don't think SQL Server 2005 has any great ways of doing this. 2008 introduces the MERGE statement that can be used to accomplish this as shown in: http://www.databasejournal.com/features/mssql/article.php/3739131 or http://blogs.conchango.com/davidportas/archive/2007/11/14/SQL-Server-2008-MERGE.aspx Merge was available in the beta of 2005, but they removed it out in the final release. A: What the upsert/merge is doing is something to the effect of... IF EXISTS (SELECT * FROM [Table] WHERE Id = X) UPDATE [Table] SET... ELSE INSERT INTO [Table] So hopefully the combination of those articles and this pseudo code can get things moving. A: I wrote a blog post about this issue. The bottom line is that if you want cheap updates and want to be safe for concurrent usage, try: update t set hitCount = hitCount + 1 where pk = @id if @@rowcount < 1 begin begin tran update t with (serializable) set hitCount = hitCount + 1 where pk = @id if @@rowcount = 0 begin insert t (pk, hitCount) values (@id,1) end commit tran end This way you have 1 operation for updates and a max of 3 operations for inserts. So, if you are generally updating, this is a safe cheap option. I would also be very careful not to use anything that is unsafe for concurrent usage. It's really easy to get primary key violations or duplicate rows in production.
SQL Server 2005 implementation of MySQL REPLACE INTO?
MySQL has this incredibly useful yet proprietary REPLACE INTO SQL Command. Can this easily be emulated in SQL Server 2005? Starting a new Transaction, doing a Select() and then either UPDATE or INSERT and COMMIT is always a little bit of a pain, especially when doing it in the application and therefore always keeping 2 versions of the statement. I wonder if there is an easy and universal way to implement such a function into SQL Server 2005?
[ "This is something that annoys me about MSSQL (rant on my blog). I wish MSSQL supported upsert. \n@Dillie-O's code is a good way in older SQL versions (+1 vote), but it still is basically two IO operations (the exists and then the update or insert)\nThere's a slightly better way on this post, basically:\n--try an update\nupdate tablename \nset field1 = 'new value',\n field2 = 'different value',\n ...\nwhere idfield = 7\n\n--insert if failed\nif @@rowcount = 0 and @@error = 0\n insert into tablename \n ( idfield, field1, field2, ... )\n values ( 7, 'value one', 'another value', ... )\n\nThis reduces it to one IO operations if it's an update, or two if an insert. \nMS Sql2008 introduces merge from the SQL:2003 standard:\nmerge tablename as target\nusing (values ('new value', 'different value'))\n as source (field1, field2)\n on target.idfield = 7\nwhen matched then\n update\n set field1 = source.field1,\n field2 = source.field2,\n ...\nwhen not matched then\n insert ( idfield, field1, field2, ... )\n values ( 7, source.field1, source.field2, ... )\n\nNow it's really just one IO operation, but awful code :-(\n", "The functionality you're looking for is traditionally called an UPSERT. Atleast knowing what it's called might help you find what you're looking for.\nI don't think SQL Server 2005 has any great ways of doing this. 2008 introduces the MERGE statement that can be used to accomplish this as shown in: http://www.databasejournal.com/features/mssql/article.php/3739131 or http://blogs.conchango.com/davidportas/archive/2007/11/14/SQL-Server-2008-MERGE.aspx\nMerge was available in the beta of 2005, but they removed it out in the final release.\n", "What the upsert/merge is doing is something to the effect of...\nIF EXISTS (SELECT * FROM [Table] WHERE Id = X)\n UPDATE [Table] SET...\nELSE\n INSERT INTO [Table]\n\nSo hopefully the combination of those articles and this pseudo code can get things moving.\n", "I wrote a blog post about this issue.\nThe bottom line is that if you want cheap updates and want to be safe for concurrent usage, try:\nupdate t\nset hitCount = hitCount + 1\nwhere pk = @id\n\nif @@rowcount < 1 \nbegin \n begin tran\n update t with (serializable)\n set hitCount = hitCount + 1\n where pk = @id\n if @@rowcount = 0\n begin\n insert t (pk, hitCount)\n values (@id,1)\n end\n commit tran\nend\n\nThis way you have 1 operation for updates and a max of 3 operations for inserts. So, if you are generally updating, this is a safe cheap option.\nI would also be very careful not to use anything that is unsafe for concurrent usage. It's really easy to get primary key violations or duplicate rows in production.\n" ]
[ 62, 20, 17, 10 ]
[]
[]
[ "mysql", "sql_server", "sql_server_2005" ]
stackoverflow_0000000234_mysql_sql_server_sql_server_2005.txt
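A variant of the last answer's pattern that is often used on SQL Server 2005 takes the locks up front with table hints instead of retrying under SERIALIZABLE. This is only a sketch (it reuses the t/pk/hitCount/@id names from that answer), not code from the original post:
BEGIN TRAN;

-- Lock the key (or the key range, if the row is missing) before deciding what to do.
UPDATE t WITH (UPDLOCK, HOLDLOCK)
SET hitCount = hitCount + 1
WHERE pk = @id;

IF @@ROWCOUNT = 0
    INSERT INTO t (pk, hitCount) VALUES (@id, 1);

COMMIT TRAN;
The UPDLOCK and HOLDLOCK hints stop two concurrent callers from both missing the row and both inserting, at the cost of always taking the stronger lock.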
Q: Should I prefix my method with "get" or "load" when communicating with a web service? I'm writing a desktop application that communicates with a web service. Would you name all web-service functions that that fetch data LoadXXXX, since they take a while to execute. Or would you use GetXXXX, for instance when getting just a single object. A: Use MyObject.GetXXXX() when the method returns XXXX. Use MyObject.LoadXXXX() when XXXX will be loaded into MyObject, in other words, when MyObject keeps control of XXXX. The same applies to webservices, I guess. A: I would use Load if you expect it to take "file-time" and Get if you expect it to take "simple DB" time. That is, if the call is expensive, use "Load". A: Get. And then provide a way of calling them asynchronously to emphasize that they may be out to lunch for a while... A: Do what the verb implies. GetXXX implies that something is being returned to the caller, while LoadXXX doesn't necessarily return something as it may be just loading something into memory. For an API, use GetXXX to be clear to the caller that something will be returned. A: Always use Get, except perhaps when actually loading something (eg, loading a file into memory). A: When I read LoadXXX, I'm already thinking that the data comes from some storage media. Since the web service is up in the cloud, GetXXX feels more natural.
Should I prefix my method with "get" or "load" when communicating with a web service?
I'm writing a desktop application that communicates with a web service. Would you name all web-service functions that fetch data LoadXXXX, since they take a while to execute? Or would you use GetXXXX, for instance when getting just a single object?
[ "Use MyObject.GetXXXX() when the method returns XXXX.\nUse MyObject.LoadXXXX() when XXXX will be loaded into MyObject, in other words, when MyObject keeps control of XXXX.\nThe same applies to webservices, I guess.\n", "I would use Load if you expect it to take \"file-time\" and Get if you expect it to take \"simple DB\" time.\nThat is, if the call is expensive, use \"Load\".\n", "Get. And then provide a way of calling them asynchronously to emphasize that they may be out to lunch for a while...\n", "Do what the verb implies. GetXXX implies that something is being returned to the caller, while LoadXXX doesn't necessarily return something as it may be just loading something into memory.\nFor an API, use GetXXX to be clear to the caller that something will be returned.\n", "Always use Get, except perhaps when actually loading something (eg, loading a file into memory).\n", "When I read LoadXXX, I'm already thinking that the data comes from some storage media. Since the web service is up in the cloud, GetXXX feels more natural.\n" ]
[ 11, 4, 3, 1, 0, 0 ]
[]
[]
[ "naming_conventions", "web_services" ]
stackoverflow_0000112121_naming_conventions_web_services.txt
Q: Best introduction to C++ template metaprogramming? Static metaprogramming (aka "template metaprogramming") is a great C++ technique that allows the execution of programs at compile-time. A light bulb went off in my head as soon as I read this canonical metaprogramming example: #include <iostream> using namespace std; template< int n > struct factorial { enum { ret = factorial< n - 1 >::ret * n }; }; template<> struct factorial< 0 > { enum { ret = 1 }; }; int main() { cout << "7! = " << factorial< 7 >::ret << endl; // 5040 return 0; } If one wants to learn more about C++ static metaprogramming, what are the best sources (books, websites, on-line courseware, whatever)? A: [Answering my own question] The best introductions I've found so far are chapter 10, "Static Metaprogramming in C++" from Generative Programming, Methods, Tools, and Applications by Krzysztof Czarnecki and Ulrich W. Eisenecker, ISBN-13: 9780201309775; and chapter 17, "Metaprograms" of C++ Templates: The Complete Guide by David Vandevoorder and Nicolai M. Josuttis, ISBN-13: 9780201734843. Todd Veldhuizen has an excellent tutorial here. A good resource for C++ programming in general is Modern C++ Design by Andrei Alexandrescu, ISBN-13: 9780201704310. This book mixes a bit of metaprogramming with other template techniques. For metaprogramming in particular, see sections 2.1 "Compile-Time Assertions", 2.4 "Mapping Integral Constants to Types", 2.6 "Type Selection", 2.7 "Detecting Convertibility and Inheritance at Compile Time", 2.9 "NullType and EmptyType" and 2.10 "Type Traits". The best intermediate/advanced resource I've found is C++ Template Metaprogramming by David Abrahams and Aleksey Gurtovoy, ISBN-13: 9780321227256 If you'd prefer just one book, get C++ Templates: The Complete Guide since it is also the definitive reference for templates in general. A: Andrei Alexandrescu's Modern C++ Design book covers a lot of this and other tricks for speedy and efficient modern C++ code and is the basis for the Loki library. Also worth mentioning is the Boost libraries, which heavily use these techniques and are usually of very high quality to learn from (although some are quite dense). A: Modern C++ Design, a brilliant book and design pattern framework by Alexandrescu. Word of warning, after reading this book I stopped doing C++ and thought "What the heck, I can just pick a better language and get it for free". A: Two good books that spring to mind are: Modern C++ Design / Andrei Alexandrescu (It's actually 7 years old despite the name!) C++ Templates: The Complete Guide / Vandevoorde & Josuttis It's quite an in-depth field, so a good book like one of these is definitely recommended over websites. Some of the more advanced techniques will have you studying the code for some time to figure out how they work! A: Modern C++ is one of the best introductions I've read. It covers actual useful examples of template metaprogramming. Also take a look at the companion library Loki. A: There won't be a large list of books, as the list of people with a lot of experience is limited. Template metaprogramming started for real around the first C++ Template Programming Workshop in 2000, and many of the authors named so far attended. (IIRC, Andrei didn't.) These pioneers greatly influenced the field, and basically what should be written is now written. Personally, I'd advice Vandevoorde & Josuttis. Alexandrescu's is a tough book if you're new to the field. 
A: google Alexandrescu, Modern C++ Design: Generic Programming and Design Patterns Applied A: Veldhuizen's original papers were good. If you up for a whole book, then there's Vandevoorde's book "C++ Templates Complete Guide". And when you're ready for the master's course, try Alexandrescu's Modern C++ Design.
Best introduction to C++ template metaprogramming?
Static metaprogramming (aka "template metaprogramming") is a great C++ technique that allows the execution of programs at compile-time. A light bulb went off in my head as soon as I read this canonical metaprogramming example: #include <iostream> using namespace std; template< int n > struct factorial { enum { ret = factorial< n - 1 >::ret * n }; }; template<> struct factorial< 0 > { enum { ret = 1 }; }; int main() { cout << "7! = " << factorial< 7 >::ret << endl; // 5040 return 0; } If one wants to learn more about C++ static metaprogramming, what are the best sources (books, websites, on-line courseware, whatever)?
[ "[Answering my own question]\nThe best introductions I've found so far are chapter 10, \"Static Metaprogramming in C++\" from Generative Programming, Methods, Tools, and Applications by Krzysztof Czarnecki and Ulrich W. Eisenecker, ISBN-13: 9780201309775; and chapter 17, \"Metaprograms\" of C++ Templates: The Complete Guide by David Vandevoorder and Nicolai M. Josuttis, ISBN-13: 9780201734843.\n \nTodd Veldhuizen has an excellent tutorial here.\nA good resource for C++ programming in general is Modern C++ Design by Andrei Alexandrescu, ISBN-13: 9780201704310. This book mixes a bit of metaprogramming with other template techniques. For metaprogramming in particular, see sections 2.1 \"Compile-Time Assertions\", 2.4 \"Mapping Integral Constants to Types\", 2.6 \"Type Selection\", 2.7 \"Detecting Convertibility and Inheritance at Compile Time\", 2.9 \"NullType and EmptyType\" and 2.10 \"Type Traits\".\nThe best intermediate/advanced resource I've found is C++ Template Metaprogramming by David Abrahams and Aleksey Gurtovoy, ISBN-13: 9780321227256\nIf you'd prefer just one book, get C++ Templates: The Complete Guide since it is also the definitive reference for templates in general.\n", "Andrei Alexandrescu's Modern C++ Design book covers a lot of this and other tricks for speedy and efficient modern C++ code and is the basis for the Loki library.\nAlso worth mentioning is the Boost libraries, which heavily use these techniques and are usually of very high quality to learn from (although some are quite dense).\n", "Modern C++ Design, a brilliant book and design pattern framework by Alexandrescu. Word of warning, after reading this book I stopped doing C++ and thought \"What the heck, I can just pick a better language and get it for free\".\n", "Two good books that spring to mind are:\n\nModern C++ Design / Andrei Alexandrescu (It's actually 7 years old despite the name!)\nC++ Templates: The Complete Guide / Vandevoorde & Josuttis\n\nIt's quite an in-depth field, so a good book like one of these is definitely recommended over websites. Some of the more advanced techniques will have you studying the code for some time to figure out how they work!\n", "Modern C++ is one of the best introductions I've read. It covers actual useful examples of template metaprogramming. Also take a look at the companion library Loki.\n", "There won't be a large list of books, as the list of people with a lot of experience is limited. Template metaprogramming started for real around the first C++ Template Programming Workshop in 2000, and many of the authors named so far attended. (IIRC, Andrei didn't.) These pioneers greatly influenced the field, and basically what should be written is now written. Personally, I'd advice Vandevoorde & Josuttis. Alexandrescu's is a tough book if you're new to the field.\n", "google Alexandrescu, Modern C++ Design: Generic Programming and Design Patterns Applied\n", "Veldhuizen's original papers were good. If you up for a whole book, then there's Vandevoorde's book \"C++ Templates Complete Guide\". And when you're ready for the master's course, try Alexandrescu's Modern C++ Design.\n" ]
[ 118, 25, 11, 7, 5, 5, 4, 4 ]
[]
[]
[ "c++", "metaprogramming", "templates" ]
stackoverflow_0000112277_c++_metaprogramming_templates.txt
Q: How long does it really take to do something? I mean name off a programming project you did and how long it took, please. The boss has never complained but I sometimes feel like things take too long. But this could be because I am impatient as well. Let me know your experiences for comparison. I've also noticed that things always seem to take longer, sometimes much longer, than originally planned. I don't know why we don't start planning for it but then I think that maybe it's for motivational purposes. Ryan A: It is best to simply time yourself, record your estimates and determine the average percent you're off. Given that, as long as you are consistent, you can appropriately estimate actual times based on when you believed you'd get it done. It's not simply to determine how bad you are at estimating, but rather to take into account the regularity of inevitable distractions (both personal and boss/client-based). This is based on Joel Spolsky's Evidence Based Scheduling, essential reading, as he explains that the primary other important aspect is breaking your tasks down into bite-sized (16-hour max) tasks, estimating and adding those together to arrive at your final project total. A: Gut-based estimates come with experience but you really need to detail out the tasks involved to get something reasonable. If you have a spec or at least some constraints, you can start creating tasks (design users page, design tags page, implement users page, implement tags page, write tags query, ...). Once you do this, add it up and double it. If you are going to have to coordinate with others, triple it. Record your actual time in detail as you go so you can evaluate how accurate you were when the project is complete and hone your estimating skills. A: I completely agree with the previous posters... don't forget your team's workload also. Just because you estimated a project would take 3 months, it doesn't mean it'll be done anywhere near that. I work on a smaller team (5 devs, 1 lead), many of us work on several projects at a time - some big, some small. Depending on the priority of the project, the whims of management and the availability of other teams (if needed), work on a project gets interspersed amongst the others. So, yes, 3 months worth of work may be dead on, but it might be 3 months worth of work over a 6 month period. A: I've done projects between 1 - 6 months on my own, and I always tend to double or quadrouple my original estimates. A: It's effectively impossible to compare two programming projects, as there are too many factors that mean that the metrics from only aren't applicable to another (e.g., specific technologies used, prior experience of the developers, shifting requirements). Unless you are stamping out another system that is almost identical to one you've built previously, your estimates are going to have a low probability of being accurate. A caveat is when you're building the next revision of an existing system with the same team; the specific experience gained does improve the ability to estimate the next batch of work. I've seen too many attempts at estimation methodology, and none have worked. They may have a pseudo-scientific allure, but they just don't work in practice. The only meaningful answer is the relatively short iteration, as advocated by agile advocates: choose a scope of work that can be executed within a short timeframe, deliver it, and then go for the next round. 
Budgets are then allocated on a short-term basis, with the stakeholders able to evaluate whether their money is being effectively spent. If it's taking too long to get anywhere, they can ditch the project. A: Hofstadter's Law: 'It always takes longer than you expect, even when you take Hofstadter's Law into account.' I believe this is because: Work expands to fill the time available to do it. No matter how ruthless you are cutting unnecessary features, you would have been more brutal if the deadlines were even tighter. Unexpected problems occur during the project. In any case, it's really misleading to compare anecdotes, partly because people have selective memories. If I tell you it once took me two hours to write a fully-optimised quicksort, then maybe I'm forgetting the fact that I knew I'd have that task a week in advance, and had been thinking over ideas. Maybe I'm forgetting that there was a bug in it that I spent another two hours fixing a week later. I'm almost certainly leaving out all the non-programming work that goes on: meetings, architecture design, consulting others who are stuck on something I happen to know about, admin. So it's unfair on yourself to think of a rate of work that seems plausible in terms of "sitting there coding", and expect that to be sustained all the time. This is the source of a lot of feelings after the fact that you "should have been quicker". A: I do projects from 2 weeks to 1 year. Generally my estimates are quite good, a posteriori. At the beginning of the project, though, I generally get bashed because my estimates are considered too large. This is because I consider a lot of things that people forget: Time for bug fixing Time for deployments Time for management/meetings/interaction Time to allow requirement owners to change their mind etc The trick is to use evidence based scheduling (see Joel on Software). Thing is, if you plan for a little extra time, you will use it to improve the code base if no problems arise. If problems arise, you are still within the estimates. A: I believe Joel has wrote an article on this: What you can do, is ask each developer on team to lay out his task in detail (what are all the steps that need to be done) and ask them to estimate time needed for each step. Later, when project is done, compare the real time to estimated time, and you'll get the bias for each developer. When a new project is started, ask them to evaluate the time again, and multiply that with bias of each developer to get the values close to what's really expects. After a few projects, you should have very good estimates.
How long does it really take to do something?
I mean name off a programming project you did and how long it took, please. The boss has never complained but I sometimes feel like things take too long. But this could be because I am impatient as well. Let me know your experiences for comparison. I've also noticed that things always seem to take longer, sometimes much longer, than originally planned. I don't know why we don't start planning for it but then I think that maybe it's for motivational purposes. Ryan
[ "It is best to simply time yourself, record your estimates and determine the average percent you're off. Given that, as long as you are consistent, you can appropriately estimate actual times based on when you believed you'd get it done. It's not simply to determine how bad you are at estimating, but rather to take into account the regularity of inevitable distractions (both personal and boss/client-based).\nThis is based on Joel Spolsky's Evidence Based Scheduling, essential reading, as he explains that the primary other important aspect is breaking your tasks down into bite-sized (16-hour max) tasks, estimating and adding those together to arrive at your final project total.\n", "Gut-based estimates come with experience but you really need to detail out the tasks involved to get something reasonable.\nIf you have a spec or at least some constraints, you can start creating tasks (design users page, design tags page, implement users page, implement tags page, write tags query, ...).\nOnce you do this, add it up and double it. If you are going to have to coordinate with others, triple it.\nRecord your actual time in detail as you go so you can evaluate how accurate you were when the project is complete and hone your estimating skills.\n", "I completely agree with the previous posters... don't forget your team's workload also. Just because you estimated a project would take 3 months, it doesn't mean it'll be done anywhere near that.\nI work on a smaller team (5 devs, 1 lead), many of us work on several projects at a time - some big, some small. Depending on the priority of the project, the whims of management and the availability of other teams (if needed), work on a project gets interspersed amongst the others.\nSo, yes, 3 months worth of work may be dead on, but it might be 3 months worth of work over a 6 month period.\n", "I've done projects between 1 - 6 months on my own, and I always tend to double or quadrouple my original estimates.\n", "It's effectively impossible to compare two programming projects, as there are too many factors that mean that the metrics from only aren't applicable to another (e.g., specific technologies used, prior experience of the developers, shifting requirements). Unless you are stamping out another system that is almost identical to one you've built previously, your estimates are going to have a low probability of being accurate.\nA caveat is when you're building the next revision of an existing system with the same team; the specific experience gained does improve the ability to estimate the next batch of work.\nI've seen too many attempts at estimation methodology, and none have worked. They may have a pseudo-scientific allure, but they just don't work in practice.\nThe only meaningful answer is the relatively short iteration, as advocated by agile advocates: choose a scope of work that can be executed within a short timeframe, deliver it, and then go for the next round. Budgets are then allocated on a short-term basis, with the stakeholders able to evaluate whether their money is being effectively spent. If it's taking too long to get anywhere, they can ditch the project.\n", "Hofstadter's Law:\n'It always takes longer than you expect, even when you take Hofstadter's Law into account.'\nI believe this is because:\n\nWork expands to fill the time available to do it. 
No matter how ruthless you are cutting unnecessary features, you would have been more brutal if the deadlines were even tighter.\nUnexpected problems occur during the project.\n\nIn any case, it's really misleading to compare anecdotes, partly because people have selective memories. If I tell you it once took me two hours to write a fully-optimised quicksort, then maybe I'm forgetting the fact that I knew I'd have that task a week in advance, and had been thinking over ideas. Maybe I'm forgetting that there was a bug in it that I spent another two hours fixing a week later.\nI'm almost certainly leaving out all the non-programming work that goes on: meetings, architecture design, consulting others who are stuck on something I happen to know about, admin. So it's unfair on yourself to think of a rate of work that seems plausible in terms of \"sitting there coding\", and expect that to be sustained all the time. This is the source of a lot of feelings after the fact that you \"should have been quicker\".\n", "I do projects from 2 weeks to 1 year. Generally my estimates are quite good, a posteriori. At the beginning of the project, though, I generally get bashed because my estimates are considered too large.\nThis is because I consider a lot of things that people forget:\n\nTime for bug fixing\nTime for deployments\nTime for management/meetings/interaction\nTime to allow requirement owners to change their mind\netc\n\nThe trick is to use evidence based scheduling (see Joel on Software).\nThing is, if you plan for a little extra time, you will use it to improve the code base if no problems arise. If problems arise, you are still within the estimates.\n", "I believe Joel has wrote an article on this: What you can do, is ask each developer on team to lay out his task in detail (what are all the steps that need to be done) and ask them to estimate time needed for each step. Later, when project is done, compare the real time to estimated time, and you'll get the bias for each developer. When a new project is started, ask them to evaluate the time again, and multiply that with bias of each developer to get the values close to what's really expects.\nAfter a few projects, you should have very good estimates.\n" ]
[ 7, 2, 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "project_management", "time_management" ]
stackoverflow_0000112131_project_management_time_management.txt
Q: How do you maintain your program vocabulary? In a not-so-small program, when you have not-so-few entities, in order to maintain code readability, common terms, and otherwise improve mutual understanding between team members, one have to define and maintain program vocabulary. How do you (or your company) deal with this task, what discipline do you have, what arrangements do you introduce? A: Most projects of reasonable size should have a programming/coding standards document that dictates common conventions and naming guidelines that should be followed. Another way to help with this is through code reviews. Obviously some coordination among reviewers is required (the document helps with that, too). Code reviews help keep the greener devs and senior devs alike on track and act as an avenue to enforce the coding standards. A: @Ilya Ryzhenkov, I'm afraid most companies don't have such practice :) I've worked in the not-so-small company with multimillion LOC code base and they don't have any documentation at all (beside common coding guideline) On one of my projects we maintained thesaurus of common terms used in our application domain and used it during code review. I analyzed .NET XML documentation diff from time to time to decide which entities\terms should be added to the thesaurus. Only means to enforce compliance with thesaurus was coding guideline. Wiki approach proved to be non-applicable because nobody cares to update it regularly :) I'm wondering what methods do you use at JetBrains ? I've inspected ReSharper's code in Reflector and was amazed with number and names of entities :) A: Divide your packages/modules into logical groups and use descriptive and concise names. Avoid generic names except if they are really counters etc. Create conventions for groups of functions or functionality and stick to them. A: Domain Driven Design is interesting here, since it encourages programmers to embrace the domain vocabulary. On top of that, there is some design conventions, which allow you to refer parts of your application using well known terms, like services, repositories, factories, etc. Combining domain vocabulary and using technical conventions above it could be a good solution. A: My team keeps this kind of information (conventions/vocabulary etc.) on a wiki. This makes it easy to keep up to date and share.
How do you maintain your program vocabulary?
In a not-so-small program, when you have not-so-few entities, in order to maintain code readability, common terms, and otherwise improve mutual understanding between team members, one has to define and maintain program vocabulary. How do you (or your company) deal with this task, what discipline do you have, what arrangements do you introduce?
[ "Most projects of reasonable size should have a programming/coding standards document that dictates common conventions and naming guidelines that should be followed.\nAnother way to help with this is through code reviews. Obviously some coordination among reviewers is required (the document helps with that, too). Code reviews help keep the greener devs and senior devs alike on track and act as an avenue to enforce the coding standards.\n", "@Ilya Ryzhenkov,\nI'm afraid most companies don't have such practice :) I've worked in the not-so-small company with multimillion LOC code base and they don't have any documentation at all (beside common coding guideline)\nOn one of my projects we maintained thesaurus of common terms used in our application domain and used it during code review. I analyzed .NET XML documentation diff from time to time to decide which entities\\terms should be added to the thesaurus. Only means to enforce compliance with thesaurus was coding guideline. \nWiki approach proved to be non-applicable because nobody cares to update it regularly :)\nI'm wondering what methods do you use at JetBrains ? I've inspected ReSharper's code in Reflector and was amazed with number and names of entities :)\n", "Divide your packages/modules into logical groups and use descriptive and concise names. Avoid generic names except if they are really counters etc. Create conventions for groups of functions or functionality and stick to them.\n", "Domain Driven Design is interesting here, since it encourages programmers to embrace the domain vocabulary. On top of that, there is some design conventions, which allow you to refer parts of your application using well known terms, like services, repositories, factories, etc.\nCombining domain vocabulary and using technical conventions above it could be a good solution.\n", "My team keeps this kind of information (conventions/vocabulary etc.) on a wiki. This makes it easy to keep up to date and share.\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ "language_agnostic", "vocabulary" ]
stackoverflow_0000112161_language_agnostic_vocabulary.txt
Q: Error creating UDF in SQL Server 2005 I'm trying to create a UDF in SQL Server 2005 Express as below: CREATE FUNCTION [CombineValues] () RETURNS VARCHAR(8000) AS BEGIN DECLARE @CuisineList VARCHAR(8000); RETURN ( SELECT @CuisineList = COALESCE(@CuisineList + ', ', '') + CAST(Cuisine AS varchar(20)) FROM Cuisines ) END Cuisines has the structure: CuisineID INT PK, Cuisine VARCHAR(20) When I try to create the function as above, I get an error: Msg 102, Level 15, State 1, Procedure CombineValues, Line 10 Incorrect syntax near '='. What am I doing wrong? A: You need to be careful when using this method. This may not affect you now, for this query, but please keep this in mind for future applications of this method. The problem occurs when you have a NULL value in your list. When this happens, you will get incorrect results. For example, if your original table looks like this... 1 Blah 2 NULL 3 Foo 4 Cracker Your function will return Foo, Cracker. The first value, Blah, will be missed by this function call. It is very easy to accommodate this, with a slight alteration to your function, like this... CREATE FUNCTION [CombineValues] () RETURNS VARCHAR(8000) AS BEGIN DECLARE @CuisineList VARCHAR(8000); SELECT @CuisineList = COALESCE(@CuisineList + ', ', '') + CAST(Cuisine AS varchar(20)) FROM Cuisines WHERE Cuisine Is Not NULL RETURN @CuisineList END By testing for NOT NULL, you will eliminate this potential problem. A: try changing SELECT to SET and then end your function by SELECT (ing) your @CuisineList A: Hojou, your suggestion didn't work, but something similar did: CREATE FUNCTION [CombineValues] () RETURNS VARCHAR(8000) AS BEGIN DECLARE @CuisineList VARCHAR(8000); SELECT @CuisineList = COALESCE(@CuisineList + ', ', '') + CAST(Cuisine AS varchar(20)) FROM Cuisines; RETURN ( SELECT @CuisineList ) END I would like to mark this as the answer, but since I am the one who asked this question, I'm not sure this is appropriate? Any suggestions? Please feel feel to comment. A: This answer is from the original poster, Wild Thing. Please do not vote it up or down. CREATE FUNCTION [CombineValues] () RETURNS VARCHAR(8000) AS BEGIN DECLARE @CuisineList VARCHAR(8000); SELECT @CuisineList = COALESCE(@CuisineList + ', ', '') + CAST(Cuisine AS varchar(20)) FROM Cuisines; RETURN ( SELECT @CuisineList ) END
Error creating UDF in SQL Server 2005
I'm trying to create a UDF in SQL Server 2005 Express as below: CREATE FUNCTION [CombineValues] () RETURNS VARCHAR(8000) AS BEGIN DECLARE @CuisineList VARCHAR(8000); RETURN ( SELECT @CuisineList = COALESCE(@CuisineList + ', ', '') + CAST(Cuisine AS varchar(20)) FROM Cuisines ) END Cuisines has the structure: CuisineID INT PK, Cuisine VARCHAR(20) When I try to create the function as above, I get an error: Msg 102, Level 15, State 1, Procedure CombineValues, Line 10 Incorrect syntax near '='. What am I doing wrong?
[ "You need to be careful when using this method. This may not affect you now, for this query, but please keep this in mind for future applications of this method.\nThe problem occurs when you have a NULL value in your list. When this happens, you will get incorrect results.\nFor example, if your original table looks like this...\n1 Blah\n2 NULL\n3 Foo\n4 Cracker\n\nYour function will return Foo, Cracker. The first value, Blah, will be missed by this function call. It is very easy to accommodate this, with a slight alteration to your function, like this...\nCREATE FUNCTION [CombineValues] ()\nRETURNS VARCHAR(8000)\nAS\nBEGIN\n\nDECLARE @CuisineList VARCHAR(8000);\n SELECT @CuisineList = COALESCE(@CuisineList + ', ', '') + \n CAST(Cuisine AS varchar(20))\n FROM Cuisines\n WHERE Cuisine Is Not NULL\n\nRETURN @CuisineList\nEND\n\nBy testing for NOT NULL, you will eliminate this potential problem. \n", "try changing SELECT to SET and then end your function by SELECT (ing) your @CuisineList\n", "Hojou, your suggestion didn't work, but something similar did:\nCREATE FUNCTION [CombineValues] ()\nRETURNS VARCHAR(8000)\nAS\nBEGIN\n\nDECLARE @CuisineList VARCHAR(8000);\n\nSELECT @CuisineList = COALESCE(@CuisineList + ', ', '') + CAST(Cuisine AS varchar(20)) FROM Cuisines;\n\nRETURN \n(\nSELECT @CuisineList\n)\nEND\n\nI would like to mark this as the answer, but since I am the one who asked this question, I'm not sure this is appropriate? Any suggestions? Please feel feel to comment.\n", "This answer is from the original poster, Wild Thing. Please do not vote it up or down.\nCREATE FUNCTION [CombineValues] ()\nRETURNS VARCHAR(8000)\nAS\nBEGIN\n\nDECLARE @CuisineList VARCHAR(8000);\n\nSELECT @CuisineList = COALESCE(@CuisineList + ', ', '') + CAST(Cuisine AS varchar(20)) FROM Cuisines;\n\nRETURN \n(\nSELECT @CuisineList\n)\nEND\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "coalesce", "sql_server", "user_defined_functions" ]
stackoverflow_0000111504_coalesce_sql_server_user_defined_functions.txt
Q: How does replication in team foundation server work We have two offices in different states and issues with performance when using integrated source control over the WAN. We were contemplating using replication in TFS to allow both offices to have fast and robust connectivity. We need to understand network load, speed of access, how conflicts are managed. A: TFS has a proxy server which should help to alleviate the performance issues. EDIT: Although I have never used it personally, the docs state that is exactly what it is for
How does replication in team foundation server work
We have two offices in different states and issues with performance when using integrated source control over the WAN. We were contemplating using replication in TFS to allow both offices to have fast and robust connectivity. We need to understand network load, speed of access, how conflicts are managed.
[ "TFS has a proxy server which should help to aleviate the performance issues.\nEDIT: Although i have never used it personally, the docs state that is exactly what it is for\n" ]
[ 2 ]
[]
[]
[ "tfsbuild", "version_control" ]
stackoverflow_0000112411_tfsbuild_version_control.txt
Q: C++ strings: UTF-8 or 16-bit encoding? I'm still trying to decide whether my (home) project should use UTF-8 strings (implemented in terms of std::string with additional UTF-8-specific functions when necessary) or some 16-bit string (implemented as std::wstring). The project is a programming language and environment (like VB, it's a combination of both). There are a few wishes/constraints: It would be cool if it could run on limited hardware, such as computers with limited memory. I want the code to run on Windows, Mac and (if resources allow) Linux. I'll be using wxWidgets as my GUI layer, but I want the code that interacts with that toolkit confined in a corner of the codebase (I will have non-GUI executables). I would like to avoid working with two different kinds of strings when working with user-visible text and with the application's data. Currently, I'm working with std::string, with the intent of using UTF-8 manipulation functions only when necessary. It requires less memory, and seems to be the direction many applications are going anyway. If you recommend a 16-bit encoding, which one: UTF-16? UCS-2? Another one? A: UTF-16 is still a variable length character encoding (there are more than 2^16 unicode codepoints), so you can't do O(1) string indexing operations. If you're doing lots of that sort of thing, you're not saving anything in speed over UTF-8. On the other hand, if your text includes a lot of codepoints in the 256-65535 range, UTF-16 can be a substantial improvement in size. UCS-2 is a variation on UTF-16 that is fixed length, at the cost of prohibiting any codepoints greater than 2^16. Without knowing more about your requirements, I would personally go for UTF-8. It's the easiest to deal with for all the reasons others have already listed. A: I have never found any reasons to use anything else than UTF-8 to be honest. A: If you decide to go with UTF-8 encoding, check out this library: http://utfcpp.sourceforge.net/ It may make your life much easier. A: I've actually written a widely used application (5million+ users) so every kilobyte used adds up, literally. Despite that, I just stuck to wxString. I've configured it to be derived from std::wstring, so I can pass them to functions expecting a wstring const&. Please note that std::wstring is native Unicode on the Mac (no UTF-16 needed for characters above U+10000), and therefore it uses 4 bytes/wchar_t. The big advantage of this is that i++ gets you the next character, always. On Win32 that is true in only 99.9% of the cases. As a fellow programmer, you'll understand how little 99.9% is. But if you're not convinced, write the function to uppercase a std::string[UTF-8] and a std::wstring. Those 2 functions will tell you which way is insanity. Your on-disk format is another matter. For portability, that should be UTF-8. There's no endianness concern in UTF-8, nor a discussion over the width (2/4). This may be why many programs appear to use UTF-8. On a slightly unrelated note, please read up on Unicode string comparisions and normalization. Or you'll end up with the same bug as .NET, where you can have two variables föö and föö differing only in (invisible) normalization. A: I would recommend UTF-16 for any kind of data manipulation and UI. The Mac OS X and Win32 API uses UTF-16, same for wxWidgets, Qt, ICU, Xerces, and others. UTF-8 might be better for data interchange and storage. See http://unicode.org/notes/tn12/. 
But whatever you choose, I would definitely recommend against std::string with UTF-8 "only when necessary". Go all the way with UTF-16 or UTF-8, but do not mix and match, that is asking for trouble. A: MicroATX is pretty much a standard PC motherboard format, most capable of 4-8 GB of RAM. If you're talking picoATX maybe you're limited to 1-2 GB RAM. Even then that's plenty for a development environment. I'd still stick with UTF-8 for reasons mentioned above, but memory shouldn't be your concern. A: From what I've read, it's better to use a 16-bit encoding internally unless you're short on memory. It fits almost all living languages in one character I'd also look at ICU. If you're not going to be using certain STL features of strings, using the ICU string types might be better for you. A: Have you considered using wxStrings? If I remember correctly, they can do utf-8 <-> Unicode conversions and it will make it a bit easier when you have to pass strings to and from the UI.
C++ strings: UTF-8 or 16-bit encoding?
I'm still trying to decide whether my (home) project should use UTF-8 strings (implemented in terms of std::string with additional UTF-8-specific functions when necessary) or some 16-bit string (implemented as std::wstring). The project is a programming language and environment (like VB, it's a combination of both). There are a few wishes/constraints: It would be cool if it could run on limited hardware, such as computers with limited memory. I want the code to run on Windows, Mac and (if resources allow) Linux. I'll be using wxWidgets as my GUI layer, but I want the code that interacts with that toolkit confined in a corner of the codebase (I will have non-GUI executables). I would like to avoid working with two different kinds of strings when working with user-visible text and with the application's data. Currently, I'm working with std::string, with the intent of using UTF-8 manipulation functions only when necessary. It requires less memory, and seems to be the direction many applications are going anyway. If you recommend a 16-bit encoding, which one: UTF-16? UCS-2? Another one?
[ "UTF-16 is still a variable length character encoding (there are more than 2^16 unicode codepoints), so you can't do O(1) string indexing operations. If you're doing lots of that sort of thing, you're not saving anything in speed over UTF-8. On the other hand, if your text includes a lot of codepoints in the 256-65535 range, UTF-16 can be a substantial improvement in size. UCS-2 is a variation on UTF-16 that is fixed length, at the cost of prohibiting any codepoints greater than 2^16.\nWithout knowing more about your requirements, I would personally go for UTF-8. It's the easiest to deal with for all the reasons others have already listed.\n", "I have never found any reasons to use anything else than UTF-8 to be honest.\n", "If you decide to go with UTF-8 encoding, check out this library: http://utfcpp.sourceforge.net/\nIt may make your life much easier.\n", "I've actually written a widely used application (5million+ users) so every kilobyte used adds up, literally. Despite that, I just stuck to wxString. I've configured it to be derived from std::wstring, so I can pass them to functions expecting a wstring const&. \nPlease note that std::wstring is native Unicode on the Mac (no UTF-16 needed for characters above U+10000), and therefore it uses 4 bytes/wchar_t. The big advantage of this is that i++ gets you the next character, always. On Win32 that is true in only 99.9% of the cases. As a fellow programmer, you'll understand how little 99.9% is.\nBut if you're not convinced, write the function to uppercase a std::string[UTF-8] and a std::wstring. Those 2 functions will tell you which way is insanity.\nYour on-disk format is another matter. For portability, that should be UTF-8. There's no endianness concern in UTF-8, nor a discussion over the width (2/4). This may be why many programs appear to use UTF-8.\nOn a slightly unrelated note, please read up on Unicode string comparisions and normalization. Or you'll end up with the same bug as .NET, where you can have two variables föö and föö differing only in (invisible) normalization.\n", "I would recommend UTF-16 for any kind of data manipulation and UI.\nThe Mac OS X and Win32 API uses UTF-16, same for wxWidgets, Qt, ICU, Xerces, and others.\nUTF-8 might be better for data interchange and storage.\nSee http://unicode.org/notes/tn12/.\nBut whatever you choose, I would definitely recommend against std::string with UTF-8 \"only when necessary\".\nGo all the way with UTF-16 or UTF-8, but do not mix and match, that is asking for trouble. \n", "MicroATX is pretty much a standard PC motherboard format, most capable of 4-8 GB of RAM. If you're talking picoATX maybe you're limited to 1-2 GB RAM. Even then that's plenty for a development environment. I'd still stick with UTF-8 for reasons mentioned above, but memory shouldn't be your concern.\n", "From what I've read, it's better to use a 16-bit encoding internally unless you're short on memory. It fits almost all living languages in one character\nI'd also look at ICU. If you're not going to be using certain STL features of strings, using the ICU string types might be better for you.\n", "Have you considered using wxStrings? If I remember correctly, they can do utf-8 <-> Unicode conversions and it will make it a bit easier when you have to pass strings to and from the UI.\n" ]
[ 26, 6, 5, 4, 2, 2, 1, 0 ]
[]
[]
[ "c++", "encoding", "stdstring", "ucs2", "utf_8" ]
stackoverflow_0000103358_c++_encoding_stdstring_ucs2_utf_8.txt
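A minimal C++ sketch of the point raised in the first answer above: with UTF-8 stored in a plain std::string, byte count and code-point count differ, and code-point-aware operations have to walk the bytes. The helper name and the sample string are illustrative only, not taken from the thread.

#include <cstddef>
#include <iostream>
#include <string>

// Count Unicode code points in a UTF-8 encoded std::string by skipping
// continuation bytes (bit pattern 10xxxxxx). Every other byte starts a code point.
std::size_t CountCodePoints(const std::string& utf8)
{
    std::size_t count = 0;
    for (std::string::size_type i = 0; i < utf8.size(); ++i)
    {
        unsigned char byte = static_cast<unsigned char>(utf8[i]);
        if ((byte & 0xC0) != 0x80)
            ++count;
    }
    return count;
}

int main()
{
    const std::string text = "caf\xC3\xA9"; // "café" in UTF-8: 5 bytes, 4 code points
    std::cout << "bytes: " << text.size()
              << ", code points: " << CountCodePoints(text) << '\n';
    return 0;
}

This is the trade-off described above: std::string keeps UTF-8 compact, but finding the i-th character is no longer a constant-time subscript.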
Q: How can I sort an array of double pointers based on the values they point to? I am trying to build a function in C/C++ to sort an array and replace each value with its "score" or rank. It takes in a double pointer array to an array of ints, and sorts the double pointers based on the dereferenced value of the integers. I have tried quite a few times to make it work, but can't get it down. Once again, it must sort the double pointers based on the values they point to. This is what I have: void SortArray( int ** pArray, int ArrayLength ) { int i, j, flag = 1; // set flag to 1 to begin initial pass int * temp; // holding variable orig with no * for(i = 1; (i <= ArrayLength) && flag; i++) { flag = 0; for (j = 0; j < (ArrayLength -1); j++) { if (*pArray[j+1] > *pArray[j]) // ascending order simply changes to < { temp = &pArray[j]; // swap elements pArray[j] = &pArray[j+1]; pArray[j+1] = &temp; flag = 1; // indicates that a swap occurred. } } } } A: You're close. You're referencing the address of the array items when you swap, which isn't necessary. The items in the array are pointers, and that's what needs to be swapped. See below: void SortArray( int ** pArray, int ArrayLength ) { int i, j, flag = 1; // set flag to 1 to begin initial pass int * temp; // holding variable orig with no * for(i = ArrayLength - 1; i > 0 && flag; i--) { flag = 0; for (j = 0; j < i; j++) { if (*pArray[j] > *pArray[j+1]) // ascending order simply changes to < { temp = pArray[j]; // swap elements pArray[j] = pArray[j+1]; pArray[j+1] = temp; flag = 1; // indicates that a swap occurred. } } } } Also, check out this lovely blog post on Bubble Sorting in case you're interested (sorry, shameless plug :)). Hope that helps you with your homework ;) Edit: Note the subtle "optimisation" where you count back from the array length and only increment up until 'i' in the inner loop. This saves you from needlessly reparsing items that have already been sorted. A: Heh, this isnt homework. If thats the case then consider using the STL to manage arrays and sort. Its easier to develop and maintain and the std::sort algorithm is asymptotically faster than bubble sort. A: You should consider using std::swap() to do your swapping. If you do, call it as such: swap( obj1, obj2 ); rather than: std::swap( obj1, obj2 ); As the first calling semantic will allow the proper namespace lookup to find the correct overload if one exists. Be sure to have either: using namespace std; or: using std::swap; somewhere. A: Hmm, I don't have much experience with the STL. Could you give an example? This program creates a vector of ints, sorts it, and displays the results. #include <vector> #include <algorithm> #include <iostream> using namespace std; int main() { vector<int>; vec; vec.push_back(7); vec.push_back(5); vec.push_back(13); sort(vec.begin(), vec.end()); for (vector<int>::size_type i = 0; i < vec.size(); ++i) { cout << vec[i] << endl; } } A: To complete Brian Ensink's post, you'll find the STL full of surprises. 
For example, the std::sort algorithm: #include <iostream> #include <vector> #include <algorithm> void printArray(const std::vector<int *> & p_aInt) { for(std::vector<int *>::size_type i = 0, iMax = p_aInt.size(); i < iMax; ++i) { std::cout << "i[" << static_cast<int>(i) << "] = " << reinterpret_cast<unsigned int>(p_aInt[i]) << std::endl ; } std::cout << std::endl ; } int main(int argc, char **argv) { int a = 1 ; int b = 2 ; int c = 3 ; int d = 4 ; int e = 5 ; std::vector<int *> aInt ; // We fill the vector with variables in an unordered way aInt.push_back(&c) ; aInt.push_back(&b) ; aInt.push_back(&e) ; aInt.push_back(&d) ; aInt.push_back(&a) ; printArray(aInt) ; // We see the addresses are NOT ordered std::sort(aInt.begin(), aInt.end()) ; // DO THE SORTING printArray(aInt) ; // We see the addresses are ORDERED return EXIT_SUCCESS; } The first printing of the array will show unordered addresses. The second, after the sort, will show ordered adresses. On my compiler, we have: i[0] = 3216087168 i[1] = 3216087172 i[2] = 3216087160 i[3] = 3216087164 i[4] = 3216087176 i[0] = 3216087160 i[1] = 3216087164 i[2] = 3216087168 i[3] = 3216087172 i[4] = 3216087176 Give STL's <algorithm> header a look http://www.cplusplus.com/reference/algorithm/ You'll find a lot of utilities. Note that you have other implementation of containers that could suit you better (std::list? std::map?).
How can I sort an array of double pointers based on the values they point to?
I am trying to build a function in C/C++ to sort an array and replace each value with its "score" or rank. It takes in a double pointer array to an array of ints, and sorts the double pointers based on the dereferenced value of the integers. I have tried quite a few times to make it work, but can't get it down. Once again, it must sort the double pointers based on the values they point to. This is what I have: void SortArray( int ** pArray, int ArrayLength ) { int i, j, flag = 1; // set flag to 1 to begin initial pass int * temp; // holding variable orig with no * for(i = 1; (i <= ArrayLength) && flag; i++) { flag = 0; for (j = 0; j < (ArrayLength -1); j++) { if (*pArray[j+1] > *pArray[j]) // ascending order simply changes to < { temp = &pArray[j]; // swap elements pArray[j] = &pArray[j+1]; pArray[j+1] = &temp; flag = 1; // indicates that a swap occurred. } } } }
[ "You're close. You're referencing the address of the array items when you swap, which isn't necessary. The items in the array are pointers, and that's what needs to be swapped.\nSee below:\nvoid SortArray( int ** pArray, int ArrayLength )\n{\n int i, j, flag = 1; // set flag to 1 to begin initial pass\n int * temp; // holding variable orig with no *\n for(i = ArrayLength - 1; i > 0 && flag; i--)\n {\n flag = 0;\n for (j = 0; j < i; j++)\n {\n if (*pArray[j] > *pArray[j+1]) // ascending order simply changes to <\n { \n temp = pArray[j]; // swap elements\n pArray[j] = pArray[j+1];\n pArray[j+1] = temp;\n flag = 1; // indicates that a swap occurred.\n }\n }\n }\n}\n\nAlso, check out this lovely blog post on Bubble Sorting in case you're interested (sorry, shameless plug :)). Hope that helps you with your homework ;)\n\nEdit: Note the subtle \"optimisation\" where you count back from the array length and only increment up until 'i' in the inner loop. This saves you from needlessly reparsing items that have already been sorted.\n", "\nHeh, this isnt homework. \n\nIf thats the case then consider using the STL to manage arrays and sort. Its easier to develop and maintain and the std::sort algorithm is asymptotically faster than bubble sort.\n", "You should consider using std::swap() to do your swapping. If you do, call it as such:\nswap( obj1, obj2 );\n\nrather than:\nstd::swap( obj1, obj2 );\n\nAs the first calling semantic will allow the proper namespace lookup to find the correct overload if one exists. Be sure to have either:\nusing namespace std;\n\nor:\nusing std::swap;\n\nsomewhere.\n", "\nHmm, I don't have much experience with the STL. Could you give an example?\n\nThis program creates a vector of ints, sorts it, and displays the results.\n#include <vector>\n#include <algorithm>\n#include <iostream>\nusing namespace std;\n\nint main()\n{\n vector<int>; vec;\n vec.push_back(7);\n vec.push_back(5);\n vec.push_back(13);\n sort(vec.begin(), vec.end());\n\n for (vector<int>::size_type i = 0; i < vec.size(); ++i)\n {\n cout << vec[i] << endl;\n }\n}\n\n", "To complete Brian Ensink's post, you'll find the STL full of surprises. For example, the std::sort algorithm:\n#include <iostream>\n#include <vector>\n#include <algorithm>\n\nvoid printArray(const std::vector<int *> & p_aInt)\n{\n for(std::vector<int *>::size_type i = 0, iMax = p_aInt.size(); i < iMax; ++i)\n {\n std::cout << \"i[\" << static_cast<int>(i) << \"] = \" << reinterpret_cast<unsigned int>(p_aInt[i]) << std::endl ;\n }\n\n std::cout << std::endl ;\n}\n\n\nint main(int argc, char **argv)\n{\n int a = 1 ;\n int b = 2 ;\n int c = 3 ;\n int d = 4 ;\n int e = 5 ;\n\n std::vector<int *> aInt ;\n\n // We fill the vector with variables in an unordered way\n aInt.push_back(&c) ;\n aInt.push_back(&b) ;\n aInt.push_back(&e) ;\n aInt.push_back(&d) ;\n aInt.push_back(&a) ;\n\n printArray(aInt) ; // We see the addresses are NOT ordered\n std::sort(aInt.begin(), aInt.end()) ; // DO THE SORTING\n printArray(aInt) ; // We see the addresses are ORDERED\n\n return EXIT_SUCCESS;\n}\n\nThe first printing of the array will show unordered addresses. The second, after the sort, will show ordered adresses. On my compiler, we have:\ni[0] = 3216087168\ni[1] = 3216087172\ni[2] = 3216087160\ni[3] = 3216087164\ni[4] = 3216087176\n\ni[0] = 3216087160\ni[1] = 3216087164\ni[2] = 3216087168\ni[3] = 3216087172\ni[4] = 3216087176\n\nGive STL's <algorithm> header a look http://www.cplusplus.com/reference/algorithm/\nYou'll find a lot of utilities. 
Note that you have other implementation of containers that could suit you better (std::list? std::map?).\n" ]
[ 5, 3, 2, 1, 0 ]
[]
[]
[ "arrays", "c", "c++", "pointers", "reference" ]
stackoverflow_0000017299_arrays_c_c++_pointers_reference.txt
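Following up on the std::sort suggestion in the later answers: the same pointer array from the question can be sorted without the hand-rolled bubble sort by giving std::sort a comparison function that dereferences the pointers. This is an illustrative sketch, not code from the thread; the function names are made up.

#include <algorithm>
#include <iostream>

// Orders int pointers by the values they point to; std::sort then rearranges
// the pointers themselves, which is what the bubble-sort version above does.
bool LessByPointee(const int* a, const int* b)
{
    return *a < *b;
}

void SortArray(int** pArray, int arrayLength)
{
    std::sort(pArray, pArray + arrayLength, LessByPointee);
}

int main()
{
    int a = 30, b = 10, c = 20;
    int* values[] = { &a, &b, &c }; // pointees start out as 30, 10, 20

    SortArray(values, 3);

    for (int i = 0; i < 3; ++i)
        std::cout << *values[i] << '\n'; // prints 10, 20, 30

    return 0;
}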
Q: Quoting command-line arguments in shell scripts The following shell script takes a list of arguments, turns Unix paths into WINE/Windows paths and invokes the given executable under WINE. #! /bin/sh if [ "${1+set}" != "set" ] then echo "Usage; winewrap EXEC [ARGS...]" exit 1 fi EXEC="$1" shift ARGS="" for p in "$@"; do if [ -e "$p" ] then p=$(winepath -w $p) fi ARGS="$ARGS '$p'" done CMD="wine '$EXEC' $ARGS" echo $CMD $CMD However, there's something wrong with the quotation of command-line arguments. $ winewrap '/home/chris/.wine/drive_c/Program Files/Microsoft Research/Z3-1.3.6/bin/z3.exe' -smt /tmp/smtlib3cee8b.smt Executing: wine '/home/chris/.wine/drive_c/Program Files/Microsoft Research/Z3-1.3.6/bin/z3.exe' '-smt' 'Z: mp\smtlib3cee8b.smt' wine: cannot find ''/home/chris/.wine/drive_c/Program' Note that: The path to the executable is being chopped off at the first space, even though it is single-quoted. The literal "\t" in the last path is being transformed into a tab character. Obviously, the quotations aren't being parsed the way I intended by the shell. How can I avoid these errors? EDIT: The "\t" is being expanded through two levels of indirection: first, "$p" (and/or "$ARGS") is being expanded into Z:\tmp\smtlib3cee8b.smt; then, \t is being expanded into the tab character. This is (seemingly) equivalent to Y='y\ty' Z="z${Y}z" echo $Z which yields zy\tyz and not zy yz UPDATE: eval "$CMD" does the trick. The "\t" problem seems to be echo's fault: "If the first operand is -n, or if any of the operands contain a backslash ( '\' ) character, the results are implementation-defined." (POSIX specification of echo) A: bash’s arrays are unportable but the only sane way to handle argument lists in shell The number of arguments is in ${#} Bad stuff will happen with your script if there are filenames starting with a dash in the current directory If the last line of your script just runs a program, and there are no traps on exit, you should exec it With that in mind #! /bin/bash # push ARRAY arg1 arg2 ... # adds arg1, arg2, ... to the end of ARRAY function push() { local ARRAY_NAME="${1}" shift for ARG in "${@}"; do eval "${ARRAY_NAME}[\${#${ARRAY_NAME}[@]}]=\${ARG}" done } PROG="$(basename -- "${0}")" if (( ${#} < 1 )); then # Error messages should state the program name and go to stderr echo "${PROG}: Usage: winewrap EXEC [ARGS...]" 1>&2 exit 1 fi EXEC=("${1}") shift for p in "${@}"; do if [ -e "${p}" ]; then p="$(winepath -w -- "${p}")" fi push EXEC "${p}" done exec "${EXEC[@]}" A: I you do want to have the assignment to CMD you should use eval $CMD instead of just $CMD in the last line of your script. This should solve your problem with spaces in the paths, I don't know what to do about the "\t" problem. A: You can try preceeding the spaces with \ like so: /home/chris/.wine/drive_c/Program Files/Microsoft\ Research/Z3-1.3.6/bin/z3.exe You can also do the same with your \t problem - replace it with \\t. A: replace the last line from $CMD to just wine '$EXEC' $ARGS You'll note that the error is ''/home/chris/.wine/drive_c/Program' and not '/home/chris/.wine/drive_c/Program' The single quotes are not being interpolated properly, and the string is being split by spaces.
Quoting command-line arguments in shell scripts
The following shell script takes a list of arguments, turns Unix paths into WINE/Windows paths and invokes the given executable under WINE. #! /bin/sh if [ "${1+set}" != "set" ] then echo "Usage; winewrap EXEC [ARGS...]" exit 1 fi EXEC="$1" shift ARGS="" for p in "$@"; do if [ -e "$p" ] then p=$(winepath -w $p) fi ARGS="$ARGS '$p'" done CMD="wine '$EXEC' $ARGS" echo $CMD $CMD However, there's something wrong with the quotation of command-line arguments. $ winewrap '/home/chris/.wine/drive_c/Program Files/Microsoft Research/Z3-1.3.6/bin/z3.exe' -smt /tmp/smtlib3cee8b.smt Executing: wine '/home/chris/.wine/drive_c/Program Files/Microsoft Research/Z3-1.3.6/bin/z3.exe' '-smt' 'Z: mp\smtlib3cee8b.smt' wine: cannot find ''/home/chris/.wine/drive_c/Program' Note that: The path to the executable is being chopped off at the first space, even though it is single-quoted. The literal "\t" in the last path is being transformed into a tab character. Obviously, the quotations aren't being parsed the way I intended by the shell. How can I avoid these errors? EDIT: The "\t" is being expanded through two levels of indirection: first, "$p" (and/or "$ARGS") is being expanded into Z:\tmp\smtlib3cee8b.smt; then, \t is being expanded into the tab character. This is (seemingly) equivalent to Y='y\ty' Z="z${Y}z" echo $Z which yields zy\tyz and not zy yz UPDATE: eval "$CMD" does the trick. The "\t" problem seems to be echo's fault: "If the first operand is -n, or if any of the operands contain a backslash ( '\' ) character, the results are implementation-defined." (POSIX specification of echo)
[ "\nbash’s arrays are unportable but the only sane way to handle argument lists in shell\nThe number of arguments is in ${#}\nBad stuff will happen with your script if there are filenames starting with a dash in the current directory\nIf the last line of your script just runs a program, and there are no traps on exit, you should exec it\n\nWith that in mind\n#! /bin/bash\n\n# push ARRAY arg1 arg2 ...\n# adds arg1, arg2, ... to the end of ARRAY\nfunction push() {\n local ARRAY_NAME=\"${1}\"\n shift\n for ARG in \"${@}\"; do\n eval \"${ARRAY_NAME}[\\${#${ARRAY_NAME}[@]}]=\\${ARG}\"\n done\n}\n\nPROG=\"$(basename -- \"${0}\")\"\n\nif (( ${#} < 1 )); then\n # Error messages should state the program name and go to stderr\n echo \"${PROG}: Usage: winewrap EXEC [ARGS...]\" 1>&2\n exit 1\nfi\n\nEXEC=(\"${1}\")\nshift\n\nfor p in \"${@}\"; do\n if [ -e \"${p}\" ]; then\n p=\"$(winepath -w -- \"${p}\")\"\n fi\n push EXEC \"${p}\"\ndone\n\nexec \"${EXEC[@]}\"\n\n", "I you do want to have the assignment to CMD you should use \neval $CMD \ninstead of just $CMD in the last line of your script. This should solve your problem with spaces in the paths, I don't know what to do about the \"\\t\" problem.\n", "You can try preceeding the spaces with \\ like so:\n/home/chris/.wine/drive_c/Program Files/Microsoft\\ Research/Z3-1.3.6/bin/z3.exe\n\nYou can also do the same with your \\t problem - replace it with \\\\t.\n", "replace the last line from $CMD to just\nwine '$EXEC' $ARGS\nYou'll note that the error is ''/home/chris/.wine/drive_c/Program' and not '/home/chris/.wine/drive_c/Program'\nThe single quotes are not being interpolated properly, and the string is being split by spaces.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "shell", "unix", "wine" ]
stackoverflow_0000036109_shell_unix_wine.txt
Q: Not getting emails from ExceptionNotifier I followed this tutorial on configuring the Rails plugin ExceptionNotifier. I know that I have ActionMailer configured correctly because I am getting mail from other forms. I even have local_addresses.clear set so that it should be delivering mail no matter what. I am using Apache with a mongrel_cluster running in the backend. What am I missing? A: You're using the SVN version of the plugin, which is probably unmaintained. Latest version can be found here. Second thing which you can do is check the production log. Mailings get written to the log, so you'll see if Rails ever even tried to send it. If there are no entries, that means things are silently failing, which probably happens because -- for some reason -- exceptions are not caught properly. A: Check your production log, exceptions can be thrown inside the exception_notifier plugin, which prevents it from sending mails A: If you added your ExceptionNotifier configuration information (your email address, etc.) into config/environment.rb, did you add it within the Rails::Initializer block or did you add it at the end of the file? The tutorial you linked to doesn't specify where in the environment file to put the configuration information. The tutorial I followed (which might have been this one) does specify to put it outside the block. Which things go inside that block and which outside is, frankly, still a little mysterious to me. But I thought this might answer your specific question.
Not getting emails from ExceptionNotifier
I followed this tutorial on configuring the Rails plugin ExceptionNotifier. I know that I have ActionMailer configured correctly because I am getting mail from other forms. I even have local_addresses.clear set so that it should be delivering mail no matter what. I am using Apache with a mongrel_cluster running in the backend. What am I missing?
[ "You're using the SVN version of the plugin, which is probably unmaintained. Latest version can be found here.\nSecond thing which you can do is check the production log. Mailings get written to the log, so you'll see if Rails ever even tried to send it. If there are no entries, that means things are silently failing, which probably happens because -- for some reason -- exceptions are not caught properly.\n", "Check your production log, exceptions can be throw in side the exception_notifier plugin, which prevent it from sending mails\n", "If you added your ExceptionNotifier configuration information (your email address, etc.) into config/environment.rb, did you add it within the Rails::Initializer block or did you add it at the end of the file?\nThe tutorial you linked to doesn't specify where in the environment file to put the configuration information. The tutorial I followed (which might have been this one) does specify to put it outside the block.\nWhich things go inside that block and which outside is, frankly, still a little mysterious to me. But I thought this might answer your specific question.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "exception", "ruby", "ruby_on_rails" ]
stackoverflow_0000109830_exception_ruby_ruby_on_rails.txt
Q: SSRS 2005 - Looping Through Report Parameters I would like to be able to loop through all of the defined parameters on my reports and build a display string of the parameter name and value. I'd then display the results on the report so the user knows which parameters were used for that specific execution. The only problem is that I cannot loop through the Parameters collection. There doesn't seem to be an indexer on the Parameters collection, nor does it seem to implement IEnumerable. Has anyone been able to accomplish this? I'm using SSRS 2005 and it must be implemented within the Report Code (i.e., no external assembly). Thanks! A: Unfortunately, it looks like there's no simple way to do this. See http://www.jameskovacs.com/blog/DiggingDeepIntoReportingServices.aspx for more info. If you look at the comments of that post, there are some ways to get around this, but they're not very elegant. The simplest solution will require you to have a list of the report parameters somewhere in your Report Code, which obviously violates the DRY principle, but if you want the simplest solution, you might just have to live with that. You might want to rethink your constraint of no external assembly, as it looks to me that it would be much easier to do this with an external assembly. Or if your report isn't going to change much, you can create the list of parameter names and values manually. A: If I'm understanding your question, just do what I do: Drop a textbox on the report, then while you are setting up the report, insert the following: ="Parameter1: " + Parameters!Parameter.Label + ", Parameter2: " + Parameters!Parameter2.Label... Granted, it's not the prettiest thing, but it does work pretty well in our app. And I'm using Labels instead of Values since we have datetime values, and the user only cares about either the short date or the month and year (depending on circumstance), and I've already done that formatting work in setting up the parameters. A: I can think of at least two ways to do this. The first might work, the second will definitely work. Use the web service. I'm pretty sure I saw API for getting a collection of parameters. Even if there's no direct access you can always create a standard collection and copy the ReportParameter objects from one to the other in a foreach loop - and then access Count, with individual parameter properties available by dereferencing the ReportParameter instances. Reports are RDL. RDL is XML. Create an XmlDocument and load the RDL file, then use the DOM to do, well, anything you like up to and including setting default values or even rewriting connection strings. If your app won't have file-system access to the RDL files you can get them via the web service.
SSRS 2005 - Looping Through Report Parameters
I would like to be able to loop through all of the defined parameters on my reports and build a display string of the parameter name and value. I'd then display the results on the report so the user knows which parameters were used for that specific execution. The only problem is that I cannot loop through the Parameters collection. There doesn't seem to be an indexer on the Parameters collection, nor does it seem to implement IEnumerable. Has anyone been able to accomplish this? I'm using SSRS 2005 and it must be implemented within the Report Code (i.e., no external assembly). Thanks!
[ "Unfortunately, it looks like there's no simple way to do this.\nSee http://www.jameskovacs.com/blog/DiggingDeepIntoReportingServices.aspx for more info. If you look at the comments of that post, there are some ways to get around this, but they're not very elegant. The simplest solution will require you to have a list of the report parameters somewhere in your Report Code, which obviously violates the DRY principle, but if you want the simplest solution, you might just have to live with that.\nYou might want to rethink your constraint of no external assembly, as it looks to me that it would be much easier to do this with an external assembly. Or if your report isn't going to change much, you can create the list of parameter names and values manually.\n", "If I'm understanding your question, just do what I do:\nDrop a textbox on the report, then while you are setting up the report, insert the following:\n=\"Parameter1: \" + Parameters!Parameter.Label + \", Parameter2: \" + Parameters!Parameter2.Label...\nGranted, it's not the prettiest thing, but it does work pretty well in our app.\nAnd I'm using Labels instead of Values since we have datetime values, and the user only cares about either the short date or the month and year (depending on circumstance), and I've already done that formatting work in setting up the parameters.\n", "I can think of at least two ways to do this. The first might work, the second will definitely work.\n\nUse the web service. I'm pretty sure I saw API for getting a collection of parameters. Even if there's no direct access you can always create a standard collection and copy the ReportParameter objects from one to the other in a foreach loop - and then access Count, with individual parameter properties available by dereferencing the ReportParameter instances.\nReports are RDL. RDL is XML. Create an XmlDocument and load the RDL file, then use the DOM to do, well, anything you like up to and including setting default values or even rewriting connection strings.\n\nIf your app won't have file-system access to the RDL files you can get them via the web service.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "parameters", "reporting_services", "sql_server_2005" ]
stackoverflow_0000084195_parameters_reporting_services_sql_server_2005.txt
Q: SQL Server Priority Ordering I have a table that contains tasks and I want to give these an explicit ordering based on the priority of the task. The only way I can think to do this is via a unique int column that indexes where the task is in terms of the priority (i.e. 1 is top, 1000 is low). The problem is that if I wanted to update a task and set its priority to a lower value, I would have to update all the other rows between its current value and its new value. Can anyone suggest a better way of implementing this? A: Use a real number value as the priority. You can always slide in a value between two existing values with something like newPri = task1Pri + (task2Pri - task1Pri)/2 where Task1 has the lower priority numeric value (which is probably the higher priority). Corin points out that min and max priorities would have to be calculated for tasks inserted at the top or bottom of the priority list. And joelhardi reminds us that a reorder process is a good idea to clean up the table from time to time. A: Instead of creating a numbered column like you said, create a field called something like parent. Each row contains the pk of its parent item. When you want to move one item down just change its parent pk to the new one and the item(s) which reference it in their parent pk. Think singly linked lists. A: I like Kevin's answer best, but if you want a quick-and-dirty solution, just do it the way you've already described, but instead of incrementing by 1, increment by 10 or 100... that way if you need to re-prioritize, you have some wiggle room between tasks. A: I would assign only a small number of values (1..10) and then ORDER BY Priority DESC, DateCreated ASC. If you need to have different priorities for each task you need to UPDATE WHERE Priority > xxx like you said. A: If no two tasks can have the same priority then I think that is what you have to do. But you could have a priority and a datemodified column and just sort by both to get the right order based on priority and last update if you allow priority to be duplicated.
SQL Server Priority Ordering
I have a table that contains tasks and I want to give these an explicit ordering based on the priority of the task. The only way I can think to do this is via a unique int column that indexes where the task is in terms of the priority (i.e. 1 is top, 1000 is low). The problem is that if I wanted to update a task and set its priority to a lower value, I would have to update all the other rows between its current value and its new value. Can anyone suggest a better way of implementing this?
[ "Use a real number value as the priority. You can always slide in a value between two existing values with something like newPri = task1Pri + (task2Pri - task1Pri)/2 where Task1 has the lower priority numeric value (which is probably the higher piority).\nCorin points out that min and max priorities would have to be calculated for tasks inserted at the top or bottom of the priority list.\nAnd joelhardi reminds us that a reorder process is a good idea to clean up the table from time to time.\n", "Instead of creating an numbered column like you said, create a field called something like parent. Each row contains the pk of its parent item. When you want to move one item down just change its parent pk to the new one and the item(s) which reference it in their parent pk. Think singly linked lists.\n", "I like Kevin's answer best, but if you want a quick-and-dirty solution, just do it the way you've already described, but instead of incrementing by 1, increment by 10 or 100... that way if you need to re-prioritize, you have some wiggle room between tasks.\n", "I would assign only a small number of values (1..10) and then ORDER BY Priority DESC, DateCreated ASC.\nIf you need to have different priorities for each task you need to UPDATE WHERE Priority > xxx like you said.\n", "if no two tasks can have the same priority then I think that is what you have to do. But you could have a priority and a datemodified column and just sort by both to get the right order based on priority and last update if you allow priority to be duplicated. \n" ]
[ 7, 3, 1, 0, 0 ]
[]
[]
[ "data_structures", "sql_server" ]
stackoverflow_0000112551_data_structures_sql_server.txt
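The midpoint trick in the first answer (newPri = task1Pri + (task2Pri - task1Pri)/2) only works until two neighbouring priorities become so close that no representable value fits between them, which is why that answer also mentions an occasional reorder pass. A rough C++ illustration of that limit follows; it is purely illustrative and not part of the thread.

#include <iostream>

// Keep inserting a new priority halfway between two neighbours, always into
// the same gap, and count how many distinct values fit before the neighbours
// collide. With IEEE doubles the answer is on the order of 50 insertions.
int main()
{
    double low = 1.0;
    double high = 2.0;
    int insertions = 0;

    while (true)
    {
        double mid = low + (high - low) / 2.0;
        if (mid == low || mid == high)
            break; // no representable value left between the neighbours
        high = mid;
        ++insertions;
    }

    std::cout << "Distinct midpoints between 1.0 and 2.0: " << insertions << '\n';
    return 0;
}

In practice a periodic renumbering pass (or simply spacing integer priorities by 10 or 100, as another answer suggests) keeps the gaps usable.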
Q: DropShadowBitmapEffect Doesn't work on TextBlock Does anyone know why the DropShadowBitmapEffect and the EmbossBitmapEffect won't work on a TextBlock (not textBOX) in WPF? OuterGlow, Blur and Bevel seem to work fine. The transparent background brush is apparently not the answer because you can get a dropshadow with a null background brush. The default softness on a dropshadow is 50% and if you have a small font, the softness dissipates the shadow too much. There seems to be a steep drop off around softness of 39% (at which point the shadow more or less disappears). Try setting it to 0 and slowly moving you're way up until you find a number that still shows the shadow. Yet another note: the softness is definitely a factor, but be aware in Xaml the valid values are really only 0 to 1, but in Blend it shows it as a percentage up to 100. So if you set the value to 100 in Xaml, it will be completely dissipated. The background brush = transparent solution still may work for the embossing effect A: Bitmap effects work by looking at the post-rendered pixels and running standard image manipulation on them. It should only be dependent on the color of the pixels. I wonder if their algorithms don't work well on white. Try changing the color to see if that has an effect -- if it does, you might want to try putting a black panel underneath with drop shadow set on it. Edit: The questioner found the answer "Thanks for pointing me in the correct general direction. It wasn't the color of the text or the DropShadow that mattered, what is needed is to make the Background Brush on the TextBlock the Transparent Brush (Alpha = 0) instead of null." A: Important Sidenote: you shouldn't really be using BitmapEffects any more. Use the Effect property based on ShaderModel effects introduced in .net 3.5 SP1, it uses hardware rendering and has far better performance. More Information
DropShadowBitmapEffect Doesn't work on TextBlock
Does anyone know why the DropShadowBitmapEffect and the EmbossBitmapEffect won't work on a TextBlock (not textBOX) in WPF? OuterGlow, Blur and Bevel seem to work fine. The transparent background brush is apparently not the answer because you can get a dropshadow with a null background brush. The default softness on a dropshadow is 50% and if you have a small font, the softness dissipates the shadow too much. There seems to be a steep drop off around softness of 39% (at which point the shadow more or less disappears). Try setting it to 0 and slowly moving your way up until you find a number that still shows the shadow. Yet another note: the softness is definitely a factor, but be aware in Xaml the valid values are really only 0 to 1, but in Blend it shows it as a percentage up to 100. So if you set the value to 100 in Xaml, it will be completely dissipated. The background brush = transparent solution still may work for the embossing effect
[ "Bitmap effects work by looking at the post-rendered pixels and running standard image manipulation on them. It should only be dependent on the color of the pixels. I wonder if their algorithms don't work well on white. Try changing the color to see if that has an effect -- if it does, you might want to try putting a black panel underneath with drop shadow set on it.\nEdit: The questioner found the answer\n\"Thanks for pointing me in the correct general direction. It wasn't the color of the text or the DropShadow that mattered, what is needed is to make the Background Brush on the TextBlock the Transparent Brush (Alpha = 0) instead of null.\"\n", "Important Sidenote: you shouldn't really be using BitmapEffects any more. Use the Effect property based on ShaderModel effects introduced in .net 3.5 SP1, it uses hardware rendering and has far better performance.\nMore Information\n" ]
[ 2, 1 ]
[]
[]
[ "bitmapeffect", "wpf" ]
stackoverflow_0000111686_bitmapeffect_wpf.txt
Q: Is there any way to get access to the DOM from Objective-C when using UIWebView? UIWebView is fine for displaying HTML, but I'd like to modify the loaded DOM from my Objective-C program. Does anybody know how to do that? This is a third party page, so I can't really include any custom JS to do so...unless I can modify the DOM somehow. A: This may be of some help, as not to plagiarize the solution provider in the link, here's the link: http://lists.apple.com/archives/safari-iphone-web-dev/2008/Sep/msg00001.html A: Some more related techniques around Obj-C / JS communication: Using UIWebView for local resources: http://dominiek.com/articles/2008/7/19/iphone-app-development-for-web-hackers Google Maps API for iPhone: http://code.google.com/p/iphone-google-maps-component/ Bidirectional calling between Obj-C and Javascript: http://tetontech.wordpress.com/2008/08/14/calling-objective-c-from-javascript-in-an-iphone-uiwebview/
Is there any way to get access to the DOM from Objective-C when using UIWebView?
UIWebView is fine for displaying HTML, but I'd like to modify the loaded DOM from my Objective-C program. Does anybody know how to do that? This is a third party page, so I can't really include any custom JS to do so...unless I can modify the DOM somehow.
[ "This may be of some help, as not to plagiarize the solution provider in the link, here's the link:\nhttp://lists.apple.com/archives/safari-iphone-web-dev/2008/Sep/msg00001.html\n", "Some more related techniques around Obj-C / JS communication:\nUsing UIWebView for local resources:\nhttp://dominiek.com/articles/2008/7/19/iphone-app-development-for-web-hackers\nGoogle Maps API for iPhone: \nhttp://code.google.com/p/iphone-google-maps-component/\nBidirectional calling between Obj-C and Javascript:\nhttp://tetontech.wordpress.com/2008/08/14/calling-objective-c-from-javascript-in-an-iphone-uiwebview/\n" ]
[ 3, 1 ]
[]
[]
[ "iphone", "objective_c" ]
stackoverflow_0000099241_iphone_objective_c.txt
Q: Is there a way to emulate PHP5's __call() magic method in PHP4? PHP5 has a "magic method" __call()that can be defined on any class that is invoked when an undefined method is called -- it is roughly equivalent to Ruby's method_missing or Perl's AUTOLOAD. Is it possible to do something like this in older versions of PHP? A: The most important bit that I was missing was that __call exists in PHP4, but you must enable it on a per-class basis by calling overload(), as seen in php docs here . Unfortunately, the __call() function signatures are different between PHP4 and PHP5, and there does not seem to be a way to make an implementation that will run in both. A: I recall using it, and a little bit of googling suggests that function __call($method_name, $parameters, &$return) { $return_value = "You called ${method_name}!"; } as a member function will do the job.
Is there a way to emulate PHP5's __call() magic method in PHP4?
PHP5 has a "magic method" __call()that can be defined on any class that is invoked when an undefined method is called -- it is roughly equivalent to Ruby's method_missing or Perl's AUTOLOAD. Is it possible to do something like this in older versions of PHP?
[ "The most important bit that I was missing was that __call exists in PHP4, but you must enable it on a per-class basis by calling overload(), as seen in php docs here .\nUnfortunately, the __call() function signatures are different between PHP4 and PHP5, and there does not seem to be a way to make an implementation that will run in both.\n", "I recall using it, and a little bit of googling suggests that\nfunction __call($method_name, $parameters, &$return)\n{\n $return_value = \"You called ${method_name}!\";\n}\n\nas a member function will do the job.\n" ]
[ 2, 0 ]
[]
[]
[ "delegation", "php4" ]
stackoverflow_0000076328_delegation_php4.txt
Q: Using the Window API, how do I ensure controls retain a native appearance? Some of the controls I've created seem to default to the old Windows 95 theme, how do I prevent this? Here's an example of a button that does not retain the Operating System's native appearance (I'm using Vista as my development environment): HWND button = CreateWindowEx(NULL, L"BUTTON", L"OK", WS_VISIBLE | WS_CHILD | BS_PUSHBUTTON, 170, 340, 80, 25, hwnd, NULL, GetModuleHandle(NULL), NULL); I'm using native C++ with the Windows API, no managed code. A: To add a manifest to the application you need create a MyApp.manifest file and add it to the application resource file: //-- This define is normally part of the SDK but define it if this //-- is an older version of the SDK. #ifndef RT_MANIFEST #define RT_MANIFEST 24 #endif //-- Add the MyApp XP Manifest file CREATEPROCESS_MANIFEST_RESOURCE_ID RT_MANIFEST "MyApp.manifest" With newer versions of Visual Studio there is a Manifest Tool tab found in the project settings and the Additional Manifest Files field found on this tab can also be used to define the manifest file. Here is a simple MyApp.manifest file for a Win32 application: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <assemblyIdentity version="1.0.0.1" processorArchitecture="X86" name="Microsoft.Windows.MyApp" type="win32" /> <description>MyApp</description> </assembly> If you application depends on the other dlls these details can also be added to the manifest and Windows will use this information to make sure your application always uses the correct versions of these dependent dlls. For example here are the manifest dependency details for the common control and version 8.0 C runtime libraries: <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" processorArchitecture="X86" publicKeyToken="6595b64144ccf1df" language="*" /> </dependentAssembly> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC80.CRT" version="8.0.50608.0" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" /> </dependentAssembly> A: I believe it has got nothing to do with your code, but you need to set up a proper manifest file to get the themed controls. Some info here: @msdn.com and here: @blogs.msdn.com You can see a difference between application with and without manifest here: heaventools.com
Using the Windows API, how do I ensure controls retain a native appearance?
Some of the controls I've created seem to default to the old Windows 95 theme, how do I prevent this? Here's an example of a button that does not retain the Operating System's native appearance (I'm using Vista as my development environment): HWND button = CreateWindowEx(NULL, L"BUTTON", L"OK", WS_VISIBLE | WS_CHILD | BS_PUSHBUTTON, 170, 340, 80, 25, hwnd, NULL, GetModuleHandle(NULL), NULL); I'm using native C++ with the Windows API, no managed code.
[ "To add a manifest to the application you need create a MyApp.manifest file and add it to the application resource file:\n//-- This define is normally part of the SDK but define it if this \n//-- is an older version of the SDK.\n#ifndef RT_MANIFEST\n#define RT_MANIFEST 24\n#endif\n\n//-- Add the MyApp XP Manifest file\nCREATEPROCESS_MANIFEST_RESOURCE_ID RT_MANIFEST \"MyApp.manifest\"\n\nWith newer versions of Visual Studio there is a Manifest Tool tab found in the project settings and the Additional Manifest Files field found on this tab can also be used to define the manifest file.\nHere is a simple MyApp.manifest file for a Win32 application:\n<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>\n<assembly xmlns=\"urn:schemas-microsoft-com:asm.v1\" manifestVersion=\"1.0\">\n<assemblyIdentity\n version=\"1.0.0.1\"\n processorArchitecture=\"X86\"\n name=\"Microsoft.Windows.MyApp\"\n type=\"win32\"\n/>\n<description>MyApp</description>\n</assembly>\n\nIf you application depends on the other dlls these details can also be added to the manifest and Windows will use this information to make sure your application always uses the correct versions of these dependent dlls.\nFor example here are the manifest dependency details for the common control and version 8.0 C runtime libraries:\n<dependentAssembly>\n <assemblyIdentity\n type=\"win32\"\n name=\"Microsoft.Windows.Common-Controls\"\n version=\"6.0.0.0\"\n processorArchitecture=\"X86\"\n publicKeyToken=\"6595b64144ccf1df\"\n language=\"*\"\n />\n</dependentAssembly>\n<dependentAssembly>\n <assemblyIdentity\n type=\"win32\"\n name=\"Microsoft.VC80.CRT\"\n version=\"8.0.50608.0\"\n processorArchitecture=\"x86\"\n publicKeyToken=\"1fc8b3b9a1e18e3b\" />\n</dependentAssembly>\n\n", "I believe it has got nothing to do with your code, but you need to set up a proper manifest file to get the themed controls.\nSome info here: @msdn.com and here: @blogs.msdn.com\nYou can see a difference between application with and without manifest here: heaventools.com\n" ]
[ 6, 4 ]
[]
[]
[ "appearance", "c++", "user_interface", "winapi" ]
stackoverflow_0000111630_appearance_c++_user_interface_winapi.txt
Q: What is the easiest way to upgrade a large C# winforms app to WPF I work on a large C# application (approximately 450,000 lines of code), and we constantly have problems with desktop heap and GDI handle leaks. WPF solves these issues, but I don't know what is the best way to upgrade (I expect this is going to take a long time). The application has only a few forms but these can contain many different sets of user-controls which are determined programmatically. This is an internal company app so our release cycles are very short (typically a 3-week release cycle). Is there some gradual upgrade path or do we have to take the hit in one massive effort? A: You can start by creating a WPF host. Then you can use the <WindowsFormsHost/> control to host your current application. Then, I suggest creating a library of your new controls in WPF. One at a time, you can create the controls (I suggest making them custom controls, not usercontrols). Within the style for each control, you can start with using the <ElementHost/> control to include the "old" windows forms control. Then you can take your time to refactor and recreate each control as complete WPF. I think it will still take an initial effort to create your control wrappers and design a WPF host for the application. I am not sure of the size of the application or the complexity of the user controls, so I'm not sure how much effort that would be for you. Relatively speaking, it is significantly less effort and much faster to get your application up and running in WPF this way. I wouldn't just do that and forget about it though, as you may run into issues with controls overlaying each other (Windows Forms does not play well with WPF, especially with transparencies and other visuals). Please update us on the status of this project, or provide more technical information if you would like more specific guidance. Thanks :) A: Do you use a lot of User controls for the pieces? WPF can host WinForms controls, so you could bring parts into the main form piecewise. A: WPF allows you to embed Windows Forms user controls into a WPF application, which may help you make the transition in smaller steps. Take a look at the WindowsFormsHost class in the WPF documentation. A: I assume that you are not just looking for an ElementHost to put your vast WinForms app in. That is not really a port to WPF anyway. Consider the answers on this thread: What are the bigger hurdles to overcome migrating from Winforms to WPF? It will be very helpful. A: There is a very interesting white paper on migrating a .NET 2.0 WinForms application toward WPF; see Evolving toward a .NET 3.5 application. Paper abstract: In this paper, I’m going to outline some of the thought processes, decisions and issues we had to face when evolving a Microsoft .NET application from 1.x/2.x to 3.x. I’ll look at how we helped our client to adopt the new technology, and yet still maintained a release schedule acceptable to the business.
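As a hedged illustration of the WindowsFormsHost technique mentioned in the answers (a minimal sketch, not the actual application's code): a WPF window hosts an existing Windows Forms control unchanged, so it keeps working while it waits to be rewritten as native WPF. A stock MonthCalendar stands in here for one of the application's own user controls.

// Requires references to PresentationFramework, PresentationCore, WindowsBase,
// WindowsFormsIntegration and System.Windows.Forms.
using System;
using System.Windows;
using System.Windows.Forms.Integration;

public static class Program
{
    [STAThread]
    public static void Main()
    {
        // The legacy WinForms control goes in unchanged; in the real app this
        // would be one of the existing user controls rather than MonthCalendar.
        var host = new WindowsFormsHost
        {
            Child = new System.Windows.Forms.MonthCalendar()
        };

        var window = new Window
        {
            Title = "WPF shell hosting a WinForms control",
            Content = host,
            Width = 800,
            Height = 600
        };

        new Application().Run(window);
    }
}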
What is the easiest way to upgrade a large C# winforms app to WPF
I work on a large C# application (approximately 450,000 lines of code), and we constantly have problems with desktop heap and GDI handle leaks. WPF solves these issues, but I don't know what is the best way to upgrade (I expect this is going to take a long time). The application has only a few forms but these can contain many different sets of user-controls which are determined programmatically. This is an internal company app so our release cycles are very short (typically a 3-week release cycle). Is there some gradual upgrade path or do we have to take the hit in one massive effort?
[ "You can start by creating a WPF host. \nThen you can use the <WindowsFormHost/> control to host your current application. Then, I suggest creating a library of your new controls in WPF. One at a time, you can create the controls (I suggest making them custom controls, not usercontrols). Within the style for each control, you can start with using the <ElementHost/> control to include the \"old\" windows forms control. Then you can take your time to refactor and recreate each control as complete WPF. \nI think it will still take an initial effort to create your control wrappers and design a WPF host for the application. I am not sure the size of the application and or the complexity of the user controls, so I'm not sure how much effort that would be for you. Relatively speaking, it is significantly less effort and much faster to get you application up and running in WPF this way. \nI wouldn't just do that and forget about it though, as you may run into issues with controls overlaying each other (Windows forms does not play well with WPF, especially with transparencies and other visuals)\nPlease update us on the status of this project, or provide more technical information if you would like more specific guidance. Thanks :)\n", "Do you use a lot of User controls for the pieces? WPF can host winform controls, so you could piecewise bring in parts into the main form.\n", "WPF allows you to embed windows forms user controls into a WPF application, which may help you make the transition in smaller steps.\nTake a look at the WindowsFormsHost class in the WPF documentation.\n", "I assume that you are not just looing for an ElementHost to put your vast Winforms app. That is anyway not a real porting to WPF.\nConsider the answers on this Thread What are the bigger hurdles to overcome migrating from Winforms to WPF?, It will be very helpfull. \n", "There is a very interesting white paper on migrating a .NET 2.0 winform application toward WPF, see Evolving toward a .NET 3.5 application\nPaper abstract:\nIn this paper, I’m going to outline some of the thought processes, decisions and issues we had to face when evolving a Microsoft .NET application from 1.x/2.x to 3.x. I’ll look at how we helped our client to adopt the new technology, and yet still maintained a release schedule acceptable to the business.\n" ]
[ 12, 3, 2, 2, 1 ]
[]
[]
[ ".net", "c#", "upgrade", "wpf" ]
stackoverflow_0000111817_.net_c#_upgrade_wpf.txt
Q: Is it possible to have one appBase served by multiple context paths in Tomcat? Is it possible to have one appBase served up by multiple context paths in Tomcat? I have an application base that recently replaced a second application base. My problem is that a number of users still access the old context. I would like to serve the, now common, application from a single appBase yet accessed via either context. I took a swing at the low-hanging fruit and used a symbolic link in the 'webapps' directory... pointing the old context path at the new context path; it works, but feels "cheesy." And I don't like that a database connection pool is created for both contexts (I would like to minimize the resources for connecting to the database). Anyway, if anyone knows of the "proper" way to do this, I will greatly appreciate it. I'm using Tomcat 6.0.16 - no Apache front end (I suppose URL rewriting would be nice). A: I'm not sure if the answer above will prevent your webapp from loading twice (as you'd have to deploy it to both new and old context paths), but I could be mistaken. Another option would be to have an extremely simple webapp left in the old context that does nothing except have one custom servlet filter declared in the web.xml that re-writes all requests to the new path (essentially simulating apache's rewrite rule behaviour). You'd have to write the filter class yourself, but it would be quite trivial. A: Yes, go into the Tomcat Web Application Manager and scroll down to "Deploy directory or WAR file located on server". For "Context Path (optional):" put in the new context. For "WAR or Directory URL:" put in the same path as your existing app.
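To make the filter idea concrete, here is a hedged sketch of what such a redirect filter might look like (Servlet 2.5-era javax.servlet API; the class name, the "/newapp" target, and the init parameter are all assumptions, and the filter would be mapped to /* in the stub webapp's web.xml):

// Sketch of the "trivial" redirect filter described above.
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class NewContextRedirectFilter implements Filter {

    private String target = "/newapp";   // assumed new context path

    public void init(FilterConfig config) {
        String configured = config.getInitParameter("targetContext");
        if (configured != null) {
            target = configured;
        }
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Preserve the path and query string, swap the context; the chain is
        // intentionally not invoked because everything is redirected.
        String path = request.getRequestURI().substring(request.getContextPath().length());
        String query = request.getQueryString();
        response.sendRedirect(target + path + (query != null ? "?" + query : ""));
    }

    public void destroy() {
    }
}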
Is it possible to have one appBase served by multiple context paths in Tomcat?
Is it possible to have one appBase served up by multiple context paths in Tomcat? I have an application base that recently replaced a second application base. My problem is that a number of users still access the old context. I would like to serve the, now common, application from a single appBase yet accessed via either context. I took a swing at the low-hanging fruit and used a symbolic link in the 'webapps' directory... pointing the old context path at the new context path; it works, but feels "cheesy." And I don't like that a database connection pool is created for both contexts (I would like to minimize the resources for connecting to the database). Anyway, if anyone knows of the "proper" way to do this, I will greatly appreciate it. I'm using Tomcat 6.0.16 - no Apache front end (I suppose URL rewriting would be nice).
[ "I'm not sure if the answer above will prevent your webapp from loading twice (as you'd have to deploy it to both new and old context paths), but I could be mistaken. Another option would be to have an extremely simple webapp left in the old context, that does nothing except have one custom servlet filter declared in the web.xml that re-writes all requests to the new path (essentially simulating apache's rewrite rule behaviour). You'd have to write the filter class yourself but it would be quite trivial.\n", "Yes, go into the Tomcat Web Application Manager and scroll down to \"Deploy directory or WAR file located on server\". For \"Context Path (optional):\" put in the new context. For \"WAR or Directory URL:\" put in the same path as your existing app.\n" ]
[ 2, 1 ]
[]
[]
[ "java", "tomcat" ]
stackoverflow_0000112480_java_tomcat.txt
Q: How to convert from Decimal to T? I've built a wrapper over the NumericUpDown control. The wrapper is generic and can support int? and double?; I would like to write a method that will do the following. public partial class NullableNumericUpDown<T> : UserControl where T : struct { private NumericUpDown numericUpDown; private T? Getvalue() { T? value = numericUpDown.Value as T?; // <-- this is null :) thus my question return value; }} Of course there is no cast between decimal and double? or int?, so I need some way of converting. I would like to avoid switch or if expressions. What would you do? To clarify my question, I've provided more code... A: It's not clear how you're going to use it. If you want a double, create a GetDouble() method; for integers, a GetInteger(). EDIT: OK, now I think I understand your use case. Try this: using System; using System.ComponentModel; static Nullable<T> ConvertFromString<T>(string value) where T:struct { TypeConverter converter = TypeDescriptor.GetConverter(typeof(T)); if (converter != null && !string.IsNullOrEmpty(value)) { try { return (T)converter.ConvertFrom(value); } catch (Exception e) // Unfortunately Converter throws general Exception { return null; } } return null; } ... double? @double = ConvertFromString<double>("1.23"); Console.WriteLine(@double); // prints 1.23 int? @int = ConvertFromString<int>("100"); Console.WriteLine(@int); // prints 100 long? @long = ConvertFromString<int>("1.1"); Console.WriteLine(@long.HasValue); // prints False A: Since this method will always return the result of numericUpDown.Value, you have no cause for the value to be converted to anything other than Decimal. Are you trying to solve a problem you don't have? A: public class FromDecimal<T> where T : struct, IConvertible { public T GetFromDecimal(decimal Source) { T myValue = default(T); myValue = (T) Convert.ChangeType(Source, myValue.GetTypeCode()); return myValue; } } public class FromDecimalTestClass { public void TestMethod() { decimal a = 1.1m; var Inter = new FromDecimal<int>(); int x = Inter.GetFromDecimal(a); int? y = Inter.GetFromDecimal(a); Console.WriteLine("{0} {1}", x, y); var Doubler = new FromDecimal<double>(); double dx = Doubler.GetFromDecimal(a); double? dy = Doubler.GetFromDecimal(a); Console.WriteLine("{0} {1}", dx, dy); } } private T? Getvalue() { T? value = null; if (this.HasValue) value = new FromDecimal<T>().GetFromDecimal(NumericUpDown); return value; }
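As a small hedged sketch (the helper class name is invented and not part of the question's code): the same Convert.ChangeType idea from the last answer can be wrapped once in a standalone helper and reused for any convertible value type, which avoids per-type switch/if blocks.

using System;

static class DecimalConversion
{
    // Convert a decimal (e.g. NumericUpDown.Value) to T without per-type switches.
    // T must be a convertible value type such as int or double.
    public static T? FromDecimal<T>(decimal? source) where T : struct
    {
        if (!source.HasValue)
            return null;

        return (T)Convert.ChangeType(source.Value, typeof(T));
    }
}

// Usage sketch (numericUpDown is the wrapped control from the question):
//   int?    i = DecimalConversion.FromDecimal<int>(numericUpDown.Value);
//   double? d = DecimalConversion.FromDecimal<double>(numericUpDown.Value);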
How to convert from Decimal to T?
I've built a wrapper over the NumericUpDown control. The wrapper is generic and can support int? and double?; I would like to write a method that will do the following. public partial class NullableNumericUpDown<T> : UserControl where T : struct { private NumericUpDown numericUpDown; private T? Getvalue() { T? value = numericUpDown.Value as T?; // <-- this is null :) thus my question return value; }} Of course there is no cast between decimal and double? or int?, so I need some way of converting. I would like to avoid switch or if expressions. What would you do? To clarify my question, I've provided more code...
[ "It's not clear how you gonna use it.\nIf you want double create GetDouble() method, for integers - GetInteger()\nEDIT:\nOk, now I think I understand your use case\nTry this:\nusing System;\nusing System.ComponentModel;\n\nstatic Nullable<T> ConvertFromString<T>(string value) where T:struct\n{\n TypeConverter converter = TypeDescriptor.GetConverter(typeof(T));\n if (converter != null && !string.IsNullOrEmpty(value))\n {\n try\n {\n return (T)converter.ConvertFrom(value);\n }\n catch (Exception e) // Unfortunately Converter throws general Exception\n {\n return null;\n }\n }\n\n return null;\n}\n\n...\n\ndouble? @double = ConvertFromString<double>(\"1.23\");\nConsole.WriteLine(@double); // prints 1.23\n\nint? @int = ConvertFromString<int>(\"100\");\nConsole.WriteLine(@int); // prints 100\n\nlong? @long = ConvertFromString<int>(\"1.1\");\nConsole.WriteLine(@long.HasValue); // prints False\n\n", "Since this method will always return the result of\nnumericUpDown.Value\n\nyou have no cause for the value to be converted to anything other than Decimal. Are you trying to solve a problem you don't have?\n", "public class FromDecimal<T> where T : struct, IConvertible\n{\n public T GetFromDecimal(decimal Source)\n {\n T myValue = default(T);\n myValue = (T) Convert.ChangeType(Source, myValue.GetTypeCode());\n return myValue;\n }\n}\n\npublic class FromDecimalTestClass\n{\n public void TestMethod()\n {\n decimal a = 1.1m;\n var Inter = new FromDecimal<int>();\n int x = Inter.GetFromDecimal(a);\n int? y = Inter.GetFromDecimal(a);\n Console.WriteLine(\"{0} {1}\", x, y);\n\n var Doubler = new FromDecimal<double>();\n double dx = Doubler.GetFromDecimal(a);\n double? dy = Doubler.GetFromDecimal(a);\n Console.WriteLine(\"{0} {1}\", dx, dy);\n }\n}\n\n\nprivate T? Getvalue()\n{\n T? value = null;\n if (this.HasValue)\n value = new FromDecimal<T>().GetFromDecimal(NumericUpDown);\n return value;\n}\n\n" ]
[ 5, 0, 0 ]
[]
[]
[ "c#" ]
stackoverflow_0000110763_c#.txt
Q: All possible uses for the Application.SysCmd Method Is there a place to find all possible uses of the SysCmd method in MS Access? I know Microsoft has a developer reference, but I have found there are many other uses for this method that are not listed here. A: Access itself provides an interface to the full object model of all libraries in use. In the VBE, hit F2 on the keyboard (or, from the VIEW menu, choose OBJECT BROWSER). Type "syscmd" in the search box and you'll get the full details on it. The variable names are verbose enough to explain just about everything you need to know. EDIT: The object browser doesn't give you anything but the SysCmd functions that have been documented by assigning named constants. But the recommendation to familiarize yourself with the object browser is a good one, especially if you right-click on the CLASSES list and choose SHOW HIDDEN MEMBERS -- you can learn a lot from that. A: Here are a few of the "undocumented" functions. I know from experience that you can basically run anything that Windows can do using SysCmd once you understand how to structure the commands from examples like these. http://www.everythingaccess.com/tutorials.asp?ID=Undocumented-SysCmd-Functions From a Google search: syscmd access A: Here's a comprehensive list, including which Access versions each command applies to, translated into English. http://www.excite-webtl.jp/world/english/web/?wb_url=http%3A%2F%2Fwww.f3.dion.ne.jp%2F%7Eelement%2Fmsaccess%2FAcTipsUnDocumentedSysCmd.html&wb_lp=JAEN&wb_dis=2
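For reference, here is a small hedged VBA example of two of the documented SysCmd uses (the status-bar progress meter and the Access version query), as a baseline before exploring the undocumented ones; it is illustrative code, not taken from the links above.

' Illustrative sketch of two documented SysCmd uses in Access VBA.
Public Sub SysCmdDemo()
    Dim i As Long

    ' Show a progress meter in the status bar while doing some work
    SysCmd acSysCmdInitMeter, "Working...", 100
    For i = 1 To 100
        SysCmd acSysCmdUpdateMeter, i
        ' ... real work would go here ...
    Next i
    SysCmd acSysCmdRemoveMeter

    ' Report which version of Access is running
    MsgBox "Access version: " & SysCmd(acSysCmdAccessVer)
End Sub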
All possible uses for the Application.SysCmd Method
Is there a place to find all possible uses of the syscmd method in MS Access? I know Microsoft has a developer reference, but I have found there are many other uses for this method that are not listed here.
[ "Access itself provides an interface to the full object model of all libraries in use. In the VBE, hit F2 on the keyboard (or, from the VIEW menu, choose OBJECT BROWSER). Type \"syscmd\" in the search box and you'll get the full details on it. The variable names are verbose enough to explain just about everything you need to know.\nEDIT: The object browser doesn't give you anything but the SysCmd functions that have been documented by assigning named constants. But the recommendation to familiarize yourself with the object browser is a good one, especially if you right click on the CLASSES list and choose SHOW HIDDEN MEMBERS -- you can learn a lot from that.\n", "Here are a few of the \"undocumented\" functions, I know from experience that you can basically run anything that windows can do using syscmd once you understand how to structure the commands from examples like these.\nhttp://www.everythingaccess.com/tutorials.asp?ID=Undocumented-SysCmd-Functions\nFrom google search: syscmd access\n", "Here's a comprehensive list, including which Access versions each command applies to, translated into English.\nhttp://www.excite-webtl.jp/world/english/web/?wb_url=http%3A%2F%2Fwww.f3.dion.ne.jp%2F%7Eelement%2Fmsaccess%2FAcTipsUnDocumentedSysCmd.html&wb_lp=JAEN&wb_dis=2\n" ]
[ 2, 1, 1 ]
[]
[]
[ "methods", "ms_access", "vba" ]
stackoverflow_0000109345_methods_ms_access_vba.txt
Q: Lots of unnecessary frameworks load into my iPhone app - can I prevent this? There appear to be a lot of unnecessary frameworks loading into my iPhone app. I didn't link against them in Xcode, and I don't need them. When I run "lsof -p" against them on the iPhone, I see these (and others) that I can't explain: CoreVideo AddressBookUI JavaScriptCore MobileSync EAP8021X BluetoothManager MusicLibrary CoreAudio MobileMusicPlayer AddressBook CoreTelephony MobileBluetooth Calendar TelephonyUI WebCore / WebKit MediaPlayer VideoToolbox I wonder whether this is contributing to the slow startup times. My app is very simple. It is basically a Twitter-like posting client. The only multimedia function is to pick an image from the camera or library, and it uses simple NSURL / NSURLConnection functions to post data to a couple of web services. This is a jailbroken 2.1 iPhone with a few apps installed from Cydia. Is this normal? A: Before you go to all of the trouble of trying to stop the OS from loading these frameworks, you should rule out other causes of your slow launch time. First, build a "Hello, World" app and use it as a baseline. A project template app with nothing added should serve well. If that is starting up faster than your own app, then it is something you are doing in your own code. A: This is normal, but that doesn't mean it's ideal. It probably only has a small impact on app startup time, but it'll have a slightly greater impact than that on memory usage. If you'd like this to be improved, the best thing to do is to head on over to Apple's bug reporter and file a bug about it. Attach a copy of your application (the binary, not the source) and they should be able to track things down from there. I'm sure they'd be interested in reports like this.
Lots of unnecessary frameworks load into my iPhone app - can I prevent this?
There appear to be a lot of unnecessary frameworks loading into my iPhone app. I didn't link against them in Xcode, and I don't need them. When I run "lsof -p" against them on the iPhone, I see these (and others) that I can't explain: CoreVideo AddressBookUI JavaScriptCore MobileSync EAP8021X BluetoothManager MusicLibrary CoreAudio MobileMusicPlayer AddressBook CoreTelephony MobileBluetooth Calendar TelephonyUI WebCore / WebKit MediaPlayer VideoToolbox I wonder whether this is contributing to the slow startup times. My app is very simple. It is basically a Twitter-like posting client. The only multimedia function is to pick an image from the camera or library, and it uses simple NSURL / NSURLConnection functions to post data to a couple of web services. This is a jailbroken 2.1 iPhone with a few apps installed from Cydia. Is this normal?
[ "Before you go to all of the trouble of trying to stop the OS from loading these frameworks, you should rule out other causes of your slow launch time.\nFirst, build a \"Hello, World\" app and use it as a baseline. A project template app with nothing added should serve well. If that is starting up faster than your own app, then it is something you are doing in your own code.\n", "This is normal, but that doesn't mean it's ideal. It probably only has a small impact on app startup time, but it'll have a slightly greater impact than that on memory usage.\nIf you'd like this to be improved, the best thing to do is to head on over to Apple's bug reporter and file a bug about it. Attach a copy of your application (the binary, not the source) and they should be able to track things down from there. I'm sure they'd be interested in reports like this.\n" ]
[ 3, 2 ]
[]
[]
[ "iphone", "objective_c" ]
stackoverflow_0000111558_iphone_objective_c.txt
Q: Using GLUT with Visual C++ Express Edition What are the basic steps to compile an OpenGL application using GLUT (OpenGL Utility Toolkit) under Visual C++ Express Edition? A: If you don't have Visual C++ Express Edition (VCEE), download and install VCEE. The default install of Visual C++ Express Edition builds for the .Net platform. We'll need to build for the Windows platform since OpenGL and GLUT are not yet fully supported under .Net. For this we need the Microsoft Platform SDK. (If you're using an older version of VCEE, download and install the Microsoft Platform SDK. Visual C++ Express Edition will need to be configured to build for Windows platform. All these instructions are available here.) If you don't have GLUT, download and unzip Nate Robin's Windows port of GLUT. Add glut.h to your Platform SDK/include/GL/ directory Link the project with glut.lib. (Go to VCEE Project Properties -> Additional Linker Directories and add the directory which has glut.lib. Add glut.dll to the Windows/System32 directory, so that all programs using GLUT can find it at runtime. Your program which uses GLUT or OpenGL should compile under Visual C++ Express Edition now. A: The GLUT port on Nate Robin's site is from 2001 and has some incompatibilities with versions of Visual Studio more recent than that (.NET 2003 and up). The incompatibility manifests itself as errors about redefinition of exit(). If you see this error, there are two possible solutions: Replace the exit() prototype in glut.h with the one in your stdlib.h so that they match. This is probably the best solution. An easier solution is to #define GLUT_DISABLE_ATEXIT_HACK before you #include <gl/glut.h> in your program. (Due credit: I originally saw this advice on the TAMU help desk website.) I've been using approach #1 myself since .NET 2003 came out, and have used the same modified glut.h with VC++ 2003, VC++ 2005 and VC++ 2008. Here's the diff for the glut.h I use which does #1 (but in appropriate #ifdef blocks so that it still works with older versions of Visual Studio): --- c:\naterobbins\glut.h 2000-12-13 00:22:52.000000000 +0900 +++ c:\updated\glut.h 2006-05-23 11:06:10.000000000 +0900 @@ -143,7 +143,12 @@ #if defined(_WIN32) # ifndef GLUT_BUILDING_LIB -extern _CRTIMP void __cdecl exit(int); +/* extern _CRTIMP void __cdecl exit(int); /* Changed for .NET */ +# if _MSC_VER >= 1200 +extern _CRTIMP __declspec(noreturn) void __cdecl exit(int); +# else +extern _CRTIMP void __cdecl exit(int); +# endif # endif #else /* non-Win32 case. */
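Once those steps are done, a minimal smoke-test program can confirm that the header, import library, and DLL are all being found. This is a generic classic-GLUT sketch (nothing project-specific is assumed):

// Minimal GLUT smoke test (classic GLUT 3.7 API) to confirm the include/lib/dll
// setup described above actually compiles, links, and runs under VC++ Express.
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_TRIANGLES);          // immediate-mode triangle, fine for a smoke test
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();

    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(400, 300);
    glutCreateWindow("GLUT under VC++ Express");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}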
Using GLUT with Visual C++ Express Edition
What are the basic steps to compile an OpenGL application using GLUT (OpenGL Utility Toolkit) under Visual C++ Express Edition?
[ "\nIf you don't have Visual C++ Express Edition (VCEE), download and install VCEE.\nThe default install of Visual C++ Express Edition builds for the .Net platform. We'll need to build for the Windows platform since OpenGL and GLUT are not yet fully supported under .Net. For this we need the Microsoft Platform SDK. (If you're using an older version of VCEE, download and install the Microsoft Platform SDK. Visual C++ Express Edition will need to be configured to build for Windows platform. All these instructions are available here.)\nIf you don't have GLUT, download and unzip Nate Robin's Windows port of GLUT.\nAdd glut.h to your Platform SDK/include/GL/ directory\nLink the project with glut.lib. (Go to VCEE Project Properties -> Additional Linker Directories and add the directory which has glut.lib.\nAdd glut.dll to the Windows/System32 directory, so that all programs using GLUT\ncan find it at runtime.\n\nYour program which uses GLUT or OpenGL should compile under Visual C++ Express Edition now.\n", "The GLUT port on Nate Robin's site is from 2001 and has some incompatibilities with versions of Visual Studio more recent than that (.NET 2003 and up). The incompatibility manifests itself as errors about redefinition of exit(). If you see this error, there are two possible solutions:\n\nReplace the exit() prototype in glut.h with the one in your stdlib.h so that they match. This is probably the best solution.\nAn easier solution is to #define GLUT_DISABLE_ATEXIT_HACK before you #include <gl/glut.h> in your program.\n\n(Due credit: I originally saw this advice on the TAMU help desk website.)\nI've been using approach #1 myself since .NET 2003 came out, and have used the same modified glut.h with VC++ 2003, VC++ 2005 and VC++ 2008.\nHere's the diff for the glut.h I use which does #1 (but in appropriate #ifdef blocks so that it still works with older versions of Visual Studio):\n--- c:\\naterobbins\\glut.h 2000-12-13 00:22:52.000000000 +0900\n+++ c:\\updated\\glut.h 2006-05-23 11:06:10.000000000 +0900\n@@ -143,7 +143,12 @@\n\n #if defined(_WIN32)\n # ifndef GLUT_BUILDING_LIB\n-extern _CRTIMP void __cdecl exit(int);\n+/* extern _CRTIMP void __cdecl exit(int); /* Changed for .NET */\n+# if _MSC_VER >= 1200\n+extern _CRTIMP __declspec(noreturn) void __cdecl exit(int);\n+# else\n+extern _CRTIMP void __cdecl exit(int);\n+# endif\n # endif\n #else\n /* non-Win32 case. */\n\n" ]
[ 9, 6 ]
[]
[]
[ "glut", "opengl", "visual_c++", "visual_studio" ]
stackoverflow_0000014264_glut_opengl_visual_c++_visual_studio.txt
Q: Are C++ non-type parameters to (function) templates ordered? I am hosting SpiderMonkey in a current project and would like to have template functions generate some of the simple property get/set methods, eg: template <typename TClassImpl, int32 TClassImpl::*mem> JSBool JS_DLL_CALLBACK WriteProp(JSContext* cx, JSObject* obj, jsval id, jsval* vp) { if (TClassImpl* pImpl = (TClassImpl*)::JS_GetInstancePrivate(cx, obj, &TClassImpl::s_JsClass, NULL)) return ::JS_ValueToInt32(cx, *vp, &(pImpl->*mem)); return JS_FALSE; } Used: ::JSPropertySpec Vec2::s_JsProps[] = { {"x", 1, JSPROP_PERMANENT, &JsWrap::ReadProp<Vec2, &Vec2::x>, &JsWrap::WriteProp<Vec2, &Vec2::x>}, {"y", 2, JSPROP_PERMANENT, &JsWrap::ReadProp<Vec2, &Vec2::y>, &JsWrap::WriteProp<Vec2, &Vec2::y>}, {0} }; This works fine, however, if I add another member type: template <typename TClassImpl, JSObject* TClassImpl::*mem> JSBool JS_DLL_CALLBACK WriteProp(JSContext* cx, JSObject* obj, jsval id, jsval* vp) { if (TClassImpl* pImpl = (TClassImpl*)::JS_GetInstancePrivate(cx, obj, &TClassImpl::s_JsClass, NULL)) return ::JS_ValueToObject(cx, *vp, &(pImpl->*mem)); return JS_FALSE; } Then Visual C++ 9 attempts to use the JSObject* wrapper for int32 members! 1>d:\projects\testing\jswnd\src\main.cpp(93) : error C2440: 'specialization' : cannot convert from 'int32 JsGlobal::Vec2::* ' to 'JSObject *JsGlobal::Vec2::* const ' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>d:\projects\testing\jswnd\src\main.cpp(93) : error C2973: 'JsWrap::ReadProp' : invalid template argument 'int32 JsGlobal::Vec2::* ' 1> d:\projects\testing\jswnd\src\wrap_js.h(64) : see declaration of 'JsWrap::ReadProp' 1>d:\projects\testing\jswnd\src\main.cpp(93) : error C2440: 'initializing' : cannot convert from 'overloaded-function' to 'JSPropertyOp' 1> None of the functions with this name in scope match the target type Surprisingly, parening JSObject* incurs a parse error! (unexpected '('). This is probably a VC++ error (can anyone test that "template void foo() {}" compiles in GCC?). Same error with "typedef JSObject* PObject; ..., PObject TClassImpl::mem>", void, struct Undefined*, and double. Since the function usage is fully instantiated: "&ReadProp", there should be no normal function overload semantics coming into play, it is a defined function at that point and gets priority over template functions. It seems the template ordering is failing here. Vec2 is just: class Vec2 { public: int32 x, y; Vec2(JSContext* cx, JSObject* obj, uintN argc, jsval* argv); static ::JSClass s_JsClass; static ::JSPropertySpec s_JsProps[]; }; JSPropertySpec is described in JSAPI link in OP, taken from header: typedef JSBool (* JS_DLL_CALLBACK JSPropertyOp)(JSContext *cx, JSObject *obj, jsval id, jsval *vp); ... struct JSPropertySpec { const char *name; int8 tinyid; uint8 flags; JSPropertyOp getter; JSPropertyOp setter; }; A: Pretty sure VC++ has "issues" here. Comeau and g++ 4.2 are both happy with the following program: struct X { int i; void* p; }; template<int X::*P> void foo(X* t) { t->*P = 0; } template<void* X::*P> void foo(X* t) { t->*P = 0; } int main() { X x; foo<&X::i>(&x); foo<&X::p>(&x); } VC++ 2008SP1, however, is having none of it. I haven't the time to read through my standard to find out exactly what's what... but I think VC++ is in the wrong here. A: Try changing the JSObject * to another pointer type to see if that reproduces the error. Is JSObject defined at the point of use? 
Also, maybe JSObject* needs to be in parens. A: I am certainly no template guru, but does this boil down to a subtle case of trying to differentiate overloads based purely on the return type? Since C++ doesn't allow overloading of functions based on return type, perhaps the same thing applies to template parameters.
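If the compiler refuses to order the two overloads, one possible workaround (a sketch building only on the JSAPI calls already shown in the question, and changing how the templates are instantiated) is to collapse them into a single template that takes the member's type as an explicit type parameter and lets ordinary function overloading pick the right conversion:

// Overloads chosen by normal overload resolution, which VC++ handles fine.
inline JSBool ConvertJsval(JSContext* cx, jsval* vp, int32* out)
{
    return ::JS_ValueToInt32(cx, *vp, out);
}

inline JSBool ConvertJsval(JSContext* cx, jsval* vp, JSObject** out)
{
    return ::JS_ValueToObject(cx, *vp, out);
}

// Single WriteProp template; the member's type is now an explicit parameter.
template <typename TClassImpl, typename TMember, TMember TClassImpl::*mem>
JSBool JS_DLL_CALLBACK WriteProp(JSContext* cx, JSObject* obj, jsval id, jsval* vp)
{
    if (TClassImpl* pImpl = (TClassImpl*)::JS_GetInstancePrivate(cx, obj,
                                             &TClassImpl::s_JsClass, NULL))
        return ConvertJsval(cx, vp, &(pImpl->*mem));
    return JS_FALSE;
}

// Usage then names the member type explicitly, e.g.:
//   &JsWrap::WriteProp<Vec2, int32, &Vec2::x>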
Are C++ non-type parameters to (function) templates ordered?
I am hosting SpiderMonkey in a current project and would like to have template functions generate some of the simple property get/set methods, eg: template <typename TClassImpl, int32 TClassImpl::*mem> JSBool JS_DLL_CALLBACK WriteProp(JSContext* cx, JSObject* obj, jsval id, jsval* vp) { if (TClassImpl* pImpl = (TClassImpl*)::JS_GetInstancePrivate(cx, obj, &TClassImpl::s_JsClass, NULL)) return ::JS_ValueToInt32(cx, *vp, &(pImpl->*mem)); return JS_FALSE; } Used: ::JSPropertySpec Vec2::s_JsProps[] = { {"x", 1, JSPROP_PERMANENT, &JsWrap::ReadProp<Vec2, &Vec2::x>, &JsWrap::WriteProp<Vec2, &Vec2::x>}, {"y", 2, JSPROP_PERMANENT, &JsWrap::ReadProp<Vec2, &Vec2::y>, &JsWrap::WriteProp<Vec2, &Vec2::y>}, {0} }; This works fine, however, if I add another member type: template <typename TClassImpl, JSObject* TClassImpl::*mem> JSBool JS_DLL_CALLBACK WriteProp(JSContext* cx, JSObject* obj, jsval id, jsval* vp) { if (TClassImpl* pImpl = (TClassImpl*)::JS_GetInstancePrivate(cx, obj, &TClassImpl::s_JsClass, NULL)) return ::JS_ValueToObject(cx, *vp, &(pImpl->*mem)); return JS_FALSE; } Then Visual C++ 9 attempts to use the JSObject* wrapper for int32 members! 1>d:\projects\testing\jswnd\src\main.cpp(93) : error C2440: 'specialization' : cannot convert from 'int32 JsGlobal::Vec2::* ' to 'JSObject *JsGlobal::Vec2::* const ' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1>d:\projects\testing\jswnd\src\main.cpp(93) : error C2973: 'JsWrap::ReadProp' : invalid template argument 'int32 JsGlobal::Vec2::* ' 1> d:\projects\testing\jswnd\src\wrap_js.h(64) : see declaration of 'JsWrap::ReadProp' 1>d:\projects\testing\jswnd\src\main.cpp(93) : error C2440: 'initializing' : cannot convert from 'overloaded-function' to 'JSPropertyOp' 1> None of the functions with this name in scope match the target type Surprisingly, parening JSObject* incurs a parse error! (unexpected '('). This is probably a VC++ error (can anyone test that "template void foo() {}" compiles in GCC?). Same error with "typedef JSObject* PObject; ..., PObject TClassImpl::mem>", void, struct Undefined*, and double. Since the function usage is fully instantiated: "&ReadProp", there should be no normal function overload semantics coming into play, it is a defined function at that point and gets priority over template functions. It seems the template ordering is failing here. Vec2 is just: class Vec2 { public: int32 x, y; Vec2(JSContext* cx, JSObject* obj, uintN argc, jsval* argv); static ::JSClass s_JsClass; static ::JSPropertySpec s_JsProps[]; }; JSPropertySpec is described in JSAPI link in OP, taken from header: typedef JSBool (* JS_DLL_CALLBACK JSPropertyOp)(JSContext *cx, JSObject *obj, jsval id, jsval *vp); ... struct JSPropertySpec { const char *name; int8 tinyid; uint8 flags; JSPropertyOp getter; JSPropertyOp setter; };
[ "Pretty sure VC++ has \"issues\" here. Comeau and g++ 4.2 are both happy with the following program:\nstruct X\n{\n int i;\n void* p;\n};\n\ntemplate<int X::*P>\nvoid foo(X* t)\n{\n t->*P = 0;\n}\n\ntemplate<void* X::*P>\nvoid foo(X* t)\n{\n t->*P = 0;\n}\n\nint main()\n{\n X x;\n foo<&X::i>(&x);\n foo<&X::p>(&x);\n}\n\nVC++ 2008SP1, however, is having none of it.\nI haven't the time to read through my standard to find out exactly what's what... but I think VC++ is in the wrong here.\n", "Try changing the JSObject * to another pointer type to see if that reproduces the error. Is JSObject defined at the point of use? Also, maybe JSObject* needs to be in parens.\n", "I am certainly no template guru, but does this boil down to a subtle case of trying to differentiate overloads based purely on the return type?\nSince C++ doesn't allow overloading of functions based on return type, perhaps the same thing applies to template parameters.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "c++", "overloading", "templates" ]
stackoverflow_0000112612_c++_overloading_templates.txt
Q: source control with VB2005 Express Can anyone suggest a good source control system that interfaces with VB2005 Express? As the Express editions of Visual Studio do not allow add-ins, does this mean that I will not be able to integrate source control into the IDE? I'm used to the check-in/check-out process of SourceSafe integrated into VB6. Can anyone recommend TortoiseSVN as an alternative? A: TortoiseSVN is a good choice. Although it won't integrate into the IDE (because of the plug-in problem you mentioned), it's really solid in the Explorer right-button menu. Also consider Vault from SourceGear. If you're used to SourceSafe, Vault will be easier to learn; Vault was specifically designed for ex-SourceSafe users. A: Take a look at Perforce. It is lightning fast, rock solid, simple to use and configure, and has features to support pretty much any source control scenario. If you are working on your own (which seems likely, given that you are using VB 2k5 Express), it is free for up to two users. If / when you switch to VS Pro, it has very good integration, and on its own it has several excellent clients and Windows Explorer integration. A: I would recommend using Tortoise and doing version control through Windows Explorer. I actually prefer that to Ankh in VS2008. A: I use TortoiseSVN and Windows Explorer for all my development projects and believe it works great. I started with SourceSafe, but when I changed jobs I went to an SVN shop and have now incorporated it into my own development projects. You can also use Source Safe without integration. You use Source Safe to check in/out files in a folder and then manage it outside the IDE. While this isn't as "simple," it may work just fine for certain projects. I use a hosted SVN provider; you may want to check them out: Hosted-Projects. A: No, Source Control systems can't be integrated with the VS Express IDEs by design. If you want to continue using your existing VSS, your best option is to upgrade to Visual Studio Standard. Otherwise, check out TortoiseSVN. Here's a good quick start: http://www.polymorphicpodcast.com/shows/subversion/
source control with VB2005 Express
Can anyone suggest a good source control system that interfaces with VB2005 Express? As the Express editions of Visual Studio do not allow add-ins does this mean that I will not be able to integrate source control into the IDE? I'm used to the check-in/check-out process of SourceSafe integrated into VB6. Can anyone recommend TortoiseSVN as an alternative?
[ "TortoiseSVN is a good choice. Although it won't integrate into the IDE (because of the plug-in problem you mentioned), it's really solid in the Explorer right-button menu.\nAlso consider Vault from SourceGear. If you're used to SourceSafe, Vault will be easier to learn; Vault was specifically designed for ex-SourceSafe users.\n", "Take a look at Perforce. It is lightning fast, rock solid, simple to use and configure, and has features to support pretty much any source control scenario.\nIf you are working on your own (which seems likely, given that you are using VB 2k5 Express), it is free for up to two users. If / when you switch to VS Pro, it has very good integration, and on its own it has several excellent clients and Windows Explorer integration.\n", "I would recommend using Tortoise and do version control through Windows Explorer.\nI actually prefer that to Ankh in VS2008.\n", "I use TortoiseSVN and windows explorer for all my development projects and believe it works great. I started with SourceSafe, but when I changed jobs I went to an SVN shop and have now incorporated it into my own development projects. You can also use Source Safe without integration. You use Source Safe to check in/out files in a folder and then manage it outside the IDE. While this isn't as \"simple\" it may work just fine for certain projects. I use a hosted SVN provider, you may want to check them out: Hosted-Projects.\n", "No, Source Control systems can't be integrated with the VS Express IDEs by design.\nIf you want to continue using your existing VSS, you're best option is to upgrade to Visual Studio Standard. Otherwise, check out TortoiseSVN. Here's a good quick start:\nhttp://www.polymorphicpodcast.com/shows/subversion/\n" ]
[ 4, 2, 1, 1, 0 ]
[]
[]
[ "vb.net", "version_control" ]
stackoverflow_0000111910_vb.net_version_control.txt
Q: Is it feasible to introduce Test Driven Development (TDD) in a mature project? Say we have realized the value of TDD too late. The project has already matured, and a good deal of customers have started using it. Say the automated testing used is mostly functional/system testing, and there is a good deal of automated GUI testing. Say we have new feature requests, and new bug reports (!). So a good deal of development still goes on. Note there would already be plenty of business objects with little or no unit testing. There is too much collaboration/relationship between them, which again is tested only through higher-level functional/system testing. No integration testing per se. Big databases are in place with plenty of tables, views, etc. Just to instantiate a single business object already takes a good deal of database round trips. How can we introduce TDD at this stage? Mocking seems to be the way to go. But the amount of mocking we need to do here seems like too much. It sounds like an elaborate infrastructure needs to be developed for the mocking system to work with the existing stuff (BOs, databases, etc.). Does that mean TDD is a suitable methodology only when starting from scratch? I am interested to hear about feasible strategies to introduce TDD in an already mature product. A: Creating a complex mocking infrastructure will probably just hide the problems in your code. I would recommend that you start with integration tests, with a test database, around the areas of the code base that you plan to change. Once you have enough tests to ensure that you won't break anything if you make a change, you can start to refactor the code to make it more testable. See also Michael Feathers' excellent book Working Effectively with Legacy Code; it's a must-read for anyone thinking of introducing TDD into a legacy code base. A: I think it's completely feasible to introduce TDD into an existing application; in fact, I have recently done it myself. It is easiest to code new functionality in a TDD way and restructure the existing code to accommodate this. This way you start off with a small section of your code tested, but the effects start to spread through the whole code base. If you've got a bug, then write a unit test to reproduce it, refactoring the code as necessary (unless the effort is really not worth it). Personally, I don't think there's any need to go crazy and try to retrofit tests into the existing system, as that can be very tedious without a great amount of benefit. In summary, start small and your project will become more and more test infected. A: Yes you can. From your description the project is in good shape - a solid amount of functional test automation is the way to go! In some aspects it's even more useful than unit testing. Remember that TDD != unit testing; it's all about short iterations and solid acceptance criteria. Please remember that having an existing and accepted project actually makes testing easier - a working application is the best requirements specification. So you're in a better position than someone who just has a scrap of paper to work with. Just start working on your new requirements/bug fixes with TDD. Remember that there will be an overhead associated with switching the methodology (make sure your clients are aware of this!) and probably expect a good deal of reluctance from the team members who are used to the 'good old ways'. Don't touch the old things unless you need to.
If you have an enhancement request which will affect existing stuff, then factor in extra time for the extra set-up work. Personally I don't see much value in introducing a complex infrastructure for mock-ups - surely there is a way to achieve the same results in a more lightweight way, but it obviously depends on your circumstances. A: One tool that can help you test legacy code (assuming you can't or won't have the time to refactor it) is Typemock Isolator: Typemock.com It allows injecting dependencies into existing code without needing to extract interfaces and such, because it does not use standard reflection techniques (dynamic proxy, etc.) but uses the profiler APIs instead. It's been used to test apps that rely on SharePoint, HTTPContext and other problematic areas. I recommend you take a look. (I work as a dev in that company, but it is the only tool that does not force you to refactor existing legacy code, saving you time and money.) I would also highly recommend "Working Effectively with Legacy Code" for more techniques. Roy A: Yes you can. Don't do it all at once, but introduce just what you need to test a module whenever you touch it. You can also start with more high-level acceptance tests and work your way down from there (take a look at FitNesse for this). A: I would start with some basic integration tests. This will get buy-in from the rest of the staff. Then start to separate the parts of your code which have dependencies. Work towards using Dependency Injection, as it will make your code much more testable. Treat bugs as an opportunity to write testable code.
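To make the "separate the dependencies / use Dependency Injection" advice concrete, here is a small hedged C# sketch; every name in it (OrderService, IOrderRepository, and so on) is invented for illustration and not taken from the project under discussion.

// Illustrative sketch of introducing a seam so a test can avoid the database.
using System;
using System.Collections.Generic;

public interface IOrderRepository           // the seam extracted from the old data-access code
{
    IEnumerable<decimal> GetOrderTotals(int customerId);
}

public class OrderService
{
    private readonly IOrderRepository repository;

    // The real repository (hitting the big database) is injected in production;
    // a fake is injected in tests, so no database round trips are needed.
    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public decimal TotalSpentBy(int customerId)
    {
        decimal total = 0m;
        foreach (var amount in repository.GetOrderTotals(customerId))
            total += amount;
        return total;
    }
}

// In a unit test, a hand-rolled fake stands in for the database:
public class FakeOrderRepository : IOrderRepository
{
    public IEnumerable<decimal> GetOrderTotals(int customerId)
    {
        return new[] { 10m, 20m, 12.5m };
    }
}
// new OrderService(new FakeOrderRepository()).TotalSpentBy(42) == 42.5m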
Is it feasible to introduce Test Driven Development (TDD) in a mature project?
Say we have realized the value of TDD too late. The project has already matured, and a good deal of customers have started using it. Say the automated testing used is mostly functional/system testing, and there is a good deal of automated GUI testing. Say we have new feature requests, and new bug reports (!). So a good deal of development still goes on. Note there would already be plenty of business objects with little or no unit testing. There is too much collaboration/relationship between them, which again is tested only through higher-level functional/system testing. No integration testing per se. Big databases are in place with plenty of tables, views, etc. Just to instantiate a single business object already takes a good deal of database round trips. How can we introduce TDD at this stage? Mocking seems to be the way to go. But the amount of mocking we need to do here seems like too much. It sounds like an elaborate infrastructure needs to be developed for the mocking system to work with the existing stuff (BOs, databases, etc.). Does that mean TDD is a suitable methodology only when starting from scratch? I am interested to hear about feasible strategies to introduce TDD in an already mature product.
[ "Creating a complex mocking infrastructure will probably just hide the problems in your code. I would recommend that you start with integration tests, with a test database, around the areas of the code base that you plan to change. Once you have enough tests to ensure that you won't break anything if you make a change, you can start to refactor the code to make it more testable. \nSe also Michael Feathers excellent book Working effectively with legacy code, its a must read for anyone thinking of introducing TDD into a legacy code base.\n", "I think its completely feasible to introduce TDD into an existing application, in fact I have recently done it myself.\nIt is easiest to code new functionality in a TDD way and restructuring the existing code to accommodate this. This way you start of with a small section of your code tested but the effects start to spread through the whole code base. \nIf you've got a bug, then write a unit test to reproduce it, refactoring the code as necessary (unless the effort is really not worth it).\nPersonally, I don't think there's any need to go crazy and try and retrofit tests into the existing system as that can be very tedious without a great amount of benefit.\nIn summary, start small and your project will become more and more test infected.\n", "Yes you can. From your description the project is in a good shape - solid amount of functional tests automation is a way to go! In some aspects its even more useful than unit testing. Remember that TDD != unit testing, it's all about short iterations and solid acceptance criteria.\nPlease remember that having an existing and accepted project actually makes testing easier - working application is the best requirements specification. So you're in a better position than someone who just have a scrap of paper to work with.\nJust start working on your new requirements/bug fixes with an TDD. Remember that there will be an overhead associated with switching the methodology (make sure your clients are aware of this!) and probably expect a good deal of reluctance from the team members who are used to the 'good old ways'.\nDon't touch the old things unless you need to. If you will have an enhancement request which will affect existing stuff then factor in extra time for doing the extra set-up things.\nPersonally I don't see much value in introducing a complex infrastructure for mock-ups - surely there is a way to achieve the same results in a lightweight mode but it obviously depends on your circumstances\n", "One tool that can help you testing legacy code (assuming you can't\\won't have the time to refactor it, is Typemock Isolator: Typemock.com\nIt allows injecting dependencies into existing code without needing to extract interfaces and such because it does not use standard reflection techniques (dynamic proxy etc..) but uses the profiler APIs instead.\nIt's been used to test apps that rely on sharepoint, HTTPContext and other problematic areas.\nI recommend you take a look.\n(I work as a dev in that company, but it is the only tool that does not force you to refactor existing legacy code, saving you time and money)\nI would also highly recommend \"Working effectively with legacy code\" for more techniques.\nRoy\n", "Yes you can. Don't do it all at once, but introduce just what you need to test a module whenever you touch it.\nYou can also start with more high level acceptance tests and work your way down from there (take a look at Fitnesse for this).\n", "I would start with some basic integration tests. 
This will get buy-in from the rest of the staff. Then start to separate the parts of your code which have dependencies. Work towards using Dependency Injection as it will make your code much more testable. Treat bugs as an opportunity to write testable code.\n" ]
[ 27, 16, 9, 5, 3, 2 ]
[]
[]
[ "mocking", "tdd", "unit_testing" ]
stackoverflow_0000107919_mocking_tdd_unit_testing.txt
Q: PHP with AWASP framework Who here is using WASP (http://wasp.sourceforge.net/content/) to in real world applications? What impressions do you have? Good? Bad? If you can provide any inputs, how good it is comparing with rails for example. I'm really looking for MVC frameworks for PHP Update: This comparation I found is good. A: I downloaded it a while ago and tried it out, but as the documentation is pretty terrible at the moment (consisting of some auto-generated 'documentation' that was useless) I gave up pretty quickly. I think one of the most important things to have in a framework is clear, thorough documentation - if you have to spend time digging through the code of the framework to find out if a class you want exists, the point of using a framework is lost. WASP does not seem to be ready for production environments just yet, as even their website admits that its not ready for enterprise applications. If you're looking for a PHP framework I would recommend CodeIgniter, which has excellent documentation and a helpful community, or Zend, which is pretty mature. A: CakePHP is a great framework with great documentation. Symfony lost me with all the configuration, at the time I was new to both frameworks and CakePHP stood out as being the best for me and I was able to pick it up very quickly A: Hey Victor, that comparison is pretty badly out of date. It was done about 1.5 years ago and, at least in the case of the Zend Framework that I use regularly,things have changed greatly since then. I'd say that comparison is so old as to be useless. A: Check out symfony, too. Free software, top-notch documentation. A: QCodo is great - amazing code generation, full MVC support. The strongest object-relational mapping I've seen; their scaffolding model is so much stronger than CakePHP and Zend... Plus, it's beautifully extensible with community controls. I've been using it for large projects for the last two years, it's great! A: Have you tried CodeIgniter? I tested CakePHP but it's too much a la Rails style and i didn't like it. CodeIgniter gives you more freedom to do whatever you whant.
PHP with AWASP framework
Who here is using WASP (http://wasp.sourceforge.net/content/) in real-world applications? What impressions do you have? Good? Bad? If you can provide any input, how does it compare with Rails, for example? I'm really looking for MVC frameworks for PHP. Update: This comparison I found is good.
[ "I downloaded it a while ago and tried it out, but as the documentation is pretty terrible at the moment (consisting of some auto-generated 'documentation' that was useless) I gave up pretty quickly. I think one of the most important things to have in a framework is clear, thorough documentation - if you have to spend time digging through the code of the framework to find out if a class you want exists, the point of using a framework is lost. WASP does not seem to be ready for production environments just yet, as even their website admits that its not ready for enterprise applications.\nIf you're looking for a PHP framework I would recommend CodeIgniter, which has excellent documentation and a helpful community, or Zend, which is pretty mature.\n", "CakePHP is a great framework with great documentation. Symfony lost me with all the configuration, at the time I was new to both frameworks and CakePHP stood out as being the best for me and I was able to pick it up very quickly\n", "Hey Victor, that comparison is pretty badly out of date. It was done about 1.5 years ago and, at least in the case of the Zend Framework that I use regularly,things have changed greatly since then. I'd say that comparison is so old as to be useless.\n", "Check out symfony, too. Free software, top-notch documentation.\n", "QCodo is great - amazing code generation, full MVC support. The strongest object-relational mapping I've seen; their scaffolding model is so much stronger than CakePHP and Zend... Plus, it's beautifully extensible with community controls. I've been using it for large projects for the last two years, it's great!\n", "Have you tried CodeIgniter?\nI tested CakePHP but it's too much a la Rails style and i didn't like it.\nCodeIgniter gives you more freedom to do whatever you whant.\n" ]
[ 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "frameworks", "model_view_controller", "php" ]
stackoverflow_0000109858_frameworks_model_view_controller_php.txt
Q: Combining Scriptaculous and jQuery in a Rails application I've got the following situation: a Rails application that makes use of RJS / Scriptaculous to offer AJAX functionality, and a lot of nice JavaScript written using jQuery (for a separate application). I want to combine the two and use my jQuery-based functionality in my Rails application, but I'm worried about jQuery and Scriptaculous clashing (they both define the $() function, etc). What is my easiest option to bring the two together? Thanks! A: jQuery.noConflict(); Then use jQuery instead of $ to refer to jQuery. e.g., jQuery('div.foo').doSomething() If you need to adapt jQuery code that uses $, you can surround it with this: (function($) { ...your code here... })(jQuery); A: I believe it's jQuery.noConflict(). You can call it standalone like this: jQuery.noConflict(); jQuery('div').hide(); Or you can assign it to another variable of your choosing: var $j = jQuery.noConflict(); $j('div').hide(); Or you can keep using jQuery's $ function inside a block like this: jQuery.noConflict(); // Put all your code in your document ready area jQuery(document).ready(function($){ // Do jQuery stuff using $ $("div").hide(); }); // Use Prototype with $(...), etc. $('someid').hide(); For more information, see Using jQuery with Other Libraries in the jQuery documentation. A: jRails is a drop-in replacement for scriptaculous/prototype in Rails using the jQuery library; it does exactly what you're looking for.
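For illustration, one way this is often wired up in a Rails 2.x layout (a hedged sketch; the 'jquery' file name and the $j alias are assumptions): load Prototype/Scriptaculous via the :defaults bundle, load jQuery after them, and immediately hand $ back with noConflict.

<%# Prototype/Scriptaculous first, then jQuery (file names are assumptions) %>
<%= javascript_include_tag :defaults %>
<%= javascript_include_tag 'jquery' %>
<script type="text/javascript">
  // Give jQuery its own name and return $ to Prototype/Scriptaculous
  var $j = jQuery.noConflict();
  // From here on: $j('div.foo').hide();  while  $('some_id') still means Prototype
</script>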
Combining Scriptaculous and jQuery in a Rails application
I've got the following situation A rails application that makes use of rjs / Scriptaculous to offer AJAX functionality Lot of nice javascript written using jQuery (for a separate application) I want to combine the two and use my jQuery based functionality in my Rails application, but I'm worried about jQuery and Scriptaculous clashing (they both define the $() function, etc). What is my easiest option to bring the two together? Thanks!
[ "jQuery.noConflict();\n\nThen use jQuery instead of $ to refer to jQuery. e.g.,\njQuery('div.foo').doSomething()\n\nIf you need to adapt jQuery code that uses $, you can surround it with this:\n(function($) {\n...your code here...\n})(jQuery);\n\n", "I believe it's jQuery.noConflict().\nYou can call it standalone like this:\njQuery.noConflict();\n\njQuery('div').hide();\n\nOr you can assign it to another variable of your choosing:\nvar $j = jQuery.noConflict();\n\n$j('div').hide();\n\nOr you can keep using jQuery's $ function inside a block like this:\njQuery.noConflict();\n\n // Put all your code in your document ready area\n jQuery(document).ready(function($){\n // Do jQuery stuff using $\n $(\"div\").hide();\n });\n// Use Prototype with $(...), etc.\n$('someid').hide();\n\nFor more information, see Using jQuery with Other Libraries in the jQuery documentation.\n", "jRails is a drop-in replacement for scriptaculous/prototype in Rails using the jQuery library, it does exactly what you're looking for.\n" ]
[ 15, 2, 1 ]
[]
[]
[ "javascript", "jquery", "ruby", "ruby_on_rails", "scriptaculous" ]
stackoverflow_0000112721_javascript_jquery_ruby_ruby_on_rails_scriptaculous.txt
Q: How can I use a traditional HTML id attribute with an ASP.net runat='server' tag? I am refactoring some CSS on a website. I have been working on, and noticed the absence of traditional HTML IDs in the code. There is heavy use of CssClass='&hellip;', or sometimes just class='&hellip;', but I can't seem to find a way to say id='…' and not have it swapped out by the server. Here is an example: <span id='position_title' runat='server'>Manager</span> When the response comes back from the server, I get: <span id='$aspnet$crap$here$position_title'>Manager</span> Any help here? A: Use jQuery to select the element: $("span[id$='position_title']").... jQuery's flexible selectors, especially its 'begins with'/'ends with selectors' (the 'end with' selector is shown above, provide a great way around ASP.NET's dom id munge. rp A: The 'crap' placed in front of the id is related to the container(s) of the control and there is no way (as far as I know) to prevent this behavior, other than not putting it in any container. If you need to refer to the id in script, you can use the ClientID of the control, like so: <script type="text/javascript"> var theSpan = document.getElementById('<%= position_title.ClientID %>'); </script> A: You can embed your CSS within the page, sprinkled with some server tags to overcome the problem. At runtime the code blocks will be replaced with the ASP.NET generated IDs. For example: [style type="text/css"] #<%= AspNetId.ClientID %> { ... styles go here... } [/style] [script type="text/javascript"] document.getElementById("<%= AspNetId.ClientID %>"); [/script] You could go a bit further and have some code files that generate CSS too, if you wanted to have your CSS contained within a separate file. Also, I may be jumping the gun a bit here, but you could use the ASP.NET MVC stuff (not yet officially released as of this writing) which gets away from the Web Forms and gives you total control over the markup generated. A: Most of the fixes suggested her are overkill for a very simple problem. Just have separate divs and spans that you target with CSS. Don't target the ASP.NET controls directly if you want to use IDs. <span id="FooContainer"> <span runat="server" id="Foo" > ...... <span> </span> A: .Net will always replace your id values with some mangled (every so slightly predictable, but still don't count on it) value. Do you really NEED to have that id runat=server? If you don't put in runat=server, then it won't mangle it... ADDED: Like leddt said, you can reference the span (or any runat=server with an id) by using ClientID, but I don't think that works in CSS. But I think that you have a larger problem if your CSS is using ID based selectors. You can't re-use an ID. You can't have multiple items on the same page with the same ID. .Net will complain about that. So, with that in mind, is your job of refactoring the CSS getting to be a bit larger in scope? A: Ok, I guess the jury is out on this one. @leddt, I already knew that the 'crap' was the containers surrounding it, but I thought maybe Microsoft would have left a backdoor to leave the ID alone. Regenerating CSS files on every use by including ClientIDs would be a horrible idea. I'm either left with using classes everywhere, or some garbled looking IDs hardcoded in the css. A: @Matt Dawdy: There are some great uses for IDs in CSS, primarily when you want to style an element that you know only appears once in either the website or a page, such as a logout button or masthead. 
A: If you are accessing the span or whatever tag is giving you problems from the C# or VB code behind, then the runat="server" has to remain and you should use instead <span class="some_class" id="someID">. If you are not accessing the tag in the code behind, then remove the runat="server". A: The best thing to do here is give it a unique class name. A: You're likely going to have to remove the runat="server" from the span and then place a within the span so you can stylize the span and still have the dynamic internal content. Not an elegant or easy solution (and it requires a recompile), but it works. A: I don't know of a way to stop .NET from mangling the ID, but I can think of a couple ways to work around it: 1 - Nest spans, one with runat="server", one without: <style type="text/css"> #position_title { // Whatever } <span id="position_titleserver" runat="server"><span id="position_title">Manager</span></span> 2 - As Joel Coehoorn suggested, use a unique class name instead. Already using the class for something? Doesn't matter, you can use more than 1! This... <style type="text/css"> .position_title { font-weight: bold; } .foo { color: red; } .bar { font-style: italic; } </style> <span id="thiswillbemangled" class="foo bar position_title" runat="server">Manager</span> ...will display this: Manager 3 - Write a Javascript function to fix the IDs after the page loads function fixIds() { var tagList = document.getElementsByTagName("*"); for(var i=0;i<tagList.length;i++) { if(tagList[i].id) { if(tagList[i].id.indexOf('$') > -1) { var tempArray = tagList[i].id.split("$"); tagList[i].id = tempArray[tempArray.length - 1]; } } } } A: If you're fearing classitus, try using an id on a parent or child selector that contains the element that you wish to style. This parent element should NOT have the runat server applied. Simply put, it's a good idea to plan your structural containers to not run code behind (ie. no runat), that way you can access major portions of your application/site using non-altered IDs. If it's too late to do so, add a wrapper div/span or use the class solution as mentioned. A: Is there a particular reason that you want the controls to be runat="server"? If so, I second the use of < asp : Literal > . . . It should do the job for you as you will still be able to edit the data in code behind. A: I usually make my own control that extends WebControl or HtmlGenericControl, and I override ClientID - returning the ID property instead of the generated ClientID. This will cause any transformation that .NET does to the ClientID because of naming containers to be reverted back to the original id that you specified in tag markup. This is great if you are using client side libraries like jQuery and need predictable unique ids, but tough if you rely on viewstate for anything server-side.
How can I use a traditional HTML id attribute with an ASP.net runat='server' tag?
I am refactoring some CSS on a website I have been working on, and noticed the absence of traditional HTML IDs in the code. There is heavy use of CssClass='…', or sometimes just class='…', but I can't seem to find a way to say id='…' and not have it swapped out by the server. Here is an example: <span id='position_title' runat='server'>Manager</span> When the response comes back from the server, I get: <span id='$aspnet$crap$here$position_title'>Manager</span> Any help here?
[ "Use jQuery to select the element: \n$(\"span[id$='position_title']\")....\n\njQuery's flexible selectors, especially its 'begins with'/'ends with selectors' (the 'end with' selector is shown above, provide a great way around ASP.NET's dom id munge.\nrp\n", "The 'crap' placed in front of the id is related to the container(s) of the control and there is no way (as far as I know) to prevent this behavior, other than not putting it in any container. \nIf you need to refer to the id in script, you can use the ClientID of the control, like so:\n<script type=\"text/javascript\">\n var theSpan = document.getElementById('<%= position_title.ClientID %>');\n</script>\n\n", "You can embed your CSS within the page, sprinkled with some server tags to overcome the problem. At runtime the code blocks will be replaced with the ASP.NET generated IDs.\nFor example:\n[style type=\"text/css\"]\n #<%= AspNetId.ClientID %> {\n ... styles go here...\n }\n[/style]\n\n[script type=\"text/javascript\"]\n document.getElementById(\"<%= AspNetId.ClientID %>\");\n[/script]\n\nYou could go a bit further and have some code files that generate CSS too, if you wanted to have your CSS contained within a separate file.\nAlso, I may be jumping the gun a bit here, but you could use the ASP.NET MVC stuff (not yet officially released as of this writing) which gets away from the Web Forms and gives you total control over the markup generated.\n", "Most of the fixes suggested her are overkill for a very simple problem. Just have separate divs and spans that you target with CSS. Don't target the ASP.NET controls directly if you want to use IDs.\n <span id=\"FooContainer\">\n <span runat=\"server\" id=\"Foo\" >\n ......\n <span>\n </span>\n\n", ".Net will always replace your id values with some mangled (every so slightly predictable, but still don't count on it) value. Do you really NEED to have that id runat=server? If you don't put in runat=server, then it won't mangle it...\nADDED:\nLike leddt said, you can reference the span (or any runat=server with an id) by using ClientID, but I don't think that works in CSS.\nBut I think that you have a larger problem if your CSS is using ID based selectors. You can't re-use an ID. You can't have multiple items on the same page with the same ID. .Net will complain about that.\nSo, with that in mind, is your job of refactoring the CSS getting to be a bit larger in scope?\n", "Ok, I guess the jury is out on this one. \n@leddt, I already knew that the 'crap' was the containers surrounding it, but I thought maybe Microsoft would have left a backdoor to leave the ID alone. Regenerating CSS files on every use by including ClientIDs would be a horrible idea. \nI'm either left with using classes everywhere, or some garbled looking IDs hardcoded in the css.\n", "@Matt Dawdy: There are some great uses for IDs in CSS, primarily when you want to style an element that you know only appears once in either the website or a page, such as a logout button or masthead.\n", "If you are accessing the span or whatever tag is giving you problems from the C# or VB code behind, then the runat=\"server\" has to remain and you should use instead <span class=\"some_class\" id=\"someID\">. If you are not accessing the tag in the code behind, then remove the runat=\"server\".\n", "The best thing to do here is give it a unique class name. 
\n", "You're likely going to have to remove the runat=\"server\" from the span and then place a within the span so you can stylize the span and still have the dynamic internal content.\nNot an elegant or easy solution (and it requires a recompile), but it works.\n", "I don't know of a way to stop .NET from mangling the ID, but I can think of a couple ways to work around it:\n1 - Nest spans, one with runat=\"server\", one without:\n<style type=\"text/css\">\n#position_title { // Whatever\n}\n<span id=\"position_titleserver\" runat=\"server\"><span id=\"position_title\">Manager</span></span>\n\n2 - As Joel Coehoorn suggested, use a unique class name instead. Already using the class for something? Doesn't matter, you can use more than 1! This...\n<style type=\"text/css\">\n.position_title { font-weight: bold; }\n.foo { color: red; }\n.bar { font-style: italic; }\n</style>\n<span id=\"thiswillbemangled\" class=\"foo bar position_title\" runat=\"server\">Manager</span>\n\n...will display this:\nManager\n3 - Write a Javascript function to fix the IDs after the page loads\nfunction fixIds()\n{\n var tagList = document.getElementsByTagName(\"*\");\n for(var i=0;i<tagList.length;i++)\n {\n if(tagList[i].id)\n {\n if(tagList[i].id.indexOf('$') > -1)\n {\n var tempArray = tagList[i].id.split(\"$\");\n tagList[i].id = tempArray[tempArray.length - 1];\n }\n }\n }\n}\n\n", "If you're fearing classitus, try using an id on a parent or child selector that contains the element that you wish to style. This parent element should NOT have the runat server applied. Simply put, it's a good idea to plan your structural containers to not run code behind (ie. no runat), that way you can access major portions of your application/site using non-altered IDs. If it's too late to do so, add a wrapper div/span or use the class solution as mentioned.\n", "Is there a particular reason that you want the controls to be runat=\"server\"?\nIf so, I second the use of < asp : Literal > . . . \nIt should do the job for you as you will still be able to edit the data in code behind.\n", "I usually make my own control that extends WebControl or HtmlGenericControl, and I override ClientID - returning the ID property instead of the generated ClientID. This will cause any transformation that .NET does to the ClientID because of naming containers to be reverted back to the original id that you specified in tag markup. This is great if you are using client side libraries like jQuery and need predictable unique ids, but tough if you rely on viewstate for anything server-side.\n" ]
[ 6, 5, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "asp.net", "css" ]
stackoverflow_0000065039_asp.net_css.txt
Q: What are the advantages and disadvantages of the Session Façade Core J2EE Pattern? What are the advantages and disadvantages of the Session Façade Core J2EE Pattern? What are the assumptions behind it? Are these assumptions valid in a particular environment? A: Session Facade is a fantastic pattern - it is really a specific version of the Business Facade pattern. The idea is to tie up business functionality into discrete bundles - such as TransferMoney(), Withdraw(), Deposit()... So that your UI code is accessing things in terms of business operations instead of low level data access or other details that it shouldn't have to be concerned with. Specifically with the Session Facade - you use a Session EJB to act as the business facade - which is nice cause then you can take advantage of all the J2EE services (authentication/authorization, transactions, etc)... Hope that helps... A: The main advantage of the Session Facade pattern is that you can divide up a J2EE application into logical groups by business functionality. A Session Facade will be called by a POJO from the UI (i.e. a Business Delegate), and have references to appropriate Data Access Objects. E.g. a PersonSessionFacade would be called by the PersonBusinessDelegate and then it could call the PersonDAO. The methods on the PersonSessionFacade will, at the very least, follow the CRUD pattern (Create, Retrieve, Update and Delete). Typically, most Session Facades are implemented as stateless session EJBs. Or if you're in Spring land using AOP for transactions, you can create a service POJO that which can be all the join points for your transaction manager. Another advantage of the SessionFacade pattern is that any J2EE developer with a modicum of experience will immediately understand you. Disadvantages of the SessionFacade pattern: it assumes a specific enterprise architecture that is constrained by the limits of the J2EE 1.4 specification (see Rod Johnson's books for these criticisms). The most damaging disadvantage is that it is more complicated than necessary. In most enterprise web applications, you'll need a servlet container, and most of the stress in a web application will be at the tier that handles HttpRequests or database access. Consequently, it doesn't seem worthwhile to deploy the servlet container in a separate process space from the EJB container. I.e. remote calls to EJBs create more pain than gain. A: Rod Johnson claims that the main reason you'd want to use a Session Facade is if you're doing container managed transactions - which aren't necessary with more modern frameworks (like Spring.) He says that if you have business logic - put it in the POJO. (Which I agree with - I think its a more object-oriented approach - rather than implementing a session EJB.) http://forum.springframework.org/showthread.php?t=18155 Happy to hear contrasting arguments. A: It seems that whenever you talk about anything J2EE related - there are always a whole bunch of assumptions behind the scenes - which people assume one way or the other - which then leads to confusion. (I probably could have made the question clearer too.) Assuming (a) we want to use container managed transactions in a strict sense through the EJB specification then Session facades are a good idea - because they abstract away the low-level database transactions to be able to provide higher level application transaction management. 
Assuming (b) that you mean the general architectural concept of the session façade - then decoupling services and consumers and providing a friendly interface over the top of this is a good idea. Computer science has solved lots of problems by 'adding an additional layer of indirection'. Rod Johnson writes "SLSBs with remote interfaces provide a very good solution for distributed applications built over RMI. However, this is a minority requirement. Experience has shown that we don't want to use distributed architecture unless forced to by requirements. We can still service remote clients if necessary by implementing a remoting façade on top of a good co-located object model." (Johnson, R "J2EE Development without EJB" p119.) Assuming (c) that you consider the EJB specification (and in particular the session façade component) to be a blight on the landscape of good design, then: Rod Johnson writes "In general, there are not many reasons you would use a local SLSB at all in a Spring application, as Spring provides more capable declarative transaction management than EJB, and CMT is normally the main motivation for using local SLSBs. So you might not need the EJB layer at all." http://forum.springframework.org/showthread.php?t=18155 In an environment where performance and scalability of the web server are the primary concerns - and cost is an issue - then the session facade architecture looks less attractive - it can be simpler to talk directly to the database (although this is more about tiering.)
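As a purely illustrative sketch (not taken from the discussion above), a session façade in EJB3/Java EE 5 terms is just a stateless session bean exposing coarse-grained business methods and delegating to data access objects; Person and PersonDao are hypothetical types here.

import javax.ejb.EJB;
import javax.ejb.Local;
import javax.ejb.Stateless;

// Hypothetical local business interface for the facade.
@Local
interface PersonFacade {
    void createPerson(Person p);
    Person findPerson(long id);
}

// The facade: coarse-grained operations running in container-managed
// transactions, with data access hidden behind an injected DAO.
@Stateless
public class PersonFacadeBean implements PersonFacade {

    @EJB
    private PersonDao personDao;

    public void createPerson(Person p) { personDao.insert(p); }
    public Person findPerson(long id)  { return personDao.findById(id); }
}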
What are the advantages and disadvantages of the Session Façade Core J2EE Pattern?
What are the advantages and disadvantages of the Session Façade Core J2EE Pattern? What are the assumptions behind it? Are these assumptions valid in a particular environment?
[ "Session Facade is a fantastic pattern - it is really a specific version of the Business Facade pattern. The idea is to tie up business functionality into discrete bundles - such as TransferMoney(), Withdraw(), Deposit()... So that your UI code is accessing things in terms of business operations instead of low level data access or other details that it shouldn't have to be concerned with.\nSpecifically with the Session Facade - you use a Session EJB to act as the business facade - which is nice cause then you can take advantage of all the J2EE services (authentication/authorization, transactions, etc)...\nHope that helps...\n", "The main advantage of the Session Facade pattern is that you can divide up a J2EE application into logical groups by business functionality. A Session Facade will be called by a POJO from the UI (i.e. a Business Delegate), and have references to appropriate Data Access Objects. E.g. a PersonSessionFacade would be called by the PersonBusinessDelegate and then it could call the PersonDAO. The methods on the PersonSessionFacade will, at the very least, follow the CRUD pattern (Create, Retrieve, Update and Delete). \nTypically, most Session Facades are implemented as stateless session EJBs. Or if you're in Spring land using AOP for transactions, you can create a service POJO that which can be all the join points for your transaction manager. \nAnother advantage of the SessionFacade pattern is that any J2EE developer with a modicum of experience will immediately understand you. \nDisadvantages of the SessionFacade pattern: it assumes a specific enterprise architecture that is constrained by the limits of the J2EE 1.4 specification (see Rod Johnson's books for these criticisms). The most damaging disadvantage is that it is more complicated than necessary. In most enterprise web applications, you'll need a servlet container, and most of the stress in a web application will be at the tier that handles HttpRequests or database access. Consequently, it doesn't seem worthwhile to deploy the servlet container in a separate process space from the EJB container. I.e. remote calls to EJBs create more pain than gain. \n", "Rod Johnson claims that the main reason you'd want to use a Session Facade is if you're doing container managed transactions - which aren't necessary with more modern frameworks (like Spring.)\nHe says that if you have business logic - put it in the POJO. (Which I agree with - I think its a more object-oriented approach - rather than implementing a session EJB.)\nhttp://forum.springframework.org/showthread.php?t=18155\nHappy to hear contrasting arguments. \n", "It seems that whenever you talk about anything J2EE related - there are always a whole bunch of assumptions behind the scenes - which people assume one way or the other - which then leads to confusion. (I probably could have made the question clearer too.)\nAssuming (a) we want to use container managed transactions in a strict sense through the EJB specification then\nSession facades are a good idea - because they abstract away the low-level database transactions to be able to provide higher level application transaction management.\nAssuming (b) that you mean the general architectural concept of the session façade - then \nDecoupling services and consumers and providing a friendly interface over the top of this is a good idea. Computer science has solved lots of problems by 'adding an additional layer of indirection'. 
\nRod Johnson writes \"SLSBs with remote interfaces provide a very good solution for distributed applications built over RMI. However, this is a minority requirement. Experience has shown that we don't want to use distributed architecture unless forced to by requirements. We can still service remote clients if necessary by implementing a remoting façade on top of a good co-located object model.\" (Johnson, R \"J2EE Development without EJB\" p119.)\nAssuming (c) that you consider the EJB specification (and in particular the session façade component) to be a blight on the landscape of good design then:\nRod Johnson writes\n\"In general, there are not many reasons you would use a local SLSB at all in a Spring application, as Spring provides more capable declarative transaction management than EJB, and CMT is normally the main motivation for using local SLSBs. So you might not need th EJB layer at all. \" http://forum.springframework.org/showthread.php?t=18155\nIn an environment where performance and scalability of the web server are the primary concerns - and cost is an issue - then the session facade architecture looks less attractive - it can be simpler to talk directly to the datbase (although this is more about tiering.)\n" ]
[ 6, 0, 0, 0 ]
[]
[]
[ "design_patterns", "facade", "jakarta_ee", "session" ]
stackoverflow_0000088833_design_patterns_facade_jakarta_ee_session.txt
Q: How to resolve a conflict with git-svn? What is the best way to resolve a conflict when doing a git svn rebase, and the git branch you are on becomes "(no-branch)"? A: While doing a git svn rebase, if you have merge conflicts here are some things to remember: 1) If anything bad happens while performing a rebase you will end up on a (no-branch) branch. 2) If you run git status, you'll see a .dotest file in your working directory. This is safe to ignore. 3) If you want to abort the rebase use the following command.1 git rebase --abort 4) If you have a merge conflict: Manually edit the files to resolve the conflicts Stage any changes with git add [file] Continue the rebase with git rebase --continue2 If git asks: "did you forget to call git add?", then the edits turned the conflict into a no-op change3. Continue with git rebase --skip You may have to repeat this process until the rebase is complete. At any point you can git rebase --abort to cancel and abandon the rebase. 1: There is no --abort option for git svn rebase. 2: There is no --continue option for git svn rebase. 3: This is very strange, but the files are in a state where git thinks they are the same after that particular patch. The solution is to "skip" that patch on the rebase. A: You can use git mergetool to view and edit the conflicts in the usual fashion. Once you are sure the conflicts are resolved do git rebase --continue to continue the rebase, or if you don't want to include that revision do git rebase --skip
How to resolve a conflict with git-svn?
What is the best way to resolve a conflict when doing a git svn rebase, and the git branch you are on becomes "(no-branch)"?
[ "While doing a git svn rebase, if you have merge conflicts here are some things to remember:\n1) If anything bad happens while performing a rebase you will end up on a (no-branch) branch.\n2) If you run git status, you'll see a .dotest file in your working directory. This is safe to ignore.\n3) If you want to abort the rebase use the following command.1\ngit rebase --abort\n\n4) If you have a merge conflict:\n\nManually edit the files to resolve the conflicts\nStage any changes with git add [file]\nContinue the rebase with git rebase --continue2\n\nIf git asks: \"did you forget to call git add?\", then the edits turned the conflict into a no-op change3. Continue with git rebase --skip\n\n\nYou may have to repeat this process until the rebase is complete. At any point you can git rebase --abort to cancel and abandon the rebase.\n\n1: There is no --abort option for git svn rebase.\n2: There is no --continue option for git svn rebase.\n3: This is very strange, but the files are in a state where git thinks they are the same after that particular patch. The solution is to \"skip\"\n that patch on the rebase.\n", "You can use git mergetool to view and edit the conflicts in the usual fashion. Once you are sure the conflicts are resolved do git rebase --continue to continue the rebase, or if you don't want to include that revision do git rebase --skip\n" ]
[ 90, 27 ]
[]
[]
[ "conflict", "git", "merge", "svn" ]
stackoverflow_0000112839_conflict_git_merge_svn.txt
Q: How do I use genshi.builder to programmatically build an HTML document? I recently discovered the genshi.builder module. It reminds me of Divmod Nevow's Stan module. How would one use genshi.builder.tag to build an HTML document with a particular doctype? Or is this even a good thing to do? If not, what is the right way? A: It's not possible to build an entire page using just genshi.builder.tag -- you would need to perform some surgery on the resulting stream to insert the doctype. Besides, the resulting code would look horrific. The recommended way to use Genshi is to use a separate template file, generate a stream from it, and then render that stream to the output type you want. genshi.builder.tag is mostly useful for when you need to generate simple markup from within Python, such as when you're building a form or doing some sort of logic-heavy modification of the output. See documentation for: Creating and using templates The XML-based template language genshi.builder API docs If you really want to generate a full document using only builder.tag, this (completely untested) code could be a good starting point: from itertools import chain from genshi.core import DOCTYPE, Stream from genshi.output import DocType from genshi.builder import tag as t # Build the page using `genshi.builder.tag` page = t.html (t.head (t.title ("Hello world!")), t.body (t.div ("Body text"))) # Convert the page element into a stream stream = page.generate () # Chain the page stream with a stream containing only an HTML4 doctype declaration stream = Stream (chain ([(DOCTYPE, DocType.get ('html4'), None)], stream)) # Convert the stream to text using the "html" renderer (could also be xml, xhtml, text, etc) text = stream.render ('html') The resulting page will have no whitespace in it -- it'll look normal, but you'll have a hard time reading the source code because it will be entirely on one line. Implementing appropriate filters to add whitespace is left as an exercise to the reader. A: Genshi.builder is for "programmatically generating markup streams"[1]. I believe the purpose of it is as a backend for the templating language. You're probably looking for the templating language for generating a whole page. You can, however do the following: >>> import genshi.output >>> genshi.output.DocType('html') ('html', '-//W3C//DTD HTML 4.01//EN', 'http://www.w3.org/TR/html4/strict.dtd') See other Doctypes here: http://genshi.edgewall.org/wiki/ApiDocs/genshi.output#genshi.output:DocType [1] genshi.builder.__doc__
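If the only obstacle is the doctype, a shorter route than splicing the stream by hand is to let the serializer emit it. The following is an untested sketch of the template-based approach the first answer recommends; render() forwards the doctype argument to the serializer, and 'html' names the same HTML4 doctype as DocType.get('html').

from genshi.template import MarkupTemplate

TEMPLATE = """\
<html xmlns:py="http://genshi.edgewall.org/">
  <head><title py:content="title">placeholder</title></head>
  <body><div py:content="body">placeholder</div></body>
</html>
"""

tmpl = MarkupTemplate(TEMPLATE)
stream = tmpl.generate(title="Hello world!", body="Body text")
print stream.render('html', doctype='html')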
How do I use genshi.builder to programmatically build an HTML document?
I recently discovered the genshi.builder module. It reminds me of Divmod Nevow's Stan module. How would one use genshi.builder.tag to build an HTML document with a particular doctype? Or is this even a good thing to do? If not, what is the right way?
[ "It's not possible to build an entire page using just genshi.builder.tag -- you would need to perform some surgery on the resulting stream to insert the doctype. Besides, the resulting code would look horrific. The recommended way to use Genshi is to use a separate template file, generate a stream from it, and then render that stream to the output type you want.\ngenshi.builder.tag is mostly useful for when you need to generate simple markup from within Python, such as when you're building a form or doing some sort of logic-heavy modification of the output.\nSee documentation for:\n\nCreating and using templates\nThe XML-based template language\ngenshi.builder API docs\n\nIf you really want to generate a full document using only builder.tag, this (completely untested) code could be a good starting point:\nfrom itertools import chain\nfrom genshi.core import DOCTYPE, Stream\nfrom genshi.output import DocType\nfrom genshi.builder import tag as t\n\n# Build the page using `genshi.builder.tag`\npage = t.html (t.head (t.title (\"Hello world!\")), t.body (t.div (\"Body text\")))\n\n# Convert the page element into a stream\nstream = page.generate ()\n\n# Chain the page stream with a stream containing only an HTML4 doctype declaration\nstream = Stream (chain ([(DOCTYPE, DocType.get ('html4'), None)], stream))\n\n# Convert the stream to text using the \"html\" renderer (could also be xml, xhtml, text, etc)\ntext = stream.render ('html')\n\nThe resulting page will have no whitespace in it -- it'll look normal, but you'll have a hard time reading the source code because it will be entirely on one line. Implementing appropriate filters to add whitespace is left as an exercise to the reader.\n", "Genshi.builder is for \"programmatically generating markup streams\"[1]. I believe the purpose of it is as a backend for the templating language. You're probably looking for the templating language for generating a whole page.\nYou can, however do the following:\n>>> import genshi.output\n>>> genshi.output.DocType('html')\n('html', '-//W3C//DTD HTML 4.01//EN', 'http://www.w3.org/TR/html4/strict.dtd')\n\nSee other Doctypes here: http://genshi.edgewall.org/wiki/ApiDocs/genshi.output#genshi.output:DocType\n[1] genshi.builder.__doc__\n\n" ]
[ 6, 3 ]
[]
[]
[ "genshi", "html", "python", "templates" ]
stackoverflow_0000112564_genshi_html_python_templates.txt
Q: How to build interface for such enum I have the following enum: public enum Status implements StringEnum{ ONLINE("on"),OFFLINE("off"); private String status = null; private Status(String status) { this.status = status; } public String toString() { return this.status; } public static Status find(String value) { for(Status status : Status.values()) { if(status.toString().equals(value)) { return status; } } throw new IllegalArgumentException("Unknown value: " + value ); } } Is it possible to build StringEnum interface to make sure every enum has find(), toString() and a constructor? Thanks. A: It's not possible to specify either constructors or static methods in an interface. For a good, concise explanation, check out this article: No static methods in interfaces A: Enums already have a valueOf() (your find method) method. And "toString()" is a java.lang.Object method, so, every class will have that, in other words, you can't force it! I can't see the value of enforcing a constructor since different enums could have different initializations. Kind Regards A: static methods cannot be defined in interfaces constructors cannot be defined in interfaces toString is defined in java.lang.Object, requiring it in an interface will never result in a compile error if the method isn't defined. Why do you want to enforce the constructor anyway? You cannot create new instances of enums at runtime anyway (unless maybe via some reflection mechanism).
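A common workaround (a sketch, not taken from the answers above) is to keep only the instance-level contract in the interface and move the shared lookup into a generic helper, since Java interfaces cannot declare constructors or static methods:

// The interface can only promise instance behaviour.
public interface StringEnum {
    String toString();
}

// The shared lookup lives in a helper that works for any enum implementing StringEnum.
public final class StringEnums {
    private StringEnums() {}

    public static <E extends Enum<E> & StringEnum> E find(Class<E> type, String value) {
        for (E constant : type.getEnumConstants()) {
            if (constant.toString().equals(value)) {
                return constant;
            }
        }
        throw new IllegalArgumentException("Unknown value: " + value);
    }
}

// Usage: Status online = StringEnums.find(Status.class, "on");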
How to build interface for such enum
I have the following enum: public enum Status implements StringEnum{ ONLINE("on"),OFFLINE("off"); private String status = null; private Status(String status) { this.status = status; } public String toString() { return this.status; } public static Status find(String value) { for(Status status : Status.values()) { if(status.toString().equals(value)) { return status; } } throw new IllegalArgumentException("Unknown value: " + value ); } } Is it possible to build StringEnum interface to make sure every enum has find(), toString() and a constructor? Thanks.
[ "It's not possible to specify either constructors or static methods in an interface. For a good, concise explanation, check out this article: No static methods in interfaces\n", "Enums already have a valueOf() (your find method) method. And \"toString()\" is a java.lang.Object method, so, every class will have that, in other words, you can't force it! I can't see the value of enforcing a constructor since different enums could have different initializations.\nKind Regards\n", "\nstatic methods cannot be defined in interfaces \nconstructors cannot be defined in interfaces\ntoString is defined in java.lang.Object, requiring it in an interface will never result in a compile error if the method isn't defined.\n\nWhy do you want to enforce the constructor anyway? You cannot create new instances of enums at runtime anyway (unless maybe via some reflection mechanism).\n" ]
[ 6, 4, 3 ]
[]
[]
[ "enums", "interface", "java" ]
stackoverflow_0000112517_enums_interface_java.txt
Q: Recent Projects panel on VS2008 not working for fresh installs The Recent Projects panel on the Start Page of VS2008 Professional doesn't appear to work, and constantly remains empty. I've noticed this on 3 of our developers VS2008 installations, in fact all the installations that weren't updated from 2005 but installed from scratch. I generally treat this as a bit of a curiosity, but now I have a new laptop and fresh install of VS2008, it's also happening to me, and I've upgraded the phenomena from a curio to an annoyance. Anyone know if this is a bug or if there is a setting I'm missing somewhere. Thanks EDIT Thanks, but Tools | Options | Environment | General | "items shown in recently used lists" was and is set to 6 by default A: Is Tools | Options | Environment | General | "items shown in recently used lists" set to a number greater than 0? A: Finally worked it out! The recent projects is driven by (or at least shares a 'Show' flag with) the Recent Documents in the Start Menu. For some reason our SOE has this hidden. Both the following need th be set to 0: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoRecentDocsHistory HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoRecentDocsMenu
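If you want to script the fix described in the second answer, a .reg file along these lines should do it (untested; both values are DWORDs, and 0 means the recent-documents lists are shown):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoRecentDocsHistory"=dword:00000000
"NoRecentDocsMenu"=dword:00000000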
Recent Projects panel on VS2008 not working for fresh installs
The Recent Projects panel on the Start Page of VS2008 Professional doesn't appear to work, and constantly remains empty. I've noticed this on 3 of our developers' VS2008 installations - in fact, all the installations that weren't updated from 2005 but installed from scratch. I generally treat this as a bit of a curiosity, but now that I have a new laptop and a fresh install of VS2008, it's also happening to me, and I've upgraded the phenomenon from a curio to an annoyance. Does anyone know if this is a bug or if there is a setting I'm missing somewhere? Thanks EDIT Thanks, but Tools | Options | Environment | General | "items shown in recently used lists" was and is set to 6 by default
[ "Is Tools | Options | Environment | General | \"items shown in recently used lists\" set to a number greater than 0?\n", "Finally worked it out!\nThe recent projects is driven by (or at least shares a 'Show' flag with) the Recent Documents in the Start Menu.\nFor some reason our SOE has this hidden.\nBoth the following need th be set to 0: \nHKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\NoRecentDocsHistory HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\NoRecentDocsMenu \n" ]
[ 1, 0 ]
[]
[]
[ "visual_studio_2008" ]
stackoverflow_0000058353_visual_studio_2008.txt
Q: MD5 routines that are GLib friendly? Does anyone know of an MD5/SHA1/etc routine that is easily used with GLib (i.e. you can give it a GIOChannel, etc)? A: Unless you have a very good reason, use glib's built-in MD5, SHA1, and SHA256 implementations with GChecksum. It doesn't have a built-in function to construct a checksum from an IO stream, but you can write a simple one in 10 lines, and you'd need to write a complex one yourself anyway. A: You normally have to do library glue stuff yourself... void get_channel_md5( GIOChannel* channel, unsigned char output[16] ) { md5_context ctx; gint64 fileSize = <get file size somehow?>; gint64 filePos = 0ll; gsize bufferSize = g_io_channel_get_buffer_size( channel ); void* buffer = malloc( bufferSize ); md5_starts( &ctx ); // hash buffer at a time: while ( filePos < fileSize ) { gint64 size = fileSize - filePos; if ( size > bufferSize ) size = bufferSize; g_io_channel_read( channel, buffer ); md5_update( &ctx, buffer, (int)size ); filePos += bufferSize; } free( buffer ); md5_finish( &ctx, output ); }
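To make the first answer concrete, here is a rough, untested sketch of hashing a GIOChannel with GChecksum (GLib 2.16+). It assumes the channel has been put into binary mode with g_io_channel_set_encoding(channel, NULL, NULL), and error handling is omitted.

#include <glib.h>

/* Returns a newly allocated hex MD5 digest of everything readable from the
 * channel (or of whatever was read before an error/EOF). */
static gchar *
channel_md5 (GIOChannel *channel)
{
    GChecksum *sum = g_checksum_new (G_CHECKSUM_MD5);
    gchar buf[8192];
    gsize nread = 0;
    gchar *digest;
    GIOStatus status;

    do {
        status = g_io_channel_read_chars (channel, buf, sizeof buf, &nread, NULL);
        if (nread > 0)
            g_checksum_update (sum, (const guchar *) buf, nread);
    } while (status == G_IO_STATUS_NORMAL || status == G_IO_STATUS_AGAIN);

    digest = g_strdup (g_checksum_get_string (sum));
    g_checksum_free (sum);
    return digest;
}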
MD5 routines that are GLib friendly?
Does anyone know of an MD5/SHA1/etc routine that is easily used with GLib (i.e. you can give it a GIOChannel, etc)?
[ "Unless you have a very good reason, use glib's built-in MD5, SHA1, and SHA256 implementations with GChecksum. It doesn't have a built-in function to construct a checksum from an IO stream, but you can write a simple one in 10 lines, and you'd need to write a complex one yourself anyway.\n", "You normally have to do library glue stuff yourself...\nvoid get_channel_md5( GIOChannel* channel, unsigned char output[16] )\n{\n md5_context ctx;\n\n gint64 fileSize = <get file size somehow?>;\n gint64 filePos = 0ll;\n\n gsize bufferSize = g_io_channel_get_buffer_size( channel );\n void* buffer = malloc( bufferSize );\n\n md5_starts( &ctx );\n\n // hash buffer at a time: \n while ( filePos < fileSize )\n {\n gint64 size = fileSize - filePos;\n if ( size > bufferSize )\n size = bufferSize;\n\n g_io_channel_read( channel, buffer );\n md5_update( &ctx, buffer, (int)size );\n\n filePos += bufferSize;\n }\n\n free( buffer );\n\n md5_finish( &ctx, output );\n}\n\n" ]
[ 6, 2 ]
[]
[]
[ "c", "glib", "linux" ]
stackoverflow_0000112802_c_glib_linux.txt
Q: How would one log into a phpBB3 forum through a Python script using urllib, urllib2 and ClientCookie? (ClientCookie is a module for (automatic) cookie-handling: http://wwwsearch.sourceforge.net/ClientCookie) # I encode the data I'll be sending: data = urllib.urlencode({'username': 'mandark', 'password': 'deedee'}) # And I send it and read the page: page = ClientCookie.urlopen('http://www.forum.com/ucp.php?mode=login', data) output = page.read() The script doesn't log in, but rather seems to get redirected back to the same login page asking it for a username and password. What am I doing wrong? Any help would be greatly appreciated! Thanks! A: Have you tried fetching the login page first? I would suggest using Tamper Data to have a peek at exactly what's being sent when you request the login page and then log in normally using a web browser from a fresh start, with no initial cookies in place, so that your script can replicate it exactly. That's the approach I used when writing the following, extracted from a script which needs to login to an Invision Power Board forum, using cookielib and urllib2 - you may find it useful as a reference. import cookielib import logging import sys import urllib import urllib2 cookies = cookielib.LWPCookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies)) urllib2.install_opener(opener) headers = { 'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.0; en-GB; rv:1.8.1.12) Gecko/20080201 Firefox/2.0.0.12', 'Accept': 'text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', 'Accept-Language': 'en-gb,en;q=0.5', 'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7', } # Fetch the login page to set initial cookies urllib2.urlopen(urllib2.Request('http://www.rllmukforum.com/index.php?act=Login&CODE=00', None, headers)) # Login so we can access the Off Topic forum login_headers = headers.copy() login_headers.update({ 'Referer': 'http://www.rllmukforum.com/index.php?act=Login&CODE=00', 'Content-Type': 'application/x-www-form-urlencoded', }) html = urllib2.urlopen(urllib2.Request('http://www.rllmukforum.com/index.php?act=Login&CODE=01', urllib.urlencode({ 'referer': 'http://www.rllmukforum.com/index.php?', 'UserName': RLLMUK_USERNAME, 'PassWord': RLLMUK_PASSWORD, }), login_headers)).read() if 'The following errors were found' in html: logging.error('RLLMUK login failed') logging.info(html) sys.exit(1) A: I'd recommend taking a look at the mechanize library; it's designed for precisely this type of task. It's also far easier than doing it by hand.
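For reference, here is a rough sketch in the same spirit as the cookielib/urllib2 answer above, adapted to a phpBB3 login: fetch the login page first so the board's session cookies are set, then post the form. The field names (username, password, login, redirect) are phpBB3 defaults but should be checked against the actual form; newer 3.0.x boards also require hidden creation_time/form_token fields copied out of the fetched page. Untested.

import cookielib
import urllib
import urllib2

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

login_url = 'http://www.forum.com/ucp.php?mode=login'

# 1. GET the login page first so phpBB sets its session cookies.
opener.open(login_url).read()

# 2. POST the credentials; phpBB3 expects the 'login' submit field too.
data = urllib.urlencode({
    'username': 'mandark',
    'password': 'deedee',
    'login': 'Login',
    'redirect': 'index.php',
})
html = opener.open(login_url, data).read()

# 3. A logged-in response normally contains a logout link rather than the form.
if 'mode=logout' in html:
    print 'logged in'
else:
    print 'still not logged in -- inspect the returned HTML for the error message'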
How would one log into a phpBB3 forum through a Python script using urllib, urllib2 and ClientCookie?
(ClientCookie is a module for (automatic) cookie-handling: http://wwwsearch.sourceforge.net/ClientCookie) # I encode the data I'll be sending: data = urllib.urlencode({'username': 'mandark', 'password': 'deedee'}) # And I send it and read the page: page = ClientCookie.urlopen('http://www.forum.com/ucp.php?mode=login', data) output = page.read() The script doesn't log in, but rather seems to get redirected back to the same login page asking it for a username and password. What am I doing wrong? Any help would be greatly appreciated! Thanks!
[ "Have you tried fetching the login page first?\nI would suggest using Tamper Data to have a peek at exactly what's being sent when you request the login page and then log in normally using a web browser from a fresh start, with no initial cookies in place, so that your script can replicate it exactly.\nThat's the approach I used when writing the following, extracted from a script which needs to login to an Invision Power Board forum, using cookielib and urllib2 - you may find it useful as a reference.\nimport cookielib\nimport logging\nimport sys\nimport urllib\nimport urllib2\n\ncookies = cookielib.LWPCookieJar()\nopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies))\nurllib2.install_opener(opener)\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.0; en-GB; rv:1.8.1.12) Gecko/20080201 Firefox/2.0.0.12',\n 'Accept': 'text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5',\n 'Accept-Language': 'en-gb,en;q=0.5',\n 'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7',\n}\n\n# Fetch the login page to set initial cookies\nurllib2.urlopen(urllib2.Request('http://www.rllmukforum.com/index.php?act=Login&CODE=00', None, headers))\n\n# Login so we can access the Off Topic forum\nlogin_headers = headers.copy()\nlogin_headers.update({\n 'Referer': 'http://www.rllmukforum.com/index.php?act=Login&CODE=00',\n 'Content-Type': 'application/x-www-form-urlencoded',\n})\nhtml = urllib2.urlopen(urllib2.Request('http://www.rllmukforum.com/index.php?act=Login&CODE=01',\n urllib.urlencode({\n 'referer': 'http://www.rllmukforum.com/index.php?',\n 'UserName': RLLMUK_USERNAME,\n 'PassWord': RLLMUK_PASSWORD,\n }),\n login_headers)).read()\nif 'The following errors were found' in html:\n logging.error('RLLMUK login failed')\n logging.info(html)\n sys.exit(1)\n\n", "I'd recommend taking a look at the mechanize library; it's designed for precisely this type of task. It's also far easier than doing it by hand.\n" ]
[ 2, 0 ]
[]
[]
[ "post", "python", "urllib" ]
stackoverflow_0000112768_post_python_urllib.txt
Q: How do I save a model with this dynamically generated field? I have a rails model that looks something like this: class Recipe < ActiveRecord::Base has_many :ingredients attr_accessor :ingredients_string attr_accessible :title, :directions, :ingredients, :ingredients_string before_save :set_ingredients def ingredients_string ingredients.join("\n") end private def set_ingredients self.ingredients.each { |x| x.destroy } self.ingredients_string ||= false if self.ingredients_string self.ingredients_string.split("\n").each do |x| ingredient = Ingredient.create(:ingredient_string => x) self.ingredients << ingredient end end end end The idea is that when I create the ingredient from the webpage, I pass in the ingredients_string and let the model sort it all out. Of course, if I am editing an ingredient I need to re-create that string. The bug is basically this: how do I inform the view of the ingredient_string (elegantly) and still check to see if the ingredient_string is defined in the set_ingredients method? A: Using these two together are probably causing your issues. Both are trying to define an ingredients_string method that do different things attr_accessor :ingredients_string def ingredients_string ingredients.join("\n") end Get rid of the attr_accessor, the before_save, set_ingredients method and define your own ingredients_string= method, something like this: def ingredients_string=(ingredients) ingredients.each { |x| x.destroy } ingredients_string ||= false if ingredients_string ingredients_string.split("\n").each do |x| ingredient = Ingredient.create(:ingredient_string => x) self.ingredients << ingredient end end end Note I just borrowed your implementation of set_ingredients. There's probably a more elegant way to break up that string and create/delete Ingredient model associations as needed, but it's late and I can't think of it right now. :) A: The previous answer is very good but it could do with a few changes. def ingredients_string=(text) ingredients.each { |x| x.destroy } unless text.blank? text.split("\n").each do |x| ingredient = Ingredient.find_or_create_by_ingredient_string(:ingredient_string => x) self.ingredients A: I basically just modified Otto's answer: class Recipe < ActiveRecord::Base has_many :ingredients attr_accessible :title, :directions, :ingredients, :ingredients_string def ingredients_string=(ingredient_string) ingredient_string ||= false if ingredient_string self.ingredients.each { |x| x.destroy } unless ingredient_string.blank? ingredient_string.split("\n").each do |x| ingredient = Ingredient.create(:ingredient_string => x) self.ingredients << ingredient end end end end def ingredients_string ingredients.join("\n") end end
How do I save a model with this dynamically generated field?
I have a rails model that looks something like this: class Recipe < ActiveRecord::Base has_many :ingredients attr_accessor :ingredients_string attr_accessible :title, :directions, :ingredients, :ingredients_string before_save :set_ingredients def ingredients_string ingredients.join("\n") end private def set_ingredients self.ingredients.each { |x| x.destroy } self.ingredients_string ||= false if self.ingredients_string self.ingredients_string.split("\n").each do |x| ingredient = Ingredient.create(:ingredient_string => x) self.ingredients << ingredient end end end end The idea is that when I create the ingredient from the webpage, I pass in the ingredients_string and let the model sort it all out. Of course, if I am editing an ingredient I need to re-create that string. The bug is basically this: how do I inform the view of the ingredient_string (elegantly) and still check to see if the ingredient_string is defined in the set_ingredients method?
[ "Using these two together are probably causing your issues. Both are trying to define an ingredients_string method that do different things\n attr_accessor :ingredients_string\n\n def ingredients_string\n ingredients.join(\"\\n\")\n end\n\nGet rid of the attr_accessor, the before_save, set_ingredients method and define your own ingredients_string= method, something like this:\ndef ingredients_string=(ingredients)\n ingredients.each { |x| x.destroy }\n ingredients_string ||= false\n if ingredients_string\n ingredients_string.split(\"\\n\").each do |x|\n ingredient = Ingredient.create(:ingredient_string => x)\n self.ingredients << ingredient\n end\n end\nend\n\nNote I just borrowed your implementation of set_ingredients. There's probably a more elegant way to break up that string and create/delete Ingredient model associations as needed, but it's late and I can't think of it right now. :)\n", "The previous answer is very good but it could do with a few changes.\n\n\ndef ingredients_string=(text)\n ingredients.each { |x| x.destroy }\n unless text.blank?\n text.split(\"\\n\").each do |x|\n ingredient = Ingredient.find_or_create_by_ingredient_string(:ingredient_string => x)\n self.ingredients \n", "I basically just modified Otto's answer:\nclass Recipe < ActiveRecord::Base\n has_many :ingredients\n attr_accessible :title, :directions, :ingredients, :ingredients_string\n\n def ingredients_string=(ingredient_string)\n ingredient_string ||= false\n if ingredient_string\n self.ingredients.each { |x| x.destroy }\n unless ingredient_string.blank?\n ingredient_string.split(\"\\n\").each do |x|\n ingredient = Ingredient.create(:ingredient_string => x)\n self.ingredients << ingredient\n end\n end\n end\n end\n\n def ingredients_string\n ingredients.join(\"\\n\")\n end\n\nend\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "ruby_on_rails" ]
stackoverflow_0000110163_ruby_on_rails.txt
Q: Calling a ASP Page thru it Class Like in Windows Forms: Dim myForm as New AForm(Constr-arg1, Constr-arg2) myForm.Show ... is there a similar way to Load a Page in ASP.Net. I would like to overload the Page Constructor and instantiate the correct Page Contructor depending on the situation. A: Can you just link to the page passing parameters in the QueryString (after the ? in the URL) and then use them in the constructor (more likely PageLoad) A: I think the best approach here for ASP.NET is to write User Control (*.ascx file) that represents page content, and load different controls based on current situation using Page.LoadControl() method. This solution is flexible enough, because only reference to control is its name. And this approach is much more useful than page constructor overloading as soos as you're not related on strong types, only on controls' names. A: This isn't really the "correct" way to redirect to a page in .Net web programming. Instead, you should call either Request.Redirect("~/newpage.aspx") or Server.Transfer("~/newpage.aspx"). You should then handle the request in the new page's Page_Load handler. You can pass state between the pages by adding to the query string of the redirected URL (i.e. ~/newpage.aspx?q1=test), or by assiging values to the Session store (i.e Session["q1"] = value).
Calling an ASP Page through its Class
Like in Windows Forms: Dim myForm as New AForm(Constr-arg1, Constr-arg2) myForm.Show ... is there a similar way to load a Page in ASP.NET? I would like to overload the Page constructor and instantiate the page with the correct constructor depending on the situation.
[ "Can you just link to the page passing parameters in the QueryString (after the ? in the URL) and then use them in the constructor (more likely PageLoad)\n", "I think the best approach here for ASP.NET is to write User Control (*.ascx file) that represents page content, and load different controls based on current situation using Page.LoadControl() method. This solution is flexible enough, because only reference to control is its name. And this approach is much more useful than page constructor overloading as soos as you're not related on strong types, only on controls' names.\n", "This isn't really the \"correct\" way to redirect to a page in .Net web programming.\nInstead, you should call either Request.Redirect(\"~/newpage.aspx\") or Server.Transfer(\"~/newpage.aspx\"). You should then handle the request in the new page's Page_Load handler.\nYou can pass state between the pages by adding to the query string of the redirected URL (i.e. ~/newpage.aspx?q1=test), or by assiging values to the Session store (i.e Session[\"q1\"] = value).\n" ]
[ 1, 0, 0 ]
[]
[]
[ "asp.net", "load" ]
stackoverflow_0000112870_asp.net_load.txt
Q: Deploying a site from VSS In answer to this question Joel Coehoorn said Finally, only after the site's gone through a suitable QA process, the production server is updated from source control, not from within visual studio. Does VSS Explorer have tools for deploying sites (via FTP, I would assume)? I noticed for the first time a Web/Deploy menu option, but it's grayed out. How does this work? A: VSS has a pretty comprehensive set of command line arguments. The best way I know is to write a batch file to: 1 - Get Latest to the local system (presumably a clean build machine) 2 - Push the newly-updated local files to your FTP site. A: We use Nant for our project. Nant is a dot net port of ant for java. It has tasks to checkout from VSS, compile and deploy. Nant VSS checkout task Codeproject Article on deployment with Nant A: Managing Web Content Using MS Visual SourceSafe yes you can deploy via ftp or network share using the "web deploy" feature in the VSS explorer.
Deploying a site from VSS
In answer to this question Joel Coehoorn said Finally, only after the site's gone through a suitable QA process, the production server is updated from source control, not from within visual studio. Does VSS Explorer have tools for deploying sites (via FTP, I would assume)? I noticed for the first time a Web/Deploy menu option, but it's grayed out. How does this work?
[ "VSS has a pretty comprehensive set of command line arguments. The best way I know is to write a batch file to:\n1 - Get Latest to the local system (presumably a clean build machine)\n2 - Push the newly-updated local files to your FTP site.\n", "We use Nant for our project. Nant is a dot net port of ant for java. It has tasks to checkout from VSS, compile and deploy. \n\nNant\nVSS checkout task\nCodeproject Article on deployment with Nant\n\n", "Managing Web Content Using MS Visual SourceSafe\nyes you can deploy via ftp or network share using the \"web deploy\" feature in the VSS explorer.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "asp.net", "visual_sourcesafe" ]
stackoverflow_0000109199_asp.net_visual_sourcesafe.txt
Q: Maximum # of Results in a Sitecore Droplink field? In Sitecore 6, one of my templates contains a "Droplink" field bound to the results of a particular sitecore query. This query currently returns approximately 200 items. When I look at an item that implements this template in the content editor, I can only see the first 50 items in the field's dropdown list. How do I display all of the items returned from this query in the editor's dropdown? A: There is a setting in the web.config that controls the max number of items that can be returned by a query: <setting name="Query.MaxItems" value="100" /> By default, it's set to 100 so I'm not quite sure why your query is only returning 50, perhaps someone else changed the setting? Also, be wary of a performance hit when returning more than 100 items. Depending on your hardware and overall Sitecore architecture it may not be that noticeable, but just something to be wary of.
Maximum # of Results in a Sitecore Droplink field?
In Sitecore 6, one of my templates contains a "Droplink" field bound to the results of a particular sitecore query. This query currently returns approximately 200 items. When I look at an item that implements this template in the content editor, I can only see the first 50 items in the field's dropdown list. How do I display all of the items returned from this query in the editor's dropdown?
[ "There is a setting in the web.config that controls the max number of items that can be returned by a query:\n<setting name=\"Query.MaxItems\" value=\"100\" />\n\nBy default, it's set to 100 so I'm not quite sure why your query is only returning 50, perhaps someone else changed the setting?\nAlso, be wary of a performance hit when returning more than 100 items. Depending on your hardware and overall Sitecore architecture it may not be that noticeable, but just something to be wary of.\n" ]
[ 8 ]
[]
[]
[ "asp.net", "sitecore" ]
stackoverflow_0000104121_asp.net_sitecore.txt
Q: How can you change the Visual Studio IDE profile? Can the Visual Studio IDE profile be changed without clearing all VS settings? A: Tools -> Import and Export Settings.. -> [X] Import Selected ... -> Save Current -> Choose options you wish to change
How can you change the Visual Studio IDE profile?
Can the Visual Studio IDE profile be changed without clearing all VS settings?
[ "Tools -> Import and Export Settings.. -> [X] Import Selected ... -> Save Current -> Choose options you wish to change\n" ]
[ 16 ]
[]
[]
[ "visual_studio" ]
stackoverflow_0000112926_visual_studio.txt
Q: Accessing files across the windows network with near MAX_PATH length I'm using C++ and accessing a UNC path across the network. This path is slightly greater than MAX_PATH. So I cannot obtain a file handle. But if I run the program on the computer in question, the path is not greater than MAX_PATH. So I can get a file handle. If I rename the file to have fewer characters (minus length of computer name) I can access the file. Can this file be accessed across the network even though the computer name in the UNC path puts it over the MAX_PATH limit? A: I recall that there is some feature like using \\?\ at the start of the path to get around the MAX_PATH limit. Here is a reference on MSDN: http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx For remote machines, you would use a path name such as: \\?\unc\server\share\path\file. The \\?\unc\ is the special prefix and is not used as part of the actual filename. A: You might be able to get a handle to the file if you try opening the file after converting the file name to a short (8.3) file name. Failing that can you map the dir the file is in as a drive and access the file that way?
Accessing files across the windows network with near MAX_PATH length
I'm using C++ and accessing a UNC path across the network. This path is slightly greater than MAX_PATH. So I cannot obtain a file handle. But if I run the program on the computer in question, the path is not greater than MAX_PATH. So I can get a file handle. If I rename the file to have fewer characters (minus length of computer name) I can access the file. Can this file be accessed across the network even though the computer name in the UNC path puts it over the MAX_PATH limit?
[ "I recall that there is some feature like using \\\\?\\ at the start of the path to get around the MAX_PATH limit. Here is a reference on MSDN:\nhttp://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx\nFor remote machines, you would use a path name such as: \\\\?\\unc\\server\\share\\path\\file. The \\\\?\\unc\\ is the special prefix and is not used as part of the actual filename.\n", "You might be able to get a handle to the file if you try opening the file after converting the file name to a short (8.3) file name. Failing that can you map the dir the file is in as a drive and access the file that way?\n" ]
[ 10, 0 ]
[]
[]
[ "c++", "lan", "unc", "visual_c++", "windows" ]
stackoverflow_0000112946_c++_lan_unc_visual_c++_windows.txt
Q: Database Choice for a C# 2008 front end I was wondering what and why you would choose to be able to make a database that can support no more than 100 users with no more than 10 using it at once with a Visual Studio 2008 C# Windows Form front end to access it by. I have to access the database over a network connection, not just on the local machine. I also need to define where the database is found at run-time in the code as opposed to the "Data Source" view in Visual Studio. If my question needs reframing or is not understood, let me know and I will adjust. Part of my problem is I am not sure even how to ask the right question, much less what the answer is. A: If it is not for comercial purposes you can try SQL Server 2008 Express. It can integrate nicely with Visual Studio 2008 for development and has support for LINQ, Entity Data Model and ADO.NET Entity Framework to make it easy to create next generation data-enabled applications. http://www.microsoft.com/express/sql/default.aspx You can also store your connections strings in the application configuration file and retrieve them programatically for setting up the database connection. http://www.codeguru.com/columns/DotNet/article.php/c7987/ A: I would probably go with Sql Server Express, it's free and works well with .NET. Assuming your schema is not changing at runtime you can probably still use the design time data source features in Visual Studio. The connection information is stored in the app.config file which you can update after the app is deployed to point to a different database. You can also develop a class that gets the connection info from somewhere else as well and just use that when you need to open a database connection. A: I know using mssql you can pick between different connection strings for all of your db calls, just do something like Command.Connection = GetMyConnectionWithWhateverLogicINeed(); A: I'd have a look at Sql Server Workgroup Edition http://www.microsoft.com/sql/editions/workgroup/ Express edition used to have some limiting features for more than about 5 users and it is not supplied with any management tools which is a bit disheartening. A: I'm not sure I totally get what you are asking, Matt, but I can tell you that I developed a series of apps written with VS 2008 and we used a MySQL DB for it. While I'm definitely not a DB guru at this point, I've not had many issues with using MySQL. Perhaps if you rephrase your question, we can provide better answers. A: SQLite for sure. ADO 2.0 Provider
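A minimal C# sketch of the approach suggested above: keep the connection string in the application configuration file and read it at run time, so the database location is not tied to the Visual Studio "Data Source" designer. The connection-string name "MainDb" and the SQL Server Express data source are illustrative assumptions, not values from the original post.

// Requires a project reference to System.Configuration.
// App.config (illustrative):
//   <connectionStrings>
//     <add name="MainDb"
//          connectionString="Data Source=SERVER\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True"
//          providerName="System.Data.SqlClient" />
//   </connectionStrings>
using System.Configuration;
using System.Data.SqlClient;

public static class Db
{
    // Reads the connection string by name at run time instead of relying on the designer.
    public static SqlConnection OpenConnection()
    {
        string cs = ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
        SqlConnection conn = new SqlConnection(cs);
        conn.Open();
        return conn;
    }
}

Because the value lives in App.config, pointing the deployed application at a different server only means editing the configuration file, not recompiling.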
Database Choice for a C# 2008 front end
I was wondering what and why you would choose to be able to make a database that can support no more than 100 users with no more than 10 using it at once with a Visual Studio 2008 C# Windows Form front end to access it by. I have to access the database over a network connection, not just on the local machine. I also need to define where the database is found at run-time in the code as opposed to the "Data Source" view in Visual Studio. If my question needs reframing or is not understood, let me know and I will adjust. Part of my problem is I am not sure even how to ask the right question, much less what the answer is.
[ "If it is not for comercial purposes you can try SQL Server 2008 Express. It can integrate nicely with Visual Studio 2008 for development and has support for LINQ, Entity Data Model and ADO.NET Entity Framework to make it easy to create next generation data-enabled applications.\nhttp://www.microsoft.com/express/sql/default.aspx\nYou can also store your connections strings in the application configuration file and retrieve them programatically for setting up the database connection.\nhttp://www.codeguru.com/columns/DotNet/article.php/c7987/\n", "I would probably go with Sql Server Express, it's free and works well with .NET. Assuming your schema is not changing at runtime you can probably still use the design time data source features in Visual Studio. The connection information is stored in the app.config file which you can update after the app is deployed to point to a different database. You can also develop a class that gets the connection info from somewhere else as well and just use that when you need to open a database connection.\n", "I know using mssql you can pick between different connection strings for all of your db calls, just do something like\nCommand.Connection = GetMyConnectionWithWhateverLogicINeed();\n", "I'd have a look at Sql Server Workgroup Edition\nhttp://www.microsoft.com/sql/editions/workgroup/\nExpress edition used to have some limiting features for more than about 5 users and it is not supplied with any management tools which is a bit disheartening.\n", "I'm not sure I totally get what you are asking, Matt, but I can tell you that I developed a series of apps written with VS 2008 and we used a MySQL DB for it. While I'm definitely not a DB guru at this point, I've not had many issues with using MySQL.\nPerhaps if you rephrase your question, we can provide better answers.\n", "SQLite for sure.\nADO 2.0 Provider\n" ]
[ 5, 1, 0, 0, 0, 0 ]
[]
[]
[ "c#", "database", "networking" ]
stackoverflow_0000109810_c#_database_networking.txt
Q: Relative path in web config How can I have a relative path in the web.config file. This value is not in the connection string so I cannot use |DataDirectory| (I think), so what can I do? A: What is the relative path for? Are you talking about a physical directory path or a url path? Edit: I needed to do something similar for one of my projects. I needed to locate a config file that was stored in a certain folder. While the web.config file itself does not provide anything special for this, you can take a path from the web.config file and convert it to an app-relative path. Request.ApplicationPath gets you the base directory of the web application. YOu can append the relative path to this and give it to whatever needs it. Also see this blog post by Rick Strahl for other interesting directories that may help you. You could then append the relatvie path to
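A small illustrative C# sketch of the idea above: store an application-relative path in web.config and resolve it to a physical path at run time with Server.MapPath. The appSettings key "DataFolder" and the "~/App_Data/files" value are made-up placeholders, not part of the original post.

// web.config (illustrative):
//   <appSettings>
//     <add key="DataFolder" value="~/App_Data/files" />
//   </appSettings>
using System.Configuration;
using System.Web;

public class PathHelper
{
    // Turns the app-relative value from web.config into a physical directory path.
    public static string GetDataFolder(HttpContext context)
    {
        string relative = ConfigurationManager.AppSettings["DataFolder"];
        return context.Server.MapPath(relative);   // e.g. C:\inetpub\wwwroot\MySite\App_Data\files
    }
}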
Relative path in web config
How can I have a relative path in the web.config file. This value is not in the connection string so I cannot use |DataDirectory| (I think), so what can I do?
[ "What is the relative path for?\nAre you talking about a physical directory path or a url path?\nEdit: \nI needed to do something similar for one of my projects. I needed to locate a config file that was stored in a certain folder. While the web.config file itself does not provide anything special for this, you can take a path from the web.config file and convert it to an app-relative path.\nRequest.ApplicationPath gets you the base directory of the web application. YOu can append the relative path to this and give it to whatever needs it.\nAlso see this blog post by Rick Strahl for other interesting directories that may help you.\nYou could then append the relatvie path to \n" ]
[ 1 ]
[]
[]
[ ".net", "c#", "path", "relative_path", "web_config" ]
stackoverflow_0000112975_.net_c#_path_relative_path_web_config.txt
Q: How to copy DLL file from PC to vs.net`s pocket pc 2003 simulator? I want to copy a DLL file from PC to vs.net`s pocket pc 2003 simulator, so i use the shared folder of the simulator, but i can not see the dll file in file list of simulator. How to do it, please ? A: Add it to the project's primary output. A: Suggestion: if you need to use an external DLL to be copied along with your code, add a reference to the DLL, and in its properties pane on Visual Studio make sure that Copy Local is set to True. That might accomplish what you're trying to do.
How to copy DLL file from PC to vs.net`s pocket pc 2003 simulator?
I want to copy a DLL file from PC to vs.net`s pocket pc 2003 simulator, so i use the shared folder of the simulator, but i can not see the dll file in file list of simulator. How to do it, please ?
[ "Add it to the project's primary output.\n", "Suggestion: if you need to use an external DLL to be copied along with your code, add a reference to the DLL, and in its properties pane on Visual Studio make sure that Copy Local is set to True. That might accomplish what you're trying to do.\n" ]
[ 1, 0 ]
[]
[]
[ "deployment", "simulator" ]
stackoverflow_0000112976_deployment_simulator.txt
Q: Writing to the windows logs in Python Is it possible to write to the windows logs in python? A: Yes, just use Windows Python Extension, as stated here. import win32evtlogutil win32evtlogutil.ReportEvent(ApplicationName, EventID, EventCategory, EventType, Inserts, Data, SID)
Writing to the windows logs in Python
Is it possible to write to the windows logs in python?
[ "Yes, just use Windows Python Extension, as stated here.\nimport win32evtlogutil\nwin32evtlogutil.ReportEvent(ApplicationName, EventID, EventCategory,\n EventType, Inserts, Data, SID)\n\n" ]
[ 20 ]
[]
[]
[ "logging", "python", "windows" ]
stackoverflow_0000113007_logging_python_windows.txt
Q: What are the statistics of HTML vs. Text email usage What are the latest figures on people viewing their emails in text only mode vs. HTML? Wikipedia and its source both seem to reference this research from 2006 which is an eternity ago in internet terms. An issue with combining both HTML and text based emails is taking a disproportionate amount of time to resolve given the likely number of users it is affecting. A: As with web browser usage statistics, it depends entirely on the audience. I have access to a bit of data on this subject and it seems that text-only email use is very low (for non-technical audiences, at least). <0.1% up to ~6% depending on demographic. It's not that much effort to do both (especially if you can find something to help you do the heavy lifting when creating multipart MIME containers), and you can always write a script to generate text from your HTML or something.
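To illustrate the answer's point that producing both versions is little extra effort, here is a hedged C# sketch that builds a single multipart/alternative message with System.Net.Mail, carrying both a text/plain and a text/html body. The addresses and SMTP host name are placeholders.

using System.Net.Mail;
using System.Net.Mime;

public class NewsletterSender
{
    // Builds one message with both a text/plain and a text/html body part;
    // the receiving client chooses whichever it prefers.
    public static void Send()
    {
        MailMessage msg = new MailMessage("sender@example.com", "recipient@example.com");
        msg.Subject = "Newsletter";

        // Plain-text part for clients (or users) that prefer text-only mail.
        msg.AlternateViews.Add(AlternateView.CreateAlternateViewFromString(
            "Hello,\nThis is the plain-text version.", null, MediaTypeNames.Text.Plain));

        // HTML part; capable clients pick this one automatically.
        msg.AlternateViews.Add(AlternateView.CreateAlternateViewFromString(
            "<html><body><p>Hello, this is the <b>HTML</b> version.</p></body></html>",
            null, MediaTypeNames.Text.Html));

        new SmtpClient("smtp.example.com").Send(msg);
    }
}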
What are the statistics of HTML vs. Text email usage
What are the latest figures on people viewing their emails in text only mode vs. HTML? Wikipedia and its source both seem to reference this research from 2006 which is an eternity ago in internet terms. An issue with combining both HTML and text based emails is taking a disproportionate amount of time to resolve given the likely number of users it is affecting.
[ "As with web browser usage statistics, it depends entirely on the audience.\nI have access to a bit of data on this subject and it seems that text-only email use is very low (for non-technical audiences, at least). <0.1% up to ~6% depending on demographic.\nIt's not that much effort to do both (especially if you can find something to help you do the heavy lifting when creating multipart MIME containers), and you can always write a script to generate text from your HTML or something.\n" ]
[ 3 ]
[]
[]
[ "email" ]
stackoverflow_0000112940_email.txt
Q: C101: the best way to fill an array from user input? I'm having a hard time understanding and therefore managing arrays and indexes manually in C. These are my two classic approaches but they doesn't seem to work as they keep looping when the condition is reached: #include<stdio.h> #define MAX 255 int main(){ int arr[MAX]={0}; int idx=0; /* Approach #1 */ printf("Enter elements, -1 to finish:\n"); scanf("%d", &arr[idx]); while(arr[idx-1] != -1 && idx < MAX){ printf("Enter elements, -1 to finish:\n"); scanf("%d", &arr[idx]); idx++; } /* Approach #2 */ do{ printf("Enter elements, -1 to finish:\n"); scanf("%d", &arr[idx]); idx++; }while(arr[idx-1] != -1 && idx < MAX); // Main func continues here. } Any advice would be much appreciated! Update: Now it works! thanks MUCH all of you guys for such instant responses. This is definitely a great community, it's helping me a lot. A: arr[idx] <= MAX should be idx <= MAX A: while(arr[idx] != -1 && idx <= MAX){ // Fixed by sklivvz printf("Enter elements, -1 to finish:\n"); scanf("%d", &arr[idx]); idx++; } First of all, you should check if the index variabel idx is smaller than MAX (not smaller than or equal to). You would go out of bounds if your index equals MAX. An array with MAX = 10 has index values 0 to and including 9, but not 10. Secondly, you add the first element to arr[0], you increment your index from 0 to 1, then you jump back to the while-condition and check if arr[1] == -1, which it is not. So check instead that arr[idx-1] != -1. Note however that the first time you get to the while-loop, you will actually check arr[-1] != -1, which is also out of bounds. ;) So you need to figure out how to solve this one. A: to Roman M: First of all, the guy asking the question has just started a programming course, and has probably not yet learned about pointers. Secondly, you now deal with both a counter and a pointer. I'm not sure I see the benefit of doing that versus using an index like this: for(idx=0; idx < MAX; ++idx) { scanf("%d", &arr[idx]); if(arr[idx] == -1) break; } A: Using a for loop you can eliminate the need for the messy idx-1 checking code: /* Approach #3*/ int i; int value; for (i = 0; i < MAX; ++i) { printf("Enter elements, -1 to finish:\n"); scanf("%d", &value); if (value == -1) break; arr[i] = value; } A: C arrays begin counting from 0. If you allocate an array of size MAX, accessing the element at MAX would be an error. Change the loop to; int arr[MAX]; for ( .... && idx < MAX ) A: arr[idx] <= MAX should be idx < MAX unless you are checking the item instead of the index. You are also always checking the "next" element for -1 (arr[idx] != -1) because you are incrementing idx prior to checking your added value. so if you had arr[idx-1] != -1 you would be fine. A: In your first while loop, the arr[idx] <= MAX line should read idx <= MAX In your second loop, you're incrementing idx before the test - it should end with } while ((arr[idx-1] != -1) && (idx-1 <= MAX)); I also tend to parenthesize all internal conditions just to be absolutely certain that the precedence is correct (hence the extra brackets above). A: I'd go with somthing like this. You don't have to worry about array bounds and other confusing conditions. int cnt = MAX; // how many elements in the array, in this case MAX int * p = &arr[0]; // p is a pointer to an integer and is initialize to the address of the first // element of the array. So now *p is the same as arr[0] and p is same as &arr[0] // iterate over all elements. 
stop when cnt == 0 while (cnt) { // do somthing scanf("%d", *p); // remember that *p is same as arr[some index] if (*p == -1) // inspect element to see what user entered break; cnt --; // loop counter p++; // incrementing p to point to next element in the array }
C101: the best way to fill an array from user input?
I'm having a hard time understanding and therefore managing arrays and indexes manually in C. These are my two classic approaches but they doesn't seem to work as they keep looping when the condition is reached: #include<stdio.h> #define MAX 255 int main(){ int arr[MAX]={0}; int idx=0; /* Approach #1 */ printf("Enter elements, -1 to finish:\n"); scanf("%d", &arr[idx]); while(arr[idx-1] != -1 && idx < MAX){ printf("Enter elements, -1 to finish:\n"); scanf("%d", &arr[idx]); idx++; } /* Approach #2 */ do{ printf("Enter elements, -1 to finish:\n"); scanf("%d", &arr[idx]); idx++; }while(arr[idx-1] != -1 && idx < MAX); // Main func continues here. } Any advice would be much appreciated! Update: Now it works! thanks MUCH all of you guys for such instant responses. This is definitely a great community, it's helping me a lot.
[ "arr[idx] <= MAX\n\nshould be\nidx <= MAX\n\n", "while(arr[idx] != -1 && idx <= MAX){ // Fixed by sklivvz\n printf(\"Enter elements, -1 to finish:\\n\");\n scanf(\"%d\", &arr[idx]);\n idx++; \n}\n\nFirst of all, you should check if the index variabel idx is smaller than MAX (not smaller than or equal to). You would go out of bounds if your index equals MAX. An array with MAX = 10 has index values 0 to and including 9, but not 10.\nSecondly, you add the first element to arr[0], you increment your index from 0 to 1, then you jump back to the while-condition and check if arr[1] == -1, which it is not. So check instead that arr[idx-1] != -1. Note however that the first time you get to the while-loop, you will actually check arr[-1] != -1, which is also out of bounds. ;) So you need to figure out how to solve this one.\n", "to Roman M:\nFirst of all, the guy asking the question has just started a programming course, and has probably not yet learned about pointers. Secondly, you now deal with both a counter and a pointer. I'm not sure I see the benefit of doing that versus using an index like this:\nfor(idx=0; idx < MAX; ++idx) {\nscanf(\"%d\", &arr[idx]);\nif(arr[idx] == -1)\n break;\n\n}\n", "Using a for loop you can eliminate the need for the messy idx-1 checking code:\n/* Approach #3*/\nint i;\nint value;\n\nfor (i = 0; i < MAX; ++i)\n{\n printf(\"Enter elements, -1 to finish:\\n\");\n scanf(\"%d\", &value);\n if (value == -1) break;\n arr[i] = value;\n}\n\n", "C arrays begin counting from 0. \nIf you allocate an array of size MAX, accessing the element at MAX would be an error.\nChange the loop to;\nint arr[MAX];\nfor ( .... && idx < MAX )\n\n", "arr[idx] <= MAX\n\nshould be\nidx < MAX\n\nunless you are checking the item instead of the index.\nYou are also always checking the \"next\" element for -1 (arr[idx] != -1) because you are incrementing idx prior to checking your added value.\nso if you had\narr[idx-1] != -1\n\nyou would be fine.\n", "In your first while loop, the \narr[idx] <= MAX\n\nline should read\nidx <= MAX\n\nIn your second loop, you're incrementing idx before the test - it should end with\n} while ((arr[idx-1] != -1) && (idx-1 <= MAX));\n\nI also tend to parenthesize all internal conditions just to be absolutely certain that the precedence is correct (hence the extra brackets above).\n", "I'd go with somthing like this.\nYou don't have to worry about array bounds and other confusing conditions.\nint cnt = MAX; // how many elements in the array, in this case MAX\nint * p = &arr[0]; // p is a pointer to an integer and is initialize to the address of the first\n // element of the array. So now *p is the same as arr[0] and p is same as &arr[0]\n\n// iterate over all elements. stop when cnt == 0\nwhile (cnt) {\n\n // do somthing\n scanf(\"%d\", *p); // remember that *p is same as arr[some index]\n if (*p == -1) // inspect element to see what user entered\n break;\n\n cnt --; // loop counter\n p++; // incrementing p to point to next element in the array\n}\n\n" ]
[ 4, 2, 2, 2, 1, 1, 1, 1 ]
[]
[]
[ "arrays", "c", "indexing" ]
stackoverflow_0000112582_arrays_c_indexing.txt
Q: How do I access my memory mapped I/O Device (FPGA) from a RTP in VxWorks? When using VxWorks, we are trying to access a memory mapped I/O device from a Real-Time Process. Since RTPs have memory protection, how can I access my I/O device from one? A: There are two methods you can use to access your I/O mapped device from an RTP. I/O Subsystem (preferred) You essentially create a small device driver. This driver can be integrated into the I/O Subsystem of VxWorks. Once integrated, the driver is available to the RTP by simply using standard I/O operations: open, close, read, write, ioctl. Note that "creating a device driver" doesn't have to be complicated. It could be as simple as just defining a wrapper for the ioctl function. See ioLib for more details. Map Memory Directly (not recommended) You can create a shared memory region via the sdOpen call. When creating the shared memory, you can specify what the physical address should be. Specify the address to be your device's I/O mapped region, and you can access the device directly. The problem is that a shared memory region is a public object that is available to any space, and poking directly at hardware goes against the philosophy behind RTPs.
How do I access my memory mapped I/O Device (FPGA) from a RTP in VxWorks?
When using VxWorks, we are trying to access a memory mapped I/O device from a Real-Time Process. Since RTPs have memory protection, how can I access my I/O device from one?
[ "There are two methods you can use to access your I/O mapped device from an RTP.\nI/O Subsystem (preferred)\nYou essentially create a small device driver. This driver can be integrated into the I/O Subsystem of VxWorks. Once integrated, the driver is available to the RTP by simply using standard I/O operations: open, close, read, write, ioctl.\nNote that \"creating a device driver\" doesn't have to be complicated. It could be as simple as just defining a wrapper for the ioctl function. See ioLib for more details.\nMap Memory Directly (not recommended)\nYou can create a shared memory region via the sdOpen call. When creating the shared memory, you can specify what the physical address should be. Specify the address to be your device's I/O mapped region, and you can access the device directly.\nThe problem is that a shared memory region is a public object that is available to any space, and poking directly at hardware goes against the philosophy behind RTPs.\n" ]
[ 4 ]
[]
[]
[ "vxworks" ]
stackoverflow_0000113001_vxworks.txt
Q: How do I add another run level (level 7) in Ubuntu? Ubuntu has 8 run levels (0-6 and S), I want to add the run level 7. I have done the following: 1.- Created the folder /etc/rc7.d/, which contains some symbolic links to /etc/init.d/ 2.- Created the file /etc/event.d/rc7 This is its content: # rc7 - runlevel 7 compatibility # # This task runs the old sysv-rc runlevel 7 ("multi-user") scripts. It # is usually started by the telinit compatibility wrapper. start on runlevel 7 stop on runlevel [!7] console output script set $(runlevel --set 7 || true) if [ "$1" != "unknown" ]; then PREVLEVEL=$1 RUNLEVEL=$2 export PREVLEVEL RUNLEVEL fi exec /etc/init.d/rc 7 end script I thought that would be enough, but telinit 7 still throws this error: telinit: illegal runlevel: 7 A: You cannot; the runlevels are hardcoded into the utilities. But why do you need to? Runlevel 4 is essentially unused. And while it's not the best idea, you could repurpose either runlevel 3 or runlevel 5 depending on if you always/never use X. Note that some *nix systems have support for more than 6 runlevels, but Linux is not one of them. A: I'm not sure how to add them (never needed to), but I'm pretty sure /etc/inittab is where you'd add runlevels. Although I'd have to agree with Zathrus that other runlevels are available but unused. On Debian, only 1 and 2 are used, really. I'm not sure how Ubuntu has it set up, though. However, if you have a specific purpose, it should be possible to do. I've just never had to.
How do I add another run level (level 7) in Ubuntu?
Ubuntu has 8 run levels (0-6 and S), I want to add the run level 7. I have done the following: 1.- Created the folder /etc/rc7.d/, which contains some symbolic links to /etc/init.d/ 2.- Created the file /etc/event.d/rc7 This is its content: # rc7 - runlevel 7 compatibility # # This task runs the old sysv-rc runlevel 7 ("multi-user") scripts. It # is usually started by the telinit compatibility wrapper. start on runlevel 7 stop on runlevel [!7] console output script set $(runlevel --set 7 || true) if [ "$1" != "unknown" ]; then PREVLEVEL=$1 RUNLEVEL=$2 export PREVLEVEL RUNLEVEL fi exec /etc/init.d/rc 7 end script I thought that would be enough, but telinit 7 still throws this error: telinit: illegal runlevel: 7
[ "You cannot; the runlevels are hardcoded into the utilities. But why do you need to? Runlevel 4 is essentially unused. And while it's not the best idea, you could repurpose either runlevel 3 or runlevel 5 depending on if you always/never use X.\nNote that some *nix systems have support for more than 6 runlevels, but Linux is not one of them.\n", "I'm not sure how to add them (never needed to), but I'm pretty sure /etc/inittab is where you'd add runlevels.\nAlthough I'd have to agree with Zathrus that other runlevels are available but unused. On Debian, only 1 and 2 are used, really. I'm not sure how Ubuntu has it set up, though. However, if you have a specific purpose, it should be possible to do. I've just never had to.\n" ]
[ 2, 0 ]
[]
[]
[ "runlevel", "ubuntu", "upstart" ]
stackoverflow_0000112964_runlevel_ubuntu_upstart.txt
Q: Python - When to use file vs open What's the difference between file and open in Python? When should I use which one? (Say I'm in 2.5) A: You should always use open(). As the documentation states: When opening a file, it's preferable to use open() instead of invoking this constructor directly. file is more suited to type testing (for example, writing "isinstance(f, file)"). Also, file() has been removed since Python 3.0. A: Two reasons: The python philosophy of "There ought to be one way to do it" and file is going away. file is the actual type (using e.g. file('myfile.txt') is calling its constructor). open is a factory function that will return a file object. In python 3.0 file is going to move from being a built-in to being implemented by multiple classes in the io library (somewhat similar to Java with buffered readers, etc.) A: file() is a type, like an int or a list. open() is a function for opening files, and will return a file object. This is an example of when you should use open: f = open(filename, 'r') for line in f: process(line) f.close() This is an example of when you should use file: class LoggingFile(file): def write(self, data): sys.stderr.write("Wrote %d bytes\n" % len(data)) super(LoggingFile, self).write(data) As you can see, there's a good reason for both to exist, and a clear use-case for both. A: Functionally, the two are the same; open will call file anyway, so currently the difference is a matter of style. The Python docs recommend using open. When opening a file, it's preferable to use open() instead of invoking the file constructor directly. The reason is that in future versions they is not guaranteed to be the same (open will become a factory function, which returns objects of different types depending on the path it's opening). A: Only ever use open() for opening files. file() is actually being removed in 3.0, and it's deprecated at the moment. They've had a sort of strange relationship, but file() is going now, so there's no need to worry anymore. The following is from the Python 2.6 docs. [bracket stuff] added by me. When opening a file, it’s preferable to use open() instead of invoking this [file()] constructor directly. file is more suited to type testing (for example, writing isinstance(f, file) A: According to Mr Van Rossum, although open() is currently an alias for file() you should use open() because this might change in the future.
Python - When to use file vs open
What's the difference between file and open in Python? When should I use which one? (Say I'm in 2.5)
[ "You should always use open().\nAs the documentation states:\n\nWhen opening a file, it's preferable\n to use open() instead of invoking this\n constructor directly. file is more\n suited to type testing (for example,\n writing \"isinstance(f, file)\").\n\nAlso, file() has been removed since Python 3.0.\n", "Two reasons: The python philosophy of \"There ought to be one way to do it\" and file is going away.\nfile is the actual type (using e.g. file('myfile.txt') is calling its constructor). open is a factory function that will return a file object.\nIn python 3.0 file is going to move from being a built-in to being implemented by multiple classes in the io library (somewhat similar to Java with buffered readers, etc.)\n", "file() is a type, like an int or a list. open() is a function for opening files, and will return a file object.\nThis is an example of when you should use open:\nf = open(filename, 'r')\nfor line in f:\n process(line)\nf.close()\n\nThis is an example of when you should use file:\nclass LoggingFile(file):\n def write(self, data):\n sys.stderr.write(\"Wrote %d bytes\\n\" % len(data))\n super(LoggingFile, self).write(data)\n\nAs you can see, there's a good reason for both to exist, and a clear use-case for both.\n", "Functionally, the two are the same; open will call file anyway, so currently the difference is a matter of style. The Python docs recommend using open. \n\nWhen opening a file, it's preferable to use open() instead of invoking the file constructor directly. \n\nThe reason is that in future versions they is not guaranteed to be the same (open will become a factory function, which returns objects of different types depending on the path it's opening).\n", "Only ever use open() for opening files. file() is actually being removed in 3.0, and it's deprecated at the moment. They've had a sort of strange relationship, but file() is going now, so there's no need to worry anymore.\nThe following is from the Python 2.6 docs. [bracket stuff] added by me.\n\nWhen opening a file, it’s preferable to use open() instead of invoking this [file()] constructor directly. file is more suited to type testing (for example, writing isinstance(f, file)\n\n", "According to Mr Van Rossum, although open() is currently an alias for file() you should use open() because this might change in the future.\n" ]
[ 157, 33, 19, 7, 4, 2 ]
[]
[]
[ "file", "python" ]
stackoverflow_0000112970_file_python.txt
Q: How to write an RSS feed with Java? I'm using Java, and need to generate a simple, standards-compliant RSS feed. How can I go about this? A: I recommend using Rome: // Feed header SyndFeed feed = new SyndFeedImpl(); feed.setFeedType("rss_2.0"); feed.setTitle("Sample Feed"); feed.setLink("http://example.com/"); // Feed entries List entries = new ArrayList(); feed.setEntries(entries); SyndEntry entry = new SyndEntryImpl(); entry.setTitle("Entry #1"); entry.setLink("http://example.com/post/1"); SyndContent description = new SyndContentImpl(); description.setType("text/plain"); description.setValue("There is text in here."); entry.setDescription(description); entries.add(entry); // Write the feed to XML StringWriter writer = new StringWriter(); new SyndFeedOutput().output(feed, writer); System.out.println(writer.toString());
How to write an RSS feed with Java?
I'm using Java, and need to generate a simple, standards-compliant RSS feed. How can I go about this?
[ "I recommend using Rome:\n// Feed header\nSyndFeed feed = new SyndFeedImpl();\nfeed.setFeedType(\"rss_2.0\");\nfeed.setTitle(\"Sample Feed\");\nfeed.setLink(\"http://example.com/\");\n\n// Feed entries\nList entries = new ArrayList();\nfeed.setEntries(entries);\n\nSyndEntry entry = new SyndEntryImpl();\nentry.setTitle(\"Entry #1\");\nentry.setLink(\"http://example.com/post/1\");\nSyndContent description = new SyndContentImpl();\ndescription.setType(\"text/plain\");\ndescription.setValue(\"There is text in here.\");\nentry.setDescription(description);\nentries.add(entry);\n\n// Write the feed to XML\nStringWriter writer = new StringWriter();\nnew SyndFeedOutput().output(feed, writer);\nSystem.out.println(writer.toString());\n\n" ]
[ 40 ]
[]
[]
[ "java", "rome", "rss" ]
stackoverflow_0000113063_java_rome_rss.txt
Q: Select Element in a Namespace with XPath I want to select the topmost element in a document that has a given namespace (prefix). More specifically: I have XML documents that either start with /html/body (in the XHTML namespace) or with one of several elements in a particular namespace. I effectively want to strip out /html/body and just return the body contents OR the entire root namespaced element. A: In XPath 2.0 and XQuery 1.0 you can test against the namespace prefix using the in-scope-prefixes() function in a predicate. e.g. //*[in-scope-prefixes(.)='html'] If you can't use v2, in XPath 1.0 you can use the namespace-uri() function to test against the namespace itself. e.g. //*[namespace-uri()='http://www.w3.org/1999/xhtml'] A: The XPath expression that I want is: /html:html/html:body/node()|/foo:* Where the "html" prefix is mapped to the XHTML namespace, and the "foo" prefix is mapped to my target namespace.
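For completeness, a hedged C# sketch showing how the expression from the last answer might be evaluated from code: the prefixes are bound with an XmlNamespaceManager before the query runs. The "urn:example:foo" URI is a placeholder for the asker's unnamed target namespace.

using System.Xml;

public class TopmostSelector
{
    public static XmlNodeList SelectTopmost(string xml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(xml);

        // Prefixes used in the XPath are bound here; they do not have to match the
        // prefixes used in the document itself, only the namespace URIs.
        XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("html", "http://www.w3.org/1999/xhtml");
        ns.AddNamespace("foo", "urn:example:foo");   // placeholder for the real target namespace

        // Either all of the XHTML body's child nodes, or the root element in the target namespace.
        return doc.SelectNodes("/html:html/html:body/node()|/foo:*", ns);
    }
}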
Select Element in a Namespace with XPath
I want to select the topmost element in a document that has a given namespace (prefix). More specifically: I have XML documents that either start with /html/body (in the XHTML namespace) or with one of several elements in a particular namespace. I effectively want to strip out /html/body and just return the body contents OR the entire root namespaced element.
[ "In XPath 2.0 and XQuery 1.0 you can test against the namespace prefix using the in-scope-prefixes() function in a predicate. \ne.g.\n//*[in-scope-prefixes(.)='html']\n\nIf you cant use v2, in XPath 1.0 you can use the namespace-uri() function to test against the namespace itself.\ne.g.\n//*[namespace-uri()='http://www.w3c.org/1999/xhtml']\n\n", "The XPath expression that I want is:\n/html:html/html:body/node()|/foo:*\n\nWhere the \"html\" prefix is mapped to the XHTML namespace, and the \"foo\" prefix is mapped to my target namespace.\n" ]
[ 10, 3 ]
[]
[]
[ "namespaces", "xml", "xpath" ]
stackoverflow_0000112601_namespaces_xml_xpath.txt
Q: Directory Modification Monitoring I'm building a C# application that will monitor a specified directory for changes and additions and storing the information in a database. I would like to avoid checking each individual file for modifications, but I'm not sure if I can completely trust the file access time. What would be the best method to use for getting recently modified files in a directory? It would check for modifications only when the user asks it to, it will not be a constantly running service. A: Use the FileSystemWatcher object. Here is some code to do what you are looking for. // Declares the FileSystemWatcher object FileSystemWatcher watcher = new FileSystemWatcher(); // We have to specify the path which has to monitor watcher.Path = @"\\somefilepath"; // This property specifies which are the events to be monitored watcher.NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite | NotifyFilters.FileName | notifyFilters.DirectoryName; watcher.Filter = "*.*"; // Only watch text files. // Add event handlers for specific change events... watcher.Changed += new FileSystemEventHandler(OnChanged); watcher.Created += new FileSystemEventHandler(OnChanged); watcher.Deleted += new FileSystemEventHandler(OnChanged); watcher.Renamed += new RenamedEventHandler(OnRenamed); // Begin watching. watcher.EnableRaisingEvents = true; // Define the event handlers. private static void OnChanged(object source, FileSystemEventArgs e) { // Specify what is done when a file is changed, created, or deleted. } private static void OnRenamed(object source, RenamedEventArgs e) { // Specify what is done when a file is renamed. } A: I think what you want is provided by the FileSystemWatcher class. This tutorial describes how to use it to monitor a directory for changes in a simple windows service; How to implement a simple filewatcher Windows service in C# A: Hmm... interesting question. Initially I'd point you at the FileSystemWatcher class. If you are going to have it work, however, on user request, then it would seem you might need to store off the directory info initially and then compare each time the user requests. I'd probably go with a FileSystemWatcher and just store off your results anyhow. A: If you only need it to check when the user asks rather then all the time, don't use the FileSystemWatcher. Especially if it's a shared resource - the last thing you want is 50 client machines watching the same shared directory. It's probably just a typo, but you shouldn't be looking at the file access time, you want to look at the file modification time to pick up changes. Even that's not reliable. What I would do is implement some sort of checksum function on the file date and byte size, or other file system properties. That way I wouldn't be looking for changes in the complete file - only the properties of it and I can do it on request, rather than trying to hold a connection to a remote resource to monitor it. A heavier solution would be to do it the other way around, and install a service on the machine hosting the shared drive which could monitor the files and make note of the changes. Then you could query the service rather than touching the files at all - but it's probably overkill.
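Since the check should happen only when the user asks (rather than via a continuously running FileSystemWatcher), here is a hedged C# sketch of the on-demand comparison the last answer suggests: snapshot each file's last-write time and size, and on the next request report anything new or changed. The in-memory dictionary stands in for the database mentioned in the question; the class and member names are illustrative.

using System;
using System.Collections.Generic;
using System.IO;

public class ChangeScanner
{
    // Snapshot from the last scan: full path -> (last write time UTC, length in bytes).
    private Dictionary<string, KeyValuePair<DateTime, long>> previous =
        new Dictionary<string, KeyValuePair<DateTime, long>>();

    // Returns files that are new or modified since the previous call.
    public List<string> GetChangedFiles(string directory)
    {
        List<string> changed = new List<string>();
        foreach (FileInfo f in new DirectoryInfo(directory).GetFiles("*", SearchOption.AllDirectories))
        {
            KeyValuePair<DateTime, long> seen;
            bool known = previous.TryGetValue(f.FullName, out seen);
            if (!known || seen.Key != f.LastWriteTimeUtc || seen.Value != f.Length)
                changed.Add(f.FullName);

            // Update the snapshot for the next request.
            previous[f.FullName] = new KeyValuePair<DateTime, long>(f.LastWriteTimeUtc, f.Length);
        }
        return changed;
    }
}

Comparing last-write time together with file size is a cheap approximation of a checksum; it avoids opening every file while still catching typical edits.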
Directory Modification Monitoring
I'm building a C# application that will monitor a specified directory for changes and additions and storing the information in a database. I would like to avoid checking each individual file for modifications, but I'm not sure if I can completely trust the file access time. What would be the best method to use for getting recently modified files in a directory? It would check for modifications only when the user asks it to, it will not be a constantly running service.
[ "Use the FileSystemWatcher object. Here is some code to do what you are looking for.\n\n // Declares the FileSystemWatcher object\n FileSystemWatcher watcher = new FileSystemWatcher(); \n\n // We have to specify the path which has to monitor\n\n watcher.Path = @\"\\\\somefilepath\"; \n\n // This property specifies which are the events to be monitored\n watcher.NotifyFilter = NotifyFilters.LastAccess |\n NotifyFilters.LastWrite | NotifyFilters.FileName | notifyFilters.DirectoryName; \n watcher.Filter = \"*.*\"; // Only watch text files.\n\n // Add event handlers for specific change events...\n\n watcher.Changed += new FileSystemEventHandler(OnChanged);\n watcher.Created += new FileSystemEventHandler(OnChanged);\n watcher.Deleted += new FileSystemEventHandler(OnChanged);\n watcher.Renamed += new RenamedEventHandler(OnRenamed);\n // Begin watching.\n watcher.EnableRaisingEvents = true;\n\n\n // Define the event handlers.\n private static void OnChanged(object source, FileSystemEventArgs e)\n {\n // Specify what is done when a file is changed, created, or deleted.\n }\n\n private static void OnRenamed(object source, RenamedEventArgs e)\n {\n // Specify what is done when a file is renamed.\n }\n\n", "I think what you want is provided by the FileSystemWatcher class.\nThis tutorial describes how to use it to monitor a directory for changes in a simple windows service; How to implement a simple filewatcher Windows service in C#\n", "Hmm... interesting question. Initially I'd point you at the FileSystemWatcher class. If you are going to have it work, however, on user request, then it would seem you might need to store off the directory info initially and then compare each time the user requests. I'd probably go with a FileSystemWatcher and just store off your results anyhow.\n", "If you only need it to check when the user asks rather then all the time, don't use the FileSystemWatcher. Especially if it's a shared resource - the last thing you want is 50 client machines watching the same shared directory. \nIt's probably just a typo, but you shouldn't be looking at the file access time, you want to look at the file modification time to pick up changes. Even that's not reliable. \nWhat I would do is implement some sort of checksum function on the file date and byte size, or other file system properties. That way I wouldn't be looking for changes in the complete file - only the properties of it and I can do it on request, rather than trying to hold a connection to a remote resource to monitor it.\nA heavier solution would be to do it the other way around, and install a service on the machine hosting the shared drive which could monitor the files and make note of the changes. Then you could query the service rather than touching the files at all - but it's probably overkill.\n" ]
[ 6, 2, 1, 1 ]
[]
[]
[ "c#", "directory", "file", "filesystems" ]
stackoverflow_0000112276_c#_directory_file_filesystems.txt
Q: How do I run another web site or web service side by side with Sharepoint? I'm getting a 404 error when trying to run another web service on an IIS 6 server which is also running Sharepoint 2003. I'm pretty sure this is an issue with sharepoint taking over IIS configuration. Is there a way to make a certain web service or web site be ignored by whatever Sharepoint is doing? A: I found the command line solution. STSADM.EXE -o addpath -url http://localhost/<your web service/app> -type exclusion A: I depends on what you mean by side by side, if you are trying to make something inside the same URL path as sharepoint then the above answers about managed paths should do it for you, but there is also nothing stopping you from just creating another Web Site inside of IIS, sharepoint will only take over the requests coming to its specific web. A: you'll have to go into the SharePoint admin console and explicitely allow that web application to run on on the same web site as SharePoint. I believe it is under defined managed paths. Central Administration > Application Management > Define Managed Paths A: Hasn't this change from 2003 to 2007? There's no longer an excluded paths option.
How do I run another web site or web service side by side with Sharepoint?
I'm getting a 404 error when trying to run another web service on an IIS 6 server which is also running Sharepoint 2003. I'm pretty sure this is an issue with sharepoint taking over IIS configuration. Is there a way to make a certain web service or web site be ignored by whatever Sharepoint is doing?
[ "I found the command line solution.\nSTSADM.EXE -o addpath -url http://localhost/<your web service/app> -type exclusion\n\n", "I depends on what you mean by side by side, if you are trying to make something inside the same URL path as sharepoint then the above answers about managed paths should do it for you, but there is also nothing stopping you from just creating another Web Site inside of IIS, sharepoint will only take over the requests coming to its specific web.\n", "you'll have to go into the SharePoint admin console and explicitely allow that web application to run on on the same web site as SharePoint. \nI believe it is under defined managed paths.\nCentral Administration > Application Management > Define Managed Paths\n", "Hasn't this change from 2003 to 2007? There's no longer an excluded paths option.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "iis", "sharepoint" ]
stackoverflow_0000104293_iis_sharepoint.txt
Q: How do I use a list from a different site in MOSS? I have an announcements list on one site. I want to add it as a web part to the top of each subsite. How can I do this in MOSS? A: I've used the Data View Web Part in this case. Create a web service data source to get the data from the other site's list. Much like this: http://www.sharepointblogs.com/ssa/archive/2007/02/23/showing-web-service-data-in-a-data-view-web-part.aspx A: A couple of points. First, you specified that you are using WSS 3.0, so the CQWP is not available (you need MOSS and to have publishing turned on for this to be available). The enhanced community edition will also not work for you since it derives from the CQWP. Second, I would agree with Eugene Katz that a DataFormWebPart would be an easy approach, and I have a slightly different way of producing it than the link he posted presents. In Sharepoint Designer, open your desired site you want to place the web part on. Select the Data Source Library from the Task Panes menu, then click on "Connect to another library..." at the bottom of the pane, and browse/select your parent site that contains the announcement list. Now you can just add your announcement as a DataFormWebPart from the newly created node on the Data Source Library pane just as if it was on your site. Sharepoint Designer help shows how to do this if you are unfamiliar. After you have set up your DataFormWebPart to your liking, you can make adding this to additional sites much easier by doing the following: Highlight your newly built DataFormWebPart and select File/Export/Save Web Part to.../Site Gallery. It will now be available throughout the site collection as an addable web part. A: Out of box that is not possible. Lists are limited to one site only. The only option you have is to use content query web part (available in SharePoint Standard or better). Here is how you can use CQWP. There is also enhanced - community edition here. You can embed these in your subsite templates. A: You should be getting the SPList object of that particular list using SharePoint Object Model. Once u get the same, you can render the list using the RenderAsHtml() Method. Please note that the RenderAsHtml() Method takes an SPQuery Object as parameter. You need to create an SPQuery object with the appropriate Query string. This code could go into the override of the RenderWebPart() method of a custom webpart: SPSite site = new SPSite(siteURL); SPWeb web = site.OpenWeb(webName); SPList list = web.Lists[listName]; SPQuery query = new SPQuery(); query.Query = queryString; string html = list.RenderAsHtml(query); output.Write(html); //output is the HtmlTextWriter object in the RenderWebPart method. A: The Content Query Web Part or the open source Enhanced Content Query Web Part are good ways to accomplish this.. If you don't have MOSS but WSS, Mr. Katz's and Mr. Ashwin's answers are acceptable but different ways to answer this question. A: A really great web part for doing this is the Content By Type web part on Codeplex. It also supports showing items of a given content type from any list in any subsite. See: http://www.codeplex.com/eoffice
How do I use a list from a different site in MOSS?
I have an announcements list on one site. I want to add it as a web part to the top of each subsite. How can I do this in MOSS?
[ "I've used the Data View Web Part in this case. Create a web service data source to get the data from the other site's list.\nMuch like this:\nhttp://www.sharepointblogs.com/ssa/archive/2007/02/23/showing-web-service-data-in-a-data-view-web-part.aspx\n", "A couple of points. \nFirst, you specified that you are using WSS 3.0, so the CQWP is not available (you need MOSS and to have publishing turned on for this to be available). The enhanced community edition will also not work for you since it derives from the CQWP. \nSecond, I would agree with Eugene Katz that a DataFormWebPart would be an easy approach, and I have a slightly different way of producing it than the link he posted presents. In Sharepoint Designer, open your desired site you want to place the web part on. Select the Data Source Library from the Task Panes menu, then click on \"Connect to another library...\" at the bottom of the pane, and browse/select your parent site that contains the announcement list. Now you can just add your announcement as a DataFormWebPart from the newly created node on the Data Source Library pane just as if it was on your site. Sharepoint Designer help shows how to do this if you are unfamiliar.\nAfter you have set up your DataFormWebPart to your liking, you can make adding this to additional sites much easier by doing the following: Highlight your newly built DataFormWebPart and select File/Export/Save Web Part to.../Site Gallery. It will now be available throughout the site collection as an addable web part.\n", "Out of box that is not possible. Lists are limited to one site only. \nThe only option you have is to use content query web part (available in SharePoint Standard or better).\nHere is how you can use CQWP. \nThere is also enhanced - community edition here.\nYou can embed these in your subsite templates.\n", "You should be getting the SPList object of that particular list using SharePoint Object Model. Once u get the same, you can render the list using the RenderAsHtml() Method. Please note that the RenderAsHtml() Method takes an SPQuery Object as parameter. You need to create an SPQuery object with the appropriate Query string. This code could go into the override of the RenderWebPart() method of a custom webpart:\nSPSite site = new SPSite(siteURL);\nSPWeb web = site.OpenWeb(webName);\nSPList list = web.Lists[listName];\nSPQuery query = new SPQuery();\nquery.Query = queryString;\nstring html = list.RenderAsHtml(query);\noutput.Write(html); //output is the HtmlTextWriter object in the RenderWebPart method.\n", "The Content Query Web Part or the open source Enhanced Content Query Web Part are good ways to accomplish this.. If you don't have MOSS but WSS, Mr. Katz's and Mr. Ashwin's answers are acceptable but different ways to answer this question.\n", "A really great web part for doing this is the Content By Type web part on Codeplex. It also supports showing items of a given content type from any list in any subsite.\nSee: http://www.codeplex.com/eoffice\n" ]
[ 4, 2, 1, 0, 0, 0 ]
[]
[]
[ "moss", "sharepoint" ]
stackoverflow_0000092837_moss_sharepoint.txt
Q: Advancing through relative dates using strtotime() I'm trying to use strtotime() to respond to a button click to advance -1 and +1 days (two buttons) relative to the day advanced to on the previous click. Example: It's the 10th of the month, I click "-1 day" button, and now the date reads as the 9th. I click the "-1 day" button again and now the readout states the 8th day. I click the "+1 day" button and now the readout states it's the 9th. I understand the buttons and the displaying the date and using $_GET and PHP to pass info, but how do I get strtotime() to work on the relative date from the last time the time travel script was called? My work so far has let me show yesterday and today relative to now but not relative to, for example, the day before yesterday, or the day after tomorrow. Or if I use my "last monday" button, the day before or after whatever that day is. A: Working from previous calls to the same script isn't really a good idea for this type of thing. What you want to do is always pass two values to your script, the date, and the movement. (the below example is simplified so that you only pass the date, and it will always add one day to it) Example http://www.site.com/addOneDay.php?date=1999-12-31 <?php echo Date("Y-m-d",(strtoTime($_GET[date])+86400)); ?> Please note that you should check to make sure that isset($_GET[date]) before as well If you really want to work from previous calls to the same script, you're going to have to do it with sessions, so please specify if that is the case. A: Kevin, you work off a solid absolute base (i.e. a date / time), not a relative time period. You then convert to the relative time periods. So, for example, by default, if you were showing a calendar, you'd work from todays date. int strtotime ( string $time [, int $now ] ) You can see in the function definition here of strtotime, the second argument is now, i.e. you can change the date from which it's relative. This might be easier to display through a quick loop This will loop through the last 10 days using "yesterday" as the first argument. We then use date to print it out. $time = time(); for ($i = 0; $i < 10; $i++) { $time = strtotime("yesterday", $time); print date("r", $time) . "\n"; } So pass the time/date in via the URI so you can save the relative date. A: After a moment of inspiration, the solution to my question became apparent to me (I was riding my bike). The '$now' part of strtottime( string $time {,int $now ]) needs to be set as the current date. Not "$time()-now", but "the current date I'm concerned with / I'm looking at my log for. ie: if I'm looking at the timesheet summary for 8/10/2008, then that is "now" according to strtotime(); yesterday is 8/09 and tomorrow is 8/11. Once I creep up one day, "now" is 8/11, yesterday is 8/10, and tomorrow is 8/12. 
Here's the code example: <?php //catch variable $givendate=$_GET['given']; //convert given date to unix timestamp $date=strtotime($givendate); echo "Date Set As...: ".date('m/d/Y',$date)."<br />"; //use given date to show day before $yesterday=strtotime('-1 day',$date); echo "Day Before: ".date('m/d/Y',$yesterday)."<br />"; //same for next day $tomorrow=strtotime('+1 day',$date); echo "Next Day: ".date('m/d/Y',$tomorrow)."<br />"; $lastmonday=strtotime('last monday, 1 week ago',$date); echo "Last Moday: ".date('D m/d/Y',$lastmonday)."<br />"; //form echo "<form method=\"get\" action=\"{$_SERVER['PHP_SELF']}\">"; //link to subtract a day echo "<a href=\"newtimetravel.php?given=".date('m/d/Y',$yesterday)."\"><< </a>"; //show current day echo "<input type=\"text\" name=\"given\" value=\"$givendate\">"; //link to add a day echo "<a href=\"newtimetravel.php?given=".date('m/d/Y',$tomorrow)."\"> >></a><br />"; //submit manually entered day echo "<input type=\"submit\" name=\"changetime\" value=\"Set Current Date\">"; //close form echo "<form><br />"; ?> Clicking on the "<<" and ">>" advances and retreats the day in question
Advancing through relative dates using strtotime()
I'm trying to use strtotime() to respond to a button click to advance -1 and +1 days (two buttons) relative to the day advanced to on the previous click. Example: It's the 10th of the month, I click "-1 day" button, and now the date reads as the 9th. I click the "-1 day" button again and now the readout states the 8th day. I click the "+1 day" button and now the readout states it's the 9th. I understand the buttons and the displaying the date and using $_GET and PHP to pass info, but how do I get strtotime() to work on the relative date from the last time the time travel script was called? My work so far has let me show yesterday and today relative to now but not relative to, for example, the day before yesterday, or the day after tomorrow. Or if I use my "last monday" button, the day before or after whatever that day is.
[ "Working from previous calls to the same script isn't really a good idea for this type of thing.\nWhat you want to do is always pass two values to your script, the date, and the movement. (the below example is simplified so that you only pass the date, and it will always add one day to it)\nExample\nhttp://www.site.com/addOneDay.php?date=1999-12-31\n<?php\n echo Date(\"Y-m-d\",(strtoTime($_GET[date])+86400));\n?>\n\nPlease note that you should check to make sure that isset($_GET[date]) before as well\nIf you really want to work from previous calls to the same script, you're going to have to do it with sessions, so please specify if that is the case.\n", "Kevin, you work off a solid absolute base (i.e. a date / time), not a relative time period. You then convert to the relative time periods. So, for example, by default, if you were showing a calendar, you'd work from todays date. \nint strtotime ( string $time [, int $now ] )\n\nYou can see in the function definition here of strtotime, the second argument is now, i.e. you can change the date from which it's relative.\nThis might be easier to display through a quick loop\nThis will loop through the last 10 days using \"yesterday\" as the first argument. We then use date to print it out.\n$time = time();\n\nfor ($i = 0; $i < 10; $i++) {\n $time = strtotime(\"yesterday\", $time);\n print date(\"r\", $time) . \"\\n\";\n}\n\nSo pass the time/date in via the URI so you can save the relative date. \n", "After a moment of inspiration, the solution to my question became apparent to me (I was riding my bike). The '$now' part of \nstrtottime( string $time {,int $now ]) \n\nneeds to be set as the current date. Not \"$time()-now\", but \"the current date I'm concerned with / I'm looking at my log for.\nie: if I'm looking at the timesheet summary for 8/10/2008, then that is \"now\" according to strtotime(); yesterday is 8/09 and tomorrow is 8/11. Once I creep up one day, \"now\" is 8/11, yesterday is 8/10, and tomorrow is 8/12.\nHere's the code example:\n<?php\n\n//catch variable\n$givendate=$_GET['given'];\n\n//convert given date to unix timestamp\n$date=strtotime($givendate);\necho \"Date Set As...: \".date('m/d/Y',$date).\"<br />\";\n\n//use given date to show day before\n$yesterday=strtotime('-1 day',$date);\necho \"Day Before: \".date('m/d/Y',$yesterday).\"<br />\";\n\n//same for next day\n$tomorrow=strtotime('+1 day',$date);\necho \"Next Day: \".date('m/d/Y',$tomorrow).\"<br />\";\n$lastmonday=strtotime('last monday, 1 week ago',$date);\necho \"Last Moday: \".date('D m/d/Y',$lastmonday).\"<br />\";\n\n//form\necho \"<form method=\\\"get\\\" action=\\\"{$_SERVER['PHP_SELF']}\\\">\";\n\n//link to subtract a day\necho \"<a href=\\\"newtimetravel.php?given=\".date('m/d/Y',$yesterday).\"\\\"><< </a>\";\n\n//show current day\necho \"<input type=\\\"text\\\" name=\\\"given\\\" value=\\\"$givendate\\\">\";\n\n//link to add a day\necho \"<a href=\\\"newtimetravel.php?given=\".date('m/d/Y',$tomorrow).\"\\\"> >></a><br />\";\n\n//submit manually entered day\necho \"<input type=\\\"submit\\\" name=\\\"changetime\\\" value=\\\"Set Current Date\\\">\";\n\n//close form\necho \"<form><br />\";\n?>\n\nClicking on the \"<<\" and \">>\" advances and retreats the day in question\n" ]
[ 6, 1, 0 ]
[]
[]
[ "date", "php", "strtotime" ]
stackoverflow_0000008685_date_php_strtotime.txt
Q: How to use one object's method to update another object's attribute? I have three (C++) classes: Player, Hand, and Card. Player has a member, hand, that holds a Hand. It also has a method, getHand(), that returns the contents of hand. Hand Player::getHand() { return hand; } Hand has a method, addCard(Card c), that adds a card to the hand. I want to do this: player1.getHand().addCard(c); but it doesn't work. It doesn't throw an error, so it's doing something. But if I examine the contents of player1's hand afterward, the card hasn't been added. How can I get this to work? A: If getHand() returns by-value you're modifying a copy of the hand and not the original. A: If getHand() is not returning a reference you will be in trouble. A: A Player.addCardToHand() method is not unreasonable, if you have no reason to otherwise expose a Hand. This is probably ideal in some ways, as you can still provide copies of the Hand for win-checking comparisons, and no-one can modify them. A: Your method needs to return a pointer or a refernce to the player's Hand object. You could then call it like "player1.getHand()->addCard(c)". Note that that is the syntax you'd use it it were a pointer. A: Return a reference to the hand object eg. Hand &Player::getHand() { return hand; } Now your addCard() function is operating on the correct object. A: What is the declaration of getHand()? Is it returning a new Hand value, or is it returning a Hand& reference? A: As has been stated, you're probably modifying a copy instead of the original. To prevent this kind of mistake, you can explicitly declare copy constructors and equals operators as private. private: Hand(const Hand& rhs); Hand& operator=(const Hand& rhs); A: getX() is often the name of an accessor function for member x, similar to your own usage. However, a "getX" accessor is also very often a read-only function, so that it may be surprising in other situations of your code base to see a call to "getX" that modifies X. So I would suggest, instead of just using a reference as a return value, to actually modify the code design a bit. Some alternatives: Expose a getMutableHand method that returns a pointer (or reference). By returning a pointer, you strongly suggest that the caller uses the pointer notation, so that anyone reading the code sees that this variable is changing values, and is not read-only. Make Player a subclass of Hand, so that anything that manipulates a Hand also works directly on the Player. Intuitively, you could say that a Player is not a Hand, but functionally they have the right relationship - every Player has exactly one hand, and it seems that you do want to be able to have the same access to a Hand via Player as you would directly. Directly implement an addCard method for your Player class.
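For reference, a minimal C++ sketch of the reference-returning accessor the answers above converge on; the Card/Hand/Player members are assumed from the question and std::vector is used purely for illustration, not as the asker's actual implementation.

#include <cstddef>
#include <vector>

class Card { };

class Hand {
public:
    void addCard(Card c) { cards.push_back(c); }
    std::size_t size() const { return cards.size(); }
private:
    std::vector<Card> cards;
};

class Player {
public:
    // Returning Hand& means callers modify this player's hand,
    // not a temporary copy.
    Hand& getHand() { return hand; }
    const Hand& getHand() const { return hand; }
private:
    Hand hand;
};

int main() {
    Player player1;
    Card c;
    player1.getHand().addCard(c);                       // now actually adds to player1's hand
    return static_cast<int>(player1.getHand().size());  // 1
}

With the by-value getHand() from the question, the same call compiles but only mutates a copy, which is why the card never shows up in player1's hand afterward.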
How to use one object's method to update another object's attribute?
I have three (C++) classes: Player, Hand, and Card. Player has a member, hand, that holds a Hand. It also has a method, getHand(), that returns the contents of hand. Hand Player::getHand() { return hand; } Hand has a method, addCard(Card c), that adds a card to the hand. I want to do this: player1.getHand().addCard(c); but it doesn't work. It doesn't throw an error, so it's doing something. But if I examine the contents of player1's hand afterward, the card hasn't been added. How can I get this to work?
[ "If getHand() returns by-value you're modifying a copy of the hand and not the original.\n", "If getHand() is not returning a reference you will be in trouble.\n", "A Player.addCardToHand() method is not unreasonable, if you have no reason to otherwise expose a Hand. This is probably ideal in some ways, as you can still provide copies of the Hand for win-checking comparisons, and no-one can modify them.\n", "Your method needs to return a pointer or a refernce to the player's Hand object. You could then call it like \"player1.getHand()->addCard(c)\". Note that that is the syntax you'd use it it were a pointer.\n", "Return a reference to the hand object eg.\nHand &Player::getHand() {\n return hand;\n}\n\nNow your addCard() function is operating on the correct object.\n", "What is the declaration of getHand()? Is it returning a new Hand value, or is it returning a Hand& reference?\n", "As has been stated, you're probably modifying a copy instead of the original.\nTo prevent this kind of mistake, you can explicitly declare copy constructors and equals operators as private.\n private:\n Hand(const Hand& rhs);\n Hand& operator=(const Hand& rhs);\n\n", "getX() is often the name of an accessor function for member x, similar to your own usage. However, a \"getX\" accessor is also very often a read-only function, so that it may be surprising in other situations of your code base to see a call to \"getX\" that modifies X.\nSo I would suggest, instead of just using a reference as a return value, to actually modify the code design a bit. Some alternatives:\n\nExpose a getMutableHand method that returns a pointer (or reference). By returning a pointer, you strongly suggest that the caller uses the pointer notation, so that anyone reading the code sees that this variable is changing values, and is not read-only.\nMake Player a subclass of Hand, so that anything that manipulates a Hand also works directly on the Player. Intuitively, you could say that a Player is not a Hand, but functionally they have the right relationship - every Player has exactly one hand, and it seems that you do want to be able to have the same access to a Hand via Player as you would directly.\nDirectly implement an addCard method for your Player class.\n\n" ]
[ 2, 1, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "c++", "methods", "oop" ]
stackoverflow_0000113033_c++_methods_oop.txt
Q: WPF: Org Chart TreeView Conditional Formatting The company has the traditional complex organizational structure, defining the amount of levels using the letter 'n' rather than an actual number. I will try and express the structure I'm trying to achieve in mono-spaced font: Alice ,--------|-------,------,------, Bob Fred Jack Kim Lucy | | Charlie Greg Darren Henry Eric As you can see it's not symmetrical, as Jack, Kim and Lucy report to Alice but have no reports of their own. Using a TreeView with an ItemsPanel containing a StackPanel and Orientation="Horizontal" is easy enough, but this can result in a very large TreeView once some people have 20 others reporting to them! You can also use Triggers to peek into whether a TreeViewItem has children with Property="TreeViewItem.HasItems", but this is not in the same context as the before-mentioned ItemsPanel. Eg: I can tell that Fred has reports, but not whether they have reports of their own. So, can you conditionally format TreeViewItems to be Vertical if they have no children of their own? A: Josh Smith has a excecllent CodeProject article about TreeView. Read it here A: I did end up using tips from the linked article, which I'd already read through but didn't think would help me. The meat of it happens here, in a converter: <ValueConversion(GetType(ItemsPresenter), GetType(Orientation))> _ Public Class ItemsPanelOrientationConverter Implements IValueConverter Public Function Convert(ByVal value As Object, ByVal targetType As System.Type, _ ByVal parameter As Object, ByVal culture As System.Globalization.CultureInfo) _ As Object Implements System.Windows.Data.IValueConverter.Convert 'The 'value' argument should reference an ItemsPresenter.' Dim itemsPresenter As ItemsPresenter = TryCast(value, ItemsPresenter) If itemsPresenter Is Nothing Then Return Binding.DoNothing End If 'The ItemsPresenter''s templated parent should be a TreeViewItem.' Dim item As TreeViewItem = TryCast(itemsPresenter.TemplatedParent, TreeViewItem) If item Is Nothing Then Return Binding.DoNothing End If For Each i As Object In item.Items Dim element As StaffMember = TryCast(i, StaffMember) If element.IsManager Then 'If this element has children, then return Horizontal' Return Orientation.Horizontal End If Next 'Must be a stub ItemPresenter' Return Orientation.Vertical End Function Which in turn gets consumed in a style I created for the TreeView: <Setter Property="ItemsPanel"> <Setter.Value> <ItemsPanelTemplate > <ItemsPanelTemplate.Resources> <local:ItemsPanelOrientationConverter x:Key="conv" /> </ItemsPanelTemplate.Resources> <StackPanel IsItemsHost="True" Orientation="{Binding RelativeSource={x:Static RelativeSource.TemplatedParent}, Converter={StaticResource conv}}" /> </ItemsPanelTemplate> </Setter.Value> </Setter>
WPF: Org Chart TreeView Conditional Formatting
The company has the traditional complex organizational structure, defining the amount of levels using the letter 'n' rather than an actual number. I will try and express the structure I'm trying to achieve in mono-spaced font: Alice ,--------|-------,------,------, Bob Fred Jack Kim Lucy | | Charlie Greg Darren Henry Eric As you can see it's not symmetrical, as Jack, Kim and Lucy report to Alice but have no reports of their own. Using a TreeView with an ItemsPanel containing a StackPanel and Orientation="Horizontal" is easy enough, but this can result in a very large TreeView once some people have 20 others reporting to them! You can also use Triggers to peek into whether a TreeViewItem has children with Property="TreeViewItem.HasItems", but this is not in the same context as the before-mentioned ItemsPanel. Eg: I can tell that Fred has reports, but not whether they have reports of their own. So, can you conditionally format TreeViewItems to be Vertical if they have no children of their own?
[ "Josh Smith has a excecllent CodeProject article about TreeView. Read it here\n", "I did end up using tips from the linked article, which I'd already read through but didn't think would help me. \nThe meat of it happens here, in a converter: \n<ValueConversion(GetType(ItemsPresenter), GetType(Orientation))> _\nPublic Class ItemsPanelOrientationConverter\nImplements IValueConverter\n\nPublic Function Convert(ByVal value As Object, ByVal targetType As System.Type, _\nByVal parameter As Object, ByVal culture As System.Globalization.CultureInfo) _\nAs Object Implements System.Windows.Data.IValueConverter.Convert\n\n 'The 'value' argument should reference an ItemsPresenter.'\n Dim itemsPresenter As ItemsPresenter = TryCast(value, ItemsPresenter)\n If itemsPresenter Is Nothing Then\n Return Binding.DoNothing\n End If\n\n 'The ItemsPresenter''s templated parent should be a TreeViewItem.'\n Dim item As TreeViewItem = TryCast(itemsPresenter.TemplatedParent, TreeViewItem)\n If item Is Nothing Then\n Return Binding.DoNothing\n End If\n\n For Each i As Object In item.Items\n Dim element As StaffMember = TryCast(i, StaffMember)\n If element.IsManager Then\n 'If this element has children, then return Horizontal'\n Return Orientation.Horizontal\n End If\n Next\n\n 'Must be a stub ItemPresenter'\n Return Orientation.Vertical\n\nEnd Function\n\nWhich in turn gets consumed in a style I created for the TreeView: \n <Setter Property=\"ItemsPanel\">\n <Setter.Value>\n <ItemsPanelTemplate >\n <ItemsPanelTemplate.Resources>\n <local:ItemsPanelOrientationConverter x:Key=\"conv\" />\n </ItemsPanelTemplate.Resources>\n <StackPanel IsItemsHost=\"True\" \n Orientation=\"{Binding \n RelativeSource={x:Static RelativeSource.TemplatedParent}, \n Converter={StaticResource conv}}\" />\n </ItemsPanelTemplate>\n </Setter.Value>\n </Setter>\n\n" ]
[ 2, 0 ]
[]
[]
[ "wpf", "xaml" ]
stackoverflow_0000069928_wpf_xaml.txt
Q: Restore a SQL Server database from single instance to cluster I need to transfer a database from a SQL Server instance test server to a production environment that is clustered. But SQL Server doesn't allow you to use backup/restore to do it from single instance to cluster. I'm talking about a Microsoft CRM complex database here. Your help is greatly appreciated. A: Have a look at the Microsoft SQL Server Database Publishing Wizard: SQL Server Database Publishing Wizard enables the deployment of SQL Server databases into a hosted environment on either a SQL Server 2000 or 2005 server. It generates a single SQL script file which can be used to recreate a database (both schema and data) in a shared hosting environment where the only connectivity to a server is through a web-based control panel with a script execution window. If supported by the hosting service provider, the Database Publishing Wizard can also directly upload databases to servers located at the shared hosting provider. Optionally, SQL Server Database Publishing Wizard can integrate directly into Visual Studio 2005 and/or Visual Web Developer 2005 allowing easy publishing of databases from within the development environment. You don't have to use the server-side piece; the client-side 'create a script' piece is generally enough.
Restore a SQL Server database from single instance to cluster
I need to transfer a database from a SQL Server instance test server to a production environment that is clustered. But SQL Server doesn't allow you to use backup/restore to do it from single instance to cluster. I'm talking about a Microsoft CRM complex database here. Your help is greatly appreciated.
[ "Have a look at the Microsoft SQL Server Database Publishing Wizard:\n\nSQL Server Database Publishing Wizard\n enables the deployment of SQL Server\n databases into a hosted environment on\n either a SQL Server 2000 or 2005\n server. It generates a single SQL\n script file which can be used to\n recreate a database (both schema and\n data) in a shared hosting environment\n where the only connectivity to a\n server is through a web-based control\n panel with a script execution window.\n If supported by the hosting service\n provider, the Database Publishing\n Wizard can also directly upload\n databases to servers located at the\n shared hosting provider.\nOptionally, SQL Server Database\n Publishing Wizard can integrate\n directly into Visual Studio 2005\n and/or Visual Web Developer 2005\n allowing easy publishing of databases\n from within the development\n environment.\n\nYou don't have to use the server-side piece; the client-side 'create a script' piece is generally enough.\n" ]
[ 1 ]
[]
[]
[ "sql_server_2005" ]
stackoverflow_0000112676_sql_server_2005.txt
Q: Generics on ASP.NET page Class I want to implement Generics in my Page Class like : Public Class MyClass(Of TheClass) Inherits System.Web.UI.Page But for this to work, I need to be able to instantiate the Class (with the correct Generic Class Type) and load the page, instead of a regular Response.Redirect. Is there a way to do this ? A: I'm not sure to fully understand what you want to do. If you want something like a generic Page, you can use a generic BasePage and put your generic methods into that BasePage: Partial Public Class MyPage Inherits MyGenericBasePage(Of MyType) End Class Public Class MyGenericBasePage(Of T As New) Inherits System.Web.UI.Page Public Function MyGenericMethod() As T Return New T() End Function End Class Public Class MyType End Class A: The answer that says to derive a type from the generic type is a good one. However, if your solution involves grabbing a page based upon a type determined at runtime then you should be able to handle the PreRequestHandlerExecute event on the current HttpApplication. This event is called just before a Request is forwarded to a Handler, so I believe you can inject your page into the HttpContext.Current.Handler property. Then you can create the page however you wish.
Generics on ASP.NET page Class
I want to implement Generics in my Page Class like : Public Class MyClass(Of TheClass) Inherits System.Web.UI.Page But for this to work, I need to be able to instantiate the Class (with the correct Generic Class Type) and load the page, instead of a regular Response.Redirect. Is there a way to do this ?
[ "I'm not sure to fully understand what you want to do.\nIf you want something like a generic Page, you can use a generic BasePage and put your generic methods into that BasePage:\nPartial Public Class MyPage\n Inherits MyGenericBasePage(Of MyType)\n\nEnd Class\n\nPublic Class MyGenericBasePage(Of T As New)\n Inherits System.Web.UI.Page\n\n Public Function MyGenericMethod() As T\n Return New T()\n End Function\n\nEnd Class\n\nPublic Class MyType\n\nEnd Class\n\n", "The answer that says to derive a type from the generic type is a good one. However, if your solution involves grabbing a page based upon a type determined at runtime then you should be able to handle the PreRequestHandlerExecute event on the current HttpApplication.\nThis event is called just before a Request is forwarded to a Handler, so I believe you can inject your page into the HttpContext.Current.Handler property. Then you can create the page however you wish.\n" ]
[ 2, 0 ]
[]
[]
[ "asp.net", "generics", "vb.net" ]
stackoverflow_0000112977_asp.net_generics_vb.net.txt
Q: Where can I find some up to date information on OpenID authentication with rails? The question says it all. I can't seem to find any recent rails tutorials or whatever to set up an OpenID authentication system. I found RestfulOpenIDAuthentication but it's so much older than the vanilla Restful Authentication and the docs don't even mention Rails 2 that I am pretty wary. Does anyone have any tips? I'd like to do what stackoverflow does and only have OpenID support. Thanks! A: Check out the Railscast covering exactly this topic. It builds on the previous episode which discusses Restful Authentication.
Where can I find some up to date information on OpenID authentication with rails?
The question says it all. I can't seem to find any recent rails tutorials or whatever to set up an OpenID authentication system. I found RestfulOpenIDAuthentication but it's so much older than the vanilla Restful Authentication and the docs don't even mention Rails 2 that I am pretty wary. Does anyone have any tips? I'd like to do what stackoverflow does and only have OpenID support. Thanks!
[ "Check out the Railscast covering exactly this topic. It builds on the previous episode which discusses Restful Authentication.\n" ]
[ 1 ]
[]
[]
[ "openid", "ruby_on_rails" ]
stackoverflow_0000113113_openid_ruby_on_rails.txt
Q: Large Image resizing libraries Does anyone know of any good image resizing libraries that will handling resizing large images(~7573 x ~9485). Something that is fast and doesn't chew to much memory would be great. At the moment I am using IrfanView and just shell invoking it with arguments but I would like to find something that integrates into .net a bit more. Thanks. A: ImageMagick all the way. It's a codebase with nearly every image-related operation you could possibly want to do, implemented fairly efficiently in C. This includes various types of resizing, both interpolated (bilinear, trilinear, adaptive, etc.), and not (just decimating (sampling) or replicating pixels. There are a ton of APIs (language bindings) that you can use in your applications, including MagickNet. Also, not sure if it's at all relevant to what you're trying to do, but I thought this was a pretty darn cool SIGGRAPH paper, so here goes: ImageMagick also supports what they call "liquid rescaling", or seam carving, a technique shown in this cool demo here, and whose implementation and use in ImageMagick is discussed here. A: A couple years ago I used FreeImage in a program that needed to load some relatively big images (12-mega-pixel images). It performed really well (waaaay better than GDI+) and the API is quite simple to understand and start using. I even wrote a .NET wrapper and I think I still have it laying around somewhere, but I suppose there must be better wrappers/bindings for .NET by now. A: I've used ImageMagick in the past - note that you would have to invoke it from command line, too. The good news is that it's a breeze to integrate into your project, and it's a very powerful utility. A: Yes, I'd go for ImageMagick definitely. I'd give http://midimick.com/magicknet/ a shot if I were you..
Large Image resizing libraries
Does anyone know of any good image resizing libraries that will handling resizing large images(~7573 x ~9485). Something that is fast and doesn't chew to much memory would be great. At the moment I am using IrfanView and just shell invoking it with arguments but I would like to find something that integrates into .net a bit more. Thanks.
[ "ImageMagick all the way. It's a codebase with nearly every image-related operation you could possibly want to do, implemented fairly efficiently in C. This includes various types of resizing, both interpolated (bilinear, trilinear, adaptive, etc.), and not (just decimating (sampling) or replicating pixels. There are a ton of APIs (language bindings) that you can use in your applications, including MagickNet.\nAlso, not sure if it's at all relevant to what you're trying to do, but I thought this was a pretty darn cool SIGGRAPH paper, so here goes: ImageMagick also supports what they call \"liquid rescaling\", or seam carving, a technique shown in this cool demo here, and whose implementation and use in ImageMagick is discussed here.\n", "A couple years ago I used FreeImage in a program that needed to load some relatively big images (12-mega-pixel images). It performed really well (waaaay better than GDI+) and the API is quite simple to understand and start using. I even wrote a .NET wrapper and I think I still have it laying around somewhere, but I suppose there must be better wrappers/bindings for .NET by now.\n", "I've used ImageMagick in the past - note that you would have to invoke it from command line, too. The good news is that it's a breeze to integrate into your project, and it's a very powerful utility.\n", "Yes, I'd go for ImageMagick definitely. I'd give http://midimick.com/magicknet/ a shot if I were you..\n" ]
[ 5, 1, 0, 0 ]
[]
[]
[ ".net", "image", "image_manipulation", "resize" ]
stackoverflow_0000113144_.net_image_image_manipulation_resize.txt
Q: What are the Python equivalents of the sighold and sigrelse functions found in C? It appears the Python signal module doesn't have anything similar to the sighold and sigrelse functions found in C, using signal.h. Are there Python equivalents of any sort? Many thanks! A: There is no way to ``block'' signals temporarily from critical sections (since this is not supported by all Unix flavors). https://docs.python.org/library/signal.html A: There are no direct bindings for this in Python. Accessing them through ctypes is easy enough; here is an example. import ctypes, signal libc = ctypes.cdll.LoadLibrary("libc.so.6") libc.sighold(signal.SIGKILL) libc.sigrelse(signal.SIGKILL) I'm not familiar with the use of these calls, but be aware that Python's signal handlers work differently than C. When Python code is attached to a signal callback, the signal is caught on the C side of the interpreter and queued. The interpreter is occasionally interrupted for internal housekeeping (and thread switching, etc). It is during that interrupt the Python handler for the signal will be called. All that to say, just be aware that Python's signal handling is a little less asynchronous than normal C signal handlers.
What are the Python equivalents of the sighold and sigrelse functions found in C?
It appears the Python signal module doesn't have anything similar to the sighold and sigrelse functions found in C, using signal.h. Are there Python equivalents of any sort? Many thanks!
[ "There is no way to ``block'' signals temporarily from critical sections (since this is not supported by all Unix flavors).\nhttps://docs.python.org/library/signal.html\n", "There are no direct bindings for this in Python. Accessing them through ctypes is easy enough; here is an example.\nimport ctypes, signal\nlibc = ctypes.cdll.LoadLibrary(\"libc.so.6\")\nlibc.sighold(signal.SIGKILL)\nlibc.sigrelse(signal.SIGKILL)\n\nI'm not familiar with the use of these calls, but be aware that Python's signal handlers work differently than C. When Python code is attached to a signal callback, the signal is caught on the C side of the interpreter and queued. The interpreter is occasionally interrupted for internal housekeeping (and thread switching, etc). It is during that interrupt the Python handler for the signal will be called.\nAll that to say, just be aware that Python's signal handling is a little less asynchronous than normal C signal handlers.\n" ]
[ 2, 2 ]
[]
[]
[ "python", "signals" ]
stackoverflow_0000113170_python_signals.txt
Q: Bash reg-exp substitution Is there a way to run a regexp-string replace on the current line in the bash? I find myself rather often in the situation, where I have typed a long commandline and then realize, that I would like to change a word somewhere in the line. My current approach is to finish the line, press Ctrl+A (to get to the start of the line), insert a # (to comment out the line), press enter and then use the ^oldword^newword syntax (^oldword^newword executes the previous command after substituting oldword by newword). But there has to be a better (faster) way to achieve this. (The mouse is not possible, since I am in an ssh-sessions most of the time). Probably there is some emacs-like key-command for this, that I don't know about. Edit: I have tried using vi-mode. Something strange happened. Although I am a loving vim-user, I had serious trouble using my beloved bash. All those finger-movements, that have been burned into my subconscious suddenly stopped working. I quickly returned to emacs-mode and considered, giving emacs a try as my favorite editor (although I guess, the same thing might happen again). A: G'day, What about using vi mode instead? Just enter set -o vi Then you can go to the word you want to change and just do a cw or cW depending on what's in the word? Oops, forgot to add you enter a ESC k to o to the previous line in the command history. What do you normally use for an editor? cheers, Rob Edit: What I forgot to say in my original reply was that you need to think of the vi command line in bash using the commands you enter when you are in "ex" mode in vi, i.e. after you've entered the colon. Worst thing is that you need to move around through your command history using the ancient vi commands of h (to the left) and l (to the right). You can use w (or W) to bounce across words though. Once you get used to it though, you have all sorts of commands available, e.g. entering ESC / my_command will look back through you r history, most recent first, to find the first occurrance of the command line containing the text my_command. Once it has found that, you can then use n to find the next occurrance, etc. And N to reverse the direction of the search. I'd go have a read of the man page for bash to see what's available under vi mode. Once you get over the fact that up-arrow and down-arrow are replaced by ESC k, and then j, you'll see that vi mode offers more than emacs mode for command line editing in bash. IMHO natchurly! (-: Emacs? Eighty megs and constantly swapping! cheers, Rob A: in ksh, in vi mode, if you hit 'v' while in command mode it will spawn a full vi session on the contents of your current command line. You can then edit using the full range of vi commands (global search and replace in your case). When :wq from vi, the edited command is executed. I'm sure something similar exists for bash. Since bash tends to extend its predecessors, there's probably something similar. A: Unfortunately, no, there's not really a better way. If you're just tired of making the keystrokes, you can use macros to trim them down. Add the following to your ~/.inputrc: "\C-x6": "\C-a#\C-m^" "\C-x7": "\C-m\C-P\C-a\C-d\C-m" Now, in a new bash instance (or after reloading .inputrc in your current shell by pressing C-x C-r), you can do the following: Type a bogus command (e.g., ls abcxyz). Press Ctrl-x, then 6. The macro inserts a # at the beginning of the line, executes the commented line, and types your first ^. Type your correction (e.g., xyz^def). Press Ctrl-x, then 7. 
The macro completes your substitution, then goes up to the previous (commented) line, removes the comment character, and executes it again. It's not exactly elegant, but I think it's the best you're going to get with readline.
Bash reg-exp substitution
Is there a way to run a regexp-string replace on the current line in the bash? I find myself rather often in the situation, where I have typed a long commandline and then realize, that I would like to change a word somewhere in the line. My current approach is to finish the line, press Ctrl+A (to get to the start of the line), insert a # (to comment out the line), press enter and then use the ^oldword^newword syntax (^oldword^newword executes the previous command after substituting oldword by newword). But there has to be a better (faster) way to achieve this. (The mouse is not possible, since I am in an ssh-sessions most of the time). Probably there is some emacs-like key-command for this, that I don't know about. Edit: I have tried using vi-mode. Something strange happened. Although I am a loving vim-user, I had serious trouble using my beloved bash. All those finger-movements, that have been burned into my subconscious suddenly stopped working. I quickly returned to emacs-mode and considered, giving emacs a try as my favorite editor (although I guess, the same thing might happen again).
[ "G'day,\nWhat about using vi mode instead? Just enter set -o vi\nThen you can go to the word you want to change and just do a cw or cW depending on what's in the word?\nOops, forgot to add you enter a ESC k to o to the previous line in the command history.\nWhat do you normally use for an editor?\ncheers,\nRob\nEdit: What I forgot to say in my original reply was that you need to think of the vi command line in bash using the commands you enter when you are in \"ex\" mode in vi, i.e. after you've entered the colon.\nWorst thing is that you need to move around through your command history using the ancient vi commands of h (to the left) and l (to the right). You can use w (or W) to bounce across words though.\nOnce you get used to it though, you have all sorts of commands available, e.g. entering ESC / my_command will look back through you r history, most recent first, to find the first occurrance of the command line containing the text my_command. Once it has found that, you can then use n to find the next occurrance, etc. And N to reverse the direction of the search.\nI'd go have a read of the man page for bash to see what's available under vi mode. Once you get over the fact that up-arrow and down-arrow are replaced by ESC k, and then j, you'll see that vi mode offers more than emacs mode for command line editing in bash.\nIMHO natchurly! (-:\nEmacs? Eighty megs and constantly swapping!\ncheers,\nRob\n", "in ksh, in vi mode, if you hit 'v' while in command mode it will spawn a full vi session on the contents of your current command line. You can then edit using the full range of vi commands (global search and replace in your case). When :wq from vi, the edited command is executed. I'm sure something similar exists for bash. Since bash tends to extend its predecessors, there's probably something similar.\n", "Unfortunately, no, there's not really a better way. If you're just tired of making the keystrokes, you can use macros to trim them down. Add the following to your ~/.inputrc:\n\"\\C-x6\": \"\\C-a#\\C-m^\"\n\"\\C-x7\": \"\\C-m\\C-P\\C-a\\C-d\\C-m\"\n\nNow, in a new bash instance (or after reloading .inputrc in your current shell by pressing C-x C-r), you can do the following:\n\nType a bogus command (e.g., ls abcxyz).\nPress Ctrl-x, then 6. The macro inserts a # at the beginning of the line, executes the commented line, and types your first ^.\nType your correction (e.g., xyz^def).\nPress Ctrl-x, then 7. The macro completes your substitution, then goes up to the previous (commented) line, removes the comment character, and executes it again.\n\nIt's not exactly elegant, but I think it's the best you're going to get with readline.\n" ]
[ 3, 3, 1 ]
[]
[]
[ "bash", "regex" ]
stackoverflow_0000028224_bash_regex.txt
Q: How to release .Net apps without bundling .Net framework? I have a strange requirement to ship an application without bundling .Net framework (to save memory footprint and bandwidth). Is this possible? Customers may or may not have .Net runtime installed on their systems. Will doing Ngen take care of this problem? I was looking for something like the good old ways of releasing C++ apps (using linker to link only the binaries you need). A: One option without using Ngen may be to release using the .Net Framework 3.5 SP1 "Client Profile". This is a sub-set of the .Net Framework used for building client applications which can be downloaded as a separate, much smaller, package. See details from the BCL Team Blog here and Scott Guthrie here. A: Common solution in such situation which a the standard de-facto is that your customers should have the proper version of .Net framework, as soon as it's the part of Windows Update. So your installer should check availability of .NET of version your use on client's machine and propose to download it from Microsoft. This will prevent your company to transfer it through your channel and ensure your application has correct infrastructure, A: have you checked salamander?remotesoft A: Just FYI, This topic is already discussed. Unfortunately I can't find the link at the moment (SO search should be improved). Ok I found similar question: .NET Framework dependency I recall that there was exactly the same question, but I can't find it :( A: If your software requires .NET then your end users will need the same version of .NET. You cannot "link in" .NET into your executable to create a single .exe, like you can with MFC or Delphi. If your installer doesn't install the .NET runtime then you will need to ensure that the user is aware if this and point them to the .NET download from Microsoft. A: You can use "Client Profile", it is a subset of .NET Framework for desktop applications. Size of client profile is about 20 MB A: You can also include the bootstrapper 'setup.exe' that is created in VS. It'll detect whether you have the neccessary .net version, and if so, launch the installer; if not, it'll prompt you to download the framework.
How to release .Net apps without bundling .Net framework?
I have a strange requirement to ship an application without bundling .Net framework (to save memory footprint and bandwidth). Is this possible? Customers may or may not have .Net runtime installed on their systems. Will doing Ngen take care of this problem? I was looking for something like the good old ways of releasing C++ apps (using linker to link only the binaries you need).
[ "One option without using Ngen may be to release using the .Net Framework 3.5 SP1 \"Client Profile\". This is a sub-set of the .Net Framework used for building client applications which can be downloaded as a separate, much smaller, package.\nSee details from the BCL Team Blog here and Scott Guthrie here.\n", "Common solution in such situation which a the standard de-facto is that your customers should have the proper version of .Net framework, as soon as it's the part of Windows Update. So your installer should check availability of .NET of version your use on client's machine and propose to download it from Microsoft. This will prevent your company to transfer it through your channel and ensure your application has correct infrastructure,\n", "have you checked salamander?remotesoft\n", "Just FYI,\nThis topic is already discussed. Unfortunately I can't find the link at the moment (SO search should be improved).\n\nOk I found similar question:\n.NET Framework dependency\nI recall that there was exactly the same question, but I can't find it :(\n", "If your software requires .NET then your end users will need the same version of .NET. You cannot \"link in\" .NET into your executable to create a single .exe, like you can with MFC or Delphi. If your installer doesn't install the .NET runtime then you will need to ensure that the user is aware if this and point them to the .NET download from Microsoft.\n", "You can use \"Client Profile\", it is a subset of .NET Framework for desktop applications. Size of client profile is about 20 MB\n", "You can also include the bootstrapper 'setup.exe' that is created in VS. It'll detect whether you have the neccessary .net version, and if so, launch the installer; if not, it'll prompt you to download the framework.\n" ]
[ 3, 3, 2, 2, 0, 0, 0 ]
[]
[]
[ ".net", "linker" ]
stackoverflow_0000113233_.net_linker.txt
Q: Information on how to use margins I need some info on how to use margins and how exactly padding works. For example: Should I put a line to occupy the whole width of the page (no matter what resolution is used to display the web page) letting just a small border on each side, how could I achieve this? A: Have a look at this: http://redmelon.net/tstme/box_model/ Basically, an element consists of content, surrounded by its padding, then the border, then the margin. Background images only extend as far as the border. Margins are best described as 'the whitespace around this element'. But have a look at the URL above, and make yourself a test page to have a play with, it should all make sense. A: I think this should do what you want: <table width="100%" cellpadding="5"> <tr> <td> One, two, three ... One, two, three ... One, two, three ... One, two, three ... One, two, three ... One, two, three ... One, two, three ... One, two, three ... One, two, three ... </td> </tr> </table> Setting width to "100%" forces the text to use the full width, and setting cellpadding to "5" sets a unvisible border of 5 pixels. A: Another way to illustrate it is in the box model, padding is the inside "margin" of a box margin is the buffer around the box.
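A minimal CSS/HTML sketch of the full-width line with a small gap on each side, as asked in the question; the class name and the 10px/5px values are made up for illustration.

<style>
  .page-rule {
    margin: 0 10px;                 /* small gap on the left and right, at any resolution */
    padding: 5px;                   /* space between the text and the visible edge */
    border-bottom: 1px solid #333;  /* the line itself */
  }
</style>

<div class="page-rule">
  A block-level element stretches across the full page width by default,
  so only the side margins need to be set.
</div>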
Information on how to use margins
I need some info on how to use margins and how exactly padding works. For example: Should I put a line to occupy the whole width of the page (no matter what resolution is used to display the web page) letting just a small border on each side, how could I achieve this?
[ "Have a look at this: http://redmelon.net/tstme/box_model/\nBasically, an element consists of content, surrounded by its padding, then the border, then the margin. Background images only extend as far as the border. Margins are best described as 'the whitespace around this element'.\nBut have a look at the URL above, and make yourself a test page to have a play with, it should all make sense.\n", "I think this should do what you want:\n<table width=\"100%\" cellpadding=\"5\">\n <tr>\n <td>\nOne, two, three ... One, two, three ... One, two, three ... One, two, three ... One, two, three ... One, two, three ... One, two, three ... One, two, three ... One, two, three ... \n </td>\n </tr>\n</table>\n\nSetting width to \"100%\" forces the text to use the full width, and setting cellpadding to \"5\" sets a unvisible border of 5 pixels.\n", "Another way to illustrate it is in the box model, padding is the inside \"margin\" of a box margin is the buffer around the box.\n" ]
[ 9, 0, 0 ]
[]
[]
[ "css", "html", "layout", "margins", "padding" ]
stackoverflow_0000104395_css_html_layout_margins_padding.txt
Q: What is the best way to encrypt a clob? I am using Oracle 9 and JDBC and would like to encyrpt a clob as it is inserted into the DB. Ideally I'd like to be able to just insert the plaintext and have it encrypted by a stored procedure: String SQL = "INSERT INTO table (ID, VALUE) values (?, encrypt(?))"; PreparedStatement ps = connection.prepareStatement(SQL); ps.setInt(id); ps.setString(plaintext); ps.executeUpdate(); The plaintext is not expected to exceed 4000 characters but encrypting makes text longer. Our current approach to encryption uses dbms_obfuscation_toolkit.DESEncrypt() but we only process varchars. Will the following work? FUNCTION encrypt(p_clob IN CLOB) RETURN CLOB IS encrypted_string CLOB; v_string CLOB; BEGIN dbms_lob.createtemporary(encrypted_string, TRUE); v_string := p_clob; dbms_obfuscation_toolkit.DESEncrypt( input_string => v_string, key_string => key_string, encrypted_string => encrypted_string ); RETURN UTL_RAW.CAST_TO_RAW(encrypted_string); END; I'm confused about the temporary clob; do I need to close it? Or am I totally off-track? Edit: The purpose of the obfuscation is to prevent trivial access to the data. My other purpose is to obfuscate clobs in the same way that we are already obfuscating the varchar columns. The oracle sample code does not deal with clobs which is where my specific problem lies; encrypting varchars (smaller than 2000 chars) is straightforward. A: There is an example in Oracle Documentation: http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_obtoo2.htm You do not need to close it DECLARE input_string VARCHAR2(16) := 'tigertigertigert'; raw_input RAW(128) := UTL_RAW.CAST_TO_RAW(input_string); key_string VARCHAR2(8) := 'scottsco'; raw_key RAW(128) := UTL_RAW.CAST_TO_RAW(key_string); encrypted_raw RAW(2048); encrypted_string VARCHAR2(2048); decrypted_raw RAW(2048); decrypted_string VARCHAR2(2048); error_in_input_buffer_length EXCEPTION; PRAGMA EXCEPTION_INIT(error_in_input_buffer_length, -28232); INPUT_BUFFER_LENGTH_ERR_MSG VARCHAR2(100) := '*** DES INPUT BUFFER NOT A MULTIPLE OF 8 BYTES - IGNORING EXCEPTION ***'; double_encrypt_not_permitted EXCEPTION; PRAGMA EXCEPTION_INIT(double_encrypt_not_permitted, -28233); DOUBLE_ENCRYPTION_ERR_MSG VARCHAR2(100) := '*** CANNOT DOUBLE ENCRYPT DATA - IGNORING EXCEPTION ***'; -- 1. Begin testing raw data encryption and decryption BEGIN dbms_output.put_line('> ========= BEGIN TEST RAW DATA ========='); dbms_output.put_line('> Raw input : ' || UTL_RAW.CAST_TO_VARCHAR2(raw_input)); BEGIN dbms_obfuscation_toolkit.DESEncrypt(input => raw_input, key => raw_key, encrypted_data => encrypted_raw ); dbms_output.put_line('> encrypted hex value : ' || rawtohex(encrypted_raw)); dbms_obfuscation_toolkit.DESDecrypt(input => encrypted_raw, key => raw_key, decrypted_data => decrypted_raw); dbms_output.put_line('> Decrypted raw output : ' || UTL_RAW.CAST_TO_VARCHAR2(decrypted_raw)); dbms_output.put_line('> '); if UTL_RAW.CAST_TO_VARCHAR2(raw_input) = UTL_RAW.CAST_TO_VARCHAR2(decrypted_raw) THEN dbms_output.put_line('> Raw DES Encyption and Decryption successful'); END if; EXCEPTION WHEN error_in_input_buffer_length THEN dbms_output.put_line('> ' || INPUT_BUFFER_LENGTH_ERR_MSG); END; dbms_output.put_line('> '); A: Slightly off-topic: What's the point of the encryption/obfuscation in the first place? An attacker having access to your database will be able to obtain the plaintext -- finding the above stored procedure will enable the attacker to perform the decryption. 
A: I note you are on Oracle 9, but just for the record in Oracle 10g+ the dbms_obfuscation_toolkit was deprecated in favour of dbms_crypto. dbms_crypto does include CLOB support: DBMS_CRYPTO.ENCRYPT( dst IN OUT NOCOPY BLOB, src IN CLOB CHARACTER SET ANY_CS, typ IN PLS_INTEGER, key IN RAW, iv IN RAW DEFAULT NULL); DBMS_CRYPTO.DECRYPT( dst IN OUT NOCOPY CLOB CHARACTER SET ANY_CS, src IN BLOB, typ IN PLS_INTEGER, key IN RAW, iv IN RAW DEFAULT NULL);
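A minimal PL/SQL sketch of calling the CLOB overload quoted above; this assumes Oracle 10g or later with EXECUTE granted on DBMS_CRYPTO (so it does not apply to the asker's Oracle 9 install), and it reuses the 8-byte DES key from the earlier sample purely for illustration.

DECLARE
  l_plain CLOB := 'some long plaintext ...';
  l_crypt BLOB;
  l_key   RAW(128) := UTL_RAW.CAST_TO_RAW('scottsco');  -- 8-byte DES key, illustration only
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_crypt, TRUE);
  -- CLOB in, encrypted BLOB out
  DBMS_CRYPTO.ENCRYPT(dst => l_crypt,
                      src => l_plain,
                      typ => DBMS_CRYPTO.DES_CBC_PKCS5,
                      key => l_key);
  DBMS_OUTPUT.PUT_LINE('Encrypted length: ' || DBMS_LOB.GETLENGTH(l_crypt));
  DBMS_LOB.FREETEMPORARY(l_crypt);
END;
/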
What is the best way to encrypt a clob?
I am using Oracle 9 and JDBC and would like to encyrpt a clob as it is inserted into the DB. Ideally I'd like to be able to just insert the plaintext and have it encrypted by a stored procedure: String SQL = "INSERT INTO table (ID, VALUE) values (?, encrypt(?))"; PreparedStatement ps = connection.prepareStatement(SQL); ps.setInt(id); ps.setString(plaintext); ps.executeUpdate(); The plaintext is not expected to exceed 4000 characters but encrypting makes text longer. Our current approach to encryption uses dbms_obfuscation_toolkit.DESEncrypt() but we only process varchars. Will the following work? FUNCTION encrypt(p_clob IN CLOB) RETURN CLOB IS encrypted_string CLOB; v_string CLOB; BEGIN dbms_lob.createtemporary(encrypted_string, TRUE); v_string := p_clob; dbms_obfuscation_toolkit.DESEncrypt( input_string => v_string, key_string => key_string, encrypted_string => encrypted_string ); RETURN UTL_RAW.CAST_TO_RAW(encrypted_string); END; I'm confused about the temporary clob; do I need to close it? Or am I totally off-track? Edit: The purpose of the obfuscation is to prevent trivial access to the data. My other purpose is to obfuscate clobs in the same way that we are already obfuscating the varchar columns. The oracle sample code does not deal with clobs which is where my specific problem lies; encrypting varchars (smaller than 2000 chars) is straightforward.
[ "There is an example in Oracle Documentation:\nhttp://download.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_obtoo2.htm\nYou do not need to close it\nDECLARE\n input_string VARCHAR2(16) := 'tigertigertigert';\n raw_input RAW(128) := UTL_RAW.CAST_TO_RAW(input_string);\n key_string VARCHAR2(8) := 'scottsco';\n raw_key RAW(128) := UTL_RAW.CAST_TO_RAW(key_string);\n encrypted_raw RAW(2048);\n encrypted_string VARCHAR2(2048);\n decrypted_raw RAW(2048);\n decrypted_string VARCHAR2(2048); \n error_in_input_buffer_length EXCEPTION;\n PRAGMA EXCEPTION_INIT(error_in_input_buffer_length, -28232);\n INPUT_BUFFER_LENGTH_ERR_MSG VARCHAR2(100) :=\n '*** DES INPUT BUFFER NOT A MULTIPLE OF 8 BYTES - IGNORING \nEXCEPTION ***';\n double_encrypt_not_permitted EXCEPTION;\n PRAGMA EXCEPTION_INIT(double_encrypt_not_permitted, -28233);\n DOUBLE_ENCRYPTION_ERR_MSG VARCHAR2(100) :=\n '*** CANNOT DOUBLE ENCRYPT DATA - IGNORING EXCEPTION ***';\n\n -- 1. Begin testing raw data encryption and decryption\n BEGIN\n dbms_output.put_line('> ========= BEGIN TEST RAW DATA =========');\n dbms_output.put_line('> Raw input : ' || \n UTL_RAW.CAST_TO_VARCHAR2(raw_input));\n BEGIN \n dbms_obfuscation_toolkit.DESEncrypt(input => raw_input, \n key => raw_key, encrypted_data => encrypted_raw );\n dbms_output.put_line('> encrypted hex value : ' || \n rawtohex(encrypted_raw));\n dbms_obfuscation_toolkit.DESDecrypt(input => encrypted_raw, \n key => raw_key, decrypted_data => decrypted_raw);\n dbms_output.put_line('> Decrypted raw output : ' || \n UTL_RAW.CAST_TO_VARCHAR2(decrypted_raw));\n dbms_output.put_line('> '); \n if UTL_RAW.CAST_TO_VARCHAR2(raw_input) = \n UTL_RAW.CAST_TO_VARCHAR2(decrypted_raw) THEN\n dbms_output.put_line('> Raw DES Encyption and Decryption successful');\n END if;\n EXCEPTION\n WHEN error_in_input_buffer_length THEN\n dbms_output.put_line('> ' || INPUT_BUFFER_LENGTH_ERR_MSG);\n END;\n dbms_output.put_line('> ');\n\n", "Slightly off-topic: What's the point of the encryption/obfuscation in the first place? An attacker having access to your database will be able to obtain the plaintext -- finding the above stored procedure will enable the attacker to perform the decryption.\n", "I note you are on Oracle 9, but just for the record in Oracle 10g+ the dbms_obfuscation_toolkit was deprecated in favour of dbms_crypto.\ndbms_crypto does include CLOB support:\nDBMS_CRYPTO.ENCRYPT(\n dst IN OUT NOCOPY BLOB,\n src IN CLOB CHARACTER SET ANY_CS,\n typ IN PLS_INTEGER,\n key IN RAW,\n iv IN RAW DEFAULT NULL);\n\nDBMS_CRYPT.DECRYPT(\n dst IN OUT NOCOPY CLOB CHARACTER SET ANY_CS,\n src IN BLOB,\n typ IN PLS_INTEGER,\n key IN RAW,\n iv IN RAW DEFAULT NULL);\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "encryption", "java", "jdbc", "oracle", "plsql" ]
stackoverflow_0000096945_encryption_java_jdbc_oracle_plsql.txt
Q: What's the best way to handle one-to-one relationships in SQL?

Let's say I've got Alpha things that may or may not be, or be related to, Bravo or Charlie things. These are one-to-one relationships: No Alpha will relate to more than one Bravo. And no Bravo will relate to more than one Alpha.
I've got a few goals:

a system that's easy to learn and maintain.
data integrity enforced within my database.
a schema that matches the real-world, logical organization of my data.
classes/objects within my programming that map well to database tables (à la Linq to SQL)
speedy read and write operations
effective use of space (few null fields)

I've got three ideas…

PK = primary key
FK = foreign key
NU = nullable

One table with many nullable fields (flat file)…

 Alphas
 --------
 PK AlphaId
    AlphaOne
    AlphaTwo
    AlphaThree
 NU BravoOne
 NU BravoTwo
 NU BravoThree
 NU CharlieOne
 NU CharlieTwo
 NU CharlieThree

Many tables with zero nullable fields…

 Alphas
 --------
 PK AlphaId
    AlphaOne
    AlphaTwo
    AlphaThree

 Bravos
 --------
 FK PK AlphaId
    BravoOne
    BravoTwo
    BravoThree

 Charlies
 --------
 FK PK AlphaId
    CharlieOne
    CharlieTwo
    CharlieThree

Best (or worst) of both: Lots of nullable foreign keys to many tables…

 Alphas
 --------
 PK AlphaId
    AlphaOne
    AlphaTwo
    AlphaThree
 NU FK BravoId
 NU FK CharlieId

 Bravos
 --------
 PK BravoId
    BravoOne
    BravoTwo
    BravoThree

 Charlies
 --------
 PK CharlieId
    CharlieOne
    CharlieTwo
    CharlieThree

What if an Alpha must be either Bravo or Charlie, but not both?
What if instead of just Bravos and Charlies, Alphas could also be any of Deltas, Echos, Foxtrots, or Golfs, etc…?
EDIT: This is a portion of the question: Which is the best database schema for my navigation?

A: If you want each Alpha to be related to by only one Bravo, I would vote for the option of using a combined FK/PK:

 Bravos
 --------
 FK PK AlphaId
    BravoOne
    BravoTwo
    BravoThree

This way one and only one Bravo may refer to your Alphas.
If the Bravos and Charlies have to be mutually exclusive, the simplest method would probably be to create a discriminator field:

 Alpha
 --------
 PK AlphaId
 PK AlphaType NOT NULL IN ("Bravo", "Charlie")
    AlphaOne
    AlphaTwo
    AlphaThree

 Bravos
 --------
 FK PK AlphaId
 FK PK AlphaType == "Bravo"
    BravoOne
    BravoTwo
    BravoThree

 Charlies
 --------
 FK PK AlphaId
 FK PK AlphaType == "Charlie"
    CharlieOne
    CharlieTwo
    CharlieThree

This way the AlphaType field forces the records to always belong to exactly one subtype.

A: I'm assuming you will be using SQL Server 2000 / 2005. I have a standard pattern for 1-to-1 relationships which I use, which is not too dissimilar to your 2nd idea, but here are the differences:

Every entity must have its own primary key first, so your Bravo, Charlie, etc. tables should define their own surrogate key, in addition to the foreign key column for the Alpha table. You are making your domain model quite inflexible by specifying that the primary key of one table must be exactly the same as the primary key of another table. The entities therefore become very tightly coupled, and one entity cannot exist without another, which is not a business rule that needs to be enforced within database design.
Add a foreign key constraint from the AlphaID columns in the Bravo and Charlie tables to the primary key column on the Alpha table. This gives you 1-to-many, and also allows you to specify whether the relationship is mandatory simply by setting the nullability of the FK column (something that isn't possible in your current design).
Add a unique key constraint to tables Bravo, Charlie, etc. on the AlphaID column. This creates a 1-to-1 relationship, with the added benefit that the unique key also acts as an index, which can help to speed up queries that retrieve rows based on the foreign key value.

The major benefit of this approach is that change is easier:

Want 1-to-many back? Drop the relevant unique key, or just change it to a normal index.
Want Bravo to exist independently of Alpha? You've already got the surrogate key; all you do is set the AlphaID FK column to allow NULLs.

A: Personally, I've had lots of success with your second model, using a PK/FK on a single column. I have never had a situation where all Alphas were required to have a record in a Bravo or Charlie table. I've always dealt with 1 <-> 0..1, never 1 <-> 1. As for your last question, that's just that many more tables.

A: One more approach is having 3 tables for storing the 3 entities and having a separate table for storing the relations.

A: You could have a join table that specifies an Alpha and a related ID. You can then add another column specifying if it is an ID for Bravo, Charlie or whatever. Keeps the column creep down on Alpha but does add some complexity to joining queries.

A: I have an example working pretty well so far that fits your model: I have Charlie and Bravo tables with the foreign key alpha_id from Alpha, like your first example, except alpha_id is not the primary key; bravo_id and charlie_id are. I use alpha_id on every table where I need to address those entities, so, to avoid a SQL query that may cause some delay searching both Bravo and Charlie to find which one the Alpha is, I created an AlphaType table, and on the Alpha table I have its id (alpha_type_id) as a foreign key. That way I can know programmatically which AlphaType I am dealing with without joining tables that may have zillions of records. In T-SQL:

-- For example's sake let's treat the Id as a CHAR.
-- And pardon me on any mistake, I don't have the exact code here,
-- but you can get the idea.

SELECT
  (CASE alpha_type_id
     WHEN 'B' THEN '[Bravo].[Name]'
     WHEN 'C' THEN '[Charlie].[Name]'
     ELSE Null
   END)
FROM ...

A: You raise a lot of questions that make it hard to select any of your proposed solutions without a lot more clarification on the exact problem you are trying to solve. Consider not just my clarification questions, but the criteria that you will use to evaluate my questions, as an indication of the amount of detail required to solve your problem:

a system that's easy to learn and maintain.
What "system" will it be easy to learn and maintain? The source code of your app, or the app's data via its end-user interface?

data integrity enforced within my database.
What do you mean by "enforced within my database"? Does this mean you cannot by any means control data integrity any other way, i.e. the project requires only DB-based data integrity rules?

a schema that matches the real-world, logical organization of my data.
Can you provide us the real-world, logical organization to which you are referring? It's impossible to infer it from your three examples of the data you are trying to store -- i.e. suppose all three of your structures are completely wrong. How would we know that unless we know the real-world spec?

classes/objects within my programming that map well to database tables (à la Linq to SQL)
This requirement sounds like your hand is being forced to create this with Linq to SQL; is that the case?

speedy read and write operations
What is "speedy"? 0.03 seconds? 3 seconds? 30 minutes? It's unclear because you're not specifying the data size and type of operations to which you are referring.

effective use of space (few null fields)
Effective use of space has nothing to do with the number of null fields. If you mean a normalized database structure, that will depend again on the real-world specs and other design elements of the application that have not been provided in the question.

A: I'd go with option 1 unless I had a significant reason not to. It might not cost you as much space as you think, especially if you are using varchars in Bravo. Don't forget that splitting it will cost you for foreign keys, a secondary identity and the needed indexes. A place where you might run into trouble is if Bravo is unlikely to be needed (<10%) AND you need to quickly query by one of its fields, so you index it.

A: I would create a supertype / subtype relationship.

 THINGS
 ------
 PK ThingId

 ALPHAS
 ------
 FK ThingId (not null, identifying, exported from THINGS)
    AlphaCol1
    AlphaCol2
    AlphaCol3

 BRAVOS
 ------
 FK ThingId (not null, identifying, exported from THINGS)
    BravoCol1
    BravoCol2
    BravoCol3

 CHARLIES
 --------
 FK ThingId (not null, identifying, exported from THINGS)
    CharlieCol1
    CharlieCol2
    CharlieCol3

So, for example, an alpha that has a charlie but not a bravo:

insert into things values (1);
insert into alphas values (1,'alpha col 1',5,'blue');
insert into charlies values (1,'charlie col 1',17,'Y');

Note, you can't create more than one charlie for the alpha, because if you tried to create two charlies with a ThingId of 1, the second insert would get a unique index/constraint violation.
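To make the unique-foreign-key pattern from the second answer concrete, here is a minimal T-SQL sketch. It reuses the Alpha/Bravo names from the question, but the column types and the constraint name are assumptions added purely for illustration:

CREATE TABLE Alphas (
    AlphaId    INT IDENTITY PRIMARY KEY,
    AlphaOne   VARCHAR(50),
    AlphaTwo   VARCHAR(50),
    AlphaThree VARCHAR(50)
);

CREATE TABLE Bravos (
    BravoId    INT IDENTITY PRIMARY KEY,          -- Bravo keeps its own surrogate key
    AlphaId    INT NOT NULL                       -- change to NULL to let a Bravo exist without an Alpha
               REFERENCES Alphas (AlphaId),       -- an ordinary FK on its own gives 1-to-many
    BravoOne   VARCHAR(50),
    BravoTwo   VARCHAR(50),
    BravoThree VARCHAR(50),
    CONSTRAINT UQ_Bravos_AlphaId UNIQUE (AlphaId) -- the unique key caps it at one Bravo per Alpha (1-to-0..1)
);

Dropping (or downgrading) UQ_Bravos_AlphaId turns the relationship back into 1-to-many, which is exactly the flexibility that answer highlights.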
What's the best way to handle one-to-one relationships in SQL?
Let's say I've got Alpha things that may or may not be, or be related to, Bravo or Charlie things. These are one-to-one relationships: No Alpha will relate to more than one Bravo. And no Bravo will relate to more than one Alpha.
I've got a few goals:

a system that's easy to learn and maintain.
data integrity enforced within my database.
a schema that matches the real-world, logical organization of my data.
classes/objects within my programming that map well to database tables (à la Linq to SQL)
speedy read and write operations
effective use of space (few null fields)

I've got three ideas…

PK = primary key
FK = foreign key
NU = nullable

One table with many nullable fields (flat file)…

 Alphas
 --------
 PK AlphaId
    AlphaOne
    AlphaTwo
    AlphaThree
 NU BravoOne
 NU BravoTwo
 NU BravoThree
 NU CharlieOne
 NU CharlieTwo
 NU CharlieThree

Many tables with zero nullable fields…

 Alphas
 --------
 PK AlphaId
    AlphaOne
    AlphaTwo
    AlphaThree

 Bravos
 --------
 FK PK AlphaId
    BravoOne
    BravoTwo
    BravoThree

 Charlies
 --------
 FK PK AlphaId
    CharlieOne
    CharlieTwo
    CharlieThree

Best (or worst) of both: Lots of nullable foreign keys to many tables…

 Alphas
 --------
 PK AlphaId
    AlphaOne
    AlphaTwo
    AlphaThree
 NU FK BravoId
 NU FK CharlieId

 Bravos
 --------
 PK BravoId
    BravoOne
    BravoTwo
    BravoThree

 Charlies
 --------
 PK CharlieId
    CharlieOne
    CharlieTwo
    CharlieThree

What if an Alpha must be either Bravo or Charlie, but not both?
What if instead of just Bravos and Charlies, Alphas could also be any of Deltas, Echos, Foxtrots, or Golfs, etc…?
EDIT: This is a portion of the question: Which is the best database schema for my navigation?
[ "If you want each Alpha to be related to by only one Bravo I would vote for the possibility with using a combined FK/PK:\n Bravos\n --------\nFK PK AlphaId\n BravoOne\n BravoTwo\n BravoThree\n\nThis way one and only one Bravo may refer to your Alphas.\nIf the Bravos and Charlies have to be mutually exclusive, the simplest method would probably to create a discriminator field:\n Alpha\n --------\n PK AlphaId\n PK AlphaType NOT NULL IN (\"Bravo\", \"Charlie\")\n AlphaOne\n AlphaTwo\n AlphaThree\n\n Bravos\n --------\nFK PK AlphaId\nFK PK AlphaType == \"Bravo\"\n BravoOne\n BravoTwo\n BravoThree\n\n Charlies\n --------\nFK PK AlphaId\nFK PK AlphaType == \"Charlie\"\n CharlieOne\n CharlieTwo\n CharlieThree\n\nThis way the AlphaType field forces the records to always belong to exactly one subtype.\n", "I'm assuming you will be using SQL Server 2000 / 2005. I have a standard pattern for 1-to-1 relationships which I use, which is not too dissimilar to your 2nd idea, but here are the differences:\n\nEvery entity must have its own primary key first, so your Bravo, Charlie, etc tables should define their own surrogate key, in addition to the foreign key column for the Alpha table. You are making your domain model quite inflexible by specifying that the primary key of one table must be exactly the same as the primary key of another table. The entities therefore become very tightly coupled, and one entity cannot exist without another, which is not a business rule that needs to be enforced within database design.\nAdd a foreign key constraint between the AlphaID columns in the Bravo and Charlie tables to the primary key column on the Alpha table. This gives you 1-to-many, and also allows you to specify whether the relationship is mandatory simply by setting the nullability of the FK column (something that isn't possible in your current design).\nAdd a unique key constraint to tables Bravo, Charlie, etc on the AlphaID column. This creates a 1-to-1 relationship, with the added benefit that the unique key also acts as an index which can help to speed up queries that retrieve rows based on the foreign key value.\n\nThe major benefit of this approach is that change is easier:\n\nWant 1-to-many back? Drop the relevant unique key, or just change it to a normal index\nWant Bravo to exist independently of Alpha? You've already got the surrogate key, all you do is set the AlphaID FK column to allow NULLs\n\n", "Personally, I've had lots of success with your second model, using a PK/FK on a single column.\nI have never had a situation where all Alphas were required to have a record in a Bravo or Charlie table. I've always dealt with 1 <-> 0..1, never 1 <-> 1.\nAs for your last question, that's just that many more tables.\n", "One more approach is having 3 tables for storing the 3 entities and having a separate table for storing the relations.\n", "You could have a join table that specifies an Alpha and a related ID. You can then add another column specifing if it is an ID for Bravo, Charlie or whatever. Keeps the column creep down on Alpha but does add some complexity to joining queries.\n", "I have an example working pretty well so far that fits your model:\nI Have Charlie and Bravo Tables Having the Foreign Key alpha_id from Alpha. 
Like your first example, except alpha is not the Primary Key, bravo_id and charlie_id are.\nI use alpha_id on every table I need to address to those entities, so, to avoid a SQL that may cause some delay researching both Bravo and Charlie to find which one Alpha is, I created a AlphaType table and on Alpha table I have its id (alpha_type_id) as foreign key. That way I can know in a programmatic way which AlphaType I am dealing with without Joining tables that may have zillions of records. in tSQL:\n// For example sake lets think Id as a CHAR.\n// and pardon me on any mistake, I dont have the exact code here,\n// but you can get the idea\n\nSELECT \n (CASE alpha_type_id\n WHEN 'B' THEN '[Bravo].[Name]'\n WHEN 'C' THEN '[Charlie].[Name]'\n ELSE Null\n END)\nFROM ...\n\n", "You raise a lot of questions that make it hard to select any of your proposed solutions without a lot more clarification on the exact problem you are trying to solve. Consider not just my clarification questions, but the criteria that you will use to evaluate my questions, as an indication of the amount of detail required to solve your problem:\n\na system that's easy to learn and maintain. \n\nWhat \"System\" will it be easy to learn and maintain? The source code of your app, or the app's data via it's end-user interface? \n\ndata integrity enforced within my database.\n\nWhat do you mean by \"enforced within my database\"? Does this mean you cannot by any means control data integrity any other way, i.e. the project requires only DB-based data integrity rules?\n\na schema that matches the real-world, logical organization of my data.\n\nCan you provide us the real world, logical organization to which you are referring? It's impossible to infer it from your three examples of the data you are trying to store -- i.e. suppose all three of your structures are completely wrong. How would we know that unless we know the real-world spec?\n\nclasses/objects within my programming that map well to database tables (à la Linq to SQL)\n\nThis requirement sounds like your hand is being forced to create this with linq to SQL, is that the case?\n\nspeedy read and write operations\n\nWhat is \"speedy\"? .03 seconds? 3 seconds? 30 minutes? It's unclear because you're not specifying the data size and type of operations to which you are referring.\n\neffective use of space (few null fields)\n\nEffective use of space has nothing to do with the number of null fields. If you mean a normalized database structure, that will depend again on the real-world spec's and other design elements of the application that have not been provided in the question. \n", "I'd go with option 1 unless I had a significant reason not to. It might not cost you as much space as you think, esp. if you are using varchars in Bravo. 
Don't forget that splitting it will cost you for foreign keys, secondary identity and needed indexes.\nA place where you might run into trouble is if Bravo is unlikely to be needed (<%10) AND you need to quickly query by one of its fields so you index it.\n", "I would create a supertype / subtype relationship.\n THINGS\n ------\nPK ThingId \n\n ALPHAS\n ------\nFK ThingId (not null, identifying, exported from THINGS)\n AlphaCol1\n AlphaCol2\n AlphaCol3 \n\n BRAVOS\n ------\nFK ThingId (not null, identifying, exported from THINGS)\n BravoCol1\n BravoCol2\n BravoCol3 \n\n CHARLIES\n --------\nFK ThingId (not null, identifying, exported from THINGS)\n CharlieCol1\n CharlieCol2\n CharlieCol3\n\nSo, for example, an alpha that has a charlie but not a bravo:-\ninsert into things values (1);\ninsert into alphas values (1,'alpha col 1',5,'blue');\ninsert into charlies values (1,'charlie col 1',17,'Y');\n\nNote, you can't create more than one charlie for the alpha, as if you tried to create a two charlies with a ThingId of 1 the second insert would get a unique index/constraint violation.\n" ]
[ 8, 4, 3, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "database", "database_design", "linq", "linq_to_sql", "schema" ]
stackoverflow_0000057152_database_database_design_linq_linq_to_sql_schema.txt
Q: Using MS Access & ODBC to connect to a remote PostgreSQL

I currently have an MS Access application that connects to a PostgreSQL database via ODBC. This successfully runs on a LAN with 20 users (each running their own version of Access). Now I am thinking through some disaster recovery scenarios, and it seems that a quick and easy method of protecting the data is to use log shipping to create a warm standby. This led me to think about putting this warm standby at a remote location, but then I have the question: Is Access connecting to a remote database via ODBC usable? I.e. the remote database is maybe in the same country with OK ping times, and I have a 1 Mbit SDSL line.

A: onnodb, The PostgreSQL ODBC driver is actively developed, and an Access front-end combined with a PostgreSQL server makes, in my opinion, a great option on a LAN for rapid development. I have been involved in a reasonably big system (100+ PostgreSQL tables, 200+ Access forms, 1000+ Access queries & reports) and it has run excellently for a few years, with ~20 users. Any queries running slowly because Access is doing something stupid can generally just be solved by using views, and any really data-intensive code can easily be moved into PostgreSQL functions and then called from Access. The only main ODBC-related issue we have is that there is no way to kill a slow-running query from Access, so we do often get users just killing Access, and then massive queries are left executing on the server.

A: Yes. I don't have any experience using Access to hit PostgreSQL from a remote location, but I have successfully used Access as a front-end to SQL Server & DB2 from a remote location. Ironically, what you don't want to do is use Access to front-end an Access database (mdb) from a remote location over a high-latency link. Since hitting the MDB uses file-based operations, it's pretty easy to end up with a corrupt database if you have anything more than a trivial db.

A: It depends a lot on the database you're using as a back-end. I've had rather terrible experiences with MySQL as a back-end. Make sure the ODBC link you're using is actively developed, stable and complete --- this was definitely not the case for MySQL. You may also want to check for any compatibility issues between Access and PostgreSQL. And, of course, it won't hurt to test extensively. Oh, and I think it'd be absolutely great if you could post back here later with your experiences!
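As a rough sketch of the "views and functions" idea from the first answer, the joining and data-intensive work can live on the PostgreSQL server so that only finished result rows cross the slow ODBC link; Access then links the view like a table or calls the function from a pass-through query. All object names below (order_summary, archive_old_orders, customers, orders, orders_archive) are invented for the example and are not from the original system:

-- Heavy joins and aggregation done server-side; Access links to the view as if it were a table.
CREATE VIEW order_summary AS
SELECT c.customer_id,
       c.customer_name,
       COUNT(o.order_id)  AS order_count,
       SUM(o.order_total) AS total_value
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.customer_name;

-- Data-intensive work wrapped in a server-side function, e.g. called from Access with:
--   SELECT archive_old_orders(DATE '2008-01-01');
CREATE FUNCTION archive_old_orders(cutoff date) RETURNS integer AS $$
DECLARE
    moved integer;
BEGIN
    INSERT INTO orders_archive
        SELECT * FROM orders WHERE order_date < cutoff;
    GET DIAGNOSTICS moved = ROW_COUNT;   -- number of rows copied to the archive
    DELETE FROM orders WHERE order_date < cutoff;
    RETURN moved;                        -- a single value travels back over the wire
END;
$$ LANGUAGE plpgsql;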
Using MS Access & ODBC to connect to a remote PostgreSQL
I currently have an MS Access application that connects to a PostgreSQL database via ODBC. This successfully runs on a LAN with 20 users (each running their own version of Access). Now I am thinking through some disaster recovery scenarios, and it seems that a quick and easy method of protecting the data is to use log shipping to create a warm standby. This led me to think about putting this warm standby at a remote location, but then I have the question: Is Access connecting to a remote database via ODBC usable? I.e. the remote database is maybe in the same country with OK ping times, and I have a 1 Mbit SDSL line.
[ "onnodb,\nThe PostgreSQL ODBC driver is actively developed and an Access front-end combined with PostgreSQL server, in my opinion makes a great option on a LAN for rapid development. I have been involved in a reasonably big system (100+ PostgreSQL tables, 200+ Access forms, 1000+ Access queries & reports) and it has run excellently for a few years, with ~20 users. Any queries running slow because Access is doing something stupid can generally just be solved by using views, and any really data-intensive code can easily be moved into PostgreSQL functions and then called from Access.\nThe only main ODBC-related issue we have is that there is no way to kill a slow running query from Access, so we do often get users just killing Access and then massive queries are just left executing on the server.\n", "Yes. \nI don't have any experience using Access to hit PostgreSQL from a remote location but I have successfully used Access as a front-end to SQL Server & DB2 from a remote location with success. \nIronically, what you don't want to do is use Access to front-end an Access database (mdb) from a remote location over a high-latency link. Since hitting the MDB uses file-based operations it's pretty easy to end up with a corrupt database if you have anything more than a trivial db.\n", "It depends a lot on the database you're using as a back-end. I've had rather terrible experiences with MySQL as a back-end. Make sure the ODBC link you're using is actively developed, stable and complete --- this was definitely not the case for MySQL. You may also want to check for any compatibility issues between Access and Postgre. And, of course, it won't hurt to test extensively.\nOh, and I think it'd be absolutely great if you could post back here later with your experiences!\n" ]
[ 11, 1, 1 ]
[ "PostgreSQL works great as a backend for MS Access, there are a couple of support functions you should use to make things easier. See here for more info on this:\nhttp://www.amsoftwaredesign.com/smf/index.php?board=8.0\n" ]
[ -1 ]
[ "ms_access", "odbc", "postgresql" ]
stackoverflow_0000037991_ms_access_odbc_postgresql.txt
Q: How do you get the Eclipse Package Explorer to show files whose names begin with a . (period)?

When a folder in the Eclipse Package Explorer (one which is linked to a directory somewhere in the filesystem) contains files whose names begin with a . (period), those files do not appear. Can Eclipse be configured to show these files, and if so, how?

A: Click the down-arrow in the Package Explorer (next to the "Link with Editor" button). Then you just change the filters: unmark the box that says '.*' resources.
How do you get the Eclipse Package Explorer to show files whose names begin with a . (period)?
When a folder in the Eclipse Package Explorer (one which is linked to a directory somewhere in the filesystem) contains files whose names begin with a . (period), those files do not appear. Can Eclipse be configured to show these files, and if so, how?
[ "Click the down-arrow in the package explorer (next to the editor linker). Then you just change the filters. Unmark the box that says '.*' resources.\n" ]
[ 6 ]
[]
[]
[ "eclipse", "package_explorer" ]
stackoverflow_0000113365_eclipse_package_explorer.txt