source_id | question | response | metadata
---|---|---|---|
75,759 | What's the best way to implement the enum idiom in Ruby? I'm looking for something which I can use (almost) like the Java/C# enums. | Two ways: symbols ( :foo notation) or constants ( FOO notation). Symbols are appropriate when you want to enhance readability without littering code with literal strings.

postal_code[:minnesota] = "MN"
postal_code[:new_york] = "NY"

Constants are appropriate when you have an underlying value that is important. Just declare a module to hold your constants and then declare the constants within that.

module Foo
  BAR = 1
  BAZ = 2
  BIZ = 4
end

flags = Foo::BAR | Foo::BAZ # flags = 3

Added 2021-01-17: If you are passing the enum value around (for example, storing it in a database) and you need to be able to translate the value back into the symbol, there's a mashup of both approaches:

COMMODITY_TYPE = {
  currency: 1,
  investment: 2,
}

def commodity_type_string(value)
  COMMODITY_TYPE.key(value)
end

COMMODITY_TYPE[:currency]

This approach was inspired by andrew-grimm's answer https://stackoverflow.com/a/5332950/13468 I'd also recommend reading through the rest of the answers here, since there are a lot of ways to solve this and it really boils down to what it is about the other language's enum that you care about. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/75759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4110/"
]
} |
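The value-to-name mashup in the answer above is a generic pattern; a rough Python sketch of the same idea (purely illustrative - the original is Ruby) looks like:

```python
# Hypothetical Python analogue of the COMMODITY_TYPE mashup above:
# a constant mapping plus a reverse lookup from stored value back to name.
COMMODITY_TYPE = {
    "currency": 1,
    "investment": 2,
}

def commodity_type_string(value):
    # Reverse lookup: find the key whose value matches (like Ruby's Hash#key).
    for name, code in COMMODITY_TYPE.items():
        if code == value:
            return name
    return None

print(COMMODITY_TYPE["currency"])   # the value you would store in the database
print(commodity_type_string(2))     # translate the stored value back to its name
```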
75,785 | Is there any complete guidance on doing AppBar docking (such as locking to the screen edge) in WPF? I understand there are InterOp calls that need to be made, but I'm looking for either a proof of concept based on a simple WPF form, or a componentized version that can be consumed. Related resources: http://www.codeproject.com/KB/dotnet/AppBar.aspx http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/05c73c9c-e85d-4ecd-b9b6-4c714a65e72b/ | Please note: This question gathered a good amount of feedback, and some people below have made great points or fixes. Therefore, while I'll keep the code here (and possibly update it), I've also created a WpfAppBar project on GitHub. Feel free to send pull requests. That same project also builds to a WpfAppBar NuGet package. I took the code from the first link provided in the question ( http://www.codeproject.com/KB/dotnet/AppBar.aspx ) and modified it to do two things: work with WPF, and be "standalone" - if you put this single file in your project, you can call AppBarFunctions.SetAppBar(...) without any further modification to the window. This approach doesn't create a base class. To use it, just call this code from anywhere within a normal WPF window (say, a button click or the initialize). Note that you cannot call this until AFTER the window is initialized; if the HWND hasn't been created yet (as in the constructor), an error will occur. Make the window an appbar: AppBarFunctions.SetAppBar( this, ABEdge.Right ); Restore the window to a normal window: AppBarFunctions.SetAppBar( this, ABEdge.None ); Here's the full code to the file - note you'll want to change the namespace on line 7 to something appropriate. 
using System;using System.Collections.Generic;using System.Runtime.InteropServices;using System.Windows;using System.Windows.Interop;using System.Windows.Threading;namespace AppBarApplication{ public enum ABEdge : int { Left = 0, Top, Right, Bottom, None } internal static class AppBarFunctions { [StructLayout(LayoutKind.Sequential)] private struct RECT { public int left; public int top; public int right; public int bottom; } [StructLayout(LayoutKind.Sequential)] private struct APPBARDATA { public int cbSize; public IntPtr hWnd; public int uCallbackMessage; public int uEdge; public RECT rc; public IntPtr lParam; } private enum ABMsg : int { ABM_NEW = 0, ABM_REMOVE, ABM_QUERYPOS, ABM_SETPOS, ABM_GETSTATE, ABM_GETTASKBARPOS, ABM_ACTIVATE, ABM_GETAUTOHIDEBAR, ABM_SETAUTOHIDEBAR, ABM_WINDOWPOSCHANGED, ABM_SETSTATE } private enum ABNotify : int { ABN_STATECHANGE = 0, ABN_POSCHANGED, ABN_FULLSCREENAPP, ABN_WINDOWARRANGE } [DllImport("SHELL32", CallingConvention = CallingConvention.StdCall)] private static extern uint SHAppBarMessage(int dwMessage, ref APPBARDATA pData); [DllImport("User32.dll", CharSet = CharSet.Auto)] private static extern int RegisterWindowMessage(string msg); private class RegisterInfo { public int CallbackId { get; set; } public bool IsRegistered { get; set; } public Window Window { get; set; } public ABEdge Edge { get; set; } public WindowStyle OriginalStyle { get; set; } public Point OriginalPosition { get; set; } public Size OriginalSize { get; set; } public ResizeMode OriginalResizeMode { get; set; } public IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled) { if (msg == CallbackId) { if (wParam.ToInt32() == (int)ABNotify.ABN_POSCHANGED) { ABSetPos(Edge, Window); handled = true; } } return IntPtr.Zero; } } private static Dictionary<Window, RegisterInfo> s_RegisteredWindowInfo = new Dictionary<Window, RegisterInfo>(); private static RegisterInfo GetRegisterInfo(Window appbarWindow) { RegisterInfo reg; if( 
s_RegisteredWindowInfo.ContainsKey(appbarWindow)) { reg = s_RegisteredWindowInfo[appbarWindow]; } else { reg = new RegisterInfo() { CallbackId = 0, Window = appbarWindow, IsRegistered = false, Edge = ABEdge.Top, OriginalStyle = appbarWindow.WindowStyle, OriginalPosition =new Point( appbarWindow.Left, appbarWindow.Top), OriginalSize = new Size( appbarWindow.ActualWidth, appbarWindow.ActualHeight), OriginalResizeMode = appbarWindow.ResizeMode, }; s_RegisteredWindowInfo.Add(appbarWindow, reg); } return reg; } private static void RestoreWindow(Window appbarWindow) { RegisterInfo info = GetRegisterInfo(appbarWindow); appbarWindow.WindowStyle = info.OriginalStyle; appbarWindow.ResizeMode = info.OriginalResizeMode; appbarWindow.Topmost = false; Rect rect = new Rect(info.OriginalPosition.X, info.OriginalPosition.Y, info.OriginalSize.Width, info.OriginalSize.Height); appbarWindow.Dispatcher.BeginInvoke(DispatcherPriority.ApplicationIdle, new ResizeDelegate(DoResize), appbarWindow, rect); } public static void SetAppBar(Window appbarWindow, ABEdge edge) { RegisterInfo info = GetRegisterInfo(appbarWindow); info.Edge = edge; APPBARDATA abd = new APPBARDATA(); abd.cbSize = Marshal.SizeOf(abd); abd.hWnd = new WindowInteropHelper(appbarWindow).Handle; if( edge == ABEdge.None) { if( info.IsRegistered) { SHAppBarMessage((int)ABMsg.ABM_REMOVE, ref abd); info.IsRegistered = false; } RestoreWindow(appbarWindow); return; } if (!info.IsRegistered) { info.IsRegistered = true; info.CallbackId = RegisterWindowMessage("AppBarMessage"); abd.uCallbackMessage = info.CallbackId; uint ret = SHAppBarMessage((int)ABMsg.ABM_NEW, ref abd); HwndSource source = HwndSource.FromHwnd(abd.hWnd); source.AddHook(new HwndSourceHook(info.WndProc)); } appbarWindow.WindowStyle = WindowStyle.None; appbarWindow.ResizeMode = ResizeMode.NoResize; appbarWindow.Topmost = true; ABSetPos(info.Edge, appbarWindow); } private delegate void ResizeDelegate(Window appbarWindow, Rect rect); private static void DoResize(Window 
appbarWindow, Rect rect) { appbarWindow.Width = rect.Width; appbarWindow.Height = rect.Height; appbarWindow.Top = rect.Top; appbarWindow.Left = rect.Left; } private static void ABSetPos(ABEdge edge, Window appbarWindow) { APPBARDATA barData = new APPBARDATA(); barData.cbSize = Marshal.SizeOf(barData); barData.hWnd = new WindowInteropHelper(appbarWindow).Handle; barData.uEdge = (int)edge; if (barData.uEdge == (int)ABEdge.Left || barData.uEdge == (int)ABEdge.Right) { barData.rc.top = 0; barData.rc.bottom = (int)SystemParameters.PrimaryScreenHeight; if (barData.uEdge == (int)ABEdge.Left) { barData.rc.left = 0; barData.rc.right = (int)Math.Round(appbarWindow.ActualWidth); } else { barData.rc.right = (int)SystemParameters.PrimaryScreenWidth; barData.rc.left = barData.rc.right - (int)Math.Round(appbarWindow.ActualWidth); } } else { barData.rc.left = 0; barData.rc.right = (int)SystemParameters.PrimaryScreenWidth; if (barData.uEdge == (int)ABEdge.Top) { barData.rc.top = 0; barData.rc.bottom = (int)Math.Round(appbarWindow.ActualHeight); } else { barData.rc.bottom = (int)SystemParameters.PrimaryScreenHeight; barData.rc.top = barData.rc.bottom - (int)Math.Round(appbarWindow.ActualHeight); } } SHAppBarMessage((int)ABMsg.ABM_QUERYPOS, ref barData); SHAppBarMessage((int)ABMsg.ABM_SETPOS, ref barData); Rect rect = new Rect((double)barData.rc.left, (double)barData.rc.top, (double)(barData.rc.right - barData.rc.left), (double)(barData.rc.bottom - barData.rc.top)); //This is done async, because WPF will send a resize after a new appbar is added. //if we size right away, WPFs resize comes last and overrides us. appbarWindow.Dispatcher.BeginInvoke(DispatcherPriority.ApplicationIdle, new ResizeDelegate(DoResize), appbarWindow, rect); } }} | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/75785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7301/"
]
} |
75,786 | (Eclipse 3.4, Ganymede) I have an existing Dynamic Web Application project in Eclipse. When I created the project, I specified 'Default configuration for Apache Tomcat v6' under the 'Configuration' drop down. It's a month or 2 down the line, and I would now like to change the configuration to Tomcat 'v5.5'. (This will be the version of Tomcat on the production server.) I have tried the following steps (without success): I selected Targeted Runtimes under the Project Properties The Tomcat v5.5 option was disabled and The UI displayed this message: If the runtime you want to select is not displayed or is disabled you may need to uninstall one or more of the currently installed project facets. I then clicked on the Uninstall Facets... link. Under the Runtimes tab, only Tomcat 6 displayed. For Dynamic Web Module , I selected version 2.4 in place of 2.5 . Under the Runtimes tab, Tomcat 5.5 now displayed. However, the UI now displayed this message: Cannot change version of project facet Dynamic Web Module to 2.4. The Finish button was disabled - so I reached a dead-end. I CAN successfully create a NEW Project with a Tomcat v5.5 configuration. For some reason, though, it will not let me downgrade' an existing Project. As a work-around, I created a new Project and copied the source files from the old Project. Nonetheless, the work-around was fairly painful and somewhat clumsy. Can anyone explain how I can 'downgrade' the Project configuration from 'Tomcat 6' to 'Tomcat 5'? Or perhaps shed some light on why this happened? Thanks Pete | This is kind of hacking eclipse and you can get into trouble doing this but this should work: Open the navigator view and find that there is a .settings folder under your project expand it and then open the file: org.eclipse.wst.common.project.facet.core.xml you should see a line that says: <installed facet="jst.web" version="2.5"/> Change that to 2.4 and save. 
Just make sure that your project isn't using anything specific for 2.5 and you should be good. Also check your web.xml has the correct configuration: <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/75786",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13360/"
]
} |
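The manual edit the answer describes (changing the version attribute of the jst.web facet) can also be sketched programmatically; here is a hedged Python illustration using the standard-library XML parser, with a made-up minimal fragment standing in for the real .settings file:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment mirroring the structure of
# .settings/org.eclipse.wst.common.project.facet.core.xml described above
# (NOT the full file format - just the piece the answer tells you to edit).
FACET_XML = """<faceted-project>
  <installed facet="jst.web" version="2.5"/>
  <installed facet="java" version="5.0"/>
</faceted-project>"""

def downgrade_web_facet(xml_text, new_version="2.4"):
    root = ET.fromstring(xml_text)
    # Only touch the Dynamic Web Module facet; leave other facets alone.
    for node in root.findall("installed"):
        if node.get("facet") == "jst.web":
            node.set("version", new_version)
    return ET.tostring(root, encoding="unicode")

print(downgrade_web_facet(FACET_XML))
```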
75,798 | I'm wondering if there's such a thing as Django-like ease of web app development combined with good deployment, debugging and other tools? Django is a very productive framework for building content-heavy sites; the best I've tried and a breath of fresh air compared to some of the Java monstrosities out there. However it's written in Python, which means there's little real support in the way of deployment/packaging, debugging, profilers and other tools that make building and maintaining applications much easier. Ruby has similar issues, and although I do like Ruby much better than I like Python, I get the impression that Rails is roughly in the same boat as Django when it comes to managing/supporting the app. Has anyone here tried both Django and Grails (or other web frameworks) for non-trivial projects? How did they compare? | You asked for someone who has used both Grails and Django. I've done work on both for big projects. Here are my thoughts:

IDEs: Django works really well in Eclipse; Grails works really well in IntelliJ IDEA.

Debugging: Practically the same (assuming you use IntelliJ for Grails and Eclipse for Python). Step debugging, inspecting variables, etc. - you never need a print statement for either. Sometimes Django error messages can be useless, but Grails error messages are usually pretty lengthy and hard to parse through.

Time to run a unit test: Django: 2 seconds. Grails: 20 seconds (the tests themselves both run in a fraction of a second; it's the part about loading the framework to run them that takes the rest... as you can see, Grails is frustratingly slow to load).

Deployment: Django: copy & paste one file into an Apache config, and to redeploy, just change the code and reload Apache. Grails: create a .war file, deploy it on Tomcat, rinse and repeat to redeploy.

Programming languages: Groovy is TOTALLY awesome. I love it, more so than Python. But I certainly have no complaints.

Plugins: Grails: lots of broken plugins (and can use every Java lib ever). Django: a few stable plugins, but enough to do most of what you need.

Database: Django: schema migrations using South, and generally intuitive relations. Grails: no schema migrations, and by default it deletes the database on startup... WTF.

Usage: Django: startups (especially in the Gov 2.0 space), independent web dev shops. Grails: enterprise.

Hope that helps! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/75798",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13436/"
]
} |
75,809 | Given the case I made two independent changes in one file: eg. added a new method and changed another method. I often don't want to commit both changes as one commit, but as two independent commits. On a git repository I would use the Interactive Mode of git-add(1) to split the hunk into smaller ones: git add --patch What's the easiest way to do this with Subversion? (Maybe even using an Eclipse plug-in) Update: In The Thing About Git , Ryan calls it: “The Tangled Working Copy Problem.” | With git-svn you can make a local GIT repository of the remote SVN repository, work with it using the full GIT feature set (including partial commits) and then push it all back to the SVN repository. git-svn (1) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/75809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4308/"
]
} |
75,886 | Before I jump headlong into C#... I've always felt that C, or maybe C++, was best for developing drivers on Windows. I'm not keen on the idea of developing a driver on a .NET machine. But .NET seems to be the way MS is heading for applications development, and so I'm now wondering: Are people are using C# to develop drivers? Do you have to do a lot of API hooks, or does C# have the facilities to interface with the kernel without a lot of hackery? Can anyone speak to the reliability and safety of running a C# program closer to Ring 0 than would normally be the case? I want my devices to be usable in C#, and if driver dev in C# is mature that's obviously the way to go, but I don't want to spend a lot of effort there if it's not recommended. What are some good resources to get started, say, developing a simple virtual serial port driver? -Adam | You can not make kernel-mode device drivers in C# as the runtime can't be safely loaded into ring0 and operate as expected. Additionally, C# doesn't create binaries suitable for loading as device drivers, particularly regarding entry points that drivers need to expose. The dependency on the runtime to jump in and analyze and JIT the binary during loading prohibits the direct access the driver subsystem needs to load the binary. There is work underway, however, to lift some device drivers into user mode, you can see an interview here with Peter Wieland of the UDMF (User Mode Driver Framework) team. User-mode drivers would be much more suited for managed work, but you'll have to google a bit to find out if C# and .NET will be directly supported. All I know is that kernel level drivers are not doable in only C#. You can, however, probably make a C/C++ driver, and a C# service (or similar) and have the driver talk to the managed code, if you absolutely have to write a lot of code in C#. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/75886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2915/"
]
} |
75,891 | I need an algorithm that can determine whether two images are 'similar' and recognizes similar patterns of color, brightness, shape etc. I might need some pointers as to what parameters the human brain uses to 'categorize' images. I have looked at Hausdorff-based matching, but that seems mainly for matching transformed objects and patterns of shape. | I have done something similar, by decomposing images into signatures using a wavelet transform. My approach was to pick the most significant n coefficients from each transformed channel and record their locations. This was done by sorting the list of (power, location) tuples according to abs(power). Similar images will share similarities in that they will have significant coefficients in the same places. I found it was best to transform the image into YUV format, which effectively allows you to weight similarity in shape (Y channel) and colour (UV channels). You can find my implementation of the above in mactorii , which unfortunately I haven't been working on as much as I should have :-) Another method, which some friends of mine have used with surprisingly good results, is to simply resize your image down to say, a 4x4 pixel image and store that as your signature. How similar 2 images are can be scored by, say, computing the Manhattan distance between the 2 images, using corresponding pixels. I don't have the details of how they performed the resizing, so you may have to play with the various algorithms available for that task to find one which is suitable. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/75891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13466/"
]
} |
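The tiny-thumbnail idea in the answer's last paragraph is easy to prototype; a rough Python sketch (plain nested lists standing in for grayscale images, a box-filter downscale, then Manhattan distance) might look like:

```python
def resize(img, size=4):
    """Box-filter a square grayscale image (list of rows) down to size x size."""
    n = len(img)
    block = n // size
    out = []
    for by in range(size):
        row = []
        for bx in range(size):
            # Average the block of source pixels that maps to this output cell.
            total = sum(img[by * block + y][bx * block + x]
                        for y in range(block) for x in range(block))
            row.append(total / (block * block))
        out.append(row)
    return out

def manhattan(sig_a, sig_b):
    """Sum of absolute per-pixel differences between two signatures."""
    return sum(abs(a - b)
               for row_a, row_b in zip(sig_a, sig_b)
               for a, b in zip(row_a, row_b))

# Two synthetic 8x8 images: a gradient, and the same gradient slightly brightened.
img1 = [[(x + y) * 8 for x in range(8)] for y in range(8)]
img2 = [[(x + y) * 8 + 4 for x in range(8)] for y in range(8)]
score = manhattan(resize(img1), resize(img2))
print(score)  # a small distance suggests similar images
```

A real system would of course resize with a proper image library and tune the signature size; this only illustrates the scoring idea.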
75,924 | This may be a no-brainer for the WPF cognoscenti, but I'd like to know if there's a simple way to put text on the WPF ProgressBar. To me, an empty progress bar looks naked. That's screen real estate that could carry a message about what is in progress, or even just add numbers to the representation. Now, WPF is all about containers and extensions and I'm slowly wrapping my mind around that, but since I don't see a "Text" or "Content" property, I'm thinking I'm going to have to add something to the container that is my progress bar. Is there a technique or two out there that is more natural than my original WinForms impulses will be? What's the best, most WPF-natural way to add text to that progress bar? | If you are needing to have a reusable method for adding text, you can create a new Style/ControlTemplate that has an additional TextBlock to display the text. You can hijack the TextSearch.Text attached property to set the text on a progress bar. If it doesn't need to be reusable, simply put the progress bar in a Grid and add a TextBlock to the grid. Since WPF can compose elements together, this will work nicely. If you want, you can create a UserControl that exposes the ProgressBar and TextBlock as public properties, so it would be less work than creating a custom ControlTemplate. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/75924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1336/"
]
} |
75,943 | I'm working on a web page where I'm making an AJAX call that returns a chunk of HTML like:

<div>
  <!-- some html -->
  <script type="text/javascript">
    /** some javascript */
  </script>
</div>

I'm inserting the whole thing into the DOM, but the JavaScript isn't being run. Is there a way to run it? Some details: I can't control what's in the script block (so I can't change it to a function that could be called), I just need the whole block to be executed. I can't call eval on the response because the JavaScript is within a larger block of HTML. I could do some kind of regex to separate out the JavaScript and then call eval on it, but that's pretty yucky. Anyone know a better way? | A script added by setting the innerHTML property of an element doesn't get executed. Try creating a new div, setting its innerHTML, then adding this new div to the DOM. For example:

<html>
<head>
<script type='text/javascript'>
function addScript(){
  var str = "<script>alert('i am here');<\/script>";
  var newdiv = document.createElement('div');
  newdiv.innerHTML = str;
  document.getElementById('target').appendChild(newdiv);
}
</script>
</head>
<body>
<input type="button" value="add script" onclick="addScript()"/>
<div>hello world</div>
<div id="target"></div>
</body>
</html> | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/75943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4243/"
]
} |
75,976 | I've always been told that adding an element to an array happens like this: an empty copy of the array, one element larger, is created; the data from the original array is copied into it; then the new data for the new element is loaded. If this is true, then using an array within a scenario that requires a lot of element activity is contra-indicated due to memory and CPU utilization, correct? If that is the case, shouldn't you try to avoid using an array as much as possible when you will be adding a lot of elements? Should you use iStringMap instead? If so, what happens if you need more than two dimensions AND need to add a lot of elements? Do you just take the performance hit, or is there something else that should be used? | Look at the generic List<T> as a replacement for arrays. They support most of the same things arrays do, including allocating an initial storage size if you want. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/75976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/730/"
]
} |
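The copy-on-every-append behaviour the question describes is easy to quantify; a small Python simulation (illustrative only - the question is about .NET arrays) counts the element copies and shows why the naive scheme is quadratic:

```python
def naive_append_copies(n):
    """Simulate an array that reallocates to exactly len+1 on every append.
    Returns the total number of element copies performed for n appends."""
    arr = []
    copies = 0
    for i in range(n):
        new_arr = arr[:]          # copy every existing element into the new array
        copies += len(arr)
        new_arr.append(i)
        arr = new_arr
    return copies

# Appending n items this way costs 0 + 1 + ... + (n-1) = n*(n-1)/2 copies,
# which is why growable containers (like List<T>) grow geometrically instead.
print(naive_append_copies(100))  # 4950
```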
75,978 | In a .NET Win console application, I would like to access an App.config file in a location different from the console application binary. For example, how can C:\bin\Test.exe get its settings from C:\Test.exe.config? |

using System.Configuration;

Configuration config = ConfigurationManager.OpenExeConfiguration(@"C:\Test.exe");

You can then access the app settings, connection strings, etc. from the config instance. This assumes, of course, that the config file is properly formatted and your app has read access to the directory. Notice the path is not "C:\Test.exe.config". The method looks for a config file associated with the file you specify. If you specify "C:\Test.exe.config", it will look for "C:\Test.exe.config.config". Kinda lame, but understandable, I guess. Reference here: http://msdn.microsoft.com/en-us/library/system.configuration.configurationmanager.openexeconfiguration.aspx | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/75978",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2748/"
]
} |
75,980 | When encoding a query string to be sent to a web server - when do you use escape() and when do you use encodeURI() or encodeURIComponent()? Use escape:

escape("% +&=");

OR use encodeURI() / encodeURIComponent():

encodeURI("http://www.google.com?var1=value1&var2=value2");
encodeURIComponent("var1=value1&var2=value2");

| escape()

Don't use it! escape() is defined in section B.2.1.2 escape, and the introduction text of Annex B says: ... All of the language features and behaviours specified in this annex have one or more undesirable characteristics and in the absence of legacy usage would be removed from this specification. ... Programmers should not use or assume the existence of these features and behaviours when writing new ECMAScript code. ... Behaviour: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/escape Special characters are encoded with the exception of: @*_+-./ The hexadecimal form for characters whose code unit value is 0xFF or less is a two-digit escape sequence: %xx . For characters with a greater code unit, the four-digit format %uxxxx is used. This is not allowed within a query string (as defined in RFC3986 ):

query       = *( pchar / "/" / "?" )
pchar       = unreserved / pct-encoded / sub-delims / ":" / "@"
unreserved  = ALPHA / DIGIT / "-" / "." / "_" / "~"
pct-encoded = "%" HEXDIG HEXDIG
sub-delims  = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="

A percent sign is only allowed if it is directly followed by two hexdigits; percent followed by u is not allowed.

encodeURI()

Use encodeURI when you want a working URL. Make this call:

encodeURI("http://www.example.org/a file with spaces.html")

to get:

http://www.example.org/a%20file%20with%20spaces.html

Don't call encodeURIComponent, since it would destroy the URL and return

http%3A%2F%2Fwww.example.org%2Fa%20file%20with%20spaces.html

Note that encodeURI, like encodeURIComponent, does not escape the ' character.

encodeURIComponent()

Use encodeURIComponent when you want to encode the value of a URL parameter.

var p1 = encodeURIComponent("http://example.org/?a=12&b=55")

Then you may create the URL you need:

var url = "http://example.net/?param1=" + p1 + "&param2=99";

And you will get this complete URL:

http://example.net/?param1=http%3A%2F%2Fexample.org%2F%3Fa%3D12%26b%3D55&param2=99

Note that encodeURIComponent does not escape the ' character. A common bug is to use it to create html attributes such as href='MyUrl' , which could suffer an injection bug. If you are constructing html from strings, either use " instead of ' for attribute quotes, or add an extra layer of encoding ( ' can be encoded as %27). For more information on this type of encoding you can check: http://en.wikipedia.org/wiki/Percent-encoding | {
"score": 12,
"source": [
"https://Stackoverflow.com/questions/75980",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1341/"
]
} |
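For readers experimenting outside the browser, Python's urllib.parse offers an analogous split between encoding a whole URL and encoding a single component; the safe-character sets differ slightly from JavaScript's, so treat this as an analogy rather than an exact match:

```python
from urllib.parse import quote

# Rough analogue of encodeURI: keep the characters that give the URL its structure.
url = "http://www.example.org/a file with spaces.html"
print(quote(url, safe=":/?&="))   # spaces become %20, structure is preserved

# Rough analogue of encodeURIComponent: escape everything, including : / ? & =
component = "http://example.org/?a=12&b=55"
p1 = quote(component, safe="")
print(p1)

# The encoded component can then be embedded as a parameter value:
full = "http://example.net/?param1=" + p1 + "&param2=99"
print(full)
```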
76,065 | I know that with mysql you can write SQL statements into a .sql file and run the file from the mysql command line like this: mysql> source script.sql How do I pass a variable to the script? For example, if I want to run a script that retrieves all the employees in a department, I want to be able to pass in the number of the department as a variable. I am not trying to run queries through a shell script. There are simple queries I run from the mysql command line. I'm tired of retyping them all the time, and writing a shell script for them would be overkill. | Like this: set @department := 'Engineering'; Then, reference @department wherever you need to in script.sql: update employee set salary = salary + 10000 where department = @department; | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/76065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13519/"
]
} |
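The same keep-a-query-reusable goal can be sketched with bound parameters; here is an illustrative Python example using an in-memory SQLite database as a stand-in (the answer itself uses MySQL session variables, which SQLite does not have):

```python
import sqlite3

# In-memory stand-in for the employee table from the answer above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, department TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [("Ann", "Engineering", 50000), ("Bob", "Sales", 40000)])

# The @department session variable becomes a bound parameter here:
department = "Engineering"
conn.execute("UPDATE employee SET salary = salary + 10000 WHERE department = ?",
             (department,))
rows = conn.execute("SELECT name, salary FROM employee ORDER BY name").fetchall()
print(rows)  # [('Ann', 60000), ('Bob', 40000)]
```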
76,074 | I have a couple old services that I want to completely uninstall. How can I do this? | Use the SC command, like this (you need to be on a command prompt to execute the commands in this post):

SC STOP shortservicename
SC DELETE shortservicename

Note: You need to run the command prompt as an administrator - not just logged in as the administrator, but also with administrative rights. If you get errors above about not having the necessary access rights to stop and/or delete the service, run the command prompt as an administrator. You can do this by searching for the command prompt on your start menu and then right-clicking and selecting "Run as administrator". Note to PowerShell users: sc is aliased to set-content . So sc delete service will actually create a file called delete with the content service . To do this in PowerShell, use sc.exe delete service instead. If you need to find the short service name of a service, use the following command to generate a text file containing a list of services and their statuses:

SC QUERY state= all >"C:\Service List.txt"

For a more concise list, execute this command:

SC QUERY state= all | FIND "_NAME"

The short service name will be listed just above the display name, like this:

SERVICE_NAME: MyService
DISPLAY_NAME: My Special Service

And thus to delete that service:

SC STOP MyService
SC DELETE MyService | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/76074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1204/"
]
} |
76,079 | Can anyone please suggest a good code example of VB.NET/C# code to put the application in the system tray when minimized? | Add a NotifyIcon control to your form, then use the following code:

private void frm_main_Resize(object sender, EventArgs e)
{
    if (this.WindowState == FormWindowState.Minimized)
    {
        this.ShowInTaskbar = false;
        this.Hide();
        notifyIcon1.Visible = true;
    }
}

private void notifyIcon1_MouseDoubleClick(object sender, MouseEventArgs e)
{
    this.Show();
    this.WindowState = FormWindowState.Normal;
    this.ShowInTaskbar = true;
    notifyIcon1.Visible = false;
}

You may not need to set the ShowInTaskbar property. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13337/"
]
} |
76,134 | I have 4 2D points in screen-space, and I need to reverse-project them back into 3D space. I know that each of the 4 points is a corner of a 3D-rotated rigid rectangle, and I know the size of the rectangle. How can I get 3D coordinates from this? I am not using any particular API, and I do not have an existing projection matrix. I'm just looking for basic math to do this. Of course there isn't enough data to convert a single 2D point to 3D with no other reference, but I imagine that if you have 4 points, you know that they're all at right-angles to each other on the same plane, and you know the distance between them, you should be able to figure it out from there. Unfortunately I can't quite work out how though. This might fall under the umbrella of photogrammetry, but google searches for that haven't led me to any helpful information. | Alright, I came here looking for an answer and didn't find something simple and straightforward, so I went ahead and did the dumb but effective (and relatively simple) thing: Monte Carlo optimisation. Very simply put, the algorithm is as follows: Randomly perturb your projection matrix until it projects your known 3D coordinates to your known 2D coordinates. Here is a still photo from Thomas the Tank Engine: Let's say we use GIMP to find the 2D coordinates of what we think is a square on the ground plane (whether or not it is really a square depends on your judgment of the depth): I get four points in the 2D image: (318, 247) , (326, 312) , (418, 241) , and (452, 303) . By convention, we say that these points should correspond to the 3D points: (0, 0, 0) , (0, 0, 1) , (1, 0, 0) , and (1, 0, 1) . In other words, a unit square in the y=0 plane. Projecting each of these 3D coordinates into 2D is done by multiplying the 4D vector [x, y, z, 1] with a 4x4 projection matrix, then dividing the x and y components by z to actually get the perspective correction. 
This is more or less what gluProject() does, except gluProject() also takes the current viewport into account and takes a separate modelview matrix into account (we can just assume the modelview matrix is the identity matrix). It is very handy to look at the gluProject() documentation because I actually want a solution that works for OpenGL, but beware that the documentation is missing the division by z in the formula. Remember, the algorithm is to start with some projection matrix and randomly perturb it until it gives the projection that we want. So what we're going to do is project each of the four 3D points and see how close we get to the 2D points we wanted. If our random perturbations cause the projected 2D points to get closer to the ones we marked above, then we keep that matrix as an improvement over our initial (or previous) guess. Let's define our points:

# Known 2D coordinates of our rectangle
i0 = Point2(318, 247)
i1 = Point2(326, 312)
i2 = Point2(418, 241)
i3 = Point2(452, 303)

# 3D coordinates corresponding to i0, i1, i2, i3
r0 = Point3(0, 0, 0)
r1 = Point3(0, 0, 1)
r2 = Point3(1, 0, 0)
r3 = Point3(1, 0, 1)

We need to start with some matrix; the identity matrix seems a natural choice:

mat = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

We need to actually implement the projection (which is basically a matrix multiplication):

def project(p, mat):
    x = mat[0][0] * p.x + mat[0][1] * p.y + mat[0][2] * p.z + mat[0][3] * 1
    y = mat[1][0] * p.x + mat[1][1] * p.y + mat[1][2] * p.z + mat[1][3] * 1
    w = mat[3][0] * p.x + mat[3][1] * p.y + mat[3][2] * p.z + mat[3][3] * 1
    return Point(720 * (x / w + 1) / 2., 576 - 576 * (y / w + 1) / 2.)

This is basically what gluProject() does; 720 and 576 are the width and height of the image, respectively (i.e. the viewport), and we subtract from 576 to account for the fact that we counted y coordinates from the top while OpenGL typically counts them from the bottom.
You'll notice we're not calculating z; that's because we don't really need it here (though it could be handy to ensure it falls within the range that OpenGL uses for the depth buffer). Now we need a function for evaluating how close we are to the correct solution. The value returned by this function is what we will use to check whether one matrix is better than another. I chose to go by sum of squared distances, i.e.:

# The squared distance between two points a and b
def norm2(a, b):
    dx = b.x - a.x
    dy = b.y - a.y
    return dx * dx + dy * dy

def evaluate(mat):
    c0 = project(r0, mat)
    c1 = project(r1, mat)
    c2 = project(r2, mat)
    c3 = project(r3, mat)
    return norm2(i0, c0) + norm2(i1, c1) + norm2(i2, c2) + norm2(i3, c3)

To perturb the matrix, we simply pick an element to perturb by a random amount within some range, and return the perturbed copy:

def perturb(mat, amount):
    from copy import deepcopy
    from random import randrange, uniform
    mat2 = deepcopy(mat)
    mat2[randrange(4)][randrange(4)] += uniform(-amount, amount)
    return mat2

(It's worth noting that our project() function doesn't actually use mat[2] at all, since we don't compute z, and since all our y coordinates are 0 the mat[*][1] values are irrelevant as well. We could use this fact and never try to perturb those values, which would give a small speedup, but that is left as an exercise...)

For convenience, let's add a function that does the bulk of the approximation by calling perturb() over and over again on what is the best matrix we've found so far:

def approximate(mat, amount, n=100000):
    est = evaluate(mat)
    for i in xrange(n):
        mat2 = perturb(mat, amount)
        est2 = evaluate(mat2)
        if est2 < est:
            mat = mat2
            est = est2
    return mat, est

Now all that's left to do is to run it (remembering that approximate() returns both the matrix and its error):

for i in xrange(100):
    mat, est = approximate(mat, 1)
    mat, est = approximate(mat, .1)

I find this already gives a pretty accurate answer.
After running for a while, the matrix I found was:

[
    [1.0836000765696232, 0, 0.16272110011060575, -0.44811064935115597],
    [0.09339193527789781, 1, -0.7990570384334473, 0.539087345090207],
    [0, 0, 1, 0],
    [0.06700844759602216, 0, -0.8333379578853196, 3.875290562060915],
]

with an error of around 2.6e-5. (Notice how the elements we said were not used in the computation have not actually been changed from our initial matrix; that's because changing these entries would not change the result of the evaluation, and so the change would never get carried along.) We can pass the matrix into OpenGL using glLoadMatrix() (but remember to transpose it first, and remember to load your modelview matrix with the identity matrix):

def transpose(m):
    return [
        [m[0][0], m[1][0], m[2][0], m[3][0]],
        [m[0][1], m[1][1], m[2][1], m[3][1]],
        [m[0][2], m[1][2], m[2][2], m[3][2]],
        [m[0][3], m[1][3], m[2][3], m[3][3]],
    ]

glLoadMatrixf(transpose(mat))

Now we can for example translate along the z axis to get different positions along the tracks:

glTranslate(0, 0, frame)
frame = frame + 1

glBegin(GL_QUADS)
glVertex3f(0, 0, 0)
glVertex3f(0, 0, 1)
glVertex3f(1, 0, 1)
glVertex3f(1, 0, 0)
glEnd()

For sure this is not very elegant from a mathematical point of view; you don't get a closed form equation that you can just plug your numbers into and get a direct (and accurate) answer. HOWEVER, it does allow you to add additional constraints without having to worry about complicating your equations; for example if we wanted to incorporate height as well, we could use that corner of the house and say (in our evaluation function) that the distance from the ground to the roof should be so-and-so, and run the algorithm again. So yes, it's a brute force of sorts, but works, and works well. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/76134",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8409/"
]
} |
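A self-contained, runnable version of the Monte Carlo fit in the answer above is easy to put together. This sketch fixes the perturb() signature, guards against a zero w during the random search, and uses plain tuples instead of the Point2/Point3 helpers; the viewport size and point data come from the answer, while iteration counts are reduced for speed, so it is an illustrative sketch rather than the author's exact code:

```python
import random

# Known 2D pixel coordinates and the 3D unit-square corners they correspond to
POINTS_2D = [(318, 247), (326, 312), (418, 241), (452, 303)]
POINTS_3D = [(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1)]
WIDTH, HEIGHT = 720, 576  # viewport (image) size used in the answer

def project(p, mat):
    """Project a 3D point with a 4x4 matrix and map to pixels, gluProject-style."""
    v = p + (1,)
    x = sum(mat[0][i] * v[i] for i in range(4))
    y = sum(mat[1][i] * v[i] for i in range(4))
    w = sum(mat[3][i] * v[i] for i in range(4))
    return (WIDTH * (x / w + 1) / 2.0, HEIGHT - HEIGHT * (y / w + 1) / 2.0)

def evaluate(mat):
    """Sum of squared pixel distances; infinite if a point projects to w == 0."""
    err = 0.0
    for p3, p2 in zip(POINTS_3D, POINTS_2D):
        try:
            cx, cy = project(p3, mat)
        except ZeroDivisionError:
            return float("inf")
        err += (cx - p2[0]) ** 2 + (cy - p2[1]) ** 2
    return err

def perturb(mat, amount):
    """Copy mat and nudge one random element (the answer's perturb, signature fixed)."""
    mat2 = [row[:] for row in mat]
    mat2[random.randrange(4)][random.randrange(4)] += random.uniform(-amount, amount)
    return mat2

def approximate(mat, amount, n=20000):
    """Keep a perturbation only when it lowers the error."""
    est = evaluate(mat)
    for _ in range(n):
        mat2 = perturb(mat, amount)
        est2 = evaluate(mat2)
        if est2 < est:
            mat, est = mat2, est2
    return mat, est

random.seed(0)  # deterministic run for illustration
identity = [[float(i == j) for j in range(4)] for i in range(4)]
initial_error = evaluate(identity)
mat, err = approximate(identity, 1.0)
mat, err = approximate(mat, 0.1)
```

With more iterations, and more rounds of shrinking the perturbation amount, the error keeps dropping toward the ~1e-5 level reported in the answer.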
76,179 | What's the best way for determining whether the user's browser can view PDF files? Ideally, it shouldn't matter on the browser or the operating system. Is there a specific way of doing it in ASP.NET, or would the answer be just JavaScript? | Neither, none, don't try. Re dawnerd : Plug-in detection is not the right answer. I do not have a PDF plugin installed in my browser (Firefox on Ubuntu), yet I am able to view PDF files using the operating system's document viewer (which is not Acrobat Reader). Today, any operating system that can run a web browser can view PDF files out of the box. If a specific system does not have a PDF viewer installed and the browser configured to use it, that likely means that either it's a hand-made install of Windows, a very trimmed down alternate operating system, or something really retro. It is reasonable to assume that in any of those situation the user will know what a PDF file is and either deliberately choose not to be able to view them or know how to install the required software. If I am deluding myself, I would love to have it explained to me in which way I am wrong. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4092/"
]
} |
76,194 | I seldom use inheritance, but when I do, I never use protected attributes because I think it breaks the encapsulation of the inherited classes. Do you use protected attributes ? what do you use them for ? | In this interview on Design by Bill Venners, Joshua Bloch, the author of Effective Java says: Trusting Subclasses Bill Venners: Should I trust subclasses more intimately than non-subclasses? For example, do I make it easier for a subclass implementation to break me than I would for a non-subclass? In particular, how do you feel about protected data? Josh Bloch: To write something that is both subclassable and robust against a malicious subclass is actually a pretty tough thing to do, assuming you give the subclass access to your internal data structures. If the subclass does not have access to anything that an ordinary user doesn't, then it's harder for the subclass to do damage. But unless you make all your methods final, the subclass can still break your contracts by just doing the wrong things in response to method invocation. That's precisely why the security critical classes like String are final. Otherwise someone could write a subclass that makes Strings appear mutable, which would be sufficient to break security. So you must trust your subclasses. If you don't trust them, then you can't allow them, because subclasses can so easily cause a class to violate its contracts. As far as protected data in general, it's a necessary evil. It should be kept to a minimum. Most protected data and protected methods amount to committing to an implementation detail. A protected field is an implementation detail that you are making visible to subclasses. Even a protected method is a piece of internal structure that you are making visible to subclasses. The reason you make it visible is that it's often necessary in order to allow subclasses to do their job, or to do it efficiently. But once you've done it, you're committed to it. 
It is now something that you are not allowed to change, even if you later find a more efficient implementation that no longer involves the use of a particular field or method. So all other things being equal, you shouldn't have any protected members at all. But that said, if you have too few, then your class may not be usable as a super class, or at least not as an efficient super class. Often you find out after the fact. My philosophy is to have as few protected members as possible when you first write the class. Then try to subclass it. You may find out that without a particular protected method, all subclasses will have to do some bad thing. As an example, if you look at AbstractList , you'll find that there is a protected method to delete a range of the list in one shot ( removeRange ). Why is that in there? Because the normal idiom to remove a range, based on the public API, is to call subList to get a sub- List , and then call clear on that sub- List . Without this particular protected method, however, the only thing that clear could do is repeatedly remove individual elements. Think about it. If you have an array representation, what will it do? It will repeatedly collapse the array, doing order N work N times. So it will take a quadratic amount of work, instead of the linear amount of work that it should. By providing this protected method, we allow any implementation that can efficiently delete an entire range to do so. And any reasonable List implementation can delete a range more efficiently all at once. That we would need this protected method is something you would have to be way smarter than me to know up front. Basically, I implemented the thing. Then, as we started to subclass it, we realized that range delete was quadratic. We couldn't afford that, so I put in the protected method. I think that's the best approach with protected methods. Put in as few as possible, and then add more as needed. 
Protected methods represent commitments to designs that you may want to change. You can always add protected methods, but you can't take them out. Bill Venners: And protected data? Josh Bloch: The same thing, but even more. Protected data is even more dangerous in terms of messing up your data invariants. If you give someone else access to some internal data, they have free rein over it. Short version: it breaks encapsulation but it's a necessary evil that should be kept to a minimum. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13501/"
]
} |
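Bloch's removeRange() point in the answer above — that deleting a range one element at a time from an array-backed list is quadratic, while a single range delete is one linear pass — is easy to demonstrate in miniature outside Java. A small Python sketch of the two strategies (helper names are hypothetical):

```python
def clear_one_by_one(backing, start, stop):
    """Delete [start, stop) by repeated single removals: every del shifts the
    whole tail left, so total work is O(k * n) -- quadratic in the worst case."""
    for _ in range(stop - start):
        del backing[start]
    return backing

def clear_range(backing, start, stop):
    """Delete [start, stop) in one shot: a single O(n) compaction, which is what
    a protected removeRange() hook lets an array-backed subclass provide."""
    del backing[start:stop]
    return backing
```

Both produce the same list; only the amount of element copying differs.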
76,204 | I am receiving a 3rd party feed of which I cannot be certain of the namespace so I am currently having to use the local-name() function in my XSLT to get the element values. However I need to get an attribute from one such element and I don't know how to do this when the namespaces are unknown (hence need for local-name() function). N.B. I am using .net 2.0 to process the XSLT Here is a sample of the XML: <?xml version="1.0" encoding="UTF-8"?><feed xmlns="http://www.w3.org/2005/Atom"> <id>some id</id> <title>some title</title> <updated>2008-09-11T15:53:31+01:00</updated> <link rel="self" href="http://www.somefeedurl.co.uk" /> <author> <name>some author</name> <uri>http://someuri.co.uk</uri> </author> <generator uri="http://aardvarkmedia.co.uk/">AardvarkMedia script</generator> <entry> <id>http://soemaddress.co.uk/branded3/80406</id> <title type="html">My Ttile</title> <link rel="alternate" href="http://www.someurl.co.uk" /> <updated>2008-02-13T00:00:00+01:00</updated> <published>2002-09-11T14:16:20+01:00</published> <category term="mycategorytext" label="restaurant">Test</category> <content type="xhtml"> <div xmlns="http://www.w3.org/1999/xhtml"> <div class="vcard"> <p class="fn org">some title</p> <p class="adr"> <abbr class="type" title="POSTAL" /> <span class="street-address">54 Some Street</span> , <span class="locality" /> , <span class="country-name">UK</span> </p> <p class="tel"> <span class="value">0123456789</span> </p> <div class="geo"> <span class="latitude">51.99999</span> , <span class="longitude">-0.123456</span> </div> <p class="note"> <span class="type">Review</span> <span class="value">Some content</span> </p> <p class="note"> <span class="type">Overall rating</span> <span class="value">8</span> </p> </div> </div> </content> <category term="cuisine-54" label="Spanish" /> <Point xmlns="http://www.w3.org/2003/01/geo/wgs84_pos#"> <lat>51.123456789</lat> <long>-0.11111111</long> </Point> </entry></feed> This is XSLT <?xml version="1.0" 
encoding="UTF-8" ?><xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:wgs="http://www.w3.org/2003/01/geo/wgs84_pos#" exclude-result-prefixes="atom wgs"> <xsl:output method="xml" indent="yes"/> <xsl:key name="uniqueVenuesKey" match="entry" use="id"/> <xsl:key name="uniqueCategoriesKey" match="entry" use="category/@term"/> <xsl:template match="/"> <locations> <!-- Get all unique venues --> <xsl:for-each select="/*[local-name()='feed']/*[local-name()='entry']"> <xsl:variable name="CurrentVenueKey" select="*[local-name()='id']" ></xsl:variable> <xsl:variable name="CurrentVenueName" select="*[local-name()='title']" ></xsl:variable> <xsl:variable name="CurrentVenueAddress1" select="*[local-name()='content']/*[local-name()='div']/*[local-name()='div']/*[local-name()='p'][@class='adr']/*[local-name()='span'][@class='street-address']" ></xsl:variable> <xsl:variable name="CurrentVenueCity" select="*[local-name()='content']/*[local-name()='div']/*[local-name()='div']/*[local-name()='p'][@class='adr']/*[local-name()='span'][@class='locality']" ></xsl:variable> <xsl:variable name="CurrentVenuePostcode" select="*[local-name()='postcode']" ></xsl:variable> <xsl:variable name="CurrentVenueTelephone" select="*[local-name()='telephone']" ></xsl:variable> <xsl:variable name="CurrentVenueLat" select="*[local-name()='Point']/*[local-name()='lat']" ></xsl:variable> <xsl:variable name="CurrentVenueLong" select="*[local-name()='Point']/*[local-name()='long']" ></xsl:variable> <xsl:variable name="CurrentCategory" select="WHATDOIPUTHERE"></xsl:variable> <location> <locationName> <xsl:value-of select = "$CurrentVenueName" /> </locationName> <category> <xsl:value-of select = "$CurrentCategory" /> </category> <description> <xsl:value-of select = "$CurrentVenueName" /> </description> <venueAddress> <streetName> <xsl:value-of select = "$CurrentVenueAddress1" /> </streetName> <town> <xsl:value-of select = 
"$CurrentVenueCity" /> </town> <postcode> <xsl:value-of select = "$CurrentVenuePostcode" /> </postcode> <wgs84_latitude> <xsl:value-of select = "$CurrentVenueLat" /> </wgs84_latitude> <wgs84_longitude> <xsl:value-of select = "$CurrentVenueLong" /> </wgs84_longitude> </venueAddress> <venuePhone> <phonenumber> <xsl:value-of select = "$CurrentVenueTelephone" /> </phonenumber> </venuePhone> </location> </xsl:for-each> </locations> </xsl:template></xsl:stylesheet> I'm trying to replace the $CurrentCategory variable the appropriate code to display mycategorytext | I don't have an XSLT editor here, but have you tried using *[local-name()='category']/@*[local-name()='term'] | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/258/"
]
} |
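The namespace-agnostic selection in the answer above can be sanity-checked outside XSLT as well. A small Python sketch that mimics local-name() matching against a trimmed copy of the feed — element names are taken from the question, helper names are hypothetical:

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the feed from the question, keeping only the relevant bits.
ATOM_ENTRY = """\
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <category term="mycategorytext" label="restaurant">Test</category>
    <category term="cuisine-54" label="Spanish"/>
  </entry>
</feed>"""

def local_name(tag):
    """Strip a '{namespace}' prefix from an ElementTree tag, like XPath's local-name()."""
    return tag.rsplit("}", 1)[-1]

def category_terms(xml_text):
    """Collect @term from every element whose local name is 'category',
    regardless of which namespace the element itself lives in."""
    root = ET.fromstring(xml_text)
    return [el.get("term") for el in root.iter() if local_name(el.tag) == "category"]

terms = category_terms(ATOM_ENTRY)
```

The term attribute itself is un-namespaced in the feed, which is why a plain attribute lookup works once the element has been matched by local name.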
76,223 | I am working on a project where the requirement is to have a date calculated as being the last Friday of a given month. I think I have a solution that only uses standard Java, but I was wondering if anyone knew of anything more concise or efficient. Below is what I tested with for this year: for (int month = 0; month < 13; month++) { GregorianCalendar d = new GregorianCalendar(); d.set(d.MONTH, month); System.out.println("Last Week of Month in " + d.getDisplayName(d.MONTH, Calendar.LONG, Locale.ENGLISH) + ": " + d.getLeastMaximum(d.WEEK_OF_MONTH)); d.set(d.DAY_OF_WEEK, d.FRIDAY); d.set(d.WEEK_OF_MONTH, d.getActualMaximum(d.WEEK_OF_MONTH)); while (d.get(d.MONTH) > month || d.get(d.MONTH) < month) { d.add(d.WEEK_OF_MONTH, -1); } Date dt = d.getTime(); System.out.println("Last Friday of Last Week in " + d.getDisplayName(d.MONTH, Calendar.LONG, Locale.ENGLISH) + ": " + dt.toString()); } | Based on marked23's suggestion: public Date getLastFriday( int month, int year ) { Calendar cal = Calendar.getInstance(); cal.set( year, month + 1, 1 ); cal.add( Calendar.DAY_OF_MONTH, -( cal.get( Calendar.DAY_OF_WEEK ) % 7 + 1 ) ); return cal.getTime();} | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/76223",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7008/"
]
} |
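The date arithmetic in the accepted answer above — jump to the first day of the following month, then step back to the preceding Friday — is easy to sanity-check in Python. The function name is hypothetical and the expected dates were verified by hand:

```python
import datetime

def last_friday(year, month):
    """Last Friday of the given (1-based) month: take the first day of the
    next month, then back up to the most recent Friday before it."""
    if month == 12:
        first_of_next = datetime.date(year + 1, 1, 1)
    else:
        first_of_next = datetime.date(year, month + 1, 1)
    # weekday(): Monday=0 .. Friday=4. Days back to the previous Friday;
    # if the first of the next month is itself a Friday, go a full week back.
    offset = (first_of_next.weekday() - 4) % 7 or 7
    return first_of_next - datetime.timedelta(days=offset)
```

The `or 7` handles the only tricky case: when the first of the following month lands on a Friday, the last Friday of the target month is exactly seven days earlier.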
76,254 | Any advice on how to read an auto-incrementing identity field assigned to a newly created record from a call through java.sql.Statement.executeUpdate? I know how to do this in SQL for several DB platforms, but would like to know what database-independent interfaces exist in java.sql to do this, and any input on people's experience with this across DB platforms. | The following snibblet of code should do ya':

PreparedStatement stmt = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS);
// ...
ResultSet res = stmt.getGeneratedKeys();
while (res.next())
    System.out.println("Generated key: " + res.getInt(1));

This is known to work on the following databases: Derby, MySQL, and SQL Server. For databases where it doesn't work (HSQLDB, Oracle, PostgreSQL, etc.), you will need to futz with database-specific tricks. For example, on PostgreSQL you would make a call to SELECT NEXTVAL(...) for the sequence in question. Note that the parameters for executeUpdate(...) are analogous. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/76254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5446/"
]
} |
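The same generated-key round trip looks very similar in other stacks. For illustration, a Python sqlite3 sketch of the getGeneratedKeys() idea — table and names are hypothetical:

```python
import sqlite3

def insert_and_get_id(conn, name):
    """Insert a row and return the auto-generated primary key --
    the sqlite3 analogue of JDBC's Statement.getGeneratedKeys()."""
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    return cur.lastrowid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
first = insert_and_get_id(conn, "ada")
second = insert_and_get_id(conn, "grace")
```

As with JDBC, the key is read back from the same statement handle that performed the insert, so no second round trip (and no race against concurrent inserts) is needed.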
76,314 | I'm trying to figure out what a Java applet's class file is doing under the hood. Opening it up with Notepad or Textpad just shows a bunch of gobbledy-gook. Is there any way to wrangle it back into a somewhat-readable format so I can try to figure out what it's doing? Environment == Windows w/ VS 2008 installed. | jd-gui is the best decompiler at the moment. it can handle newer features in Java, as compared to the getting-dusty JAD. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/76314",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2312/"
]
} |
76,327 | I'm writing a Java application that runs on Linux (using Sun's JDK). It keeps creating /tmp/hsperfdata_username directories, which I would like to prevent. Is there any way to stop java from creating these files? | Try JVM option -XX:-UsePerfData more info The following might be helpful that is from link https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html -XX:+UsePerfData Enables the perfdata feature. This option is enabled by default to allow JVM monitoring and performance testing. Disabling it suppresses the creation of the hsperfdata_userid directories. To disable the perfdata feature, specify -XX:-UsePerfData. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/76327",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13582/"
]
} |
76,346 | I just got surprised by something in TSQL. I thought that if xact_abort was on, calling something like raiserror('Something bad happened', 16, 1); would stop execution of the stored procedure (or any batch). But my ADO.NET error message just proved the opposite. I got both the raiserror error message in the exception message, plus the next thing that broke after that. This is my workaround (which is my habit anyway), but it doesn't seem like it should be necessary: if @somethingBadHappened begin; raiserror('Something bad happened', 16, 1); return; end; The docs say this: When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. Does that mean I must be using an explicit transaction? | This is By Design TM , as you can see on Connect by the SQL Server team's response to a similar question: Thank you for your feedback. By design, the XACT_ABORT set option does not impact the behavior of the RAISERROR statement. We will consider your feedback to modify this behavior for a future release of SQL Server. Yes, this is a bit of an issue for some who hoped RAISERROR with a high severity (like 16 ) would be the same as an SQL execution error - it's not. Your workaround is just about what you need to do, and using an explicit transaction doesn't have any effect on the behavior you want to change. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/76346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1219/"
]
} |
76,395 | In what domains do each of these software architectures shine or fail? Which key requirements would prompt you to choose one over the other? Please assume that you have developers available who can do good object oriented code as well as good database development. Also, please avoid holy wars :) all three technologies have pros and cons, I'm interested in where is most appropriate to use which. | Every one of these tools provides differing layers of abstraction, along with differing points to override behavior. These are architecture choices, and all architectural choices depend on trade-offs between technology, control, and organization, both of the application itself and the environment where it will be deployed. If you're dealing with a culture where DBAs 'rule the roost', then a stored-procedure-based architecture will be easier to deploy. On the other hand, it can be very difficult to manage and version stored procedures. Code generators shine when you use statically-typed languages, because you can catch errors at compile-time instead of at run-time. ORMs are ideal for integration tools, where you may need to deal with different RDBMSes and schemas on an installation-to-installation basis. Change one map and your application goes from working with PeopleSoft on Oracle to working with Microsoft Dynamics on SQL Server. I've seen applications where Generated Code is used to interface with Stored Procedures, because the stored procedures could be tweaked to get around limitations in the code generator. Ultimately the only correct answer will depend upon the problem you're trying to solve and the environment where the solution needs to execute. Anything else is arguing the correct pronunciation of 'potato'. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7028/"
]
} |
76,408 | First of all, I want to avoid a flame-war on languages. The languages to choose from are Perl, Python and Ruby . I want to mention that I'm comfortable with all of them, but the problem is that I can't focus just on one. If, for example, I see a cool Perl module, I have to try it out. If I see a nice Python app, I have to know how it's made. If I see a Ruby DSL or some Ruby voodoo, I'm hooked on Ruby for a while. Right now I'm working as a Java developer, but plan on taking CEH in the near future. My question is: for tool writing and exploit development, which language do you find to be the most appropriate? Again, I don't want to cause a flame-war or any trouble, I just want honest opinions from scripters that know what they're doing. One more thing: maybe some of you will ask "Why settle on one language?". To answer this: I would like to choose only one language, in order to try to master it. | You probably want Ruby, because it's the native language for Metasploit, which is the de facto standard open source penetration testing framework. Ruby's going to give you: Metasploit's framework, opcode and shellcode databases Metasploit's Ruby lorcon bindings for raw 802.11 work. Metasploit's KARMA bindings for 802.11 clientside redirection. Libcurl and net/http for web tool writing. EventMachine for web proxy and fuzzing work (or RFuzz, which extends the well-known Mongrel webserver). Metasm for shellcode generation. Distorm for x86 disassembly. BinData for binary file format fuzzing. Second place here goes to Python. There are more pentesting libraries available in Python than in Ruby (but not enough to offset Metasploit). Commercial tools tend to support Python as well --- if you're an Immunity CANVAS or CORE Impact customer, you want Python. Python gives you: Twisted for network access. PaiMei for program tracing and programmable debugging. CANVAS and Impact support. Dornseif's firewire libraries for remote debugging. 
Ready integration with WinDbg for remote Windows kernel debugging (there's still no good answer in Ruby for kernel debugging, which is why I still occasionally use Python). Peach Fuzzer and Sully for fuzzing. SpikeProxy for web penetration testing (also, OWASP Pantera ). Unsurprisingly, a lot of web work uses Java tools. The de facto standard web pentest tool is Burp Suite, which is a Java swing app. Both Ruby and Python have Java variants you can use to get access to tools like that. Also, both Ruby and Python offer: Direct integration with libpcap for raw packet work. OpenSSL bindings for crypto. IDA Pro extensions. Mature (or at least reasonable) C foreign function interfaces for API access. WxWindows for UI work, and decent web stacks for web UIs. You're not going to go wrong with either language, though for mainstream pentest work, Metasploit probably edges out all the Python benefits, and at present, for x86 reversing work, Python's superior debugging interfaces edge out all the Ruby benefits. Also: it's 2008. They're not "scripting languages". They're programming languages. ;) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/76408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11234/"
]
} |
76,412 | When developing WPF UserControls, what is the best way to expose a DependencyProperty of a child control as a DependencyProperty of the UserControl? The following example shows how I would currently expose the Text property of a TextBox inside a UserControl. Surely there is a better / simpler way to accomplish this? <UserControl x:Class="WpfApplication3.UserControl1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <StackPanel Background="LightCyan"> <TextBox Margin="8" Text="{Binding Text, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type UserControl}}}" /> </StackPanel> </UserControl> using System; using System.Windows; using System.Windows.Controls; namespace WpfApplication3 { public partial class UserControl1 : UserControl { public static DependencyProperty TextProperty = DependencyProperty.Register("Text", typeof(string), typeof(UserControl1), new PropertyMetadata(null)); public string Text { get { return GetValue(TextProperty) as string; } set { SetValue(TextProperty, value); } } public UserControl1() { InitializeComponent(); } } } | That is how we're doing it in our team, without the RelativeSource search, rather by naming the UserControl and referencing properties by the UserControl's name. <UserControl x:Class="WpfApplication3.UserControl1" x:Name="UserControl1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <StackPanel Background="LightCyan"> <TextBox Margin="8" Text="{Binding Path=Text, ElementName=UserControl1}" /> </StackPanel></UserControl> Sometimes we've found ourselves making too many things UserControl's though, and have often times scaled back our usage. I'd also follow the tradition of naming things like that textbox along the lines of PART_TextDisplay or something, so that in the future you could template it out yet keep the code-behind the same. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/317/"
]
} |
76,455 | In C#.NET I am trying to programmatically change the color of the border in a group box. Update: This question was asked when I was working on a WinForms system before we switched to .NET. | Building on the previous answer, a better solution that includes the label for the group box:

groupBox1.Paint += PaintBorderlessGroupBox;

private void PaintBorderlessGroupBox(object sender, PaintEventArgs p)
{
    GroupBox box = (GroupBox)sender;
    p.Graphics.Clear(SystemColors.Control);
    p.Graphics.DrawString(box.Text, box.Font, Brushes.Black, 0, 0);
}

You might want to adjust the x/y for the text, but for my use this is just right. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/76455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/300930/"
]
} |
76,464 | I'd like to create a module in DNN that, similar to the Announcements control, offers a template that the portal admin can modify for formatting. I have a control that currently uses a Repeater control with templates. Is there a way to override the contents of the repeater ItemTemplate, HeaderTemplate, and FooterTemplate properties? | | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/76464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13100/"
]
} |
76,482 | I have a file saved as UCS-2 Little Endian I want to change the encoding so I ran the following code: cat tmp.log -encoding UTF8 > new.log The resulting file is still in UCS-2 Little Endian. Is this because the pipeline is always in that format? Is there an easy way to pipe this to a new file as UTF8? | As suggested here : Get-Content tmp.log | Out-File -Encoding UTF8 new.log | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/76482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2582/"
]
} |
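For comparison, the same re-encoding step outside PowerShell: a Python sketch (file names hypothetical) that reads a UTF-16 LE file and writes it back out as UTF-8:

```python
import os
import tempfile

def convert_utf16_to_utf8(src_path, dst_path):
    """Re-encode a UTF-16 ("UCS-2 LE" with BOM) text file as UTF-8."""
    with open(src_path, encoding="utf-16") as src:  # the utf-16 codec honours the BOM
        text = src.read()
    with open(dst_path, "w", encoding="utf-8") as dst:
        dst.write(text)
    return text

# Round-trip demo: create a UTF-16 file the way Windows tools typically do.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "tmp.log")
dst = os.path.join(tmpdir, "new.log")
with open(src, "w", encoding="utf-16") as f:  # Python's utf-16 codec writes a BOM
    f.write("hello log")
converted = convert_utf16_to_utf8(src, dst)
```

The point is the same as in the accepted answer: the text must be decoded from its source encoding and explicitly re-encoded on output, rather than copied byte-for-byte.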
76,488 | Can't find anything relevant about Entity Framework/MySQL on Google so I'm hoping someone knows about it. | It's been released - Get the MySQL connector for .Net v6.5 - this has support for [Entity Framework] I was waiting for this the whole time, although the support is basic, works for most basic scenarios of db interaction. It also has basic Visual Studio integration. UPDATE http://dev.mysql.com/downloads/connector/net/ Starting with version 6.7, Connector/Net will no longer include the MySQL for Visual Studio integration. That functionality is now available in a separate product called MySQL for Visual Studio available using the MySQL Installer for Windows (see http://dev.mysql.com/tech-resources/articles/mysql-installer-for-windows.html ). | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/76488",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13594/"
]
} |
76,522 | We make infrastructure services (data retrieval and storage) and small smart client applications (fancy reporting mostly) for a commercial bank. Our team is large, 40 odd contractual employees that are C# .NET programmers. We support 50 odd applications and systems that we have developed. A few members of the team began making WPF , WF and WCF based applications. Given that they are the first, most members do not understand these technologies. What benefits do they convey that would overcome the cost of retraining the team? | WPF UI's are easier to design implement and maintain than the current C# alternatives, so if a lot of your codebase is responsible for handling UI, migrating may serve beneficial-- as in, you'll find your team will save time in dealing with their UI layer. If most of your code is business logic, it won't help all that much. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
76,526 | Under what circumstances - if any - does adding programmers to a team actually speed development of an already late project? | The exact circumstances are obviously very specific to your project ( e.g. development team, management style, process maturity, difficulty of the subject matter, etc.). In order to scope this a bit better so we can speak about it in anything but sweeping oversimplifications, I'm going to restate your question: Under what circumstances, if any, can adding team members to a software development project that is running late result in a reduction in the actual ship date with a level of quality equal to that if the existing team were allow to work until completion? There are a number of things that I think are necessary , but not sufficient, for this to occur (in no particular order): The proposed individuals to be added to the project must have: At least a reasonable understanding of the problem domain of the project Be proficient in the language of the project and the specific technologies that they would use for the tasks they would be given Their proficiency must /not/ be much less or much greater than the weakest or strongest existing member respectively. Weak members will drain your existing staff with tertiary problems while a new person who is too strong will disrupt the team with how everything they have done and are doing is wrong. Have good communication skills Be highly motivated (e.g. 
be able to work independently without prodding) The existing team members must have: Excellent communication skills Excellent time management skills The project lead/management must have: Good prioritization and resource allocation abilities A high level of respect from the existing team members Excellent communication skills The project must have: A good, completed, and documented software design specification Good documentation of things already implemented A modular design to allow clear chunks of responsibility to be carved out Sufficient automated processes for quality assurance for the required defect level (these might include such things as: unit tests, regression tests, automated build deployments, etc.) A bug/feature tracking system that is currently in-place and in-use by the team (e.g. trac, SourceForge, FogBugz, etc.). One of the first things that should be discussed is whether the ship date can be slipped, whether features can be cut, and if some combination of the two will allow you to satisfy release with your existing staff. Many times it's a couple of features that are really hogging the resources of the team that won't deliver value equal to the investment. So give your project's priorities a serious review before anything else. If the outcome of the above paragraph isn't sufficient, then visit the list above. If you caught the schedule slip early, the addition of the right team members at the right time may save the release. Unfortunately, the closer you get to your expected ship date, the more things can go wrong with adding people. At some point, you'll cross the "point of no return" where no amount of change (other than shipping the current development branch) can save your release. I could go on and on but I think I hit the major points. Outside of the project and in terms of your career, the company's future success, etc.
one of the things that you should definitely do is figure out why you were late, if anything could have been done to alert you earlier, and what measures you need to take to prevent it in the future. A late project usually occurs because you either were late before you started (more stuff than time) and/or slipped an hour here, a day there. Hope that helps! | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/76526",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4200/"
]
} |
76,549 | An array of ints in java is stored as a block of 32-bit values in memory. How is an array of Integer objects stored? i.e. int[] vs. Integer[] I'd imagine that each element in the Integer array is a reference to an Integer object, and that the Integer object has object storage overheads, just like any other object. I'm hoping however that the JVM does some magical cleverness under the hood given that Integers are immutable and stores it just like an array of ints. Is my hope woefully naive? Is an Integer array much slower than an int array in an application where every last ounce of performance matters? | No VM I know of will store an Integer[] array like an int[] array for the following reasons: There can be null Integer objects in the array and you have no bits left for indicating this in an int array. The VM could store this 1-bit information per array slot in a hidden bit-array though. You can synchronize on the elements of an Integer array. This is much harder to overcome than the first point, since you would have to store a monitor object for each array slot. The elements of Integer[] can be compared for identity. You could for example create two Integer objects with the value 1 via new and store them in different array slots and later you retrieve them and compare them via ==. This must lead to false, so you would have to store this information somewhere. Or you keep a reference to one of the Integer objects somewhere and use this for comparison and you have to make sure one of the == comparisons is false and one true. This means the whole concept of object identity is quite hard to handle for the optimized Integer array. You can cast an Integer[] to e.g. Object[] and pass it to methods expecting just an Object[]. This means all the code which handles Object[] must now be able to handle the special Integer[] object too, making it slower and larger.
Taking all this into account, it would probably be possible to make a special Integer[] which saves some space in comparison to a naive implementation, but the additional complexity will likely affect a lot of other code, making it slower in the end. The overhead of using Integer[] instead of int[] can be quite large in space and time. On a typical 32 bit VM an Integer object will consume 16 bytes (8 bytes for the object header, 4 for the payload and 4 additional bytes for alignment) while the Integer[] uses as much space as int[]. In 64 bit VMs (using 64bit pointers, which is not always the case) an Integer object will consume 24 bytes (16 for the header, 4 for the payload and 4 for alignment). In addition a slot in the Integer[] will use 8 bytes instead of 4 as in the int[]. This means you can expect an overhead of 16 to 28 bytes per slot, which is a factor of 4 to 7 compared to plain int arrays. The performance overhead can be significant too, for mainly two reasons: Since you use more memory, you put much more pressure on the memory subsystem, making it more likely to have cache misses in the case of Integer[]. For example if you traverse the contents of the int[] in a linear manner, the cache will have most of the entries already fetched when you need them (since the layout is linear too). But in the case of the Integer array, the Integer objects themselves might be scattered randomly in the heap, making it hard for the cache to guess where the next memory reference will point to. The garbage collection has to do much more work because of the additional memory used and because it has to scan and move each Integer object separately, while in the case of int[] it is just one object and the contents of the object don't have to be scanned (they contain no reference to other objects).
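The boxing overhead described above is easy to demonstrate empirically in other runtimes too. As a rough CPython analogy (illustrative only; the byte counts are CPython implementation details and build-dependent, not the JVM figures quoted above), compare a plain list of int objects with a packed array from the standard array module:

```python
import sys
from array import array

n = 1_000
boxed = list(range(n))           # like Integer[]: each slot holds a reference to an int object
packed = array('i', range(n))    # like int[]: contiguous 32-bit values

# Each list slot is a pointer (typically 8 bytes on 64-bit builds), and every
# distinct int object adds its own header and payload on top of that.
print("pointer bytes per boxed slot:", (sys.getsizeof(boxed) - sys.getsizeof([])) / n)
print("object bytes for one small int:", sys.getsizeof(1))
print("payload bytes per packed slot:", packed.itemsize)
```

On a typical 64-bit CPython the boxed form costs several times more memory per element, mirroring the factor-of-4-to-7 estimate given for the JVM.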
To sum it up, using an int[] in performance critical work will be both much faster and more memory efficient than using an Integer array in current VMs, and it is unlikely this will change much in the near future. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76549",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/974/"
]
} |
76,564 | All I want is to be able to change the color of a bullet in a list to a light gray. It defaults to black, and I can't figure out how to change it. I know I could just use an image; I'd rather not do that if I can help it. | The bullet gets its color from the text. So if you want to have a different color bullet than text in your list you'll have to add some markup. Wrap the list text in a span: <ul> <li><span>item #1</span></li> <li><span>item #2</span></li> <li><span>item #3</span></li></ul> Then modify your style rules slightly: li { color: red; /* bullet color */}li span { color: black; /* text color */} | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/76564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7072/"
]
} |
76,595 | Is REST a better approach to doing Web Services or is SOAP? Or are they different tools for different problems? Or is it a nuanced issue - that is, is one slightly better in certain arenas than another, etc? I would especially appreciate information about those concepts and their relation to the PHP-universe and also modern high-end web-applications. | I built one of the first SOAP servers, including code generation and WSDL generation, from the original spec as it was being developed, when I was working at Hewlett-Packard. I do NOT recommend using SOAP for anything. The acronym "SOAP" is a lie. It is not Simple, it is not Object-oriented, it defines no Access rules. It is, arguably, a Protocol. It is Don Box's worst spec ever, and that's quite a feat, as he's the man who perpetrated "COM". There is nothing useful in SOAP that can't be done with REST for transport, and JSON, XML, or even plain text for data representation. For transport security, you can use https. For authentication, basic auth. For sessions, there's cookies. The REST version will be simpler, clearer, run faster, and use less bandwidth. XML-RPC clearly defines the request, response, and error protocols, and there are good libraries for most languages. However, XML is heavier than you need for many tasks. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/76595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13276/"
]
} |
76,624 | Is there a way to have a 64 bit enum in C++? Whilst refactoring some code I came across bunch of #defines which would be better as an enum, but being greater than 32 bit causes the compiler to error. For some reason I thought the following might work: enum MY_ENUM : unsigned __int64 { LARGE_VALUE = 0x1000000000000000, }; | I don't think that's possible with C++98. The underlying representation of enums is up to the compiler. In that case, you are better off using: const __int64 LARGE_VALUE = 0x1000000000000000L; As of C++11, it is possible to use enum classes to specify the base type of the enum: enum class MY_ENUM : unsigned __int64 { LARGE_VALUE = 0x1000000000000000ULL}; In addition enum classes introduce a new name scope. So instead of referring to LARGE_VALUE , you would reference MY_ENUM::LARGE_VALUE . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9236/"
]
} |
76,700 | I'm looking for a little shell script that will take anything piped into it, and dump it to a file.. for email debugging purposes. Any ideas? | The unix command tee does this. man tee | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/76700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12624/"
]
} |
76,712 | PHP stores its session information on the file system of the host of the server establishing that session. In a multiple-host PHP environment, where load is unintelligently distributed amongst each host, PHP session variables are not available to each request (unless by chance the request is assigned to the same host -- assume we have no control over the load balancer). This site, dubbed "The Hitchhikers Guide to PHP Load Balancing" suggests overriding PHP's session handler and storing session information in the shared database. What, in your humble opinion, is the best way to maintain session information in a multiple PHP host environment? UPDATE: Thanks for the great feedback. For anyone looking for example code, we found a useful tutorial on writing a Session Manager class for MySQL which I recommend checking out. | Database, or Database+Memcache. Generally speaking sessions should not be written to very often. Start with a database solution that only writes to the db when the session data has changed. Memcache should be added later as a performance enhancement. A db solution will be very fast because you are only ever looking up primary keys. Make sure the db has row locking, not table locking (as in MyISAM). Memcache alone is a bad idea... If it overflows, crashes, or is restarted, the users will be logged out. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76712",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4330/"
]
} |
76,760 | fellow anthropoids and lily pads and paddlewheels! I'm developing a Windows desktop app in C#/.NET/WPF, using VS 2008. The app is required to install and run on Vista and XP machines. I'm working on a Setup/Windows Installer Project to install the app. My app requires read/modify/write access to a SQLCE database file (.sdf) and some other database-type files related to a third-party control I'm using. These files should be shared among all users/log-ins on the PC, none of which can be required to be an Administrator. This means, of course, that the files can't go in the program's own installation directory (as such things often did before the arrival of Vista, yes, yes!). I had expected the solution to be simple. Vista and XP both have shared-application-data folders intended for this purpose. ("\ProgramData" in Vista, "\Documents and Settings\All Users\Application Data" in XP.) The .NET Environment.GetFolderPath(SpecialFolder.CommonApplicationData) call exists to find the paths to these folders on a given PC, yes, yes! But I can't figure out how to specify the shared-application-data folder as a target in the Setup project. The Setup project offers a "Common Files" folder, but that's intended for shared program components (not data files), is usually located under "\Program Files," and has the same security restrictions anything else in "\Program files" does, yes, yes! The Setup project offers a "User's Application Data" folder, but that's a per-user folder, which is exactly what I'm trying to avoid, yes, yes! Is it possible to add files to the shared-app-data folder in a robust, cross-Windows-version way from a VS 2008 setup project? Can anyone tell me how? | I have learned the answer to my question through other sources, yes, yes! Sadly, it didn't fix my problem! What's that make me -- a fixer-upper? Yes, yes! 
To put stuff in a sub-directory of the Common Application Data folder from a VS2008 Setup project, here's what you do: Right-click your setup project in the Solution Explorer and pick "View -> File System". Right-click "File system on target machine" and pick "Add Special Folder -> Custom Folder". Rename the custom folder to "Common Application Data Folder." (This isn't the name that will be used for the resulting folder, it's just to help you keep it straight.) Change the folder's DefaultLocation property to "[CommonAppDataFolder][Manufacturer]\[ProductName]". Note the similarity with the DefaultLocation property of the Application Folder, including the odd use of a single backslash. Marvel for a moment at the ridiculous (yet undeniable) fact that there is a folder property named "Property." Change the folder's Property property to "COMMONAPPDATAFOLDER". Data files placed in the "Common Application Data" folder will be copied to "\ProgramData\Manufacturer\ProductName" (on Vista) or "\Documents and Settings\All Users\Application Data\Manufacturer\ProductName" (on XP) when the installer is run. Now it turns out that under Vista, non-Administrators don't get modify/write access to the files in here. So all users get to read the files, but they get that in "\Program Files" as well. So what, I wonder, is the point of the Common Application Data folder? | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13726/"
]
} |
76,796 | What are some general tips to make sure I don't leak memory in C++ programs? How do I figure out who should free memory that has been dynamically allocated? | I thoroughly endorse all the advice about RAII and smart pointers, but I'd also like to add a slightly higher-level tip: the easiest memory to manage is the memory you never allocated. Unlike languages like C# and Java, where pretty much everything is a reference, in C++ you should put objects on the stack whenever you can. As I've see several people (including Dr Stroustrup) point out, the main reason why garbage collection has never been popular in C++ is that well-written C++ doesn't produce much garbage in the first place. Don't write Object* x = new Object; or even shared_ptr<Object> x(new Object); when you can just write Object x; | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/76796",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
76,809 | Is anyone out there* using web2py ? Specifically: In production? With what database? With Google Application Engine? by "out there" I mean at stackoverflow. | You are welcome to ask the same question on the google group . You will find more than 500 users there and some of them are development companies building projects for their clients. My impression is that most of them use postgresql (that's what I do to) and some others use the Google App Engine. In fact web2py is the only framework that allows you to write code once and the same code will run on GAE, SQLite, MySQL, PostgreSQL, Oracle, MSSQL and FireBird (with the limitations imposed by GAE). You can find the Reddish (reddit clone) appliance with source code for GAE here Here you can find links to some productions app. Some are running on GAE. @Armin: Nothing is wrong with Django or Pylons. They are excellent frameworks. I have used them before developing web2py. There are a few things you can do with web2py that you cannot with them. For example: web2py does distributed transactions with Postgresql, Armin requested this feature. the Django ORM does not do migrations natively (see South ), web2py does. the Django ORM does not allow partial sums (count(field)) and group by, web2py does. web2py can connect to multiple databases at once, Django and Pylons need to be hacked to do that, and web2py has a configuration file at the app, not at the project level, like them. webp2y logs all tracebacks server side for the administrator, Django and Pylons do not. web2py programs often run on GAE unmodified. web2py has built-in xmlrpc web services. web2py comes with jQuery. There are many things that web2py does better (using a more coherent API) and faster (processing templates and generating SQL for example). web2py is also very compact (all modules fit in 265K bytes) and therefore it is much easier to maintain than those competing projects. 
You only have to learn Python and 81 new functions/classes (50 of which have the same names and attributes as corresponding HTML tags, BR, DIV, SPAN, etc., and 19 are validators, IS_IN_SET, IS_INT_IN_RANGE, etc.). Anyway, the most important issue is that web2py is easier than Django, Pylons, PHP and Rails. You will also notice that web2py is hosted on both Google Code and Launchpad and there are no open tickets. All past issues have been resolved in less than 24 hours. You can also check on the google mailing list that all threads (10056 messages today) ended up with an answer from me or one of the other developers within 24 hours. You can find a book on web2py on Amazon. Armin, I know you are the developer of Jinja. I like Jinja, but we have different design philosophies. Both Django and Jinja define their own template languages (and Jinja in particular has excellent documentation) but I do prefer to use pure Python in templates so that my users do not need to learn a template language at all. I am well aware of the pros and cons of each approach. Let the users decide what they prefer. No need to criticize each other. @Andre: db.table.field refers to the field object. 'table.field' is a field name. You can always pass a field object when a field name is required because str(db.table.field) is 'table.field'. The only case you are required to use a string instead of an object is when you need to reference by name a field that has not already been defined... perhaps we should move this discussion to the proper place. ;-) I hope you will decide to give web2py a try and, whether you like it or not, I would love to hear your opinion. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/76809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/479/"
]
} |
76,812 | What factors determine which approach is more appropriate? | I think both have their places. You shouldn't simply use DoSomethingToThing(Thing n) just because you think "Functional programming is good". Likewise you shouldn't simply use Thing.DoSomething() because "Object Oriented programming is good". I think it comes down to what you are trying to convey. Stop thinking about your code as a series of instructions, and start thinking about it like a paragraph or sentence of a story. Think about which parts are the most important from the point of view of the task at hand. For example, if the part of the 'sentence' you would like to stress is the object, you should use the OO style. Example: fileHandle.close(); Most of the time when you're passing around file handles, the main thing you are thinking about is keeping track of the file it represents. CounterExample: string x = "Hello World";submitHttpRequest( x ); In this case submitting the HTTP request is far more important than the string which is the body, so submitHttpRequest(x) is preferable to x.submitViaHttp()
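The same sentence-emphasis idea can be sketched in Python; the class and function names below are invented purely for illustration (toy stand-ins, not real file or HTTP APIs): a method call when the object is the emphasis, a free function when the action is.

```python
class FileHandle:
    """Toy stand-in for a real file handle (hypothetical)."""
    def __init__(self):
        self.is_open = True

    def close(self):
        # OO style: the subject of the sentence is the object itself.
        self.is_open = False


def submit_http_request(body):
    # Function style: the action is the emphasis; the body is just an argument.
    # (Toy implementation: it only formats what would be sent.)
    return "POST / HTTP/1.1\r\n\r\n" + body


handle = FileHandle()
handle.close()                           # reads as "close the file handle"
print(handle.is_open)                    # False

request = submit_http_request("Hello World")
print(request.endswith("Hello World"))   # True
```

Either spelling does the same work; the choice only changes which part of the "sentence" the reader sees first.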
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76812",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/337/"
]
} |
76,870 | I want to load a different properties file based upon one variable. Basically, if doing a dev build use this properties file, if doing a test build use this other properties file, and if doing a production build use yet a third properties file. | Step 1: Define a property in your NAnt script to track the environment you're building for (local, test, production, etc.). <property name="environment" value="local" /> Step 2: If you don't already have a configuration or initialization target that all targets depend on, then create a configuration target, and make sure your other targets depend on it. <target name="config"> <!-- configuration logic goes here --></target><target name="buildmyproject" depends="config"> <!-- this target builds your project, but runs the config target first --></target> Step 3: Update your configuration target to pull in an appropriate properties file based on the environment property. <target name="config"> <property name="configFile" value="${environment}.config.xml" /> <if test="${file::exists(configFile)}"> <echo message="Loading ${configFile}..." /> <include buildfile="${configFile}" /> </if> <if test="${not file::exists(configFile) and environment != 'local'}"> <fail message="Configuration file '${configFile}' could not be found." /> </if></target> Note, I like to allow team members to define their own local.config.xml files that don't get committed to source control. This provides a nice place to store local connection strings or other local environment settings. Step 4: Set the environment property when you invoke NAnt, e.g.: nant -D:environment=dev nant -D:environment=test nant -D:environment=production | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/76870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9052/"
]
} |
76,882 | I have written a lot of code in Python, and I am very used to the syntax, object structure, and so forth of Python because of it. What is the best online guide or resource site to provide me with the basics, as well as a comparison or lookup guide with equivalent functions/features in VBA versus Python. For example, I am having trouble equating a simple List in Python to VBA code. I am also have issues with data structures, such as dictionaries, and so forth. What resources or tutorials are available that will provide me with a guide to porting python functionality to VBA, or just adapting to the VBA syntax from a strong OOP language background? | VBA is quite different from Python, so you should read at least the "Microsoft Visual Basic Help" as provided by the application you are going to use (Excel, Access…). Generally speaking, VBA has the equivalent of Python modules; they're called "Libraries", and they are not as easy to create as Python modules. I mention them because Libraries will provide you with higher-level types that you can use. As a start-up nudge, there are two types that can be substituted for list and dict . list VBA has the type Collection . It's available by default (it's in the library VBA ). So you just do a dim alist as New Collection and from then on, you can use its methods/properties: .Add(item) ( list.append(item) ), .Count ( len(list) ), .Item(i) ( list[i] ) and .Remove(i) ( del list[i] ). Very primitive, but it's there. You can also use the VBA Array type, which like python arrays are lists of same-type items, and unlike python arrays, you need to do ReDim to change their size (i.e. you can't just append and remove items) dict To have a dictionary-like object, you should add the Scripting library to your VBA project¹. 
Afterwards, you can Dim adict As New Dictionary and then use its properties/methods: .Add(key, item) ( dict[key] = item ), .Exists(key) ( dict.has_key(key) ), .Items() ( dict.values() ), .Keys() ( dict.keys() ), and others which you will find in the Object Browser². ¹ Open VBA editor (Alt+F11). Go to Tools→References, and check the "Microsoft Scripting Runtime" in the list. ² To see the Object Browser, in VBA editor press F2 (or View→Object Browser). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76882",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
76,939 | Is it possible to install the x86 Remote Debugger as a Service on a 64bit machine? I need to attach a debugger to managed code in a Session 0 process. The process runs 32bit but the debugger service that gets installed is 64bit and won't attach to the 32bit process. I tried creating the Service using the SC command, and was able to get the service to start, and verified that it was running in Task manager processes. However, when I tried to connect to it with visual studio, it said that the remote debugger monitor wasn't enabled. When I stopped the x86 service and started the x64 service, it was able to find the monitor, but still got an error. Here is the error when I try to use the remote debugger: Unable to attach to the process. The 64-bit version of the Visual Studio Remote Debugging Monitor (MSVSMON.EXE) cannot debug 32-bit processes or 32-bit dumps. Please use the 32-bit version instead. Here is the error when I try to attach locally: Attaching to a process in a different terminal server session is not supported on this computer. Try remote debugging to the machine and running the Microsoft Visual Studio Remote Debugging Monitor in the process's session. If I try to run the 32bit remote debugger as an application, it won't attach because the Remote Debugger is running in my session and not in session 0. | This works on my machine(TM) after installing rdbgsetup_x64.exe and going through the configuration wizard: sc stop msvsmon90 sc config msvsmon90 binPath= "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\Remote Debugger\x86\msvsmon.exe /service msvsmon90" sc start msvsmon90 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/76939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3291/"
]
} |
76,976 | Is it possible to get the progress of an XMLHttpRequest (bytes uploaded, bytes downloaded)? This would be useful to show a progress bar when the user is uploading a large file. The standard API doesn't seem to support it, but maybe there's some non-standard extension in any of the browsers out there? It seems like a pretty obvious feature to have after all, since the client knows how many bytes were uploaded/downloaded. note: I'm aware of the "poll the server for progress" alternative (it's what I'm doing right now). the main problem with this (other than the complicated server-side code) is that typically, while uploading a big file, the user's connection is completely hosed, because most ISPs offer poor upstream. So making extra requests is not as responsive as I'd hoped. I was hoping there'd be a way (maybe non-standard) to get this information, which the browser has at all times. | For the bytes uploaded it is quite easy. Just monitor the xhr.upload.onprogress event. The browser knows the size of the files it has to upload and the size of the uploaded data, so it can provide the progress info. For the bytes downloaded (when getting the info with xhr.responseText ), it is a little bit more difficult, because the browser doesn't know how many bytes will be sent in the server response. The only thing that the browser knows in this case is the size of the bytes it is receiving. There is a solution for this: it's sufficient to set a Content-Length header on the server script, in order to get the total size of the bytes the browser is going to receive. For more go to https://developer.mozilla.org/en/Using_XMLHttpRequest . Example: My server script reads a zip file (it takes 5 seconds): $filesize=filesize('test.zip');header("Content-Length: " .
$filesize); // set header length// if the header is not set then evt.loaded will be 0 readfile('test.zip');exit; Now I can monitor the download process of the server script, because I know its total length: function updateProgress(evt) { if (evt.lengthComputable) { // evt.loaded the bytes the browser received // evt.total the total bytes set by the header // jQuery UI progress bar to show the progress on screen var percentComplete = (evt.loaded / evt.total) * 100; $('#progressbar').progressbar( "option", "value", percentComplete ); } } function sendreq(evt) { var req = new XMLHttpRequest(); $('#progressbar').progressbar(); req.onprogress = updateProgress; req.open('GET', 'test.php', true); req.onreadystatechange = function (aEvt) { if (req.readyState == 4) { //run any callback here } }; req.send(); } | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/76976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
76,993 | How can I set the protected DoubleBuffered property of the controls on a form that are suffering from flicker? | Here's a more generic version of Dummy's solution . We can use reflection to get at the protected DoubleBuffered property, and then it can be set to true . Note : You should pay your developer taxes and not use double-buffering if the user is running in a terminal services session (e.g. Remote Desktop) This helper method will not turn on double buffering if the person is running in remote desktop. public static void SetDoubleBuffered(System.Windows.Forms.Control c){ //Taxes: Remote Desktop Connection and painting //http://blogs.msdn.com/oldnewthing/archive/2006/01/03/508694.aspx if (System.Windows.Forms.SystemInformation.TerminalServerSession) return; System.Reflection.PropertyInfo aProp = typeof(System.Windows.Forms.Control).GetProperty( "DoubleBuffered", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance); aProp.SetValue(c, true, null); } | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/76993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12597/"
]
} |
77,005 | I am working on Linux with the GCC compiler. When my C++ program crashes I would like it to automatically generate a stacktrace. My program is being run by many different users and it also runs on Linux, Windows and Macintosh (all versions are compiled using gcc ). I would like my program to be able to generate a stack trace when it crashes and the next time the user runs it, it will ask them if it is ok to send the stack trace to me so I can track down the problem. I can handle the sending the info to me but I don't know how to generate the trace string. Any ideas? | For Linux and I believe Mac OS X, if you're using gcc, or any compiler that uses glibc, you can use the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault. Documentation can be found in the libc manual . Here's an example program that installs a SIGSEGV handler and prints a stacktrace to stderr when it segfaults. The baz() function here causes the segfault that triggers the handler: #include <stdio.h>#include <execinfo.h>#include <signal.h>#include <stdlib.h>#include <unistd.h>void handler(int sig) { void *array[10]; size_t size; // get void*'s for all entries on the stack size = backtrace(array, 10); // print out all the frames to stderr fprintf(stderr, "Error: signal %d:\n", sig); backtrace_symbols_fd(array, size, STDERR_FILENO); exit(1);}void baz() { int *foo = (int*)-1; // make a bad pointer printf("%d\n", *foo); // causes segfault}void bar() { baz(); }void foo() { bar(); }int main(int argc, char **argv) { signal(SIGSEGV, handler); // install our handler foo(); // this will call foo, bar, and baz. 
baz segfaults.} Compiling with -g -rdynamic gets you symbol info in your output, which glibc can use to make a nice stacktrace: $ gcc -g -rdynamic ./test.c -o test Executing this gets you this output: $ ./test Error: signal 11: ./test(handler+0x19)[0x400911] /lib64/tls/libc.so.6[0x3a9b92e380] ./test(baz+0x14)[0x400962] ./test(bar+0xe)[0x400983] ./test(foo+0xe)[0x400993] ./test(main+0x28)[0x4009bd] /lib64/tls/libc.so.6(__libc_start_main+0xdb)[0x3a9b91c4bb] ./test[0x40086a] This shows the load module, offset, and function that each frame in the stack came from. Here you can see the signal handler on top of the stack, and the libc functions before main in addition to main, foo, bar, and baz. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/77005",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13676/"
]
} |
77,086 | Which is faster, python webpages or php webpages? Does anyone know how the speed of pylons (or any of the other frameworks) compares to a similar website made with php? I know that serving a python-based webpage via cgi is slower than php because of its long start up every time. I enjoy using pylons and I would still use it if it was slower than php. But if pylons was faster than php, I could maybe, hopefully, eventually convince my employer to allow me to convert the site over to pylons. | It sounds like you don't want to compare the two languages, but that you want to compare two web systems. This is tricky, because there are many variables involved. For example, Python web applications can take advantage of mod_wsgi to talk to web servers, which is faster than any of the typical ways that PHP talks to web servers (even mod_php ends up being slower if you're using Apache, because Apache can only use the Prefork MPM with mod_php rather than a multi-threaded MPM like Worker). There is also the issue of code compilation. As you know, Python is compiled just-in-time to byte code (.pyc files) when a file is run each time the file changes. Therefore, after the first run of a Python file, the compilation step is skipped and the Python interpreter simply fetches the precompiled .pyc file. Because of this, one could argue that Python has a native advantage over PHP. However, optimizers and caching systems can be installed for PHP websites (my favorite is eAccelerator) to much the same effect. In general, enough tools exist such that one can pretty much do everything that the other can do. Of course, as others have mentioned, there's more than just speed involved in the business case to switch languages. We have an app written in OCaml at my current employer, which turned out to be a mistake because the original author left the company and nobody else wants to touch it.
Similarly, the PHP-web community is much larger than the Python-web community; Website hosting services are more likely to offer PHP support than Python support; etc. But back to speed. You must recognize that the question of speed here involves many moving parts. Fortunately, many of these parts can be independently optimized, affording you various avenues to seek performance gains. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/77086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13778/"
]
} |
77,126 | What are some suggestions for easy to use C++ compilers for a beginner? Free or open-source ones would be preferred. | GCC is a good choice for simple things. Visual Studio Express edition is the free version of the major windows C++ compiler. If you are on Windows I would use VS. If you are on linux you should use GCC. *I say GCC for simple things because for a more complicated project the build process isn't so easy | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77126",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846/"
]
} |
77,127 | I have exceptions created for every condition that my application does not expect. UserNameNotValidException , PasswordNotCorrectException etc. However I was told I should not create exceptions for those conditions. In my UML those ARE exceptions to the main flow, so why should it not be an exception? Any guidance or best practices for creating exceptions? | My personal guideline is: an exception is thrown when a fundamental assumption of the current code block is found to be false. Example 1: say I have a function which is supposed to examine an arbitrary class and return true if that class inherits from List<>. This function asks the question, "Is this object a descendant of List?" This function should never throw an exception, because there are no gray areas in its operation - every single class either does or does not inherit from List<>, so the answer is always "yes" or "no". Example 2: say I have another function which examines a List<> and returns true if its length is more than 50, and false if the length is less. This function asks the question, "Does this list have more than 50 items?" But this question makes an assumption - it assumes that the object it is given is a list. If I hand it a NULL, then that assumption is false. In that case, if the function returns either true or false, then it is breaking its own rules. The function cannot return anything and claim that it answered the question correctly. So it doesn't return - it throws an exception. This is comparable to the "loaded question" logical fallacy. Every function asks a question. If the input it is given makes that question a fallacy, then throw an exception. This line is harder to draw with functions that return void, but the bottom line is: if the function's assumptions about its inputs are violated, it should throw an exception instead of returning normally. 
The other side of this equation is: if you find your functions throwing exceptions frequently, then you probably need to refine their assumptions. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/77127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/279750/"
]
} |
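The "loaded question" rule from Example 2 above can be sketched in a few lines of Java (a hypothetical illustration; the class name, method name, and exception type are my own choices, not taken from the answer):

```java
import java.util.List;

public class LoadedQuestion {
    // "Does this list have more than 50 items?" assumes we were given a list.
    // If that assumption is false, returning either true or false would be a
    // lie, so the method throws instead of returning.
    public static boolean hasMoreThanFifty(List<?> list) {
        if (list == null) {
            throw new IllegalArgumentException("list must not be null");
        }
        return list.size() > 50;
    }

    public static void main(String[] args) {
        System.out.println(hasMoreThanFifty(List.of(1, 2, 3))); // false
        try {
            hasMoreThanFifty(null);
        } catch (IllegalArgumentException e) {
            System.out.println("assumption violated: " + e.getMessage());
        }
    }
}
```

The point is not the specific exception type, but that the method refuses to answer a question whose premise is false.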
77,171 | After reading Evan's and Nilsson's books I am still not sure how to manage Data access in a domain driven project. Should the CRUD methods be part of the repositories, i.e. OrderRepository.GetOrdersByCustomer(customer) or should they be part of the entities: Customer.GetOrders(). The latter approach seems more OO, but it will distribute Data Access for a single entity type among multiple objects, i.e. Customer.GetOrders(), Invoice.GetOrders(), ShipmentBatch.GetOrders() ,etc. What about Inserting and updating? | CRUD-ish methods should be part of the Repository...ish. But I think you should ask why you have a bunch of CRUD methods. What do they really do? What are they really for? If you actually call out the data access patterns your application uses I think it makes the repository a lot more useful and keeps you from having to do shotgun surgery when certain types of changes happen to your domain. CustomerRepo.GetThoseWhoHaventPaidTheirBill()// orGetCustomer(new HaventPaidBillSpecification())// is better thanforeach (var customer in GetCustomer()) { /* logic leaking all over the floor */} "Save" type methods should also be part of the repository. If you have aggregate roots, this keeps you from having a Repository explosion, or having logic spread out all over: You don't have 4 x # of entities data access patterns, just the ones you actually use on the aggregate roots. That's my $.02. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2133/"
]
} |
77,172 | Do you guys keep track of stored procedures and database schema in your source control system of choice? When you make a change (add a table, update an stored proc, how do you get the changes into source control? We use SQL Server at work, and I've begun using darcs for versioning, but I'd be curious about general strategies as well as any handy tools. Edit: Wow, thanks for all the great suggestions, guys! I wish I could select more than one "Accepted Answer"! | We choose to script everything, and that includes all stored procedures and schema changes. No wysiwyg tools, and no fancy 'sync' programs are necessary. Schema changes are easy, all you need to do is create and maintain a single file for that version, including all schema and data changes. This becomes your conversion script from version x to x+1. You can then run it against a production backup and integrate that into your 'daily build' to verify that it works without errors. Note it's important not to change or delete already written schema / data loading sql as you can end up breaking any sql written later. -- change #1234ALTER TABLE asdf ADD COLUMN MyNewID INTGO-- change #5678ALTER TABLE asdf DROP COLUMN SomeOtherIDGO For stored procedures, we elect for a single file per sproc, and it uses the drop/create form. All stored procedures are recreated at deployment. The downside is that if a change was done outside source control, the change is lost. At the same time, that's true for any code, but your DBA'a need to be aware of this. This really stops people outside the team mucking with your stored procedures, as their changes are lost in an upgrade. 
Using Sql Server, the syntax looks like this: if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[usp_MyProc]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)drop procedure [usp_MyProc]GOCREATE PROCEDURE [usp_MyProc]( @UserID INT)ASSET NOCOUNT ON-- stored procedure logic.SET NOCOUNT OFFGO The only thing left to do is write a utility program that collates all the individual files and creates a new file with the entire set of updates (as a single script). Do this by first adding the schema changes then recursing the directory structure and including all the stored procedure files. As an upside to scripting everything, you'll become much better at reading and writing SQL. You can also make this entire process more elaborate, but this is the basic format of how to source-control all sql without any special software. addendum: Rick is correct that you will lose permissions on stored procedures with DROP/CREATE, so you may need to write another script will re-enable specific permissions. This permission script would be the last to run. Our experience found more issues with ALTER verses DROP/CREATE semantics. YMMV | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/77172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7856/"
]
} |
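The "utility program that collates all the individual files" mentioned above might look roughly like this (a hypothetical sketch in Python; the file layout and function name are my own assumptions, not the author's actual tool):

```python
from pathlib import Path

def collate(schema_script: str, sproc_dir: str) -> str:
    """Build one deployment script: schema changes first, then every
    stored-procedure .sql file found under sproc_dir, joined as GO batches."""
    parts = [Path(schema_script).read_text()]
    for sql_file in sorted(Path(sproc_dir).rglob("*.sql")):
        parts.append(sql_file.read_text())
    return "\nGO\n".join(parts)
```

Running this as part of the daily build yields the single conversion script described above, ready to be tested against a production backup.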
77,193 | You've just written a pile of code to deliver some important feature under pressure. You've cut a few corners, you've mashed some code into some over-bloated classes with names like SerialIndirectionShutoffManager.. You tell your boss you're going to need a week to clean this stuff up. "Clean what up?" "My code - its a pigsty!" "You mean there's some more bug fixing?" "Not really, its more like.." "You're gonna make it run faster?" "Perhaps, buts thats not.." "Then you should have written it properly when you had the chance. Now I'm glad you're here, yeah, I'm gonna have to go ahead and ask you to come in this weekend.. " I've read Matin Fowler's book, but I'm not sure I agree with his advice on this matter: Encourage regular code reviews, so refactoring work is encouraged as a natural part of the development process. Just don't tell, you're the developer and its part of your duty. Both these methods squirm out of the need to communicate with your manager. What do you tell your boss? | It's important to include refactoring time in your original estimates. Going to your boss after you've delivered the product and then telling him that you're not actually done is lying about being done. You didn't actually make the deliverable deadline. It's like a surgeon doing surgery and then not making sure he put everything back the way it was supposed to be. It is important to include all the parts of development (e.g. refactoring, usability research, testing, QA, revisions) in your original schedules. Ultimately this isn't so much a management problem as a programmer problem. If, however, you've inherited a mess then you will have to explain to the boss that the last set of programmers in a rush to get the project out the door cut corners and that it's been limping along. You can band-aid the problem for awhile (as they likely did), but each band-aid just delays the problem and ultimately makes the problem that much more expensive to fix. 
Be honest with your boss and understand that a project isn't done until it's done. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/77193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13899/"
]
} |
77,213 | I have a large number of Enums that implement this interface: /** * Interface for an enumeration, each element of which can be uniquely identified by its code */public interface CodableEnum { /** * Get the element with a particular code * @param code * @return */ public CodableEnum getByCode(String code); /** * Get the code that identifies an element of the enum * @return */ public String getCode();} A typical example is: public enum IMType implements CodableEnum { MSN_MESSENGER("msn_messenger"), GOOGLE_TALK("google_talk"), SKYPE("skype"), YAHOO_MESSENGER("yahoo_messenger"); private final String code; IMType (String code) { this.code = code; } public String getCode() { return code; } public IMType getByCode(String code) { for (IMType e : IMType.values()) { if (e.getCode().equalsIgnoreCase(code)) { return e; } } }} As you can imagine these methods are virtually identical in all implementations of CodableEnum. I would like to eliminate this duplication, but frankly don't know how. I tried using a class such as the following: public abstract class DefaultCodableEnum implements CodableEnum { private final String code; DefaultCodableEnum(String code) { this.code = code; } public String getCode() { return this.code; } public abstract CodableEnum getByCode(String code); } But this turns out to be fairly useless because: An enum cannot extend a class Elements of an enum (SKYPE, GOOGLE_TALK, etc.) cannot extend a class I cannot provide a default implementation of getByCode(), because DefaultCodableEnum is not itself an Enum. I tried changing DefaultCodableEnum to extend java.lang.Enum, but this doesn't appear to be allowed. 
Any suggestions that do not rely on reflection? Thanks, Don | You could factor the duplicated code into a CodeableEnumHelper class: public class CodeableEnumHelper { public static CodeableEnum getByCode(String code, CodeableEnum[] values) { for (CodeableEnum e : values) { if (e.getCode().equalsIgnoreCase(code)) { return e; } } return null; }} Each CodeableEnum class would still have to implement a getByCode method, but the actual implementation of the method has at least been centralized to a single place. public enum IMType implements CodeableEnum { ... public IMType getByCode(String code) { return (IMType)CodeableEnumHelper.getByCode(code, this.values()); } } | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2648/"
]
} |
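For reference, here is a self-contained variant of the helper idea above; giving the helper a generic signature so callers don't need the downcast is my own tweak, not part of the original answer:

```java
interface CodableEnum {
    String getCode();
}

class CodableEnumHelper {
    // Generic version of getByCode: the return type follows the array's
    // element type, so no cast is needed at the call site.
    static <E extends CodableEnum> E getByCode(String code, E[] values) {
        for (E e : values) {
            if (e.getCode().equalsIgnoreCase(code)) {
                return e;
            }
        }
        return null;
    }
}

enum IMType implements CodableEnum {
    SKYPE("skype"), GOOGLE_TALK("google_talk");

    private final String code;
    IMType(String code) { this.code = code; }
    public String getCode() { return code; }

    static IMType byCode(String code) {
        return CodableEnumHelper.getByCode(code, values());
    }
}

public class EnumDemo {
    public static void main(String[] args) {
        System.out.println(IMType.byCode("skype"));   // SKYPE
        System.out.println(IMType.byCode("missing")); // null
    }
}
```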
77,226 | What is the easiest way to capitalize the first letter in each word of a string? | As @brian mentions in the comments, the currently accepted answer by @piCookie is wrong! $_ = "what's the wrong answer?"; s/\b(\w)/\U$1/g; print; This will print "What'S The Wrong Answer?"; notice the wrongly capitalized S. As the FAQ says, you are probably better off using s/([\w']+)/\u\L$1/g or Text::Autoformat | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/77226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13912/"
]
} |
77,266 | Can I persuade operator>> in C++ to read both a hex value AND a decimal value? The following program demonstrates how reading hex goes wrong. I'd like the same istringstream to be able to read both hex and decimal. #include <iostream> #include <sstream> int main(int argc, char** argv){ int result = 0; // std::istringstream is("5"); // this works std::istringstream is("0x5"); // this fails while ( is.good() ) { if ( is.peek() != EOF ) is >> result; else break; } if ( is.fail() ) std::cout << "failed to read string" << std::endl; else std::cout << "successfully read string" << std::endl; std::cout << "result: " << result << std::endl;} | Use std::setbase(0), which enables prefix-dependent parsing. It will be able to parse 10 (dec) as 10 decimal, 0x10 (hex) as 16 decimal, and 010 (octal) as 8 decimal. #include <iomanip> is >> std::setbase(0) >> result; | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1359466/"
]
} |
77,278 | People also often ask "How can I compile Perl?" while what they really want is to create an executable that can run on machines even if they don't have Perl installed. There are several solutions, I know of: perl2exe of IndigoStarIt is commercial. I never tried. Its web site says it can cross compile Win32, Linux, and Solaris. Perl Dev Kit from ActiveState. It is commercial. I used it several years ago on Windows and it worked well for my needs. According to its web site it works on Windows, Mac OS X, Linux, Solaris, AIX and HP-UX. PAR or rather PAR::Packer that is free and open source. Based on the test reports it works on the Windows, Mac OS X, Linux, NetBSD and Solaris but theoretically it should work on other UNIX systems as well.Recently I have started to use PAR for packaging on Linux and will use it on Windows as well. Other recommended solutions? | In addition to the three tools listed in the question, there's another one called Cava Packager written by Mark Dootson, who has also contributed to PAR in the past. It only runs under Windows, has a nice Wx GUI and works differently from the typical three contenders in that it assembles all Perl dependencies in a source / lib directory instead of creating a single archive containing everything. There's a free version, but it's not Open Source. I haven't used this except for testing. As for PAR, it's really a toolkit. It comes with a packaging tool which does the dependency scanning and assembly of stand-alone executables, but it can also be used to generate and use so-called .par files, in analogy to Java's JARs. It also comes with client and server for automatically loading missing packages over the network, etc. The slides of my PAR talk at YAPC::EU 2008 go into more details on this.There's also an active mailing list: par at perl dot org. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11827/"
]
} |
77,280 | Let's say that I have a website with some information that can be accessed externally. That information needs to be changed only by the respective client. Example: a Google Analytics or WordPress API key. How can I create a system that works like that (no matter the programming language)? | A number of smart people are working on a standard, and it's called OAuth. It already has a number of sample implementations, so it's pretty easy to get started. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77280",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13913/"
]
} |
77,342 | The use of XSLT (XML Stylesheet Language Transform) has never seen the same popularity as many of the other languages that came out during the internet boom. While it is in use, and in some cases by large successful companies (e.g. Blizzard Entertainment), it has never seemed to reach mainstream. Why do you think this is? | One problem is that XSLT looks complicated. Any developer should be able to pick up the language constructs, as there are analogs in most other languages. The problem is that the constructs and data all look exactly the same, which makes it difficult to distinguish between the two and makes XSLT more difficult to read than other languages. A second issue is that the uses for it are more limited than other languages. XSLT is great at what it does: making complicated or radical transformations on XML. But it doesn't apply to as wide a range of problems as other languages, so it is not used as much. Third, many programming languages have their own libraries for transforming XML. Much of the time when working with XML, only small changes or lookups are needed. The XML is also probably being generated or consumed by a program the developer is already writing in another language. These factors mean that using a language's built-in utilities is just more convenient. Another problem that all of these issues contribute to is inertia. That is, people don't know it, they don't see that they have much need for it, so they avoid it as a solution if there is another option. What you end up with is a language that is the last choice of many developers when creating solutions. It is likely that XSLT is even avoided when it would be the best tool for the job as a result. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/77342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13930/"
]
} |
77,387 | In the Java collections framework, the Collection interface declares the following method: <T> T[] toArray(T[] a) Returns an array containing all of the elements in this collection; the runtime type of the returned array is that of the specified array. If the collection fits in the specified array, it is returned therein. Otherwise, a new array is allocated with the runtime type of the specified array and the size of this collection. If you wanted to implement this method, how would you create an array of the type of a , known only at runtime? | Use the static method java.lang.reflect.Array.newInstance(Class<?> componentType, int length) A tutorial on its use can be found here: http://java.sun.com/docs/books/tutorial/reflect/special/arrayInstance.html | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/77387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13979/"
]
} |
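A sketch of how a collection's toArray(T[] a) could be implemented with Array.newInstance (a hypothetical illustration written for this answer, not the actual JDK source):

```java
import java.lang.reflect.Array;

public class TypedArray {
    // Copy `elements` into `a` if it fits; otherwise allocate a new array
    // whose runtime component type matches that of `a`.
    @SuppressWarnings("unchecked")
    public static <T> T[] toArray(Object[] elements, T[] a) {
        if (a.length < elements.length) {
            a = (T[]) Array.newInstance(a.getClass().getComponentType(),
                                        elements.length);
        }
        System.arraycopy(elements, 0, a, 0, elements.length);
        if (a.length > elements.length) {
            a[elements.length] = null; // collection contract: null-terminate
        }
        return a;
    }

    public static void main(String[] args) {
        String[] out = toArray(new Object[] {"a", "b"}, new String[0]);
        System.out.println(out.getClass().getComponentType()); // class java.lang.String
        System.out.println(out.length);                        // 2
    }
}
```

The key line is the Array.newInstance call: the component type is taken from the caller's array at runtime, which is exactly what the Collection.toArray(T[]) contract requires.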
77,431 | I make a lot of web applications and from time to time I need a color picker. What's one that I can use like an API and doesn't require a lot of code to plug in? I also need it to work in all browsers. | Farbtastic is a nice jQuery color picker, but it apparently doesn't work in IE6. Here is another jQuery color picker that looks nice; not sure about its compatibility, though. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13978/"
]
} |
77,434 | Suppose I have a vector that is nested in a dataframe one or two levels. Is there a quick and dirty way to access the last value, without using the length() function? Something ala PERL's $# special var? So I would like something like: dat$vec1$vec2[$#] instead of dat$vec1$vec2[length(dat$vec1$vec2)] | I use the tail function: tail(vector, n=1) The nice thing with tail is that it works on dataframes too, unlike the x[length(x)] idiom. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/77434",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14008/"
]
} |
77,436 | I have an ant build that makes directories, calls javac and all the regular stuff. The issue I am having is that when I try to do a clean (delete all the stuff that was generated) the delete task reports that it was unable to delete some files. When I try to delete them manually it works just fine. The files are apparently not open by any other process but ant still does not manage to delete them. What can I do? | I encountered this problem once. It was because the file I tried to delete was part of the classpath for another task. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77436",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8621/"
]
} |
77,485 | What do folks here see as the relative strengths and weaknesses of Git, Mercurial, and Bazaar? In considering each of them with one another and against version control systems like SVN and Perforce, what issues should be considered? In planning a migration from SVN to one of these distributed version control systems, what factors would you consider? | Git is very fast, scales very well, and is very transparent about its concepts. The down side of this is that it has a relatively steep learning curve. A Win32 port is available, but not quite a first-class citizen. Git exposes hashes as version numbers to users; this provides guarantees (in that a single hash always refers to the exact same content; an attacker cannot modify history without being detected), but can be cumbersome to the user. Git has a unique concept of tracking file contents, even as those contents move between files, and views files as first-level objects, but does not track directories. Another issue with git is that has many operations (such as rebase ) which make it easy to modify history (in a sense -- the content referred to by a hash will never change, but references to that hash may be lost); some purists (myself included) don't like that very much. Bazaar is reasonably fast (very fast for trees with shallow history, but presently scales poorly with history length), and is easy-to-learn to those familiar with the command-line interfaces of traditional SCMs (CVS, SVN, etc). Win32 is considered a first-class target by its development team. It has a pluggable architecture for different components, and replaces its storage format frequently; this allows them to introduce new features (such as better support for integration with revision control systems based on different concepts) and improve performance. The Bazaar team considers directory tracking and rename support first-class functionality. 
While globally unique revision-id identifiers are available for all revisions, tree-local revnos (standard revision numbers, more akin to those used by svn or other more conventional SCMs) are used in place of content hashes for identifying revisions. Bazaar has support for "lightweight checkouts", in which history is kept on a remote server instead of copied down to the local system and is automatically referred to over the network when needed; at present, this is unique among DSCMs. Both have some form of SVN integration available; however, bzr-svn is considerably more capable than git-svn, largely due to backend format revisions introduced for that purpose. [Update, as of 2014: The third-party commercial product SubGit provides a bidirectional interface between SVN and Git which is comparable in fidelity to bzr-svn, and considerably more polished; I strongly recommend its use over that of git-svn when budget and licensing constraints permit]. I have not used Mercurial extensively, and so cannot comment on it in detail -- except to note that it, like Git, has content-hash addressing for revisions; also like Git, it does not treat directories as first-class objects (and cannot store an empty directory). It is, however, faster than any other DSCM except for Git, and has far better IDE integration (especially for Eclipse) than any of its competitors. Given its performance characteristics (which lag only slightly behind those of Git) and its superior cross-platform and IDE support, Mercurial may be compelling for teams with significant number of win32-centric or IDE-bound members. One concern in migrating from SVN is that SVN's GUI frontends and IDE integration are more mature than those of any of the distributed SCMs. Also, if you currently make heavy use of precommit script automation with SVN (ie. requiring unit tests to pass before a commit can proceed), you'll probably want to use a tool similar to PQM for automating merge requests to your shared branches. 
SVK is a DSCM which uses Subversion as its backing store, and has quite good integration with SVN-centric tools. However, it has dramatically worse performance and scalability characteristics than any other major DSCM (even Darcs), and should be avoided for projects which are liable to grow large in terms of either length of history or number of files. [About the author: I use Git and Perforce for work, and Bazaar for my personal projects and as an embedded library; other parts of my employer's organization use Mercurial heavily. In a previous life I built a great deal of automation around SVN; before that I have experience with GNU Arch, BitKeeper, CVS and others. Git was quite off-putting at first -- it felt like GNU Arch inasmuch as being a concept-heavy environment, as opposed to toolkits built to conform to the user's choice of workflows -- but I've since come to be quite comfortable with it]. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/77485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13750/"
]
} |
77,528 | I'm currently running Vista and I would like to manually complete the same operations as my Windows Service. Since the Windows Service is running under the Local System Account, I would like to emulate this same behavior. Basically, I would like to run CMD.EXE under the Local System Account. I found information online which suggests lauching the CMD.exe using the DOS Task Scheduler AT command, but I received a Vista warning that "due to security enhancements, this task will run at the time excepted but not interactively." Here's a sample command: AT 12:00 /interactive cmd.exe Another solution suggested creating a secondary Windows Service via the Service Control (sc.exe) which merely launches CMD.exe. C:\sc create RunCMDAsLSA binpath= "cmd" type=own type=interactC:\sc start RunCMDAsLSA In this case the service fails to start and results it the following error message: FAILED 1053: The service did not respond to the start or control request in a timely fashion. The third suggestion was to launch CMD.exe via a Scheduled Task. Though you may run scheduled tasks under various accounts, I don't believe the Local System Account is one of them. I've tried using the Runas as well, but think I'm running into the same restriction as found when running a scheduled task. Thus far, each of my attempts have ended in failure. Any suggestions? | Though I haven't personally tested, I have good reason to believe that the above stated AT COMMAND solution will work for XP, 2000 and Server 2003. Per my and Bryant's testing, we've identified that the same approach does not work with Vista or Windows Server 2008 -- most probably due to added security and the /interactive switch being deprecated. However, I came across this article which demonstrates the use of PSTools from SysInternals (which was acquired by Microsoft in July, 2006.) 
I launched the command line via the following and suddenly I was running under the Local System Account like magic: psexec -i -s cmd.exe PSTools works well. It's a lightweight, well-documented set of tools which provides an appropriate solution to my problem. Many thanks to those who offered help. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/77528",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4115/"
]
} |
77,531 | When I installed Windows Server 2008 I didn't have the (activation) key. Now that I have it I can't find where to enter it. Anybody know? | Go to Control Panel\System and then under Windows Activation click "Change Product Key". | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
]
} |
77,534 | When running wsdl.exe on a WSDL I created, I get this error: Error: Unable to import binding 'SomeBinding' from namespace 'SomeNS'. Unable to import operation 'someOperation'. These members may not be derived. I'm using the document-literal style, and to the best of my knowledge I'm following all the rules. To sum it up, I have a valid WSDL, but the tool doesn't like it. What I'm looking for is if someone has lots of experience with the wsdl.exe tool and knows about some secret gotcha that I don't. | I came across the same error message. After digging for a while, I found out that one can supply .xsd files in addition to the .wsdl file, so I included the imported .xsd files after the .wsdl at the end of the wsdl command as follows: wsdl.exe myWebService.wsdl myXsd1.xsd myType1.xsd myXsd2.xsd ... Wsdl gave some warnings but it did create an OK service interface. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/77534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5726/"
]
} |
77,535 | Has anyone found a way to get gcc to build/install on SCO6? With 2.95 and 4.3 I get to the point where it needs to use (2.95) or find (4.3) the assembler and that's where it fails. If anyone has figured this out I would appreciate the info! Thanks | | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/77535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14046/"
]
} |
77,552 | Why is it bad to name a variable id in Python? | id() is a fundamental built-in: Help on built-in function id in module __builtin__ : id(...) id(object) -> integer Return the identity of an object. This is guaranteed to be unique among simultaneously existing objects. (Hint: it's the object's memory address.) In general, using variable names that eclipse a keyword or built-in function in any language is a bad idea, even if it is allowed. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/77552",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5926/"
]
} |
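Following the id() answer above, a short Python sketch (purely illustrative) of what actually breaks when the built-in is shadowed:

```python
builtin_id = id   # keep a reference to the real built-in before shadowing it

id = 42           # rebinds the name, eclipsing the built-in function
try:
    id("hello")   # the name now refers to an int, not a function
except TypeError as err:
    print("calling id() now fails:", err)

id = builtin_id   # restore sanity (del id would also work at module level)
print(type(id("hello")) is int)  # True: id() is callable again
```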
77,632 | I thought I'd offer this softball to whomever would like to hit it out of the park. What are generics, what are the advantages of generics, why, where, how should I use them? Please keep it fairly basic. Thanks. | Allows you to write code/use library methods which are type-safe, i.e. a List<string> is guaranteed to be a list of strings. As a result of generics being used, the compiler can perform compile-time checks on code for type safety, i.e. are you trying to put an int into that list of strings? Using an ArrayList would cause that to be a less transparent runtime error. Faster than using objects, as it either avoids boxing/unboxing (where .net has to convert value types to reference types or vice-versa) or casting from objects to the required reference type. Allows you to write code which is applicable to many types with the same underlying behaviour, i.e. a Dictionary<string, int> uses the same underlying code as a Dictionary<DateTime, double>; using generics, the framework team only had to write one piece of code to achieve both results with the aforementioned advantages too. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/77632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13578/"
]
} |
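The "one implementation, many types" point from the generics answer above can be sketched with Python's generics as well (type parameters here are checked by external tools such as mypy, not at runtime; the Box class is invented for illustration):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Box(Generic[T]):
    """A single generic implementation reused for every element type."""
    def __init__(self, item: T) -> None:
        self.item = item

    def get(self) -> T:
        return self.item

s: Box[str] = Box("hello")  # a type checker knows s.get() is a str
n: Box[int] = Box(41)       # same code, different type parameter
print(s.get().upper(), n.get() + 1)  # HELLO 42
```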
77,639 | When is it right for a constructor to throw an exception? (Or in the case of Objective C: when is it right for an init'er to return nil?) It seems to me that a constructor should fail -- and thus refuse to create an object -- if the object isn't complete. I.e., the constructor should have a contract with its caller to provide a functional and working object on which methods can be called meaningfully? Is that reasonable? | The constructor's job is to bring the object into a usable state. There are basically two schools of thought on this. One group favors two-stage construction. The constructor merely brings the object into a sleeper state in which it refuses to do any work. There's an additional function that does the actual initialization. I've never understood the reasoning behind this approach. I'm firmly in the group that supports one-stage construction, where the object is fully initialized and usable after construction. One-stage constructors should throw if they fail to fully initialize the object. If the object cannot be initialized, it must not be allowed to exist, so the constructor must throw. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/77639",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14050/"
]
} |
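The one-stage construction argued for above maps directly onto Python, where __init__ can raise; the Connection class below is a made-up example showing that no half-initialized object ever escapes:

```python
class Connection:
    def __init__(self, host):
        if not host:
            # Refuse to create an unusable object: fail in the constructor.
            raise ValueError("host must be non-empty")
        self.host = host  # fully initialized and usable from here on

try:
    conn = Connection("")  # construction fails outright...
except ValueError as err:
    print("no object was created:", err)

print(Connection("db.local").host)  # db.local
```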
77,694 | I've defined a view with the CCK and View 2 modules. I would like to quickly define a template specific to this view. Is there any tutorial or information on this? What are the files I need to modify? Here are my findings: (Edited) In fact, there are two ways to theme a view: the "field" way and the "node" way. In "edit View", you can choose "Row style: Node", or "Row style: Fields". With the "Node" way, you can create a node-contentname.tpl.php which will be called for each node in the view. You'll have access to your cck field values with $field_name[0]['value']. (edit2) You can use node-view-viewname.tpl.php which will be only called for each node displayed from this view. With the "Field" way, you add a views-view-field--viewname--field-name-value.tpl.php for each field you want to theme individually. Thanks to previous responses, I've used the following tools: In the 'Basic Settings' block, the 'Theme: Information' to see all the different templates you can modify. The Devel module's "Theme developer" to quickly find the field variable names. View 2 documentation, especially the "Using Theme" page. | In fact, there are two ways to theme a view: the "field" way and the "node" way. In "edit View", you can choose "Row style: Node", or "Row style: Fields". With the "Node" way, you can create a node-contentname.tpl.php which will be called for each node in the view. You'll have access to your cck field values with $field_name[0]['value']. With the "Field" way, you add a views-view-field--viewname--field-name-value.tpl.php for each field you want to theme individually. Thanks to previous responses, I've used the following tools: In the 'Basic Settings' block, the 'Theme: Information' to see all the different templates you can modify. The Devel module's "Theme developer" to quickly find the field variable names. View 2 documentation, especially the "Using Theme" page. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/77694",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8450/"
]
} |
77,695 | What do I need to set up and maintain a local CPAN mirror? What scripts and best practices should I be aware of? | CPAN::Mini is the way to go. Once you've mirrored CPAN locally, you'll want to set your mirror URL in CPAN.pm or CPANPLUS to the local directory using a "file:" URL like this: file:///path/to/my/cpan/mirror If you'd like your mirror to have copies of development versions of CPAN distribution, you can use CPAN::Mini::Devel . Update: The "What do I need to mirror CPAN?" FAQ given in another answer is for mirroring all of CPAN, usually to provide another public mirror. That includes old, outdated versions of distributions. CPAN::Mini just mirrors the latest versions. This is much smaller and for most users is generally what people would use for local or disconnected (laptop) access to CPAN. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77695",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9532/"
]
} |
77,718 | Coming from C++ to Java, the obvious unanswered question is why didn't Java include operator overloading? Isn't Complex a, b, c; a = b + c; much simpler than Complex a, b, c; a = b.add(c); ? Is there a known reason for this, valid arguments for not allowing operator overloading? Is the reason arbitrary, or lost to time? | There are a lot of posts complaining about operator overloading. I felt I had to clarify the "operator overloading" concepts, offering an alternative viewpoint on this concept. Code obfuscating? This argument is a fallacy. Obfuscating is possible in all languages... It is as easy to obfuscate code in C or Java through functions/methods as it is in C++ through operator overloads:

// C++
T operator + (const T & a, const T & b) // add ?
{
   T c ;
   c.value = a.value - b.value ; // subtract !!!
   return c ;
}

// Java
static T add (T a, T b) // add ?
{
   T c = new T() ;
   c.value = a.value - b.value ; // subtract !!!
   return c ;
}

/* C */
T add (T a, T b) /* add ? */
{
   T c ;
   c.value = a.value - b.value ; /* subtract !!! */
   return c ;
}

...Even in Java's standard interfaces For another example, let's see the Cloneable interface in Java: You are supposed to clone the object implementing this interface. But you could lie. And create a different object. In fact, this interface is so weak you could return another type of object altogether, just for the fun of it:

class MySincereHandShake implements Cloneable
{
    public Object clone()
    {
        return new MyVengefulKickInYourHead() ;
    }
}

As the Cloneable interface can be abused/obfuscated, should it be banned on the same grounds C++ operator overloading is supposed to be? We could overload the toString() method of a MyComplexNumber class to have it return the stringified hour of the day. Should the toString() overloading be banned, too? We could sabotage MyComplexNumber.equals to have it return a random value, modify the operands... etc. etc. etc.
In Java, as in C++, or whatever language, the programmer must respect a minimum of semantics when writing code. This means implementing an add function that adds, a Cloneable implementation method that clones, and a ++ operator that increments. What's obfuscating anyway? Now that we know that code can be sabotaged even through the pristine Java methods, we can ask ourselves about the real use of operator overloading in C++. Clear and natural notation: methods vs. operator overloading? We'll compare below, for different cases, the "same" code in Java and C++, to have an idea of which kind of coding style is clearer. Natural comparisons:

// C++ comparison for built-ins and user-defined types
bool isEqual = A == B ;
bool isNotEqual = A != B ;
bool isLesser = A < B ;
bool isLesserOrEqual = A <= B ;

// Java comparison for user-defined types
boolean isEqual = A.equals(B) ;
boolean isNotEqual = ! A.equals(B) ;
boolean isLesser = A.comparesTo(B) < 0 ;
boolean isLesserOrEqual = A.comparesTo(B) <= 0 ;

Please note that A and B could be of any type in C++, as long as the operator overloads are provided. In Java, when A and B are not primitives, the code can become very confusing, even for primitive-like objects (BigInteger, etc.)...
Natural array/container accessors and subscripting:

// C++ container accessors, more natural
value = myArray[25] ; // subscript operator
value = myVector[25] ; // subscript operator
value = myString[25] ; // subscript operator
value = myMap["25"] ; // subscript operator
myArray[25] = value ; // subscript operator
myVector[25] = value ; // subscript operator
myString[25] = value ; // subscript operator
myMap["25"] = value ; // subscript operator

// Java container accessors, each one has its special notation
value = myArray[25] ; // subscript operator
value = myVector.get(25) ; // method get
value = myString.charAt(25) ; // method charAt
value = myMap.get("25") ; // method get
myArray[25] = value ; // subscript operator
myVector.set(25, value) ; // method set
myMap.put("25", value) ; // method put

In Java, we see that for each container to do the same thing (access its content through an index or identifier), we have a different way to do it, which is confusing. In C++, each container uses the same way to access its content, thanks to operator overloading.

Natural advanced types manipulation

The examples below use a Matrix object, found using the first links found on Google for "Java Matrix object" and "C++ Matrix object":

// C++ YMatrix matrix implementation on CodeProject
// http://www.codeproject.com/KB/architecture/ymatrix.aspx
// A, B, C, D, E, F are Matrix objects;
E = A * (B / 2) ;
E += (A - B) * (C + D) ;
F = E ; // deep copy of the matrix

// Java JAMA matrix implementation (seriously...)
// http://math.nist.gov/javanumerics/jama/doc/
// A, B, C, D, E, F are Matrix objects;
E = A.times(B.times(0.5)) ;
E.plusEquals(A.minus(B).times(C.plus(D))) ;
F = E.copy() ; // deep copy of the matrix

And this is not limited to matrices. The BigInteger and BigDecimal classes of Java suffer from the same confusing verbosity, whereas their equivalents in C++ are as clear as built-in types.
Natural iterators:

// C++ Random Access iterators
++it ; // move to the next item
--it ; // move to the previous item
it += 5 ; // move to the next 5th item (random access)
value = *it ; // gets the value of the current item
*it = 3.1415 ; // sets the value 3.1415 to the current item
(*it).foo() ; // call method foo() of the current item

// Java ListIterator<E> "bi-directional" iterators
value = it.next() ; // move to the next item & return the value
value = it.previous() ; // move to the previous item & return the value
it.set(3.1415) ; // sets the value 3.1415 to the current item

Natural functors:

// C++ Functors
myFunctorObject("Hello World", 42) ;

// Java Functors ???
myFunctorObject.execute("Hello World", 42) ;

Text concatenation:

// C++ stream handling (with the << operator)
stringStream << "Hello " << 25 << " World" ;
fileStream << "Hello " << 25 << " World" ;
outputStream << "Hello " << 25 << " World" ;
networkStream << "Hello " << 25 << " World" ;
anythingThatOverloadsShiftOperator << "Hello " << 25 << " World" ;

// Java concatenation
myStringBuffer.append("Hello ").append(25).append(" World") ;

Ok, in Java you can use MyString = "Hello " + 25 + " World" ; too... But, wait a second: This is operator overloading, isn't it? Isn't it cheating??? :-D

Generic code?

The same generic code modifying operands should be usable both for built-ins/primitives (which have no interfaces in Java), standard objects (which could not have the right interface), and user-defined objects. For example, calculating the average value of two values of arbitrary types:

// C++ primitive/advanced types
template<typename T>
T getAverage(const T & p_lhs, const T & p_rhs)
{
   return (p_lhs + p_rhs) / 2 ;
}

int intValue = getAverage(25, 42) ;
double doubleValue = getAverage(25.25, 42.42) ;
complex complexValue = getAverage(cA, cB) ; // cA, cB are complex
Matrix matrixValue = getAverage(mA, mB) ; // mA, mB are Matrix

// Java primitive/advanced types
// It won't really work in Java, even with generics. Sorry.
Discussing operator overloading

Now that we have seen fair comparisons between C++ code using operator overloading, and the same code in Java, we can now discuss "operator overloading" as a concept.

Operator overloading existed since before computers

Even outside of computer science, there is operator overloading: For example, in mathematics, operators like +, -, *, etc. are overloaded. Indeed, the signification of +, -, *, etc. changes depending on the types of the operands (numerics, vectors, quantum wave functions, matrices, etc.). Most of us, as part of our science courses, learned multiple significations for operators, depending on the types of the operands. Did we find them confusing, then?

Operator overloading depends on its operands

This is the most important part of operator overloading: Like in mathematics, or in physics, the operation depends on its operands' types. So, know the type of the operand, and you will know the effect of the operation.

Even C and Java have (hard-coded) operator overloading

In C, the real behavior of an operator will change according to its operands. For example, adding two integers is different than adding two doubles, or even one integer and one double. There is even the whole pointer arithmetic domain (without casting, you can add an integer to a pointer, but you cannot add two pointers...). In Java, there is no pointer arithmetic, but someone still found that string concatenation without the + operator would be ridiculous enough to justify an exception in the "operator overloading is evil" creed. It's just that you, as a C (for historical reasons) or Java (for personal reasons, see below) coder, can't provide your own.

In C++, operator overloading is not optional...

In C++, operator overloading for built-in types is not possible (and this is a good thing), but user-defined types can have user-defined operator overloads.
As already said earlier, in C++, and contrary to Java, user types are not considered second-class citizens of the language, when compared to built-in types. So, if built-in types have operators, user types should be able to have them, too. The truth is that, like the toString(), clone(), and equals() methods are for Java (i.e. quasi-standard-like), C++ operator overloading is so much a part of C++ that it becomes as natural as the original C operators, or the aforementioned Java methods. Combined with template programming, operator overloading becomes a well-known design pattern. In fact, you cannot go very far in the STL without using overloaded operators, and overloading operators for your own class.

...but it should not be abused

Operator overloading should strive to respect the semantics of the operator. Do not subtract in a + operator (as in "do not subtract in an add function", or "return crap in a clone method"). Cast overloading can be very dangerous because it can lead to ambiguities. So casts should really be reserved for well-defined cases. As for && and ||, do not ever overload them unless you really know what you're doing, as you'll lose the short-circuit evaluation that the native operators && and || enjoy.

So... Ok... Then why is it not possible in Java?

Because James Gosling said so: I left out operator overloading as a fairly personal choice because I had seen too many people abuse it in C++. James Gosling. Source: http://www.gotw.ca/publications/c_family_interview.htm Please compare Gosling's text above with Stroustrup's below: Many C++ design decisions have their roots in my dislike for forcing people to do things in some particular way [...] Often, I was tempted to outlaw a feature I personally disliked, I refrained from doing so because I did not think I had the right to force my views on others. Bjarne Stroustrup. Source: The Design and Evolution of C++ (1.3 General Background)

Would operator overloading benefit Java?
Some objects would greatly benefit from operator overloading (concrete or numerical types, like BigDecimal, complex numbers, matrices, containers, iterators, comparators, parsers, etc.). In C++, you can profit from this benefit because of Stroustrup's humility. In Java, you're simply screwed because of Gosling's personal choice.

Could it be added to Java?

The reasons for not adding operator overloading now in Java could be a mix of internal politics, allergy to the feature, distrust of developers (you know, the saboteur ones that seem to haunt Java teams...), compatibility with the previous JVMs, time to write a correct specification, etc. So don't hold your breath waiting for this feature...

But they do it in C#!!!

Yeah... While this is far from being the only difference between the two languages, this one never fails to amuse me. Apparently, the C# folks, with their "every primitive is a struct, and a struct derives from Object", got it right at the first try.

And they do it in other languages!!!

Despite all the FUD against user-defined operator overloading, the following languages support it: Kotlin, Scala, Dart, Python, F#, C#, D, Algol 68, Smalltalk, Groovy, Raku (formerly Perl 6), C++, Ruby, Haskell, MATLAB, Eiffel, Lua, Clojure, Fortran 90, Swift, Ada, Delphi 2005... So many languages, with so many different (and sometimes opposing) philosophies, and yet they all agree on that point. Food for thought... | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/77718",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
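Python is on the answer's list of languages with user-defined operator overloading; here is a minimal sketch (Vec2 is an invented type) of the "respect the semantics" rule — __add__ really adds:

```python
class Vec2:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):   # overloads the + operator
        return Vec2(self.x + other.x, self.y + other.y)

    def __eq__(self, other):    # overloads ==
        return (self.x, self.y) == (other.x, other.y)

a, b = Vec2(1, 2), Vec2(3, 4)
print((a + b) == Vec2(4, 6))  # True: natural notation, sane semantics
```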
77,726 | I really like Xml for saving data, but when does sqlite/database become the better option? eg, when the xml has more than x items or is greater than y MB? I am coding an rss reader and I believe I made the wrong choice in using xml over a sqlite database to store a cache of all the feeds items. There are some feeds which have an xml file of ~1mb after a month, another has over 700 items, while most only have ~30 items and are ~50kb in size after a several months. I currently have no plans to implement a cap because I like to be able to search through everything. So, my questions are: When is the overhead of sqlite/databases justified over using xml? Are the few large xml files justification enough for the database when there are a lot of small ones, though even the small ones will grow over time? (a long long time) updated (more info) Every time a feed is selected in the GUI I reload all the items from that feeds xml file. I also need to modify the read/unread status which seems really hacky when I loop through all nodes in the xml to find the item and then set it to read/unread. | I basically agree with Mitchel , that this can be highly specific depending on what are you going to do with XML and SQLite. For your case (cache), it seems to me that using SQLite (or other embedded databases) makes more sense. First I don't really think that SQLite will need more overhead than XML. And I mean both development time overhead and runtime overhead. Only problem is that you have a dependence on SQLite library. But since you would need some library for XML anyway it doesn't matter (I assume project is in C/C++). Advantages of SQLite over XML: everything in one file, performance loss is lower than XML as cache gets bigger, you can keep feed metadata separate from cache itself (other table), but accessible in the same way, SQL is probably easier to work with than XPath for most people. 
Disadvantages of SQLite: can be problematic with multiple processes accessing the same database (probably not your case), and you should know at least basic SQL. Unless there will be hundreds of thousands of items in the cache, I don't think you will need to optimize it much. Maybe in some way it can be more dangerous from a security standpoint (SQL injection); on the other hand, you are not coding a web app, so this should not happen. Other things are on par for both solutions probably. To sum it up, answers to your questions respectively: You will not know unless you test your specific application with both back ends. Otherwise it's always just a guess. Basic support for both caches should not be a problem to code. Then benchmark and compare. Because of the way XML files are organized, SQLite searches should always be faster (barring some corner cases where it doesn't matter anyway because it's blazingly fast). Speeding up searches in XML would require an index database anyway; in your case that would mean having a cache for the cache, not a particularly good idea. But with SQLite you can have indexing as part of the database. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/77726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1147/"
]
} |
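For the asker's read/unread pain point specifically, the SQLite route reduces the loop over XML nodes to one UPDATE; a minimal sketch with Python's built-in sqlite3 module (the schema and names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # would be a file path in the real app
conn.execute("""CREATE TABLE items (
    id INTEGER PRIMARY KEY,
    feed TEXT,
    title TEXT,
    is_read INTEGER DEFAULT 0)""")
conn.execute("INSERT INTO items (feed, title) VALUES (?, ?)",
             ("some-feed", "First post"))

# Marking an item read is a single statement, not a scan of every node:
conn.execute("UPDATE items SET is_read = 1 WHERE id = ?", (1,))

unread = conn.execute(
    "SELECT COUNT(*) FROM items WHERE is_read = 0").fetchone()[0]
print(unread)  # 0
```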
77,744 | I have a command line program, which outputs logging to the screen. I want error lines to show up in red. Is there some special character codes I can output to switch the text color to red, then switch it back to white? I'm using ruby but I imagine this would be the same in any other language. Something like:

red = "\0123" # character code
white = "\0223"
print "#{red} ERROR: IT BROKE #{white}"
print "other stuff"

| On windows, you can do it easily in three ways:

require 'win32console'
puts "\e[31mHello, World!\e[0m"

Now you could extend String with a small method called red:

require 'win32console'

class String
  def red
    "\e[31m#{self}\e[0m"
  end
end

puts "Hello, World!".red

Also you can extend String like this to get more colors:

require 'win32console'

class String
  { :reset => 0,
    :bold => 1,
    :dark => 2,
    :underline => 4,
    :blink => 5,
    :negative => 7,
    :black => 30,
    :red => 31,
    :green => 32,
    :yellow => 33,
    :blue => 34,
    :magenta => 35,
    :cyan => 36,
    :white => 37,
  }.each do |key, value|
    define_method key do
      "\e[#{value}m" + self + "\e[0m"
    end
  end
end

puts "Hello, World!".red

Or, if you can install gems:

gem install term-ansicolor

And in your program:

require 'win32console'
require 'term/ansicolor'

class String
  include Term::ANSIColor
end

puts "Hello, World!".red
puts "Hello, World!".blue
puts "Annoy me!".blink.yellow.bold

Please see the docs for term/ansicolor for more information and possible usage. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/234/"
]
} |
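As the question above guessed, these escape sequences are language-independent; the same codes used in the Ruby answer, emitted from Python for comparison:

```python
RED = "\033[31m"    # same escape sequence as Ruby's "\e[31m"
RESET = "\033[0m"   # back to the terminal's default color

line = RED + "ERROR: IT BROKE" + RESET
print(line)           # shows in red on an ANSI-capable terminal
print("other stuff")  # default color again
```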
77,826 | One thing I've started doing more often recently is retrieving some data at the beginning of a task and storing it in a $_SESSION['myDataForTheTask'] . Now it seems very convenient to do so but I don't know anything about performance, security risks or similar, using this approach. Is it something which is regularly done by programmers with more expertise or is it more of an amateur thing to do? For example: if (!isset($_SESSION['dataentry'])){ $query_taskinfo = "SELECT participationcode, modulearray, wavenum FROM mng_wave WHERE wave_id=" . mysql_real_escape_string($_GET['wave_id']); $result_taskinfo = $db->query($query_taskinfo); $row_taskinfo = $result_taskinfo->fetch_row(); $dataentry = array("pcode" => $row_taskinfo[0], "modules" => $row_taskinfo[1], "data_id" => 0, "wavenum" => $row_taskinfo[2], "prequest" => FALSE, "highlight" => array()); $_SESSION['dataentry'] = $dataentry;} | Well Session variables are really one of the only ways (and probably the most efficient) of having these variables available for the entire time that visitor is on the website, there's no real way for a user to edit them (other than an exploit in your code, or in the PHP interpreter) so they are fairly secure. It's a good way of storing settings that can be changed by the user, as you can read the settings from database once at the beginning of a session and it is available for that entire session, you only need to make further database calls if the settings are changed and of course, as you show in your code, it's trivial to find out whether the settings already exist or whether they need to be extracted from database. I can't think of any other way of storing temporary variables securely (since cookies can easily be modified and this will be undesirable in most cases) so $_SESSION would be the way to go | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77826",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11995/"
]
} |
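The check-then-populate pattern in the answer above is language-neutral; here it is sketched in Python with a plain dict standing in for $_SESSION and a stub in place of the database query (all names are invented):

```python
session = {}   # stand-in for $_SESSION
db_calls = []  # records how often the "database" is actually hit

def query_taskinfo(wave_id):
    db_calls.append(wave_id)  # pretend this ran a SELECT
    return {"pcode": "P123", "modules": "a;b", "wavenum": 2}

def get_dataentry(wave_id):
    if "dataentry" not in session:        # same check as isset(...)
        session["dataentry"] = query_taskinfo(wave_id)
    return session["dataentry"]

get_dataentry(7)
get_dataentry(7)      # second call is served from the session
print(len(db_calls))  # 1 -- only one real database hit per session
```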
77,873 | Are there PHP libraries which can be used to fill PDF forms and then save (flatten) them to PDF files? | The libraries and frameworks mentioned here are good, but if all you want to do is fill in a form and flatten it, I recommend the command line tool called pdftk (PDF Toolkit). See https://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ You can call the command line from php, and the command is pdftk formfile.pdf fill_form fieldinfo.fdf output outputfile.pdf flatten You will need to find the format of an FDF file in order to generate the info to fill in the fields. Here's a good link for that: http://www.tgreer.com/fdfServe.html [Edit: The above link seems to be out of commission. Here is some more info...] The pdftk command can generate an FDF file from a PDF form file. You can then use the generated FDF file as a sample. The form fields are the portion of the FDF file that looks like ...<< /T(f1-1) /V(text of field) >><< /T(f1-2) /V(text of another field) >>... You might also check out php-pdftk , which is a library specific to PHP. I have not used it, but commenter Álvaro (below) recommends it. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/77873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14166/"
]
} |
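Since the FDF field entries quoted above are plain text, the data file for pdftk's fill_form step can be generated programmatically; a Python sketch wrapping them in a minimal skeletal FDF document (this wrapper structure is my assumption of a minimal valid file, and a real implementation would also escape parentheses and backslashes in the values):

```python
def make_fdf(fields):
    """Build a minimal FDF document for pdftk's fill_form step."""
    entries = "".join(
        f"<< /T({name}) /V({value}) >>\n" for name, value in fields.items())
    return ("%FDF-1.2\n"
            "1 0 obj\n<< /FDF << /Fields [\n" + entries + "] >> >>\nendobj\n"
            "trailer\n<< /Root 1 0 R >>\n"
            "%%EOF\n")

fdf = make_fdf({"f1-1": "text of field", "f1-2": "text of another field"})
print("<< /T(f1-1) /V(text of field) >>" in fdf)  # True
```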
77,887 | As someone who is just starting to learn the intricacies of computer debugging, for the life of me, I can't understand how to read the Stack Text of a dump in Windbg. I've no idea of where to start on how to interpret them or how to go about it. Can anyone offer direction to this poor soul? ie (the only dump I have on hand with me actually)

>b69dd8f0 bfa1e255 016d2fc0 89efc000 00000040 nv4_disp+0x48b94
b69dd8f4 016d2fc0 89efc000 00000040 00000006 nv4_disp+0x49255
b69dd8f8 89efc000 00000040 00000006 bfa1dcc0 0x16d2fc0
b69dd8fc 00000000 00000006 bfa1dcc0 e1e71018 0x89efc000

I know the problem is to do with the Nvidia display driver, but what I want to know is how to actually read the stack (eg, what is b69dd8f4?) :-[ | First, you need to have the proper symbols configured. The symbols will allow you to match memory addresses to function names. In order to do this you have to create a local folder on your machine in which you will store a local cache of symbols (for example: C:\symbols). Then you need to specify the symbol server path. To do this just go to: File > Symbol File Path and type: SRV*c:\symbols*http://msdl.microsoft.com/download/symbols You can find more information on how to correctly configure the symbols here. Once you have properly configured the symbol server you can open the minidump from: File > Open Crash Dump. Once the minidump is opened it will show you on the left side of the command line the thread that was executing when the dump was generated. If you want to see what this thread was executing type: kpn 200 This might take some time the first time you execute it since it has to download the necessary public Microsoft-related symbols. Once all the symbols are downloaded you'll get something like:

01 MODULE!CLASS.FUNCTIONNAME1(...)
02 MODULE!CLASS.FUNCTIONNAME2(...)
03 MODULE!CLASS.FUNCTIONNAME3(...)
04 MODULE!CLASS.FUNCTIONNAME4(...)
Where:

THE FIRST NUMBER: Indicates the frame number
MODULE: The DLL that contains the code
CLASS: (Only in C++ code) will show you the class that contains the code
FUNCTIONNAME: The method that was called. If you have the correct symbols you will also see the parameters.

You might also see something like 01 MODULE!+989823 This indicates that you don't have the proper symbol for this DLL and therefore you are only able to see the method offset. So, what is a callstack? Imagine you have this code:

void main()
{
    method1();
}

void method1()
{
    method2();
}

int method2()
{
    return 20/0;
}

In this code method2 basically will throw an Exception since we are trying to divide by 0 and this will cause the process to crash. If we got a minidump when this occurred we would see the following callstack:

01 MYDLL!method2()
02 MYDLL!method1()
03 MYDLL!main()

You can follow from this callstack that "main" called "method1", which then called "method2", and it failed. In your case you've got this callstack (which I guess is the result of running the "kb" command):

b69dd8f0 bfa1e255 016d2fc0 89efc000 00000040 nv4_disp+0x48b94
b69dd8f4 016d2fc0 89efc000 00000040 00000006 nv4_disp+0x49255
b69dd8f8 89efc000 00000040 00000006 bfa1dcc0 0x16d2fc0
b69dd8fc 00000000 00000006 bfa1dcc0 e1e71018 0x89efc000

The first column indicates the Child Frame Pointer, the second column indicates the Return address of the method that is executing, the next three columns show the first 3 parameters that were passed to the method, and the last part is the DLL name (nv4_disp) and the offset of the method that is being executed (+0x48b94). Since you don't have the symbols you are not able to see the method name. I doubt that NVIDIA offers public access to their symbols, so I guess you can't get much information from here. I recommend you run "kpn 200".
This will show you the full callstack and you might be able to see the origin of the method that caused this crash (if it was a Microsoft DLL you should have the proper symbols from the steps that I provided you). At least you know it's related to an NVIDIA bug ;-) Try upgrading the DLLs of this driver to the latest version. In case you want to learn more about WinDBG debugging I recommend the following links:

If broken it is, fix it you should
TechNet Webcast: Windows Hang and Crash Dump Analysis
Delicious.com popular links on WinDBG | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77887",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14173/"
]
} |
77,936 | As part of a project at work I have to calculate the centroid of a set of points in 3D space. Right now I'm doing it in a way that seems simple but naive -- by taking the average of each set of points, as in: centroid = average(x), average(y), average(z) where x , y and z are arrays of floating-point numbers. I seem to recall that there is a way to get a more accurate centroid, but I haven't found a simple algorithm for doing so. Anyone have any ideas or suggestions? I'm using Python for this, but I can adapt examples from other languages. | Contrary to the common refrain here, there are different ways to define (and calculate) a center of a point cloud. The first and most common solution has been suggested by you already and I will not argue that there is anything wrong with this: centroid = average(x), average(y), average(z) The "problem" here is that it will "distort" your center-point depending on the distribution of your points. If, for example, you assume that all your points are within a cubic box or some other geometric shape, but most of them happen to be placed in the upper half, your center-point will also shift in that direction. As an alternative you could use the mathematical middle (the mean of the extrema) in each dimension to avoid this: middle = middle(x), middle(y), middle(z) You can use this when you don't care much about the number of points, but more about the global bounding box, because that's all this is - the center of the bounding box around your points. Lastly, you could also use the median (the element in the middle) in each dimension: median = median(x), median(y), median(z) Now this will sort of do the opposite to the middle and actually help you ignore outliers in your point cloud and find a centerpoint based on the distribution of your points. A more robust way to find a "good" centerpoint might be to ignore the top and bottom 10% in each dimension and then calculate the average or median .
As you can see you can define the centerpoint in different ways. Below I am showing you examples of two 2D point clouds with these suggestions in mind. The dark blue dot is the average (mean) centroid. The median is shown in green. And the middle is shown in red. In the second image you will see exactly what I was talking about earlier: The green dot is "closer" to the densest part of the point cloud, while the red dot is further away from it, taking into account the most extreme boundaries of the point cloud. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/676/"
]
} |
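The three definitions from the centroid answer above can be sketched in a few lines of Python (the function name and the sample points are my own illustration, not from the answer):

```python
def centers(points):
    """Return the (mean, middle, median) center of a list of 3D points."""
    xs, ys, zs = zip(*points)

    def mean(v):
        return sum(v) / len(v)

    def middle(v):  # center of the axis-aligned bounding box
        return (min(v) + max(v)) / 2

    def median(v):  # per-axis median; robust against outliers
        s = sorted(v)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

    axes = (xs, ys, zs)
    return (tuple(mean(a) for a in axes),
            tuple(middle(a) for a in axes),
            tuple(median(a) for a in axes))

# One outlier at x=10 pulls the middle the most, the mean somewhat,
# and the median not at all:
print(centers([(0, 0, 0), (1, 0, 0), (10, 0, 0)]))
```

That single outlier moving each center by a different amount is exactly the trade-off the answer describes.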
77,954 | How do you get Perl to stop and give a stack trace when you reference an undef value, rather than merely warning? It seems that use strict; isn't sufficient for this purpose. | use warnings FATAL => 'uninitialized';use Carp ();$SIG{__DIE__} = \&Carp::confess; The first line makes the warning fatal. The next two cause a stack trace when your program dies. See also man 3pm warnings for more details. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/77954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14193/"
]
} |
78,108 | E.g. Is it more secure to use mod_php instead of php-cgi? Or is it more secure to use mod_perl instead of traditional cgi-scripts ? I'm mainly interested in security concerns, but speed might be an issue if there are significant differences. | Security in what sense? Either way it really depends on what script is running and how well it is written. Too many scripts these days are half-assed and do not properly do input validation. I personally prefer FastCGI to mod_php since if a FastCGI process dies a new one will get spawned, whereas I have seen mod_php kill the entirety of Apache. As for security, with FastCGI you could technically run the php process under a different user from the default web server's user. On a separate note, if you are using Apache's new worker threading support you will want to make sure that you are not using mod_php as some of the extensions are not thread safe and will cause race conditions. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/78108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1994377/"
]
} |
78,125 | The following code snippet (correctly) gives a warning in C and an error in C++ (using gcc & g++ respectively, tested with versions 3.4.5 and 4.2.1; MSVC does not seem to care): char **a;const char** b = a; I can understand and accept this. The C++ solution to this problem is to change b to be a const char * const *, which disallows reassignment of the pointers and prevents you from circumventing const-correctness ( C++ FAQ ). char **a;const char* const* b = a; However, in pure C, the corrected version (using const char * const *) still gives a warning, and I don't understand why. Is there a way to get around this without using a cast? To clarify: 1) Why does this generate a warning in C? It should be entirely const-safe, and the C++ compiler seems to recognize it as such. 2) What is the correct way to go about accepting this char** as a parameter while saying (and having the compiler enforce) that I will not be modifying the characters it points to? For example, if I wanted to write a function: void f(const char* const* in) { // Only reads the data from in, does not write to it} And I wanted to invoke it on a char**, what would be the correct type for the parameter? | I had this same problem a few years ago and it irked me to no end. The rules in C are more simply stated (i.e. they don't list exceptions like converting char** to const char*const* ). Consequently, it's just not allowed. With the C++ standard, they included more rules to allow cases like this. In the end, it's just a problem in the C standard. I hope the next standard (or technical report) will address this. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/78125",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14266/"
]
} |
78,127 | Apple's CoreGraphics library defines two functions for describing an arc. CGPathAddArc adds an arc based on a center point, radius, and pair of angles. CGPathAddArcToPoint adds an arc based on a radius and a pair of tangent lines. The details are explained in the CGPath API reference . Why two functions? Simple convenience? Is one more efficient than the other? Is one defined in terms of the other? | CGContextAddArc does this: where the red line is what will be drawn, sA is startAngle , eA is the endAngle , r is radius , and x and y are x and y . If you have a previous point the function will line from this point to the start of the arc (unless you are careful this line won't be going in the same direction as the arc). CGContextAddArcToPoint works like this: Where P1 is the current point of the path, the x1, x2, y1, y2 match the functions x1 , x2 , y1 , y2 and r is radius . The arc will start in the same direction as the line between the current point and (x1, y1) and end in the direction between (x1, y1) and (x2, y2) . it won't line to (x2, y2) It will stop at the end of the circle. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/78127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10947/"
]
} |
78,172 | In the C programming language and Pthreads as the threading library; do variables/structures that are shared between threads need to be declared as volatile? Assuming that they might be protected by a lock or not (barriers perhaps). Does the pthread POSIX standard have any say about this, is this compiler-dependent or neither? Edit to add: Thanks for the great answers. But what if you're not using locks; what if you're using barriers for example? Or code that uses primitives such as compare-and-swap to directly and atomically modify a shared variable... | As long as you are using locks to control access to the variable, you do not need volatile on it. In fact, if you're putting volatile on any variable you're probably already wrong. https://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/ | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/78172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11688/"
]
} |
78,181 | If I am given a MemoryStream that I know has been populated with a String , how do I get a String back out? | This sample shows how to read and write a string to a MemoryStream. Imports System.IOModule Module1 Sub Main() ' We don't need to dispose any of the MemoryStream ' because it is a managed object. However, just for ' good practice, we'll close the MemoryStream. Using ms As New MemoryStream Dim sw As New StreamWriter(ms) sw.WriteLine("Hello World") ' The string is currently stored in the ' StreamWriters buffer. Flushing the stream will ' force the string into the MemoryStream. sw.Flush() ' If we dispose the StreamWriter now, it will close ' the BaseStream (which is our MemoryStream) which ' will prevent us from reading from our MemoryStream 'sw.Dispose() ' The StreamReader will read from the current ' position of the MemoryStream which is currently ' set at the end of the string we just wrote to it. ' We need to set the position to 0 in order to read ' from the beginning. ms.Position = 0 Dim sr As New StreamReader(ms) Dim myStr = sr.ReadToEnd() Console.WriteLine(myStr) ' We can dispose our StreamWriter and StreamReader ' now, though this isn't necessary (they don't hold ' any resources open on their own). sw.Dispose() sr.Dispose() End Using Console.WriteLine("Press any key to continue.") Console.ReadKey() End SubEnd Module | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/78181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/320/"
]
} |
78,257 | Lots of googling did not help me! Are there any good web-based dictionaries available? I am looking for a site which can send me the meaning of words if we pass the word through a query string! | I found you a Big Huge Thesaurus with a web API, and a dictionary at Aonaware that looks like it uses SOAP | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/78257",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12178/"
]
} |
78,277 | When can 64-bit writes be guaranteed to be atomic, when programming in C on an Intel x86-based platform (in particular, an Intel-based Mac running MacOSX 10.4 using the Intel compiler)? For example: unsigned long long int y;y = 0xfedcba87654321ULL;/* ... a bunch of other time-consuming stuff happens... */y = 0x12345678abcdefULL; If another thread is examining the value of y after the first assignment to y has finished executing, I would like to ensure that it sees either the value 0xfedcba87654321 or the value 0x12345678abcdef, and not some blend of them. I would like to do this without any locking, and if possible without any extra code. My hope is that, when using a 64-bit compiler (the 64-bit Intel compiler), on an operating system capable of supporting 64-bit code (MacOSX 10.4), that these 64-bit writes will be atomic. Is this always true? | Your best bet is to avoid trying to build your own system out of primitives, and instead use locking unless it really shows up as a hot spot when profiling. (If you think you can be clever and avoid locks, don't. You aren't. That's the general "you" which includes me and everybody else.) You should at minimum use a spin lock, see spinlock(3) . And whatever you do, don't try to implement "your own" locks. You will get it wrong. Ultimately, you need to use whatever locking or atomic operations your operating system provides. Getting these sorts of things exactly right in all cases is extremely difficult . Often it can involve knowledge of things like the errata for specific versions of specific processor. ("Oh, version 2.0 of that processor didn't do the cache-coherency snooping at the right time, it's fixed in version 2.0.1 but on 2.0 you need to insert a NOP .") Just slapping a volatile keyword on a variable in C is almost always insufficient. On Mac OS X, that means you need to use the functions listed in atomic(3) to perform truly atomic-across-all-CPUs operations on 32-bit, 64-bit, and pointer-sized quantities. 
(Use the latter for any atomic operations on pointers so you're 32/64-bit compatible automatically.) That goes whether you want to do things like atomic compare-and-swap, increment/decrement, spin locking, or stack/queue management. Fortunately the spinlock(3) , atomic(3) , and barrier(3) functions should all work correctly on all CPUs that are supported by Mac OS X. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/78277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
78,296 | What are some reasons why PHP would force errors to show, no matter what you tell it to disable? I have tried error_reporting(0);ini_set('display_errors', 0); with no luck. | Note the caveat in the manual at http://uk.php.net/error_reporting : Most of E_STRICT errors are evaluated at the compile time thus such errors are not reported in the file where error_reporting is enhanced to include E_STRICT errors (and vice versa). If your underlying system is configured to report E_STRICT errors, these may be output before your code is even considered. Don't forget, error_reporting/ini_set are runtime evaluations, and anything performed in a "before-run" phase will not see their effects. Based on your comment that your error is... Parse error: syntax error, unexpected T_VARIABLE, expecting ',' or ';' in /usr/home/REDACTED/public_html/dev.php on line 11 Then the same general concept applies. Your code is never run, as it is syntactically invalid (you forgot a ';'). Therefore, your change of error reporting is never encountered. Fixing this requires a change of the system level error reporting. For example, on Apache you may be able to place... php_value error_reporting 0 in a .htaccess file to suppress them all, but this is system configuration dependent. Pragmatically, don't write files with syntax errors :) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/78296",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14322/"
]
} |
78,303 | I'm toying with my first remoting project and I need to create a RemotableType DLL. I know I can compile it by hand with csc, but I wonder if there are some facilities in place on Visual Studio to handle the Remoting case, or, more specifically, to tell it that a specific file should be compiled as a .dll without having to add another project to a solution exclusively to compile a class or two into DLLs. NOTE: I know I should toy with my first WCF project, but this has to run on 2.0. | | {
"source": [
"https://Stackoverflow.com/questions/78303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5190/"
]
} |
78,336 | I need to figure out the hard drive name for a solaris box and it is not clear to me what the device name is. On linux, it would be something like /dev/hda or /dev/sda , but on solaris I am getting a bit lost in the partitions and what the device is called. I think that entries like /dev/rdsk/c0t0d0s0 are the partitions, how is the whole hard drive referenced? | /dev/rdsk/c0t0d0s0 means Controller 0, SCSI target (ID) 0, and s means Slice (partition) 0. Typically, by convention, s2 is the entire disk. This partition overlaps with the other partitions. prtvtoc /dev/rdsk/c0t0d0s0 will show you the partition table for the disk, to make sure. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/78336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13912/"
]
} |
78,389 | I'm new to RhinoMocks, and trying to get a grasp on the syntax in addition to what is happening under the hood. I have a user object, we'll call it User, which has a property called IsAdministrator. The value for IsAdministrator is evaluated via another class that checks the User's security permissions, and returns either true or false based on those permissions. I'm trying to mock this User class, and fake the return value for IsAdministrator in order to isolate some Unit Tests. This is what I'm doing so far: public void CreateSomethingIfUserHasAdminPermissions(){ User user = _mocks.StrictMock<User>(); SetupResult.For(user.IsAdministrator).Return(true); // do something with my User object} Now, I'm expecting that Rhino is going to 'fake' the call to the property getter, and just return true to me. Is this incorrect? Currently I'm getting an exception because of dependencies in the IsAdministrator property. Can someone explain how I can achieve my goal here? | One quick note before I jump into this. Typically you want to avoid the use of a "Strict" mock because it makes for a brittle test. A strict mock will throw an exception if anything occurs that you do not explicitly tell Rhino will happen. Also I think you may be misunderstanding exactly what Rhino is doing when you make a call to create a mock. Think of it as a custom Object that has either been derived from, or implements the System.Type you defined. If you did it yourself it would look like this: public class FakeUserType: User{ //overriding code here} Since IsAdministrator is probably just a public property on the User type you can't override it in the inheriting type. As far as your question is concerned there are multiple ways you could handle this. 
You could implement IsAdministrator as a virtual property on your user class as aaronjensen mentioned as follows: public class User{ public virtual Boolean IsAdministrator { get; set; }} This is an ok approach, but only if you plan on inheriting from your User class. Also if you want to fake other members on this class they would also have to be virtual, which is probably not the desired behavior. Another way to accomplish this is through the use of interfaces. If it is truly the User class you are wanting to Mock then I would extract an interface from it. Your above example would look something like this: public interface IUser{ Boolean IsAdministrator { get; }}public class User : IUser{ private UserSecurity _userSecurity = new UserSecurity(); public Boolean IsAdministrator { get { return _userSecurity.HasAccess("AdminPermissions"); } }}public void CreateSomethingIfUserHasAdminPermissions(){ IUser user = _mocks.StrictMock<IUser>(); SetupResult.For(user.IsAdministrator).Return(true); // do something with my User object} You can get fancier if you want by using dependency injection and IOC but the basic principle is the same across the board. Typically you want your classes to depend on interfaces rather than concrete implementations anyway. I hope this helps. I have been using RhinoMocks for a long time on a major project now so don't hesitate to ask me questions about TDD and mocking.
"score": 5,
"source": [
"https://Stackoverflow.com/questions/78389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/769/"
]
} |
78,474 | Using only ANSI C, what is the best way to, with fair certainty, determine if a C style string is either an integer or a real number (i.e. float/double)? | Don't use atoi and atof as these functions return 0 on failure. Last time I checked 0 is a valid integer and float, therefore no use for determining type. use the strto{l,ul,ull,ll,d} functions, as these set errno on failure, and also report where the converted data ended. strtoul: http://www.opengroup.org/onlinepubs/007908799/xsh/strtoul.html this example assumes that the string contains a single value to be converted. #include <errno.h>char* to_convert = "some string";char* p = to_convert;errno = 0;unsigned long val = strtoul(to_convert, &p, 10);if (errno != 0) // conversion failed (EINVAL, ERANGE)if (to_convert == p) // conversion failed (no characters consumed)if (*p != 0) // conversion failed (trailing data) Thanks to Jonathan Leffler for pointing out that I forgot to set errno to 0 first. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/78474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9418/"
]
} |
78,493 | I once read that one way to obtain a unique filename in a shell for temp files was to use a double dollar sign ( $$ ). This does produce a number that varies from time to time... but if you call it repeatedly, it returns the same number. (The solution is to just use the time.) I am curious to know what $$ actually is, and why it would be suggested as a way to generate unique filenames. | In Bash $$ is the process ID, as noted in the comments it is not safe to use as a temp filename for a variety of reasons. For temporary file names, use the mktemp command. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/78493",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14345/"
]
} |
78,497 | Does anyone know of any resources that talk about best practices or design patterns for shell scripts (sh, bash etc.)? | I wrote quite complex shell scripts and my first suggestion is "don't". The reason is that it is fairly easy to make a small mistake that hinders your script, or even make it dangerous. That said, I don't have other resources to pass you but my personal experience. Here is what I normally do, which is overkill, but tends to be solid, although very verbose. Invocation make your script accept long and short options. be careful because there are two commands to parse options, getopt and getopts. Use getopt as you face less trouble. CommandLineOptions__config_file=""CommandLineOptions__debug_level=""getopt_results=`getopt -s bash -o c:d:: --long config_file:,debug_level:: -- "$@"`if test $? != 0then echo "unrecognized option" exit 1fieval set -- "$getopt_results"while truedo case "$1" in --config_file) CommandLineOptions__config_file="$2"; shift 2; ;; --debug_level) CommandLineOptions__debug_level="$2"; shift 2; ;; --) shift break ;; *) echo "$0: unparseable option $1" EXCEPTION=$Main__ParameterException EXCEPTION_MSG="unparseable option $1" exit 1 ;; esacdoneif test "x$CommandLineOptions__config_file" == "x"then echo "$0: missing config_file parameter" EXCEPTION=$Main__ParameterException EXCEPTION_MSG="missing config_file parameter" exit 1fi Another important point is that a program should always return zero if it completes successfully, non-zero if something went wrong. Function calls You can call functions in bash, just remember to define them before the call. Functions are like scripts, they can only return numeric values. This means that you have to invent a different strategy to return string values. My strategy is to use a variable called RESULT to store the result, and returning 0 if the function completed cleanly.
Also, you can raise exceptions if you are returning a value different from zero, and then set two "exception variables" (mine: EXCEPTION and EXCEPTION_MSG), the first containing the exception type and the second a human readable message. When you call a function, the parameters of the function are assigned to the special vars $0, $1 etc. I suggest you to put them into more meaningful names. declare the variables inside the function as local: function foo { local bar="$0"} Error prone situations In bash, unless you declare otherwise, an unset variable is used as an empty string. This is very dangerous in case of typo, as the badly typed variable will not be reported, and it will be evaluated as empty. use set -o nounset to prevent this to happen. Be careful though, because if you do this, the program will abort every time you evaluate an undefined variable. For this reason, the only way to check if a variable is not defined is the following: if test "x${foo:-notset}" == "xnotset"then echo "foo not set"fi You can declare variables as readonly: readonly readonly_var="foo" Modularization You can achieve "python like" modularization if you use the following code: set -o nounsetfunction getScriptAbsoluteDir { # @description used to get the script path # @param $1 the script $0 parameter local script_invoke_path="$1" local cwd=`pwd` # absolute path ? if so, the first character is a / if test "x${script_invoke_path:0:1}" = 'x/' then RESULT=`dirname "$script_invoke_path"` else RESULT=`dirname "$cwd/$script_invoke_path"` fi}script_invoke_path="$0"script_name=`basename "$0"`getScriptAbsoluteDir "$script_invoke_path"script_absolute_dir=$RESULTfunction import() { # @description importer routine to get external functionality. # @description the first location searched is the script directory. 
# @description if not found, search the module in the paths contained in $SHELL_LIBRARY_PATH environment variable # @param $1 the .shinc file to import, without .shinc extension module=$1 if test "x$module" == "x" then echo "$script_name : Unable to import unspecified module. Dying." exit 1 fi if test "x${script_absolute_dir:-notset}" == "xnotset" then echo "$script_name : Undefined script absolute dir. Did you remove getScriptAbsoluteDir? Dying." exit 1 fi if test "x$script_absolute_dir" == "x" then echo "$script_name : empty script path. Dying." exit 1 fi if test -e "$script_absolute_dir/$module.shinc" then # import from script directory . "$script_absolute_dir/$module.shinc" elif test "x${SHELL_LIBRARY_PATH:-notset}" != "xnotset" then # import from the shell script library path # save the separator and use the ':' instead local saved_IFS="$IFS" IFS=':' for path in $SHELL_LIBRARY_PATH do if test -e "$path/$module.shinc" then . "$path/$module.shinc" return fi done # restore the standard separator IFS="$saved_IFS" fi echo "$script_name : Unable to find module $module." exit 1} you can then import files with the extension .shinc with the following syntax import "AModule/ModuleFile" Which will be searched in SHELL_LIBRARY_PATH. As you always import in the global namespace, remember to prefix all your functions and variables with a proper prefix, otherwise you risk name clashes. I use double underscore as the python dot. Also, put this as first thing in your module # avoid double inclusionif test "${BashInclude__imported+defined}" == "defined"then return 0fiBashInclude__imported=1 Object oriented programming In bash, you cannot do object oriented programming, unless you build a quite complex system of allocation of objects (I thought about that. it's feasible, but insane).In practice, you can however do "Singleton oriented programming": you have one instance of each object, and only one. What I do is: i define an object into a module (see the modularization entry). 
Then I define empty vars (analogous to member variables) an init function (constructor) and member functions, like in this example code # avoid double inclusionif test "${Table__imported+defined}" == "defined"then return 0fiTable__imported=1readonly Table__NoException=""readonly Table__ParameterException="Table__ParameterException"readonly Table__MySqlException="Table__MySqlException"readonly Table__NotInitializedException="Table__NotInitializedException"readonly Table__AlreadyInitializedException="Table__AlreadyInitializedException"# an example for module enum constants, used in the mysql table, in this casereadonly Table__GENDER_MALE="GENDER_MALE"readonly Table__GENDER_FEMALE="GENDER_FEMALE"# private: prefixed with p_ (a bash variable cannot start with _)p_Table__mysql_exec="" # will contain the executed mysql command p_Table__initialized=0function Table__init { # @description init the module with the database parameters # @param $1 the mysql config file # @exception Table__NoException, Table__ParameterException EXCEPTION="" EXCEPTION_MSG="" EXCEPTION_FUNC="" RESULT="" if test $p_Table__initialized -ne 0 then EXCEPTION=$Table__AlreadyInitializedException EXCEPTION_MSG="module already initialized" EXCEPTION_FUNC="$FUNCNAME" return 1 fi local config_file="$1" # yes, I am aware that I could put default parameters and other niceties, but I am lazy today if test "x$config_file" = "x"; then EXCEPTION=$Table__ParameterException EXCEPTION_MSG="missing parameter config file" EXCEPTION_FUNC="$FUNCNAME" return 1 fi p_Table__mysql_exec="mysql --defaults-file=$config_file --silent --skip-column-names -e " # mark the module as initialized p_Table__initialized=1 EXCEPTION=$Table__NoException EXCEPTION_MSG="" EXCEPTION_FUNC="" return 0}function Table__getName() { # @description gets the name of the person # @param $1 the row identifier # @result the name EXCEPTION="" EXCEPTION_MSG="" EXCEPTION_FUNC="" RESULT="" if test $p_Table__initialized -eq 0 then 
EXCEPTION=$Table__NotInitializedException EXCEPTION_MSG="module not initialized" EXCEPTION_FUNC="$FUNCNAME" return 1 fi id=$1 if test "x$id" = "x"; then EXCEPTION=$Table__ParameterException EXCEPTION_MSG="missing parameter identifier" EXCEPTION_FUNC="$FUNCNAME" return 1 fi local name=`$p_Table__mysql_exec "SELECT name FROM table WHERE id = '$id'"` if test $? != 0 ; then EXCEPTION=$Table__MySqlException EXCEPTION_MSG="unable to perform select" EXCEPTION_FUNC="$FUNCNAME" return 1 fi RESULT=$name EXCEPTION=$Table__NoException EXCEPTION_MSG="" EXCEPTION_FUNC="" return 0} Trapping and handling signals I found this useful to catch and handle exceptions. function Main__interruptHandler() { # @description signal handler for SIGINT echo "SIGINT caught" exit} function Main__terminationHandler() { # @description signal handler for SIGTERM echo "SIGTERM caught" exit} function Main__exitHandler() { # @description signal handler for end of the program (clean or unclean). # probably redundant call, we already call the cleanup in main. exit} trap Main__interruptHandler INTtrap Main__terminationHandler TERMtrap Main__exitHandler EXITfunction Main__main() { # body}# catch signals and exittrap exit INT TERM EXITMain__main "$@" Hints and tips If something does not work for some reason, try to reorder the code. Order is important and not always intuitive. do not even consider working with tcsh. it does not support functions, and it's horrible in general. Hope it helps, although please note. If you have to use the kind of things I wrote here, it means that your problem is too complex to be solved with shell. use another language. I had to use it due to human factors and legacy. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/78497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14437/"
]
} |
78,523 | I'm curious about OpenID. While I agree that the idea of unified credentials is great, I have a few reservations. What is to prevent an OpenID provider from going crazy and holding the OpenID accounts they have hostage until you pay $n? If I decide I don't like the provider I'm with, is there a way to migrate to a different provider without losing all my information at various sites? Edit: I feel like my question is being misunderstood. It has been said that I can simply create a delegation and this is partially true. I can do this if I haven't already created an account at, for example, SO. If I decide to set up my own OpenID provider at some point, there is no way that I can see to move and keep my account information. That is the sort of thing I was wondering about. Second Edit: I see that there is a uservoice about adding this to SO. Link | This is why you can use OpenID delegation , i.e. you set up two META tags on your personal website and then you can use that site's URL as an alias for your current OpenID provider of choice. Should it get unfriendly you just switch to another and update your tags. Additionally you can always operate your own OpenID identity provider (if you have a server with, for example, a web server and PHP on it). I use phpMyID for this. Update : regarding the updated question: OpenID consumers (sites where you log in using OpenID) may allow you to switch the OpenID used for sign-on at their discretion. Sourceforge, for example, does. To prevent problems it's best to use delegation right from the start. Otherwise this is a necessary limitation imposed by OpenID's design. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/78523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10738/"
]
} |
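For reference, the delegation the answer describes amounts to a pair of link elements in the head of your personal page, pointing at your chosen provider (the answer loosely calls these META tags, but they are link tags; all URLs below are placeholder examples, not real endpoints):

```html
<!-- OpenID 1.x delegation (placeholder URLs) -->
<link rel="openid.server" href="https://openid.example-provider.com/auth" />
<link rel="openid.delegate" href="https://alice.example-provider.com/" />

<!-- OpenID 2.0 equivalents -->
<link rel="openid2.provider" href="https://openid.example-provider.com/auth" />
<link rel="openid2.local_id" href="https://alice.example-provider.com/" />
```

Switching providers then means editing these href values on your own page; consumer sites keep seeing your own URL as the identity.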
78,536 | I want to do something like:

MyObject myObj = GetMyObj(); // Create and fill a new object
MyObject newObj = myObj.Clone();

And then make changes to the new object that are not reflected in the original object. I don't often need this functionality, so when it's been necessary, I've resorted to creating a new object and then copying each property individually, but it always leaves me with the feeling that there is a better or more elegant way of handling the situation.

How can I clone or deep copy an object so that the cloned object can be modified without any changes being reflected in the original object? | Whereas one approach is to implement the ICloneable interface (described here, so I won't regurgitate), here's a nice deep clone object copier I found on The Code Project a while ago and incorporated into our code. As mentioned elsewhere, it requires your objects to be serializable.

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

/// <summary>
/// Reference Article http://www.codeproject.com/KB/tips/SerializedObjectCloner.aspx
/// Provides a method for performing a deep copy of an object.
/// Binary Serialization is used to perform the copy.
/// </summary>
public static class ObjectCopier
{
    /// <summary>
    /// Perform a deep copy of the object via serialization.
    /// </summary>
    /// <typeparam name="T">The type of object being copied.</typeparam>
    /// <param name="source">The object instance to copy.</param>
    /// <returns>A deep copy of the object.</returns>
    public static T Clone<T>(T source)
    {
        if (!typeof(T).IsSerializable)
        {
            throw new ArgumentException("The type must be serializable.", nameof(source));
        }

        // Don't serialize a null object, simply return the default for that object
        if (ReferenceEquals(source, null))
        {
            return default;
        }

        using (Stream stream = new MemoryStream())
        {
            IFormatter formatter = new BinaryFormatter();
            formatter.Serialize(stream, source);
            stream.Seek(0, SeekOrigin.Begin);
            return (T)formatter.Deserialize(stream);
        }
    }
}

The idea is that it serializes your object and then deserializes it into a fresh object. The benefit is that you don't have to concern yourself with cloning everything when an object gets too complex.

In case you prefer to use the new extension methods of C# 3.0, change the method to have the following signature:

public static T Clone<T>(this T source)
{
    // ...
}

Now the method call simply becomes objectBeingCloned.Clone();.

EDIT (January 10 2015): Thought I'd revisit this, to mention I recently started using (Newtonsoft) Json to do this; it should be lighter, and avoids the overhead of [Serializable] tags. (NB @atconway has pointed out in the comments that private members are not cloned using the JSON method.)

/// <summary>
/// Perform a deep copy of the object, using Json as a serialization method.
/// NOTE: Private members are not cloned using this method.
/// </summary>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>The copied object.</returns>
public static T CloneJson<T>(this T source)
{
    // Don't serialize a null object, simply return the default for that object
    if (ReferenceEquals(source, null))
    {
        return default;
    }

    // Initialize inner objects individually.
    // For example, a list property may be initialized with values in the default
    // constructor, but cleared in 'source' - without ObjectCreationHandling.Replace
    // the default constructor values would be added to the result.
    var deserializeSettings = new JsonSerializerSettings { ObjectCreationHandling = ObjectCreationHandling.Replace };
    return JsonConvert.DeserializeObject<T>(JsonConvert.SerializeObject(source), deserializeSettings);
} | {
"score": 12,
"source": [
"https://Stackoverflow.com/questions/78536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3742/"
]
} |
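The clone-by-round-trip idea in the answer above is language-agnostic. Since the record's C# is already complete, here is the same technique sketched in Python for comparison, using pickle from the standard library (the sample data is made up), to show why the copy is fully independent of the original:

```python
import pickle

def clone(source):
    # Same idea as the answer: serialize the object graph,
    # then deserialize it into a completely fresh object graph.
    return pickle.loads(pickle.dumps(source))

original = {"name": "My product", "tags": ["a", "b"]}
copy_of = clone(original)
copy_of["tags"].append("c")
print(original["tags"])  # → ['a', 'b']  (the original is untouched)
```

As with the C# version, the trade-off is that everything reachable from the object must be serializable.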
78,548 | What is your way of passing data to the Master Page (using ASP.NET MVC) without breaking MVC rules? Personally, I prefer to code an abstract controller (base controller) or base class which is passed to all views. | If you prefer your views to have strongly typed view data classes this might work for you. Other solutions are probably more correct, but this is a nice balance between design and practicality IMHO.

The master page takes a strongly typed view data class containing only information relevant to it:

public class MasterViewData
{
    public ICollection<string> Navigation { get; set; }
}

Each view using that master page takes a strongly typed view data class containing its information and deriving from the master page's view data:

public class IndexViewData : MasterViewData
{
    public string Name { get; set; }
    public float Price { get; set; }
}

Since I don't want individual controllers to know anything about putting together the master page's data, I encapsulate that logic into a factory which is passed to each controller:

public interface IViewDataFactory
{
    T Create<T>() where T : MasterViewData, new();
}

public class ProductController : Controller
{
    public ProductController(IViewDataFactory viewDataFactory)
    ...

    public ActionResult Index()
    {
        var viewData = viewDataFactory.Create<IndexViewData>();
        viewData.Name = "My product";
        viewData.Price = 9.95;
        return View("Index", viewData);
    }
}

Inheritance matches the master-to-view relationship well, but when it comes to rendering partials / user controls I compose their view data into the page's view data, e.g.

public class IndexViewData : MasterViewData
{
    public string Name { get; set; }
    public float Price { get; set; }
    public SubViewData SubViewData { get; set; }
}

<% Html.RenderPartial("Sub", Model.SubViewData); %>

This is example code only and is not intended to compile as is. Designed for ASP.Net MVC 1.0. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/78548",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/347616/"
]
} |