124,166
What is the best way to make a <div> fade away after a given amount of time (without using some of the JavaScript libraries available). I'm looking for a very lightweight solution not requiring a huge JavaScript library to be sent to the browser.
Not sure why you'd be so against using something like jQuery, which would make accomplishing this effect all but trivial, but essentially you need to schedule a series of changes to the -moz-opacity, opacity, and filter:alpha CSS properties with setTimeout(). Or use jQuery and wrap a fadeOut() call in setTimeout(). Your choice.
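For reference, here is a minimal sketch of that timer-based approach (the function name, element id, and timings are all made up for illustration; the filter line covers old IE):

```javascript
// Fade an element out by stepping its opacity down on a timer.
// Works on any object with a `style` property; in a page you would
// pass document.getElementById('message') or similar.
function fadeOut(el, durationMs, stepMs) {
  var opacity = 1;
  var timer = setInterval(function () {
    opacity -= stepMs / durationMs;
    if (opacity <= 0) {
      opacity = 0;
      clearInterval(timer);
      el.style.display = 'none';          // fully hidden: remove from flow
    }
    el.style.opacity = opacity;           // standard browsers
    el.style.filter = 'alpha(opacity=' + Math.round(opacity * 100) + ')'; // old IE
  }, stepMs);
}

// Start the fade 3 seconds after this runs:
// setTimeout(function () { fadeOut(document.getElementById('message'), 1000, 50); }, 3000);
```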
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124166", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16440/" ] }
124,167
Please explain to me why the very last echo statement is blank? I expect that XCODE is incremented in the while loop to a value of 1:

#!/bin/bash
OUTPUT="name1 ip ip status"   # normally output of another command with multi line output
if [ -z "$OUTPUT" ]
then
    echo "Status WARN: No messages from SMcli"
    exit $STATE_WARNING
else
    echo "$OUTPUT" | while read NAME IP1 IP2 STATUS
    do
        if [ "$STATUS" != "Optimal" ]
        then
            echo "CRIT: $NAME - $STATUS"
            echo $((++XCODE))
        else
            echo "OK: $NAME - $STATUS"
        fi
    done
fi
echo $XCODE

I've tried using the following statement instead of the ++XCODE method:

XCODE=`expr $XCODE + 1`

and it too won't print outside of the while statement. I think I'm missing something about variable scope here, but the ol' man page isn't showing it to me.
Because you're piping into the while loop, a sub-shell is created to run the while loop. Now this child process has its own copy of the environment and can't pass any variables back to its parent (as in any Unix process). Therefore you'll need to restructure so that you're not piping into the loop. Alternatively you could run in a function, for example, and echo the value you want returned from the sub-process. http://tldp.org/LDP/abs/html/subshells.html#SUBSHELL
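A minimal sketch of that restructuring: feed the loop with a redirect instead of a pipe, so the while loop runs in the current shell and XCODE survives it. The OUTPUT value here is made up to stand in for the real SMcli output.

```shell
#!/bin/sh
OUTPUT="name1 ip1 ip2 Optimal
name2 ip3 ip4 Failed"

XCODE=0
while read NAME IP1 IP2 STATUS
do
    if [ "$STATUS" != "Optimal" ]
    then
        echo "CRIT: $NAME - $STATUS"
        XCODE=$((XCODE + 1))
    else
        echo "OK: $NAME - $STATUS"
    fi
done <<EOF
$OUTPUT
EOF

echo "$XCODE"   # prints 1 - the increment is visible outside the loop
```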
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/124167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14230/" ] }
124,205
I would like to do a lookup of tables in my SQL Server 2005 Express database based on table name. In MySQL I would use SHOW TABLES LIKE "Datasheet%" , but in T-SQL this throws an error (it tries to look for a SHOW stored procedure and fails). Is this possible, and if so, how?
This will give you a list of the tables in the current database:

Select Table_name as "Table name"
From   Information_schema.Tables
Where  Table_type = 'BASE TABLE'
  and  Objectproperty(Object_id(Table_name), 'IsMsShipped') = 0

Some other useful T-SQL bits can be found here: http://www.devx.com/tips/Tip/28529
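To mirror the MySQL SHOW TABLES LIKE filter from the question, the same view can be filtered by name (the pattern here is just the question's example):

```sql
Select Table_name as "Table name"
From   Information_schema.Tables
Where  Table_type = 'BASE TABLE'
  and  Table_name like 'Datasheet%'
```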
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/124205", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21402/" ] }
124,210
In order to help my team write testable code, I came up with this simple list of best practices for making our C# code base more testable. (Some of the points refer to limitations of Rhino Mocks, a mocking framework for C#, but the rules may apply more generally as well.) Does anyone have any best practices that they follow?

To maximize the testability of code, follow these rules:

- Write the test first, then the code. Reason: This ensures that you write testable code and that every line of code gets tests written for it.
- Design classes using dependency injection. Reason: You cannot mock or test what cannot be seen.
- Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter. Reason: Allows the business logic to be tested while the parts that can't be tested (the UI) are minimized.
- Do not write static methods or classes. Reason: Static methods are difficult or impossible to isolate, and Rhino Mocks is unable to mock them.
- Program off interfaces, not classes. Reason: Using interfaces clarifies the relationships between objects. An interface should define a service that an object needs from its environment. Also, interfaces can be easily mocked using Rhino Mocks and other mocking frameworks.
- Isolate external dependencies. Reason: Unresolved external dependencies cannot be tested.
- Mark as virtual the methods you intend to mock. Reason: Rhino Mocks is unable to mock non-virtual methods.
Definitely a good list. Here are a few thoughts on it:

Write the test first, then the code. I agree, at a high level. But I'd be more specific: "Write a test first, then write just enough code to pass the test, and repeat." Otherwise, I'd be afraid that my unit tests would look more like integration or acceptance tests.

Design classes using dependency injection. Agreed. When an object creates its own dependencies, you have no control over them. Inversion of Control / Dependency Injection gives you that control, allowing you to isolate the object under test with mocks/stubs/etc. This is how you test objects in isolation.

Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter. Agreed. Note that even the presenter/controller can be tested using DI/IoC, by handing it a stubbed/mocked view and model. Check out Presenter First TDD for more on that.

Do not write static methods or classes. Not sure I agree with this one. It is possible to unit test a static method/class without using mocks. So perhaps this is one of those Rhino Mocks-specific rules you mentioned.

Program off interfaces, not classes. I agree, but for a slightly different reason. Interfaces provide a great deal of flexibility to the software developer - beyond just support for various mock object frameworks. For example, it is not possible to support DI properly without interfaces.

Isolate external dependencies. Agreed. Hide external dependencies behind your own facade or adapter (as appropriate) with an interface. This will allow you to isolate your software from the external dependency, be it a web service, a queue, a database or something else. This is especially important when your team doesn't control the dependency (a.k.a. external).

Mark as virtual the methods you intend to mock. That's a limitation of Rhino Mocks. In an environment that prefers hand-coded stubs over a mock object framework, that wouldn't be necessary.
And, a couple of new points to consider:

Use creational design patterns. This will assist with DI, but it also allows you to isolate that code and test it independently of other logic.

Write tests using Bill Wake's Arrange/Act/Assert technique. This technique makes it very clear what configuration is necessary, what is actually being tested, and what is expected.

Don't be afraid to roll your own mocks/stubs. Often, you'll find that using mock object frameworks makes your tests incredibly hard to read. By rolling your own, you'll have complete control over your mocks/stubs, and you'll be able to keep your tests readable. (Refer back to the previous point.)

Avoid the temptation to refactor duplication out of your unit tests into abstract base classes or setup/teardown methods. Doing so hides configuration/clean-up code from the developer trying to grok the unit test. In this case, the clarity of each individual test is more important than refactoring out duplication.

Implement Continuous Integration. Check in your code on every "green bar." Build your software and run your full suite of unit tests on every check-in. (Sure, this isn't a coding practice per se, but it is an incredible tool for keeping your software clean and fully integrated.)
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/124210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10475/" ] }
124,235
I work on Linux all the time and I'm clueless about Windows, not even having a Windows box. Is Git nowadays working on Windows? Or am I making problems for my Windows pals by using it?
As far as I can tell, msysgit works perfectly well under Windows Vista. That's after a whole two months of checking out plugins and applications for Ruby on Rails :-) Anyway, it was a breeze to install, no problems.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124235", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6068/" ] }
124,240
Been using PHP/MySQL for a little while now, and I'm wondering if there are any specific advantages (performance or otherwise) to using mysql_fetch_object() vs mysql_fetch_assoc() / mysql_fetch_array() .
Performance-wise it doesn't matter which one you use. The difference is that mysql_fetch_object() returns an object:

while ($row = mysql_fetch_object($result)) {
    echo $row->user_id;
    echo $row->fullname;
}

mysql_fetch_assoc() returns an associative array:

while ($row = mysql_fetch_assoc($result)) {
    echo $row["user_id"];
    echo $row["fullname"];
}

and mysql_fetch_array() returns an array indexed numerically (and, by default, by field name as well):

while ($row = mysql_fetch_array($result)) {
    echo $row[0];
    echo $row[1];
}
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2227/" ] }
124,266
What is an elegant way to sort objects in PHP? I would love to accomplish something similar to this:

$sortedObjectArray = sort($unsortedObjectArray, $Object->weight);

Basically, specify the array I want to sort as well as the field I want to sort on. I looked into multidimensional array sorting and there might be something useful there, but I don't see anything elegant or obvious.
Almost verbatim from the manual:

function compare_weights($a, $b) {
    if ($a->weight == $b->weight) {
        return 0;
    }
    return ($a->weight < $b->weight) ? -1 : 1;
}

usort($unsortedObjectArray, 'compare_weights');

If you want objects to be able to sort themselves, see example 3 here: http://php.net/usort
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/124266", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8880/" ] }
124,269
What is the simplest SOAP example using Javascript? To be as useful as possible, the answer should:

- Be functional (in other words actually work)
- Send at least one parameter that can be set elsewhere in the code
- Process at least one result value that can be read elsewhere in the code
- Work with most modern browser versions
- Be as clear and as short as possible, without using an external library
This is the simplest JavaScript SOAP Client I can create.

<html>
<head>
    <title>SOAP JavaScript Client Test</title>
    <script type="text/javascript">
        function soap() {
            var xmlhttp = new XMLHttpRequest();
            xmlhttp.open('POST', 'https://somesoapurl.com/', true);

            // build SOAP request
            var sr =
                '<?xml version="1.0" encoding="utf-8"?>' +
                '<soapenv:Envelope ' +
                    'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ' +
                    'xmlns:api="http://127.0.0.1/Integrics/Enswitch/API" ' +
                    'xmlns:xsd="http://www.w3.org/2001/XMLSchema" ' +
                    'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">' +
                    '<soapenv:Body>' +
                        '<api:some_api_call soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">' +
                            '<username xsi:type="xsd:string">login_username</username>' +
                            '<password xsi:type="xsd:string">password</password>' +
                        '</api:some_api_call>' +
                    '</soapenv:Body>' +
                '</soapenv:Envelope>';

            xmlhttp.onreadystatechange = function () {
                if (xmlhttp.readyState == 4) {
                    if (xmlhttp.status == 200) {
                        alert(xmlhttp.responseText);
                        // alert('done. use firebug/console to see network response');
                    }
                }
            };

            // Send the POST request
            xmlhttp.setRequestHeader('Content-Type', 'text/xml');
            xmlhttp.send(sr);
        }
    </script>
</head>
<body>
    <form name="Demo" action="" method="post">
        <div>
            <input type="button" value="Soap" onclick="soap();" />
        </div>
    </form>
</body>
</html>
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/124269", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15985/" ] }
124,326
JavaScript allows functions to be treated as objects; if you first define a variable as a function, you can subsequently add properties to that function. How do you do the reverse, and add a function to an "object"?

This works:

var foo = function() { return 1; };
foo.baz = "qqqq";

At this point, foo() calls the function, and foo.baz has the value "qqqq". However, if you do the property assignment part first, how do you subsequently assign a function to the variable?

var bar = { baz: "qqqq" };

What can I do now to arrange for bar.baz to have the value "qqqq" and bar() to call the function?
It's easy to be confused here, but you can't (easily or cleanly or, as far as I know, at all) do what you want. Hopefully this will help clear things up.

First, every object in Javascript inherits from the Object object.

// these do the same thing
var foo = new Object();
var bar = {};

Second, functions ARE objects in Javascript. Specifically, they're Function objects. The Function object inherits from the Object object. Check out the Function constructor:

var foo = new Function();
var bar = function(){};
function baz(){};

Once you declare a variable to be an "Object", you can't (easily or cleanly or, as far as I know, at all) convert it to a Function object. You'd need to declare a new object of type Function (with the Function constructor, by assigning a variable an anonymous function, etc.), and copy over any properties or methods from your old object.

Finally, anticipating a possible question: even once something is declared as a function, you can't (as far as I know) change the function body/source.
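A sketch of the copy-the-properties route described above (makeCallable is a made-up helper name, not a built-in):

```javascript
// You can't turn an existing plain object into a callable, but you can
// go the other way: start from a function and copy the object's own
// properties onto it.
var bar = { baz: "qqqq" };

function makeCallable(obj, fn) {
  for (var key in obj) {
    if (obj.hasOwnProperty(key)) {
      fn[key] = obj[key];   // copy each own property onto the function
    }
  }
  return fn;
}

var bar2 = makeCallable(bar, function () { return 1; });
bar2();      // 1
bar2.baz;    // "qqqq"
```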
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124326", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11543/" ] }
124,336
I'm not sure if this is a strange thing to do or not, or if it is somehow a code smell... but I was wondering if there was a way (some sort of OOP pattern would be nice) to "cast" a base type to a form of its derived type. I know this makes little sense, as the derived type will have additional functionality that the parent doesn't offer, which is in itself not fundamentally sound. But is there some way to do this? Here is a code example so I can better explain what I'm asking.

public class SomeBaseClass {
    public string GetBaseClassName { get; set; }
    public bool BooleanEvaluator { get; set; }
}

public class SomeDerivedClass : SomeBaseClass {
    public void Insert(SqlConnection connection) {
        // ...random connection stuff
        cmd.Parameters["IsItTrue"].Value = this.BooleanEvaluator;
        // ...
    }
}

public static void Main(object[] args) {
    SomeBaseClass baseClass = new SomeBaseClass();
    SomeDerivedClass derClass = (SomeDerivedClass)baseClass;
    derClass.Insert(new SqlConnection());
}

I know this seems goofy, but is there any way to accomplish something of this sort?
Not soundly, in "managed" languages. This is downcasting, and there is no sane way to handle it, for exactly the reason you described (subclasses provide more than base classes - where does this "more" come from?). If you really want similar behaviour for a particular hierarchy, you could give the derived types constructors that take the base type as a prototype. One could build something with reflection that handles the simple cases (more specific types that have no additional state). In general, just redesign to avoid the problem.

Edit: Whoops, you can't write conversion operators between base/derived types. An oddity of Microsoft trying to "protect you" against yourself. Ah well, at least they're nowhere near as bad as Sun.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13688/" ] }
124,411
But here's an example:

Dim desiredType As Type
If IsNumeric(desiredType) Then ...

EDIT: I only know the Type, not the Value as a string. Ok, so unfortunately I have to cycle through the TypeCode. But this is a nice way to do it:

if (desiredType.IsArray)
    return 0;

switch (Type.GetTypeCode(desiredType)) {
    case 3:
    case 6:
    case 7:
    case 9:
    case 11:
    case 13:
    case 14:
    case 15:
        return 1;
}
return 0;
A few years late here, but here's my solution (you can choose whether to include Boolean). It solves for the Nullable case. An xUnit test is included.

/// <summary>
/// Determines if a type is numeric. Nullable numeric types are considered numeric.
/// </summary>
/// <remarks>
/// Boolean is not considered numeric.
/// </remarks>
public static bool IsNumericType(Type type)
{
    if (type == null)
    {
        return false;
    }

    switch (Type.GetTypeCode(type))
    {
        case TypeCode.Byte:
        case TypeCode.Decimal:
        case TypeCode.Double:
        case TypeCode.Int16:
        case TypeCode.Int32:
        case TypeCode.Int64:
        case TypeCode.SByte:
        case TypeCode.Single:
        case TypeCode.UInt16:
        case TypeCode.UInt32:
        case TypeCode.UInt64:
            return true;

        case TypeCode.Object:
            if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>))
            {
                return IsNumericType(Nullable.GetUnderlyingType(type));
            }
            return false;
    }

    return false;
}

/// <summary>
/// Tests the IsNumericType method.
/// </summary>
[Fact]
public void IsNumericTypeTest()
{
    // Non-numeric types
    Assert.False(TypeHelper.IsNumericType(null));
    Assert.False(TypeHelper.IsNumericType(typeof(object)));
    Assert.False(TypeHelper.IsNumericType(typeof(DBNull)));
    Assert.False(TypeHelper.IsNumericType(typeof(bool)));
    Assert.False(TypeHelper.IsNumericType(typeof(char)));
    Assert.False(TypeHelper.IsNumericType(typeof(DateTime)));
    Assert.False(TypeHelper.IsNumericType(typeof(string)));

    // Arrays of numeric and non-numeric types
    Assert.False(TypeHelper.IsNumericType(typeof(object[])));
    Assert.False(TypeHelper.IsNumericType(typeof(DBNull[])));
    Assert.False(TypeHelper.IsNumericType(typeof(bool[])));
    Assert.False(TypeHelper.IsNumericType(typeof(char[])));
    Assert.False(TypeHelper.IsNumericType(typeof(DateTime[])));
    Assert.False(TypeHelper.IsNumericType(typeof(string[])));
    Assert.False(TypeHelper.IsNumericType(typeof(byte[])));
    Assert.False(TypeHelper.IsNumericType(typeof(decimal[])));
    Assert.False(TypeHelper.IsNumericType(typeof(double[])));
    Assert.False(TypeHelper.IsNumericType(typeof(short[])));
    Assert.False(TypeHelper.IsNumericType(typeof(int[])));
    Assert.False(TypeHelper.IsNumericType(typeof(long[])));
    Assert.False(TypeHelper.IsNumericType(typeof(sbyte[])));
    Assert.False(TypeHelper.IsNumericType(typeof(float[])));
    Assert.False(TypeHelper.IsNumericType(typeof(ushort[])));
    Assert.False(TypeHelper.IsNumericType(typeof(uint[])));
    Assert.False(TypeHelper.IsNumericType(typeof(ulong[])));

    // Numeric types
    Assert.True(TypeHelper.IsNumericType(typeof(byte)));
    Assert.True(TypeHelper.IsNumericType(typeof(decimal)));
    Assert.True(TypeHelper.IsNumericType(typeof(double)));
    Assert.True(TypeHelper.IsNumericType(typeof(short)));
    Assert.True(TypeHelper.IsNumericType(typeof(int)));
    Assert.True(TypeHelper.IsNumericType(typeof(long)));
    Assert.True(TypeHelper.IsNumericType(typeof(sbyte)));
    Assert.True(TypeHelper.IsNumericType(typeof(float)));
    Assert.True(TypeHelper.IsNumericType(typeof(ushort)));
    Assert.True(TypeHelper.IsNumericType(typeof(uint)));
    Assert.True(TypeHelper.IsNumericType(typeof(ulong)));

    // Nullable non-numeric types
    Assert.False(TypeHelper.IsNumericType(typeof(bool?)));
    Assert.False(TypeHelper.IsNumericType(typeof(char?)));
    Assert.False(TypeHelper.IsNumericType(typeof(DateTime?)));

    // Nullable numeric types
    Assert.True(TypeHelper.IsNumericType(typeof(byte?)));
    Assert.True(TypeHelper.IsNumericType(typeof(decimal?)));
    Assert.True(TypeHelper.IsNumericType(typeof(double?)));
    Assert.True(TypeHelper.IsNumericType(typeof(short?)));
    Assert.True(TypeHelper.IsNumericType(typeof(int?)));
    Assert.True(TypeHelper.IsNumericType(typeof(long?)));
    Assert.True(TypeHelper.IsNumericType(typeof(sbyte?)));
    Assert.True(TypeHelper.IsNumericType(typeof(float?)));
    Assert.True(TypeHelper.IsNumericType(typeof(ushort?)));
    Assert.True(TypeHelper.IsNumericType(typeof(uint?)));
    Assert.True(TypeHelper.IsNumericType(typeof(ulong?)));

    // Testing with GetType because of handling with non-numerics. See:
    // http://msdn.microsoft.com/en-us/library/ms366789.aspx

    // Using GetType - non-numeric
    Assert.False(TypeHelper.IsNumericType((new object()).GetType()));
    Assert.False(TypeHelper.IsNumericType(DBNull.Value.GetType()));
    Assert.False(TypeHelper.IsNumericType(true.GetType()));
    Assert.False(TypeHelper.IsNumericType('a'.GetType()));
    Assert.False(TypeHelper.IsNumericType((new DateTime(2009, 1, 1)).GetType()));
    Assert.False(TypeHelper.IsNumericType(string.Empty.GetType()));

    // Using GetType - numeric types
    // ReSharper disable RedundantCast
    Assert.True(TypeHelper.IsNumericType((new byte()).GetType()));
    Assert.True(TypeHelper.IsNumericType(43.2m.GetType()));
    Assert.True(TypeHelper.IsNumericType(43.2d.GetType()));
    Assert.True(TypeHelper.IsNumericType(((short)2).GetType()));
    Assert.True(TypeHelper.IsNumericType(((int)2).GetType()));
    Assert.True(TypeHelper.IsNumericType(((long)2).GetType()));
    Assert.True(TypeHelper.IsNumericType(((sbyte)2).GetType()));
    Assert.True(TypeHelper.IsNumericType(2f.GetType()));
    Assert.True(TypeHelper.IsNumericType(((ushort)2).GetType()));
    Assert.True(TypeHelper.IsNumericType(((uint)2).GetType()));
    Assert.True(TypeHelper.IsNumericType(((ulong)2).GetType()));
    // ReSharper restore RedundantCast

    // Using GetType - nullable non-numeric types
    bool? nullableBool = true;
    Assert.False(TypeHelper.IsNumericType(nullableBool.GetType()));
    char? nullableChar = ' ';
    Assert.False(TypeHelper.IsNumericType(nullableChar.GetType()));
    DateTime? nullableDateTime = new DateTime(2009, 1, 1);
    Assert.False(TypeHelper.IsNumericType(nullableDateTime.GetType()));

    // Using GetType - nullable numeric types
    byte? nullableByte = 12;
    Assert.True(TypeHelper.IsNumericType(nullableByte.GetType()));
    decimal? nullableDecimal = 12.2m;
    Assert.True(TypeHelper.IsNumericType(nullableDecimal.GetType()));
    double? nullableDouble = 12.32;
    Assert.True(TypeHelper.IsNumericType(nullableDouble.GetType()));
    short? nullableInt16 = 12;
    Assert.True(TypeHelper.IsNumericType(nullableInt16.GetType()));
    int? nullableInt32 = 12;
    Assert.True(TypeHelper.IsNumericType(nullableInt32.GetType()));
    long? nullableInt64 = 12;
    Assert.True(TypeHelper.IsNumericType(nullableInt64.GetType()));
    sbyte? nullableSByte = 12;
    Assert.True(TypeHelper.IsNumericType(nullableSByte.GetType()));
    float? nullableSingle = 3.2f;
    Assert.True(TypeHelper.IsNumericType(nullableSingle.GetType()));
    ushort? nullableUInt16 = 12;
    Assert.True(TypeHelper.IsNumericType(nullableUInt16.GetType()));
    uint? nullableUInt32 = 12;
    Assert.True(TypeHelper.IsNumericType(nullableUInt32.GetType()));
    ulong? nullableUInt64 = 12;
    Assert.True(TypeHelper.IsNumericType(nullableUInt64.GetType()));
}
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/124411", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14484/" ] }
124,417
I want to write a query like this:

SELECT o.OrderId, MAX(o.NegotiatedPrice, o.SuggestedPrice)
FROM Order o

But this isn't how the MAX function works, right? It is an aggregate function, so it expects a single parameter and then returns the MAX of all rows. Does anyone know how to do it my way?
You'd need to make a user-defined function if you wanted syntax similar to your example, but you could do what you want to do, inline, fairly easily with a CASE statement, as the others have said. The UDF could be something like this:

create function dbo.InlineMax(@val1 int, @val2 int)
returns int
as
begin
    if @val1 > @val2
        return @val1
    return isnull(@val2, @val1)
end

... and you would call it like so ...

SELECT o.OrderId, dbo.InlineMax(o.NegotiatedPrice, o.SuggestedPrice)
FROM Order o
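For completeness, the inline CASE version the answer alludes to might look like this (column alias is illustrative):

```sql
SELECT o.OrderId,
       CASE WHEN o.NegotiatedPrice > o.SuggestedPrice
            THEN o.NegotiatedPrice
            ELSE o.SuggestedPrice
       END AS MaxPrice
FROM [Order] o
```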
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/124417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14101/" ] }
124,462
Is there a way in PHP to make asynchronous HTTP calls? I don't care about the response, I just want to do something like file_get_contents() , but not wait for the request to finish before executing the rest of my code. This would be super useful for setting off "events" of a sort in my application, or triggering long processes. Any ideas?
The answer I'd previously accepted didn't work. It still waited for responses. This does work though, taken from "How do I make an asynchronous GET request in PHP?":

function post_without_wait($url, $params)
{
    foreach ($params as $key => &$val) {
        if (is_array($val)) $val = implode(',', $val);
        $post_params[] = $key . '=' . urlencode($val);
    }
    $post_string = implode('&', $post_params);

    $parts = parse_url($url);

    $fp = fsockopen($parts['host'],
                    isset($parts['port']) ? $parts['port'] : 80,
                    $errno, $errstr, 30);

    $out  = "POST " . $parts['path'] . " HTTP/1.1\r\n";
    $out .= "Host: " . $parts['host'] . "\r\n";
    $out .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $out .= "Content-Length: " . strlen($post_string) . "\r\n";
    $out .= "Connection: Close\r\n\r\n";
    if (isset($post_string)) $out .= $post_string;

    fwrite($fp, $out);
    fclose($fp);
}
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10680/" ] }
124,492
I need a short code snippet to get a directory listing from an HTTP server. Thanks
A few important considerations before the code:

1. The HTTP server has to be configured to allow directory listings for the directories you want;
2. Because directory listings are normal HTML pages, there is no standard that defines the format of a directory listing;
3. Due to consideration 2, you are in the land where you have to put in specific code for each server.

My choice is to use regular expressions. This allows for rapid parsing and customization. You can get specific regular expression patterns per site, and that way you have a very modular approach. Use an external source for mapping URLs to regular expression patterns if you plan to enhance the parsing module with new site support without changing the source code.

Example to print the directory listing from http://www.ibiblio.org/pub/:

namespace Example
{
    using System;
    using System.Net;
    using System.IO;
    using System.Text.RegularExpressions;

    public class MyExample
    {
        public static string GetDirectoryListingRegexForUrl(string url)
        {
            if (url.Equals("http://www.ibiblio.org/pub/"))
            {
                return "<a href=\".*\">(?<name>.*)</a>";
            }
            throw new NotSupportedException();
        }

        public static void Main(String[] args)
        {
            string url = "http://www.ibiblio.org/pub/";
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    string html = reader.ReadToEnd();
                    Regex regex = new Regex(GetDirectoryListingRegexForUrl(url));
                    MatchCollection matches = regex.Matches(html);
                    if (matches.Count > 0)
                    {
                        foreach (Match match in matches)
                        {
                            if (match.Success)
                            {
                                Console.WriteLine(match.Groups["name"]);
                            }
                        }
                    }
                }
            }
            Console.ReadLine();
        }
    }
}
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124492", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
124,549
How exactly do DLL files work? There seems to be an awful lot of them, but I don't know what they are or how they work. So, what's the deal with them?
What is a DLL?

Dynamic Link Libraries (DLLs) are like EXEs, but they are not directly executable. They are similar to .so files in Linux/Unix. That is to say, DLLs are Microsoft's implementation of shared libraries. DLLs are so much like an EXE that the file format itself is the same. Both EXEs and DLLs are based on the Portable Executable (PE) file format. DLLs can also contain COM components and .NET libraries.

What does a DLL contain?

A DLL contains functions, classes, variables, UIs and resources (such as icons, images, files, ...) that an EXE, or another DLL, uses.

Types of libraries:

On virtually all operating systems, there are two types of libraries: static libraries and dynamic libraries. In Windows the file extensions are as follows: static libraries (.lib) and dynamic libraries (.dll). The main difference is that static libraries are linked into the executable at compile time, whereas dynamically linked libraries are not linked until run time.

More on static and dynamic libraries:

You don't normally see static libraries on your computer, because a static library is embedded directly inside of a module (EXE or DLL). A dynamic library is a stand-alone file. A DLL can be changed at any time and is only loaded at run time when an EXE explicitly loads the DLL. A static library cannot be changed once it is compiled within the EXE. A DLL can be updated individually without updating the EXE itself.

Loading a DLL:

A program loads a DLL at startup, via the Win32 API LoadLibrary, or when it is a dependency of another DLL. A program uses GetProcAddress to load a function, or LoadResource to load a resource.

Further reading:

Please check MSDN or Wikipedia for further reading. They are also the sources of this answer.
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/124549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1615/" ] }
124,585
This question is specifically related to overriding the equals() method for objects with a large number of fields. First off, let me say that this large object cannot be broken down into multiple components without violating OO principles, so telling me "no class should have more than x fields" won't help.

Moving on, the problem came to a head when I forgot to check one of the fields for equality. Therefore, my equals method was incorrect. Then I thought to use reflection:

--code removed because it was too distracting--

The purpose of this post isn't necessarily to refactor the code (this isn't even the code I am using), but instead to get input on whether or not this is a good idea.

Pros:
- If a new field is added, it is automatically included
- The method is much more terse than 30 if statements

Cons:
- If a new field is added, it is automatically included; sometimes this is undesirable
- Performance: this has to be slower; I don't feel the need to break out a profiler
- Whitelisting certain fields to ignore in the comparison is a little ugly

Any thoughts?
If you did want to whitelist for performance reasons, consider using an annotation to indicate which fields to compare. Also, this implementation won't work if your fields don't have good implementations of equals().

P.S. If you go this route for equals(), don't forget to do something similar for hashCode().

P.P.S. I trust you already considered HashCodeBuilder and EqualsBuilder.
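A sketch of that annotation-whitelist idea in Java (all names here - @Compared, reflectiveEquals, Item - are invented for illustration, and error handling is minimal):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Objects;

// Only fields marked @Compared take part in the reflective equality check.
public class ReflectiveEquality {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Compared {}

    public static boolean reflectiveEquals(Object a, Object b) {
        if (a == b) return true;
        if (a == null || b == null || a.getClass() != b.getClass()) return false;
        try {
            for (Field f : a.getClass().getDeclaredFields()) {
                if (!f.isAnnotationPresent(Compared.class)) continue; // whitelist
                f.setAccessible(true);
                // Relies on each field's type having a sensible equals()
                if (!Objects.equals(f.get(a), f.get(b))) return false;
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return true;
    }

    // Only 'id' participates in equality; 'cachedLabel' is ignored.
    public static class Item {
        @Compared int id;
        String cachedLabel;
        public Item(int id, String label) { this.id = id; this.cachedLabel = label; }
    }

    public static void main(String[] args) {
        System.out.println(reflectiveEquals(new Item(1, "a"), new Item(1, "b"))); // true
        System.out.println(reflectiveEquals(new Item(1, "a"), new Item(2, "a"))); // false
    }
}
```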
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402777/" ] }
124,604
Coming from a Perl 5 background, what are the advantages of moving to Perl 6 or Python?
There is no advantage to be gained by switching from Perl to Python. There is also no advantage to be gained by switching from Python to Perl. They are both equally capable. Choose your tools based on what you know and the problem you are trying to solve rather than on some sort of notion that one is somehow inherently better than the other. The only real advantage is if you are switching from a language you don't know to a language you do know, in which case your productivity will likely go up.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7598/" ] }
124,630
I'm currently turning an array of pixel values (originally created with a java.awt.image.PixelGrabber object) into an Image object using the following code:

public Image getImageFromArray(int[] pixels, int width, int height) {
    MemoryImageSource mis = new MemoryImageSource(width, height, pixels, 0, width);
    Toolkit tk = Toolkit.getDefaultToolkit();
    return tk.createImage(mis);
}

Is it possible to achieve the same result using classes from the ImageIO package(s) so I don't have to use the AWT Toolkit? Toolkit.getDefaultToolkit() does not seem to be 100% reliable and will sometimes throw an AWTError, whereas the ImageIO classes should always be available, which is why I'm interested in changing my method.
You can create the image without using ImageIO. Just create a BufferedImage using an image type matching the contents of the pixel array.

public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    WritableRaster raster = image.getRaster();
    raster.setPixels(0, 0, width, height, pixels);
    return image;
}

When working with the PixelGrabber, don't forget to extract the RGBA info from the pixel array before calling getImageFromArray. There's an example of this in the handlepixel method in the PixelGrabber javadoc. Once you do that, make sure the image type in the BufferedImage constructor is BufferedImage.TYPE_INT_ARGB.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124630", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1119/" ] }
124,649
In my Silverlight application, I can't seem to bring focus to a TextBox control. On the recommendation of various posts, I've set the IsTabStop property to True and I'm using TextBox.Focus(). Though the UserControl_Loaded event is firing, the TextBox control isn't getting focus. I've included my very simple code below. What am I missing? Thanks. Page.xaml <UserControl x:Class="TextboxFocusTest.Page" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Loaded="UserControl_Loaded" Width="400" Height="300"> <Grid x:Name="LayoutRoot" Background="White"> <StackPanel Width="150" VerticalAlignment="Center"> <TextBox x:Name="RegularTextBox" IsTabStop="True" /> </StackPanel> </Grid></UserControl> Page.xaml.cs using System.Windows;using System.Windows.Controls;namespace PasswordTextboxTest{ public partial class Page : UserControl { public Page() { InitializeComponent(); } private void UserControl_Loaded(object sender, RoutedEventArgs e) { RegularTextBox.Focus(); } }}
I found this on silverlight.net, and was able to get it to work for me by adding a call to System.Windows.Browser.HtmlPage.Plugin.Focus() prior to calling RegularTextBox.Focus(): private void UserControl_Loaded(object sender, RoutedEventArgs e) { System.Windows.Browser.HtmlPage.Plugin.Focus(); RegularTextBox.Focus(); }
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4115/" ] }
124,667
This is probably a really stupid newbie-sounding question to you developer type people, but I'm at a loss :( I've been trying to learn how to use Subversion for keeping the history of my code, but I'm finding it pretty confusing. I read the 'book' that comes with Subversion, but I didn't find it all that helpful. I'm using Windows, and I downloaded the TortoiseSVN GUI for it. All I really want to know how to do is to create a new project, put a file in it (any old file), and then update that file, just so I can see how it works. I created a 'repository' (in svn_repository/test), and if anyone could tell me how I'm supposed to go about creating a new file/putting a file in it, and then updating that file I'd be really happy :) Knowing my luck it'll be something as simple as "drag and drop the file into the directory". Apologies for asking such a stupid question! Also if anyone could tell me how to go about making it work with Zend Studio, that would be extra awesome-points. Thanks!
The recommended directory structure for a subversion repo contains three folders: "branches", "tags" and "trunk". So, create these folders somewhere convenient, in a new folder. Right click in the parent folder of these folders, go to TortoiseSVN and select Import. Enter the URL to the repository you created here (e.g. https://JUNK:8443/svn/Test/ is one I just made, on my local machine). Hit the OK button and the folders will be imported. Now browse to where you want the repo to live on your local machine (I've gone to C:\workspace\test). Right-click and go to SVN Checkout. Now, you want to check out from the trunk of your repo, so change the repository URL to reflect this ( https://JUNK:8443/svn/Test/trunk/ ). Hit the OK button. Create a new file in this directory. Right click on it and go to TortoiseSVN, then Add. Hit OK, and the file is now marked as a new file for the repo. Right click in the parent folder of the file and you should see SVN Update and SVN Commit. SVN Update will refresh the local files with files from the repository. SVN Commit will send local files that have been changed back into the repository. Have fun :)
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124667", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21442/" ] }
124,671
How do I pick a random element from a set?I'm particularly interested in picking a random element from aHashSet or a LinkedHashSet, in Java.Solutions for other languages are also welcome.
int size = myHashSet.size();int item = new Random().nextInt(size); // In real life, the Random object should be rather more shared than thisint i = 0;for(Object obj : myHashSet){ if (i == item) return obj; i++;}
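If you pick from the same set many times, a common alternative sketch (not from the original answer) copies the set into a list once, so each subsequent pick is O(1) instead of a linear walk:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class RandomPick {
    // Copies the set into a list once; indexing into the list is O(1).
    // Only worth doing if you pick repeatedly from the same (unchanged) set.
    public static <T> T pickRandom(Set<T> set, Random rng) {
        List<T> items = new ArrayList<>(set);
        return items.get(rng.nextInt(items.size()));
    }
}
```

For a one-off pick, the iterator walk in the answer above is fine; the copy only pays off when amortized over many picks.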
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/124671", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21445/" ] }
124,682
Can you have custom client-side javascript Validation for standard ASP.NET Web Form Validators? For instance use a asp:RequiredFieldValidator leave the server side code alone but implement your own client notification using jQuery to highlight the field or background color for example.
Yes I have done so. I used Firebug to find out the Dot.Net JS functions and then hijacked the validator functions. The following will be applied to all validators and is purely client side. I use it to change the way the ASP.Net validation is displayed, not the way the validation is actually performed. It must be wrapped in a $(document).ready() to ensure that it overwrites the original ASP.net validation. /** * Re-assigns a couple of the ASP.NET validation JS functions to * provide a more flexible approach */function UpgradeASPNETValidation(){ // Hi-jack the ASP.NET error display only if required if (typeof(Page_ClientValidate) != "undefined") { ValidatorUpdateDisplay = NicerValidatorUpdateDisplay; AspPage_ClientValidate = Page_ClientValidate; Page_ClientValidate = NicerPage_ClientValidate; }}/** * Extends the classic ASP.NET validation to add a class to the parent span when invalid */function NicerValidatorUpdateDisplay(val){ if (val.isvalid){ // do custom removing $(val).fadeOut('slow'); } else { // do custom show $(val).fadeIn('slow'); }}/** * Extends classic ASP.NET validation to include parent element styling */function NicerPage_ClientValidate(validationGroup){ var valid = AspPage_ClientValidate(validationGroup); if (!valid){ // do custom styling etc // I added a background colour to the parent object $(this).parent().addClass('invalidField'); }}
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3747/" ] }
124,745
How can I connect a system to a network and sniff for virus/spyware related traffic? I'd like to plug in a network cable, fire up an appropriate tool and have it scan the data for any signs of problems. I don't expect this to find everything, and this is not to prevent initial infection but to help determine if there is anything trying to actively infect other systems/causing network problems. Running a regular network sniffer and manually looking through the results is no good unless the traffic is really obvious, but I haven't been able to find any tool to scan a network data stream automatically.
I highly recommend running Snort on a machine somewhere near the core of your network, and span (mirror) one (or more) ports from somewhere along your core network path to the machine in question. Snort has the ability to scan network traffic it sees, and automatically notify you via various methods if it sees something suspicious. This could even be taken further, if desired, to automatically disconnect devices, et cetera, if it finds something.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17007/" ] }
124,764
I have seen this link: Implementing Mutual Exclusion in JavaScript .On the other hand, I have read that there are no threads in javascript, but what exactly does that mean? When events occur, where in the code can they interrupt? And if there are no threads in JS, do I need to use mutexes in JS or not? Specifically, I am wondering about the effects of using functions called by setTimeout() and XmlHttpRequest 's onreadystatechange on globally accessible variables.
Javascript is defined as a reentrant language, which means there is no threading exposed to the user (there may be threads in the implementation). Functions like setTimeout() and asynchronous callbacks need to wait for the script engine to sleep before they're able to run. That means that everything that happens in an event must be finished before the next event will be processed. That being said, you may need a mutex if your code does something where it expects a value not to change between when the asynchronous event was fired and when the callback was called. For example if you have a data structure where you click one button and it sends an XmlHttpRequest which calls a callback that changes the data structure in a destructive way, and you have another button that changes the same data structure directly, between when the event was fired and when the callback was executed the user could have clicked and updated the data structure before the callback, which could then lose the value. While you could create a race condition like that it's very easy to prevent that in your code since each function will be atomic. It would be a lot of work and take some odd coding patterns to create the race condition in fact.
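The run-to-completion model described above can be demonstrated directly (a small sketch, runnable under Node.js; not from the original answer):

```javascript
const order = [];

// Schedule a callback with zero delay. Even so, it cannot run
// until the currently executing script has finished: events are
// processed one at a time, each to completion.
setTimeout(() => order.push("timeout"), 0);

for (let i = 0; i < 3; i++) {
  order.push("sync " + i);
}

// Only the synchronous pushes have happened at this point; the
// timeout callback runs after this whole script completes.
console.log(order);
```

Because each event handler runs to completion like this, ordinary synchronous code never needs a mutex; the race the answer describes only arises across separate events.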
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/124764", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3751/" ] }
124,841
I have written the following simple test in trying to learn Castle Windsor's Fluent Interface: using NUnit.Framework;using Castle.Windsor;using System.Collections;using Castle.MicroKernel.Registration;namespace WindsorSample { public class MyComponent : IMyComponent { public MyComponent(int start_at) { this.Value = start_at; } public int Value { get; private set; } } public interface IMyComponent { int Value { get; } } [TestFixture] public class ConcreteImplFixture { [Test] public void ResolvingConcreteImplShouldInitialiseValue() { IWindsorContainer container = new WindsorContainer(); container.Register(Component.For<IMyComponent>().ImplementedBy<MyComponent>().Parameters(Parameter.ForKey("start_at").Eq("1"))); IMyComponent resolvedComp = container.Resolve<IMyComponent>(); Assert.AreEqual(resolvedComp.Value, 1); } }} When I execute the test through TestDriven.NET I get the following error: System.TypeLoadException : Could not load type 'Castle.MicroKernel.Registration.IRegistration' from assembly 'Castle.MicroKernel, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc'.at WindsorSample.ConcreteImplFixture.ResolvingConcreteImplShouldInitialiseValue() When I execute the test through the NUnit GUI I get: WindsorSample.ConcreteImplFixture.ResolvingConcreteImplShouldInitialiseValue:System.IO.FileNotFoundException : Could not load file or assembly 'Castle.Windsor, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc' or one of its dependencies. The system cannot find the file specified. If I open the Assembly that I am referencing in Reflector I can see its information is: Castle.MicroKernel, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc and that it definitely contains Castle.MicroKernel.Registration.IRegistration What could be going on? 
I should mention that the binaries are taken from the latest build of Castle though I have never worked with nant so I didn't bother re-compiling from source and just took the files in the bin directory. I should also point out that my project compiles with no problem.
Is the assembly in the Global Assembly Cache (GAC) or any place the might be overriding the assembly that you think is being loaded? This is usually the result of an incorrect assembly being loaded, for me it means I usually have something in the GAC overriding the version I have in bin/Debug.
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/124841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5056/" ] }
124,844
I am asking this out of curiosity. Basically my question is: when you have a database which needs a row entry to have things which act like flags, what is the best practice? A good example of this would be the badges on Stack Overflow, or the operating system field in Bugzilla. Any subset of the flags may be set for a given entry. Usually, I do C and C++ work, so my gut reaction is to use an unsigned integer field as a set of bits which can be flipped... But I know that isn't a good solution for several reasons. The most obvious of which is scalability: there will be a hard upper limit on how many flags I can have. I can also think of a couple of other solutions which scale better but would have performance issues because they would require multiple selects to get all the information. So, what is the "right" way to do this?
If you really need an unbounded selection from a closed set of flags (e.g. stackoverflow badges), then the "relational way" would be to create a table of flags and a separate table which relates those flags to your target entities. Thus, users, flags and usersToFlags. However, if space efficiency is a serious concern and query-ability is not, an unsigned mask would work almost as well.
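To make the trade-off concrete, here is how the bit-mask variant works at the application level (a Python sketch; the flag names are invented for illustration):

```python
# Each flag occupies one bit of an unsigned integer column.
FLAG_ADMIN = 1 << 0
FLAG_BANNED = 1 << 1
FLAG_VERIFIED = 1 << 2

def set_flag(mask, flag):
    """Return the mask with the given flag bit turned on."""
    return mask | flag

def clear_flag(mask, flag):
    """Return the mask with the given flag bit turned off."""
    return mask & ~flag

def has_flag(mask, flag):
    """True if the flag bit is set in the mask."""
    return bool(mask & flag)

mask = 0
mask = set_flag(mask, FLAG_ADMIN)
mask = set_flag(mask, FLAG_VERIFIED)
# mask now holds both flags in a single integer: 0b101 == 5
```

The queryability cost mentioned above shows up in SQL: finding rows with a given flag needs a bitwise predicate like WHERE flags & 4 != 0, which an ordinary index can't serve — the separate flags/join-table design stays cleanly indexable.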
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124844", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13430/" ] }
124,851
The standard model has been that OpenGL is for professional apps (CAD) and Direct3D is for games. With the debacle of OpenGL 3.0, is OpenGL still the natural choice for technical 3D apps (CAD/GIS)? Are there scenegraph libraries for Direct3D? (Of course Direct3D is Windows-only.)
D3D makes you pay the Microsoft "strategy tax." That is, D3D serves two masters. One is giving you features and performance. The other is to ensure lock-in to other MS products and the Windows platform generally. This has some consequences for you: A D3D app won't run on anything but Windows (including Xbox). Maybe you don't think that's important now. But if, down the road, you want to run on Mac, Linux, PS3, future consoles, etc., you may be glad you chose the platform-independent choice. MS can make some arbitrary decisions. Will the next version of D3D only run on an OS that requires new hardware, is expensive, and lots of people don't want to upgrade to? Will they make some other future decision you don't agree with? Historically, OpenGL has led D3D in quick exposure of new HW features. This is because there's a mechanism in the standard for vendors to add their own extensions, and for those extensions to eventually be folded into the main spec. D3D is whatever MS wants it to be, with input from vendors to be sure, but MS gets veto power. You could easily be in a situation like with Vista, where MS decided not to expose new HW features to the old DX, and only make the new DX available on Vista. This was quite a headache for game developers. Now then, this is the flavor of reasons why a "professional app" (CAD, animation, scientific visualization, GIS, etc.) would favor OGL -- apps like this want to be stable for many years, need ongoing maintenance and improvement, and want to run on many platforms. This is in contrast to games, which quite frequently are only on one platform, will be released but generally not "maintained" (there likely won't be a 2.0, an update for another OS three years hence, don't need to support older HW, etc.). Games want maximum performance and only need to work for a short time window and on a fixed number of platforms. 
If they need to target Windows anyway and D3D is a little faster, that may be the right choice, since the negative D3D consequences won't hurt them like they would for a CAD app, say.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124851", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10897/" ] }
124,854
I have an <img> in an HTML document that I would like to highlight as though the user had highlighted it using the mouse. Is there a way to do that using JavaScript? I only need it to work in Mozilla, but any and all information is welcome. EDIT: The reason I want to select the image is actually not so that it appears highlighted, but so that I can then copy the selected image to the clipboard using XPCOM. So the img actually has to be selected for this to work.
Here's an example which selects the first image on the page (which will be the Stack Overflow logo if you test it out on this page in Firebug): var s = window.getSelection();var r = document.createRange();r.selectNode(document.images[0]);s.addRange(r); Relevant documentation: http://developer.mozilla.org/en/DOM/window.getSelection http://developer.mozilla.org/en/DOM/range.selectNode http://developer.mozilla.org/en/DOM/Selection/addRange
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7441/" ] }
124,856
I'd like to ensure my RAII class is always allocated on the stack. How do I prevent a class from being allocated via the 'new' operator?
All you need to do is declare the class' new operator private: class X{ private: // Prevent heap allocation void * operator new (size_t); void * operator new[] (size_t); void operator delete (void *); void operator delete[] (void*); // ... // The rest of the implementation for X // ...}; Making 'operator new' private effectively prevents code outside the class from using 'new' to create an instance of X. To complete things, you should hide 'operator delete' and the array versions of both operators. Since C++11 you can also explicitly delete the functions: class X{// public, protected, private ... does not matter static void *operator new (size_t) = delete; static void *operator new[] (size_t) = delete; static void operator delete (void*) = delete; static void operator delete[](void*) = delete;}; Related Question: Is it possible to prevent stack allocation of an object and only allow it to be instiated with ‘new’?
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6386/" ] }
124,865
At the office we are currently writing an application that will generate XML files against a schema that we were given. We have the schema in an .XSD file. Are there tools or libraries that we can use for automated testing to check that the generated XML matches the schema? We would prefer free tools that are appropriate for commercial use although we won't be bundling the schema checker so it only needs to be usable by devs during development. Our development language is C++ if that makes any difference, although I don't think it should as we could generate the XML file and then do validation by calling a separate program in the test.
After some research, I think the best answer is Xerces , as it implements all of XSD, is cross-platform and widely used. I've created a small Java project on github to validate from the command line using the default JRE parser, which is normally Xerces. This can be used on Windows/Mac/Linux. There is also a C++ version of Xerces available if you'd rather use that. The StdInParse utility can be used to call it from the command line. Also, a commenter below points to this more complete wrapper utility . You could also use xmllint, which is part of libxml . You may well already have it installed. Example usage: xmllint --noout --schema XSD_FILE XML_FILE One problem is that libxml doesn't implement all of the specification, so you may run into issues :( Alternatively, if you are on Windows, you can use msxml , but you will need some sort of wrapper to call it, such as the GUI one described in this DDJ article . However, it seems most people on Windows use an XML Editor, such as Notepad++ (as described in Nate's answer ) or XML Notepad 2007 as suggested by SteveC (there are also several commercial editors which I won't mention here). Finally, you'll find different programs will, unfortunately, give different results. This is largely due to the complexity of the XSD spec. You may want to test your schema with several tools. UPDATE : I've expanded on this in a blog post .
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/124865", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5113/" ] }
124,869
I recently learned about the basic structure of the .docx file (it's a specially structured zip archive). However, docx is not formated like a doc. How does a doc file work? What is the file format, structure, etc?
The full format for binary .doc files is documented in a PDF linked from the Wikipedia article on .doc.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124869", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1615/" ] }
124,871
I'm a long-time Windows developer, having cut my teeth on win32 and early COM. I've been working with .NET since 2001, so I'm pretty fluent in C# and the CLR. I'd never heard of Castle Windsor until I started participating in Stack Overflow. I've read the Castle Windsor "Getting Started" guide, but it's not clicking. Teach this old dog new tricks, and tell me why I should be integrating Castle Windsor into my enterprise apps.
Castle Windsor is an inversion of control tool. There are others like it. It can give you objects with pre-built and pre-wired dependencies right in there. An entire object graph created via reflection and configuration rather than the "new" operator. Start here: http://tech.groups.yahoo.com/group/altdotnet/message/10434 Imagine you have an email sending class. EmailSender. Imagine you have another class WorkflowStepper. Inside WorkflowStepper you need to use EmailSender. You could always say new EmailSender().Send(emailMessage); but that - the use of new - creates a TIGHT COUPLING that is hard to change. (this is a tiny contrived example after all) So what if, instead of newing this bad boy up inside WorkflowStepper, you just passed it into the constructor? So then whoever called it had to new up the EmailSender. new WorkflowStepper(emailSender).Step() Imagine you have hundreds of these little classes that only have one responsibility (google SRP), and you use a few of them in WorkflowStepper: new WorkflowStepper(emailSender, alertRegistry, databaseConnection).Step() Imagine not worrying about the details of EmailSender when you are writing WorkflowStepper or AlertRegistry. You just worry about the concern you are working with. Imagine this whole graph (tree) of objects and dependencies gets wired up at RUN TIME, so that when you do this: WorkflowStepper stepper = Container.Get<WorkflowStepper>(); you get a real deal WorkflowStepper with all the dependencies automatically filled in where you need them. There is no new. It just happens - because it knows what needs what. And you can write fewer defects with better designed, DRY code in a testable and repeatable way.
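The core idea — constructor injection — needs no container at all; here it is sketched in Java rather than C# (class names borrowed from the answer's example, the interface is invented for illustration):

```java
// The collaborator is an interface, so WorkflowStepper never
// says "new EmailSender()" -- the tight coupling point is gone.
interface EmailSender {
    void send(String message);
}

class WorkflowStepper {
    private final EmailSender sender;

    // The dependency comes in through the constructor instead of
    // being newed up inside the class.
    WorkflowStepper(EmailSender sender) {
        this.sender = sender;
    }

    void step() {
        sender.send("step complete");
    }
}
```

A container like Windsor only automates the wiring that a caller would otherwise do by hand; the testability benefit — being able to pass in a fake EmailSender — comes from the injection itself.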
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/124871", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1181217/" ] }
124,880
Is it possible to prevent stack allocation of an object and only allow it to be instiated with 'new' on the heap?
One way you could do this would be to make the constructors private and only allow construction through a static method that returns a pointer. For example: class Foo{public: ~Foo(); static Foo* createFoo() { return new Foo(); }private: Foo(); Foo(const Foo&); Foo& operator=(const Foo&);};
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/124880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
124,886
I need to change the functionality of an application based on the executable name. Nothing huge, just changing strings that are displayed and some internal identifiers. The application is written in a mixture of native and .Net C++-CLI code. Two ways that I have looked at are to parse the GetCommandLine() function in Win32 and stuffing around with the AppDomain and other things in .Net. However using GetCommandLine won't always work as when run from the debugger the command line is empty. And the .Net AppDomain stuff seems to require a lot of stuffing around. So what is the nicest/simplest/most efficient way of determining the executable name in C++/CLI? (I'm kind of hoping that I've just missed something simple that is available in .Net.) Edit: One thing that I should mention is that this is a Windows GUI application using C++/CLI, therefore there's no access to the traditional C style main function, it uses the Windows WinMain() function.
Call GetModuleFileName() using 0 as a module handle. Note: you can also use the argv[0] parameter to main or call GetCommandLine() if there is no main. However, keep in mind that these methods will not necessarily give you the complete path to the executable file. They will give back the same string of characters that was used to start the program. Calling GetModuleFileName() , instead, will always give you a complete path and file name.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3719/" ] }
124,932
I have a bit of code that basically reads an XML document using the XMLDocument.Load(uri) method which works fine, but doesn't work so well if the call is made through a proxy. I was wondering if anyone knew of a way to make this call (or achieve the same effect) through a proxy?
Do you have to provide credentials to the proxy? If so, this should help:"Supplying Authentication Credentials to XmlResolver when Reading from a File" http://msdn.microsoft.com/en-us/library/aa720674.aspx Basically, you... Create an XmlTextReader using the URL Set the Credentials property of the reader's XmlResolver Create an XmlDocument instance and pass the reader to the Load method.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124932", "https://Stackoverflow.com", "https://Stackoverflow.com/users/493/" ] }
124,946
My question is based off of inheriting a great deal of legacy code that I can't do very much about. Basically, I have a device that will produce a block of data. A library which will call the device to create that block of data, for some reason I don't entirely understand and cannot change even if I wanted to, writes that block of data to disk. This write is not instantaneous, but can take up to 90 seconds. In that time, the user wants to get a partial view of the data that's being produced, so I want to have a consumer thread which reads the data that the other library is writing to disk. Before I even touch this legacy code, I want to mimic the problem using code I entirely control. I'm using C#, ostensibly because it provides a lot of the functionality I want. In the producer class, I have this code creating a random block of data: FileStream theFS = new FileStream(this.ScannerRawFileName, FileMode.OpenOrCreate, FileAccess.Write, FileShare.Read);//note that I need to be able to read this elsewhere...BinaryWriter theBinaryWriter = new BinaryWriter(theFS);int y, x;for (y = 0; y < imheight; y++){ ushort[] theData = new ushort[imwidth]; for(x = 0; x < imwidth;x++){ theData[x] = (ushort)(2*y+4*x); } byte[] theNewArray = new byte[imwidth * 2]; Buffer.BlockCopy(theData, 0, theNewArray, 0, imwidth * 2); theBinaryWriter.Write(theNewArray); Thread.Sleep(mScanThreadWait); //sleep for 50 milliseconds Progress = (float)(y-1 >= 0 ? y-1 : 0) / (float)imheight;}theFS.Close(); So far, so good. This code works. The current version (using FileStream and BinaryWriter) appears to be equivalent (though slower, because of the copy) to using File.Open with the same options and a BinaryFormatter on the ushort[] being written to disk.
But then I add a consumer thread: FileStream theFS;if (!File.Exists(theFileName)) { //do error handling return;}else { theFS = new FileStream(theFileName, FileMode.Open, FileAccess.Read, FileShare.Read); //very relaxed file opening}BinaryReader theReader = new BinaryReader(theFS);//gotta do this copying in order to handle byte array swaps//frustrating, but true.byte[] theNewArray = theReader.ReadBytes( (int)(imheight * imwidth * inBase.Progress) * 2);ushort[] theData = new ushort[((int)(theNewArray.Length/2))];Buffer.BlockCopy(theNewArray, 0, theData, 0, theNewArray.Length); Now, it's possible that the declaration of theNewArray is broken, and will cause some kind of read overflow. However, this code never gets that far, because it always always always breaks on trying to open the new FileStream with a System.IO.IOException that states that another process has opened the file. I'm setting the FileAccess and FileShare enumerations as stated in the FileStream documentation on MSDN, but it appears that I just can't do what I want to do (i.e., write in one thread, read in another). I realize that this application is a bit unorthodox, but when I get the actual device involved, I'm going to have to do the same thing, but using MFC. In any event, what am I forgetting? Is what I'm wanting to do possible, since it's specified as possible in the documentation? Thanks! mmr
Your consumer must specify FileShare.ReadWrite. By trying to open the file as FileShare.Read in the consumer you are saying "I want to open the file and let others read it at the same time" ... since there is already a writer, that call fails; you have to allow concurrent writes with the reader.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124946", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
124,958
I'm trying to display an array of files in order of date (last modified). I have done this by looping through the array and sorting it into another array, but is there an easier (more efficient) way to do this?
Warning create_function() has been DEPRECATED as of PHP 7.2.0. Relying on this function is highly discouraged. For the sake of posterity, in case the forum post linked in the accepted answer is lost or unclear to some, the relevant code needed is: <?php$myarray = glob("*.*");usort($myarray, create_function('$a,$b', 'return filemtime($a) - filemtime($b);'));?> On PHP 5.3 and later the same sort can be written without create_function, using an anonymous function: usort($myarray, function($a, $b) { return filemtime($a) - filemtime($b); }); Tested this on my system and verified it does sort by file mtime as desired. I used a similar approach (written in Python) for determining the last updated files on my website as well.
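For comparison — not part of the original answer — the same mtime sort is a one-line sort key in Python:

```python
import os

def files_by_mtime(directory):
    """Return full paths of files in the directory, oldest modification first."""
    paths = [os.path.join(directory, name) for name in os.listdir(directory)]
    paths = [p for p in paths if os.path.isfile(p)]
    # os.path.getmtime is the sort key, just as filemtime() is in the PHP version
    return sorted(paths, key=os.path.getmtime)
```

Reverse the order with sorted(..., reverse=True) to list the most recently modified files first.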
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/124958", "https://Stackoverflow.com", "https://Stackoverflow.com/users/910/" ] }
124,959
What solutions are available for creating Word documents from PHP in a Linux environment?
real Word documents If you need to produce "real" Word documents you need a Windows-based web server and COM automation. I highly recommend Joel's article on this subject. fake HTTP headers for tricking Word into opening raw HTML A rather common (but unreliable) alternative is: header("Content-type: application/vnd.ms-word");header("Content-Disposition: attachment; filename=document_name.doc");echo "<html>";echo "<meta http-equiv=\"Content-Type\" content=\"text/html; charset=Windows-1252\">";echo "<body>";echo "<b>Fake word document</b>";echo "</body>";echo "</html>"; Make sure you don't use external stylesheets. Everything should be in the same file. Note that this does not send an actual Word document. It merely tricks browsers into offering it as a download and defaulting to a .doc file extension. Older versions of Word may often open this without any warning/security message, and just import the raw HTML into Word. PHP sending that misleading Content-Type header along does not constitute a real file format conversion.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/124959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
124,966
I want something like an std::map , but I only want to see if the item exists or not, I don't actually need a key AND a value. What should I use?
Looks like you need a std::set .
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/124966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4946/" ] }
125,019
As a complete beginner with no programming experience, I am trying to find beautiful Python code to study and play with. Please answer by pointing to a website, a book or some software project. I have the following criteria:

- complete code listings (working, hackable code)
- beautiful code (highly readable, simple but effective)
- instructional for the beginner (yes, hand-holding is needed)

I've tried learning how to program for too long now and never gotten to the point where the rubber hits the road. My main agenda is best spelled out by Nat Friedman's "How to become a hacker". I'm aware of O'Reilly's "Beautiful Code", but think of it as too advanced and confusing for a beginner.
Buy Programming Collective Intelligence . Great book of interesting AI algorithms based on mining data and all of the examples are in very easy to read Python. The other great book is Text Processing in Python
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/125019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4819/" ] }
125,050
...or are they the same thing? I notice that each has its own Wikipedia entry: Polymorphism , Multiple Dispatch , but I'm having trouble seeing how the concepts differ. Edit: And how does Overloading fit into all this?
Polymorphism is the facility that allows a language/program to decide at runtime which method to invoke based on the types of the parameters sent to that method. The number of parameters used by the language/runtime determines the 'type' of polymorphism supported by a language.

Single dispatch is a type of polymorphism where only one parameter (the receiver of the message - this, or self) is used to determine the call.

Multiple dispatch is a type of polymorphism wherein multiple parameters are used to determine which method to call. In this case, the receiver as well as the types of the method parameters are used to tell which method to invoke.

So you can say that polymorphism is the general term, and multiple and single dispatch are specific types of polymorphism.

Addendum: Overloading happens at compile time. It uses the type information available during compilation to determine which overload to call. Single/multiple dispatch happens at runtime.

Sample code:

using NUnit.Framework;

namespace SanityCheck.UnitTests.StackOverflow
{
    [TestFixture]
    public class DispatchTypes
    {
        [Test]
        public void Polymorphism()
        {
            Baz baz = new Baz();
            Foo foo = new Foo();

            // overloading - parameter type is known during compile time
            Assert.AreEqual("zap object", baz.Zap("hello"));
            Assert.AreEqual("zap foo", baz.Zap(foo));

            // virtual call - single dispatch. Baz is used.
            Zapper zapper = baz;
            Assert.AreEqual("zap object", zapper.Zap("hello"));
            Assert.AreEqual("zap foo", zapper.Zap(foo));

            // C# doesn't support multiple dispatch so it doesn't
            // know that oFoo is actually of type Foo.
            //
            // In languages with multiple dispatch, the type of oFoo will
            // also be used in runtime so Baz.Zap(Foo) will be called
            // instead of Baz.Zap(object)
            object oFoo = foo;
            Assert.AreEqual("zap object", zapper.Zap(oFoo));
        }

        public class Zapper
        {
            public virtual string Zap(object o) { return "generic zapper"; }
            public virtual string Zap(Foo f) { return "generic zapper"; }
        }

        public class Baz : Zapper
        {
            public override string Zap(object o) { return "zap object"; }
            public override string Zap(Foo f) { return "zap foo"; }
        }

        public class Foo { }
    }
}
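Not part of the original answer, but the same distinction can be sketched in Python for cross-language illustration: functools.singledispatch picks an implementation from the runtime type of exactly one argument (single dispatch), while a hand-rolled lookup on the tuple of all argument types approximates multiple dispatch. All names here are invented for the example.

```python
from functools import singledispatch

@singledispatch
def zap(x):
    return "zap object"

@zap.register(int)
def _(x):
    return "zap int"

# Single dispatch: the runtime type of the one dispatched argument decides
assert zap("hello") == "zap object"
assert zap(42) == "zap int"

# A poor man's multiple dispatch: look up on the types of *all* arguments
def multi_zap(a, b):
    table = {
        (int, int): "both ints",
        (int, str): "int and str",
    }
    return table.get((type(a), type(b)), "generic")

assert multi_zap(1, 2) == "both ints"
assert multi_zap(1, "x") == "int and str"
assert multi_zap("x", "y") == "generic"
```

Real multiple-dispatch languages (e.g. Common Lisp's CLOS) do this lookup for you as part of ordinary method invocation.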
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/125050", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7598/" ] }
125,096
I have an app that has impersonation used throughout. But when a user is logged in as an admin, a few operations require them to write to the server itself. Now if these users do not have rights on the actual server (some don't) it will not let them write. What I want to do is turn off impersonation for just a couple of commands. Is there a way to do something like this? using(HostingEnvironment.Impersonate.Off()) //I know this isn't a command, but you get the idea? Thank you.
Make sure the Application Pool does have the proper rights that you need. Then, when you want to revert to the application pool identity... run the following:

private WindowsImpersonationContext context = null;

public void RevertToAppPool()
{
    try
    {
        if (!WindowsIdentity.GetCurrent().IsSystem)
        {
            context = WindowsIdentity.Impersonate(System.IntPtr.Zero);
        }
    }
    catch { }
}

public void UndoImpersonation()
{
    try
    {
        if (context != null)
        {
            context.Undo();
        }
    }
    catch { }
}
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14777/" ] }
125,113
What is the most efficient way to convert a MySQL query to CSV in PHP please? It would be best to avoid temp files as this reduces portability (dir paths and setting file-system permissions required). The CSV should also include one top line of field names.
SELECT * INTO OUTFILE "c:/mydata.csv"
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY "\n"
FROM my_table;

(the documentation for this is here: http://dev.mysql.com/doc/refman/5.0/en/select.html)

or:

$select = "SELECT * FROM table_name";
$export = mysql_query($select) or die("Sql error : " . mysql_error());
$fields = mysql_num_fields($export);

for ($i = 0; $i < $fields; $i++) {
    $header .= mysql_field_name($export, $i) . "\t";
}

while ($row = mysql_fetch_row($export)) {
    $line = '';
    foreach ($row as $value) {
        if ((!isset($value)) || ($value == "")) {
            $value = "\t";
        } else {
            $value = str_replace('"', '""', $value);
            $value = '"' . $value . '"' . "\t";
        }
        $line .= $value;
    }
    $data .= trim($line) . "\n";
}

$data = str_replace("\r", "", $data);

if ($data == "") {
    $data = "\n(0) Records Found!\n";
}

header("Content-type: application/octet-stream");
header("Content-Disposition: attachment; filename=your_desired_name.xls");
header("Pragma: no-cache");
header("Expires: 0");
print "$header\n$data";
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/125113", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5165/" ] }
125,171
I need to pass a regex substitution as a variable:

sub proc {
    my $pattern = shift;
    my $txt = "foo baz";
    $txt =~ $pattern;
}

my $pattern = 's/foo/bar/';
proc($pattern);

This, of course, doesn't work. I tried eval'ing the substitution:

eval("$txt =~ $pattern;");

but that didn't work either. What horribly obvious thing am I missing here?
I need to pass a regex substitution as a variable

Do you? Why not pass a code reference? Example:

sub modify {
    my ($text, $code) = @_;
    $code->($text);
    return $text;
}

my $new_text = modify('foo baz', sub { $_[0] =~ s/foo/bar/ });

In general, when you want to pass "something that does something" to a subroutine ("a regex substitution" in the case of your question) the answer is to pass a reference to a piece of code. Higher Order Perl is a good book on the topic.
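The same "pass behavior, not a string" idea carries over to other languages. As a rough cross-language illustration only (not the original Perl answer), here is the equivalent in Python, passing a callable instead of source text to be eval'd:

```python
import re

def modify(text, code):
    # 'code' is a callable, analogous to the Perl code reference
    return code(text)

new_text = modify("foo baz", lambda t: re.sub(r"foo", "bar", t))
assert new_text == "bar baz"
```

The caller decides what transformation happens; modify() never needs to parse or eval anything.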
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125171", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8130/" ] }
125,177
Is there a good test suite or tool set that can automate website navigation -- with Javascript support -- and collect the HTML from the pages? Of course I can scrape straight HTML with BeautifulSoup. But this does me no good for sites that require Javascript. :)
You could use Selenium or Watir to drive a real browser. There are also some JavaScript-based headless browsers:

- PhantomJS is a headless WebKit browser.
- pjscrape is a scraping framework based on PhantomJS and jQuery.
- CasperJS is a navigation scripting & testing utility based on PhantomJS, if you need to do a little more than point at URLs to be scraped.
- Zombie for Node.js

Personally, I'm most familiar with Selenium, which has support for writing automation scripts in a good number of languages and has more mature tooling, such as the excellent Selenium IDE extension for Firefox, which can be used to write and run test cases, and can export test scripts to many languages.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2321/" ] }
125,222
For working with MS Word files in Python, there are the Python win32 extensions, which can be used on Windows. How do I do the same in Linux? Is there any library?
You could make a subprocess call to antiword . Antiword is a linux commandline utility for dumping text out of a word doc. Works pretty well for simple documents (obviously it loses formatting). It's available through apt, and probably as RPM, or you could compile it yourself.
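A minimal sketch of that subprocess call (assuming antiword is installed and on PATH; the helper names here are invented for illustration):

```python
import shutil
import subprocess

def build_antiword_cmd(doc_path):
    # antiword writes the extracted plain text to stdout
    return ["antiword", doc_path]

def word_doc_to_text(doc_path):
    if shutil.which("antiword") is None:
        raise RuntimeError("antiword is not installed or not on PATH")
    result = subprocess.run(
        build_antiword_cmd(doc_path),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

check=True makes a non-zero exit (e.g. a file antiword can't parse) raise CalledProcessError instead of silently returning empty text.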
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/125222", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21480/" ] }
125,230
What MySQL query will do a text search and replace in one particular field in a table? I.e. search for foo and replace with bar so a record with a field with the value hello foo becomes hello bar .
Change table_name and field to match your table name and field in question: UPDATE table_name SET field = REPLACE(field, 'foo', 'bar') WHERE INSTR(field, 'foo') > 0; REPLACE (string functions) INSTR (string functions)
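SQLite supports the same REPLACE() and INSTR() string functions (INSTR since SQLite 3.7.15), so the statement can be sanity-checked with Python's built-in sqlite3 module; table and column names below are the same placeholders as in the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (field TEXT)")
conn.executemany(
    "INSERT INTO table_name VALUES (?)",
    [("hello foo",), ("no match here",)],
)

# Same pattern as the MySQL answer
conn.execute(
    "UPDATE table_name SET field = REPLACE(field, 'foo', 'bar') "
    "WHERE INSTR(field, 'foo') > 0"
)

rows = [r[0] for r in conn.execute("SELECT field FROM table_name ORDER BY field")]
# rows is now ["hello bar", "no match here"]
```

The WHERE clause is optional for correctness (REPLACE leaves non-matching rows unchanged), but it avoids rewriting rows that contain no match.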
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/125230", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16536/" ] }
125,268
Is it possible to chain static methods together using a static class? Say I wanted to do something like this:

$value = TestClass::toValue(5)::add(3)::subtract(2)::add(8)::result();

. . . and obviously I would want $value to be assigned the number 14. Is this possible?

Update: It doesn't work (you can't return "self" - it's not an instance!), but this is where my thoughts have taken me:

class TestClass {
    public static $currentValue;

    public static function toValue($value) {
        self::$currentValue = $value;
    }

    public static function add($value) {
        self::$currentValue = self::$currentValue + $value;
        return self;
    }

    public static function subtract($value) {
        self::$currentValue = self::$currentValue - $value;
        return self;
    }

    public static function result() {
        return self::$value;
    }
}

After working that out, I think it would just make more sense to simply work with a class instance rather than trying to chain static function calls (which doesn't look possible, unless the above example could be tweaked somehow).
I like the solution provided by Camilo above, essentially since all you're doing is altering the value of a static member, and since you do want chaining (even though it's only syntactic sugar), then instantiating TestClass is probably the best way to go. I'd suggest a Singleton pattern if you want to restrict instantiation of the class:

class TestClass
{
    public static $currentValue;
    private static $_instance = null;

    private function __construct() { }

    public static function getInstance()
    {
        if (self::$_instance === null) {
            self::$_instance = new self;
        }
        return self::$_instance;
    }

    public function toValue($value)
    {
        self::$currentValue = $value;
        return $this;
    }

    public function add($value)
    {
        self::$currentValue = self::$currentValue + $value;
        return $this;
    }

    public function subtract($value)
    {
        self::$currentValue = self::$currentValue - $value;
        return $this;
    }

    public function result()
    {
        return self::$currentValue;
    }
}

// Example usage:
$result = TestClass::getInstance()
    ->toValue(5)
    ->add(3)
    ->subtract(2)
    ->add(8)
    ->result();
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/125268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5291/" ] }
125,269
A recent article on Ars Technica discusses a recent study performed by the Psychology Department of North Carolina State University, that showed users have a tendency to do whatever it takes to get rid of a dialog box to get back to their task at hand. Most of them would click OK or yes, minimize the dialog, or close the dialog, regardless of the message being displayed. Some of the dialog boxes displayed were real, and some of them were fake (like those popups displayed by webpages posing as an antivirus warning). The response times would indicate that those users aren't really reading those dialog boxes. So, knowing this, how would this effect your design, and what would you try to do about it (if anything)?
I try to design applications to be robust in the face of accidents -- either slips (inadvertent operations, such as clicking in the wrong place) or mistakes (cognitive errors, such as clicking Ok vs. Cancel on a dialog). Some ways to do this are:

- infinite (or at least multi-step) undo / redo
- integrate documentation with the interface, via dynamic tooltips and other context-sensitive means of communication (One paper that is particularly relevant is about 'Surprise, Explain, Reward' (direct link: SER) -- using typical psychological responses to unexpected behavior to inform users)
- Incorporate the state of the system into said documentation (use the current user's data as examples, and make the documentation concrete by using data that they can see right now)
- Expect user error. If there's a chance that someone will try to write to a:\ when there isn't a disk in place, then implement a time-out so the system can fail gracefully, and prompt for another location. Save the data in memory until it's secure on disk, etc.

This boils down to two core things: (1) Program defensively, and (2) Keep the user as well informed as you can. If the system's interface is easy to use, and behaves according to their expectations, then they are more likely to know which button to click when an annoying dialog appears. I also try very, very hard to avoid anything modal, so users can ignore most dialogs I have to use, at least for a while (and when they really need to pay attention to them, they have enough information to know what to do with it). It's impossible to make a system completely fool-proof, but I've found that the above techniques go a long way in the right direction. (and they have been incorporated in the systems used to develop Surprise Explain Reward and other tools that have been vetted by extensive user studies.)
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125269", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13630/" ] }
125,272
I'm relatively new to Mercurial and my team is trying it out right now as a replacement for Subversion. How can I commit and push a single file out to another repository while leaving other modifications in my working directory uncommitted (or at least not pushed to the other repository)? This happens for us with database migrations. We want to commit the migration to source control so a DBA can view and edit it while we're working on the code modifications to go along with that database migration. The changes aren't yet ready to go so we don't want to push all of them out. In Subversion, I'd simply do:

svn add my_migration.sql
# commit only the migration, but not the other files I'm working on
svn commit -m "migration notes" my_migration.sql

and continue working locally. This doesn't work with Mercurial as when I'm pushing it out to the other repository, if there are changes to it that I haven't pulled down, it wants me to pull them down, merge them, and commit that merge to the repository. Commits after a merge don't allow you to omit files, so it forces you to commit everything in your local repository. The easiest thing that I can figure out is to commit the file to my local repository, clone my local repository, fetch any new changes from the actual repository, merge them and commit that merge, and then push my changes out.

hg add my_migration.sql
hg commit -m "migration notes" my_migration.sql
cd ..
hg clone project project-clone
cd project-clone
hg fetch http://hg/project
hg push http://hg/project

This works, but it feels like I'm missing something easier, some way to tell Mercurial to ignore the files already in my working directory, just do the merge and send the files along. I suspect Mercurial queues can do this, but I don't fully grok mq yet.
There's a Mercurial feature that implements shelve and unshelve commands, which give you an interactive way to specify changes to store away until a later time: Shelve.

Then you can hg shelve and hg unshelve to temporarily store changes away. It lets you work at the "patch hunk" level to pick and choose the items to shelve away. It didn't appear to shelve a file I had listed for adding, only files already in the repo with modifications. It is included with Mercurial as an "extension", which just means you have to enable it in your hg config file.

Notes for really old versions of Mercurial (before shelve was included -- this is no longer necessary): I didn't see any great install instructions with some googling, so here is the combined stuff I used to get it working. Get it with:

hg clone http://freehg.org/u/tksoh/hgshelve/ hgshelve

The only file (currently) in the project is the hgshelve.py file. Modify your ~/.hgrc to add the shelve extension, pointing to where you cloned the repo:

[extensions]
hgshelve=/Users/ted/Documents/workspace/hgshelve/hgshelve.py
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125272", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8912/" ] }
125,281
Given a string file path such as /foo/fizzbuzz.bar , how would I use bash to extract just the fizzbuzz portion of said string?
Here's how to do it with the # and % operators in Bash.

$ x="/foo/fizzbuzz.bar"
$ y=${x%.bar}
$ echo ${y##*/}
fizzbuzz

${x%.bar} could also be ${x%.*} to remove everything after a dot, or ${x%%.*} to remove everything after the first dot. Example:

$ x="/foo/fizzbuzz.bar.quux"
$ y=${x%.*}
$ echo $y
/foo/fizzbuzz.bar
$ y=${x%%.*}
$ echo $y
/foo/fizzbuzz

Documentation can be found in the Bash manual. Look for the ${parameter%word} and ${parameter%%word} trailing-portion-matching sections.
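As an alternative to the parameter expansions above, the POSIX basename utility can strip both the directory part and a known suffix in one step; a quick sketch:

```shell
x="/foo/fizzbuzz.bar"

# basename strips the directory part and, optionally, a trailing suffix
basename "$x" .bar        # prints: fizzbuzz

# pure-bash equivalent of the same one-liner
y=${x##*/}
echo "${y%.bar}"          # prints: fizzbuzz
```

basename forks an external process (or a builtin-like shell utility), so in a tight loop the parameter-expansion form is faster; for a one-off it reads more clearly.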
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/125281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1512/" ] }
125,306
I'm writing an iPhone app that takes a photo and then uploads it to a server. How do I upload a photo to a server with Cocoa? I suppose I use NSUrl somewhere. Thanks!
Header: @interface EPUploader : NSObject { NSURL *serverURL; NSString *filePath; id delegate; SEL doneSelector; SEL errorSelector; BOOL uploadDidSucceed;}- (id)initWithURL: (NSURL *)serverURL filePath: (NSString *)filePath delegate: (id)delegate doneSelector: (SEL)doneSelector errorSelector: (SEL)errorSelector;- (NSString *)filePath;@end Main: #import "EPUploader.h"#import <zlib.h>static NSString * const BOUNDRY = @"0xKhTmLbOuNdArY";static NSString * const FORM_FLE_INPUT = @"uploaded";#define ASSERT(x) NSAssert(x, @"")@interface EPUploader (Private)- (void)upload;- (NSURLRequest *)postRequestWithURL: (NSURL *)url boundry: (NSString *)boundry data: (NSData *)data;- (NSData *)compress: (NSData *)data;- (void)uploadSucceeded: (BOOL)success;- (void)connectionDidFinishLoading:(NSURLConnection *)connection;@end@implementation EPUploader/* *----------------------------------------------------------------------------- * * -[Uploader initWithURL:filePath:delegate:doneSelector:errorSelector:] -- * * Initializer. Kicks off the upload. Note that upload will happen on a * separate thread. * * Results: * An instance of Uploader. * * Side effects: * None * *----------------------------------------------------------------------------- */- (id)initWithURL: (NSURL *)aServerURL // IN filePath: (NSString *)aFilePath // IN delegate: (id)aDelegate // IN doneSelector: (SEL)aDoneSelector // IN errorSelector: (SEL)anErrorSelector // IN{ if ((self = [super init])) { ASSERT(aServerURL); ASSERT(aFilePath); ASSERT(aDelegate); ASSERT(aDoneSelector); ASSERT(anErrorSelector); serverURL = [aServerURL retain]; filePath = [aFilePath retain]; delegate = [aDelegate retain]; doneSelector = aDoneSelector; errorSelector = anErrorSelector; [self upload]; } return self;}/* *----------------------------------------------------------------------------- * * -[Uploader dealloc] -- * * Destructor. 
* * Results: * None * * Side effects: * None * *----------------------------------------------------------------------------- */- (void)dealloc{ [serverURL release]; serverURL = nil; [filePath release]; filePath = nil; [delegate release]; delegate = nil; doneSelector = NULL; errorSelector = NULL; [super dealloc];}/* *----------------------------------------------------------------------------- * * -[Uploader filePath] -- * * Gets the path of the file this object is uploading. * * Results: * Path to the upload file. * * Side effects: * None * *----------------------------------------------------------------------------- */- (NSString *)filePath{ return filePath;}@end // Uploader@implementation EPUploader (Private)/* *----------------------------------------------------------------------------- * * -[Uploader(Private) upload] -- * * Uploads the given file. The file is compressed before beign uploaded. * The data is uploaded using an HTTP POST command. * * Results: * None * * Side effects: * None * *----------------------------------------------------------------------------- */- (void)upload{ NSData *data = [NSData dataWithContentsOfFile:filePath]; ASSERT(data); if (!data) { [self uploadSucceeded:NO]; return; } if ([data length] == 0) { // There's no data, treat this the same as no file. 
[self uploadSucceeded:YES]; return; }// NSData *compressedData = [self compress:data];// ASSERT(compressedData && [compressedData length] != 0);// if (!compressedData || [compressedData length] == 0) {// [self uploadSucceeded:NO];// return;// } NSURLRequest *urlRequest = [self postRequestWithURL:serverURL boundry:BOUNDRY data:data]; if (!urlRequest) { [self uploadSucceeded:NO]; return; } NSURLConnection * connection = [[NSURLConnection alloc] initWithRequest:urlRequest delegate:self]; if (!connection) { [self uploadSucceeded:NO]; } // Now wait for the URL connection to call us back.}/* *----------------------------------------------------------------------------- * * -[Uploader(Private) postRequestWithURL:boundry:data:] -- * * Creates a HTML POST request. * * Results: * The HTML POST request. * * Side effects: * None * *----------------------------------------------------------------------------- */- (NSURLRequest *)postRequestWithURL: (NSURL *)url // IN boundry: (NSString *)boundry // IN data: (NSData *)data // IN{ // from http://www.cocoadev.com/index.pl?HTTPFileUpload NSMutableURLRequest *urlRequest = [NSMutableURLRequest requestWithURL:url]; [urlRequest setHTTPMethod:@"POST"]; [urlRequest setValue: [NSString stringWithFormat:@"multipart/form-data; boundary=%@", boundry] forHTTPHeaderField:@"Content-Type"]; NSMutableData *postData = [NSMutableData dataWithCapacity:[data length] + 512]; [postData appendData: [[NSString stringWithFormat:@"--%@\r\n", boundry] dataUsingEncoding:NSUTF8StringEncoding]]; [postData appendData: [[NSString stringWithFormat: @"Content-Disposition: form-data; name=\"%@\"; filename=\"file.bin\"\r\n\r\n", FORM_FLE_INPUT] dataUsingEncoding:NSUTF8StringEncoding]]; [postData appendData:data]; [postData appendData: [[NSString stringWithFormat:@"\r\n--%@--\r\n", boundry] dataUsingEncoding:NSUTF8StringEncoding]]; [urlRequest setHTTPBody:postData]; return urlRequest;}/* *----------------------------------------------------------------------------- * 
* -[Uploader(Private) compress:] -- * * Uses zlib to compress the given data. * * Results: * The compressed data as a NSData object. * * Side effects: * None * *----------------------------------------------------------------------------- */- (NSData *)compress: (NSData *)data // IN{ if (!data || [data length] == 0) return nil; // zlib compress doc says destSize must be 1% + 12 bytes greater than source. uLong destSize = [data length] * 1.001 + 12; NSMutableData *destData = [NSMutableData dataWithLength:destSize]; int error = compress([destData mutableBytes], &destSize, [data bytes], [data length]); if (error != Z_OK) { NSLog(@"%s: self:0x%p, zlib error on compress:%d\n",__func__, self, error); return nil; } [destData setLength:destSize]; return destData;}/* *----------------------------------------------------------------------------- * * -[Uploader(Private) uploadSucceeded:] -- * * Used to notify the delegate that the upload did or did not succeed. * * Results: * None * * Side effects: * None * *----------------------------------------------------------------------------- */- (void)uploadSucceeded: (BOOL)success // IN{ [delegate performSelector:success ? doneSelector : errorSelector withObject:self];}/* *----------------------------------------------------------------------------- * * -[Uploader(Private) connectionDidFinishLoading:] -- * * Called when the upload is complete. We judge the success of the upload * based on the reply we get from the server. 
* * Results: * None * * Side effects: * None * *----------------------------------------------------------------------------- */- (void)connectionDidFinishLoading:(NSURLConnection *)connection // IN{ NSLog(@"%s: self:0x%p\n", __func__, self); [connection release]; [self uploadSucceeded:uploadDidSucceed];}/* *----------------------------------------------------------------------------- * * -[Uploader(Private) connection:didFailWithError:] -- * * Called when the upload failed (probably due to a lack of network * connection). * * Results: * None * * Side effects: * None * *----------------------------------------------------------------------------- */- (void)connection:(NSURLConnection *)connection // IN didFailWithError:(NSError *)error // IN{ NSLog(@"%s: self:0x%p, connection error:%s\n", __func__, self, [[error description] UTF8String]); [connection release]; [self uploadSucceeded:NO];}/* *----------------------------------------------------------------------------- * * -[Uploader(Private) connection:didReceiveResponse:] -- * * Called as we get responses from the server. * * Results: * None * * Side effects: * None * *----------------------------------------------------------------------------- */-(void) connection:(NSURLConnection *)connection // IN didReceiveResponse:(NSURLResponse *)response // IN{ NSLog(@"%s: self:0x%p\n", __func__, self);}/* *----------------------------------------------------------------------------- * * -[Uploader(Private) connection:didReceiveData:] -- * * Called when we have data from the server. We expect the server to reply * with a "YES" if the upload succeeded or "NO" if it did not. 
* * Results: * None * * Side effects: * None * *----------------------------------------------------------------------------- */- (void)connection:(NSURLConnection *)connection // IN didReceiveData:(NSData *)data // IN{ NSLog(@"%s: self:0x%p\n", __func__, self); NSString *reply = [[[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding] autorelease]; NSLog(@"%s: data: %s\n", __func__, [reply UTF8String]); if ([reply hasPrefix:@"YES"]) { uploadDidSucceed = YES; }}@end Usage: [[EPUploader alloc] initWithURL:[NSURL URLWithString:@"http://yourserver.com/uploadDB.php"] filePath:@"path/to/some/file" delegate:self doneSelector:@selector(onUploadDone:) errorSelector:@selector(onUploadError:)];
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125306", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
125,319
I have been running StyleCop over some C# code, and it keeps reporting that my using directives should be inside the namespace. Is there a technical reason for putting the using directives inside instead of outside the namespace?
There is actually a (subtle) difference between the two. Imagine you have the following code in File1.cs:

// File1.cs
using System;

namespace Outer.Inner
{
    class Foo
    {
        static void Bar()
        {
            double d = Math.PI;
        }
    }
}

Now imagine that someone adds another file (File2.cs) to the project that looks like this:

// File2.cs
namespace Outer
{
    class Math
    {
    }
}

The compiler searches Outer before looking at those using directives outside the namespace, so it finds Outer.Math instead of System.Math. Unfortunately (or perhaps fortunately?), Outer.Math has no PI member, so File1 is now broken. This changes if you put the using inside your namespace declaration, as follows:

// File1b.cs
namespace Outer.Inner
{
    using System;

    class Foo
    {
        static void Bar()
        {
            double d = Math.PI;
        }
    }
}

Now the compiler searches System before searching Outer, finds System.Math, and all is well. Some would argue that Math might be a bad name for a user-defined class, since there's already one in System; the point here is just that there is a difference, and it affects the maintainability of your code. It's also interesting to note what happens if Foo is in namespace Outer, rather than Outer.Inner. In that case, adding Outer.Math in File2 breaks File1 regardless of where the using goes. This implies that the compiler searches the innermost enclosing namespace before it looks at any using directive.
{ "score": 12, "source": [ "https://Stackoverflow.com/questions/125319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4490/" ] }
125,333
I'm looking for the simplest, most straightforward way to implement the following: The main program instantiates worker threads to do a task. Only n tasks can be running at once. When n is reached, no more workers are started until the count of running threads drops back below n.
I think that Executors.newFixedThreadPool fits your requirements. There are a number of different ways to use the resulting ExecutorService, depending on whether you want a result returned to the main thread, or whether the task is totally self-contained, and whether you have a collection of tasks to perform up front, or whether tasks are queued in response to some event.

Collection<YourTask> tasks = new ArrayList<YourTask>();
YourTask yt1 = new YourTask();
...
tasks.add(yt1);
...
ExecutorService exec = Executors.newFixedThreadPool(5);
List<Future<YourResultType>> results = exec.invokeAll(tasks);

Alternatively, if you have a new asynchronous task to perform in response to some event, you probably just want to use the ExecutorService's simple execute(Runnable) method.
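To make the n-at-once guarantee concrete, here is a self-contained sketch (the class name is invented) that submits 20 short tasks to a pool of 3 threads and records the peak number of tasks observed running concurrently; the fixed pool ensures it never exceeds 3:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        final int n = 3;
        ExecutorService exec = Executors.newFixedThreadPool(n);
        final AtomicInteger running = new AtomicInteger();
        final AtomicInteger maxSeen = new AtomicInteger();

        for (int i = 0; i < 20; i++) {
            exec.submit(() -> {
                int now = running.incrementAndGet();
                // track the highest number of tasks ever running at once
                maxSeen.accumulateAndGet(now, Math::max);
                try {
                    Thread.sleep(20);
                } catch (InterruptedException ignored) {
                }
                running.decrementAndGet();
            });
        }
        exec.shutdown();
        exec.awaitTermination(30, TimeUnit.SECONDS);

        // the pool guarantees at most n tasks run concurrently
        System.out.println(maxSeen.get() <= n ? "OK" : "LIMIT EXCEEDED");
    }
}
```

Extra submissions beyond n are queued inside the pool's internal work queue; no manual counting or blocking is required in the main thread.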
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/125333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/292/" ] }
125,341
Is there a simple out of the box way to impersonate a user in .NET? So far I've been using this class from code project for all my impersonation requirements. Is there a better way to do it by using .NET Framework? I have a user credential set, (username, password, domain name) which represents the identity I need to impersonate.
"Impersonation" in the .NET space generally means running code under a specific user account. It is a somewhat separate concept from getting access to that user account via a username and password, although these two ideas pair together frequently.

Impersonation

The APIs for impersonation are provided in .NET via the System.Security.Principal namespace. Newer code should generally use WindowsIdentity.RunImpersonated, which accepts a handle to the token of the user account, and then either an Action or Func<T> for the code to execute.

WindowsIdentity.RunImpersonated(userHandle, () =>
{
    // do whatever you want as this user.
});

or

var result = WindowsIdentity.RunImpersonated(userHandle, () =>
{
    // do whatever you want as this user.
    return result;
});

There's also WindowsIdentity.RunImpersonatedAsync for async tasks, available on .NET 5+, or older versions if you pull in the System.Security.Principal.Windows NuGet package.

await WindowsIdentity.RunImpersonatedAsync(userHandle, async () =>
{
    // do whatever you want as this user.
});

or

var result = await WindowsIdentity.RunImpersonatedAsync(userHandle, async () =>
{
    // do whatever you want as this user.
    return result;
});

Older code used the WindowsIdentity.Impersonate method to retrieve a WindowsImpersonationContext object. This object implements IDisposable, so generally should be called from a using block.

using (WindowsImpersonationContext context = WindowsIdentity.Impersonate(userHandle))
{
    // do whatever you want as this user.
}

While this API still exists in .NET Framework, it should generally be avoided.

Accessing the User Account

The API for using a username and password to gain access to a user account in Windows is LogonUser - which is a Win32 native API. There is not currently a built-in managed .NET API for calling it.
[DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]internal static extern bool LogonUser(String lpszUsername, String lpszDomain, String lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken); This is the basic call definition, however there is a lot more to consider to actually using it in production: Obtaining a handle with the "safe" access pattern. Closing the native handles appropriately Code access security (CAS) trust levels (in .NET Framework only) Passing SecureString when you can collect one safely via user keystrokes. Instead of writing that code yourself, consider using my SimpleImpersonation library, which provides a managed wrapper around the LogonUser API to get a user handle: using System.Security.Principal;using Microsoft.Win32.SafeHandles;using SimpleImpersonation;var credentials = new UserCredentials(domain, username, password);using SafeAccessTokenHandle userHandle = credentials.LogonUser(LogonType.Interactive); // or another LogonType You can now use that userHandle with any of the methods mentioned in the first section above. This is the preferred API as of version 4.0.0 of the SimpleImpersonation library. See the project readme for more details. Remote Computer Access It's important to recognize that impersonation is a local machine concept. One cannot impersonate using a user that is only known to a remote machine. If you want to access resources on a remote computer, the local machine and the remote machine must be attached to the same domain, or there needs to be a trust relationship between the domains of the two machines. If either computer is domainless, you cannot use LogonUser or SimpleImpersonation to connect to that machine.
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/125341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1909/" ] }
125,359
In Java, web apps are bundled into WARs. By default, many servlet containers will use the WAR name as the context name for the application. Thus myapp.war gets deployed to http://example.com/myapp . The problem is that the webapp considers its "root" to be, well, "root", or simply "/", whereas HTML would consider the root of your application to be "/myapp". The Servlet API and JSP have facilities to help manage this. For example, if, in a servlet, you do: response.sendRedirect("/mypage.jsp"), the container will prepend the context and create the url: http://example.com/myapp/mypage.jsp ". However, you can't do that with, say, the IMG tag in HTML. If you do <img src="/myimage.gif"/> you will likely get a 404, because what you really wanted was "/myapp/myimage.gif". Many frameworks have JSP tags that are context aware as well, and there are different ways of making correct URLs within JSP (none particularly elegantly). It's a tricky problem for coders to decide when to use an "App Relative" url, vs an absolute url. Finally, there's the issue of Javascript code that needs to create URLs on the fly, and embedded URLs within CSS (for background images and the like). I'm curious what techniques others use to mitigate and work around this issue. Many simply punt and hard code it, either to server root or to whatever context they happen to be using. I already know that answer, that's not what I'm looking for. What do you do?
You can use JSTL for creating urls. For example, <c:url value="/images/header.jpg" /> will prefix the context root. With CSS, this usually isn't an issue for me. I have a web root structure like this: /css /images In the CSS file, you then just need to use relative URLs (../images/header.jpg) and it doesn't need to be aware of the context root. As for JavaScript, what works for me is including some common JavaScript in the page header like this: <script type="text/javascript">var CONTEXT_ROOT = '<%= request.getContextPath() %>';</script> Then you can use the context root in all your scripts (or, you can define a function to build paths - may be a bit more flexible). Obviously this all depends on your using JSPs and JSTL, but I use JSF with Facelets and the techniques involved are similar - the only real difference is getting the context root in a different way.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13663/" ] }
125,367
What are the advantages and limitations of dynamic type languages compared to static type languages? See also : whats with the love of dynamic languages (a far more argumentative thread...)
The ability of the interpreter to deduce type and type conversions makes development time faster, but it also can provoke runtime failures which you just cannot get in a statically typed language where you catch them at compile time. But which one's better (or even if that's always true) is hotly discussed in the community these days (and since a long time). A good take on the issue is from Static Typing Where Possible, Dynamic Typing When Needed: The End of the Cold War Between Programming Languages by Erik Meijer and Peter Drayton at Microsoft: Advocates of static typing argue that the advantages of static typing include earlier detection of programming mistakes (e.g. preventing adding an integer to a boolean), better documentation in the form of type signatures (e.g. incorporating number and types of arguments when resolving names), more opportunities for compiler optimizations (e.g. replacing virtual calls by direct calls when the exact type of the receiver is known statically), increased runtime efficiency (e.g. not all values need to carry a dynamic type), and a better design time developer experience (e.g. knowing the type of the receiver, the IDE can present a drop-down menu of all applicable members). Static typing fanatics try to make us believe that “well-typed programs cannot go wrong”. While this certainly sounds impressive, it is a rather vacuous statement. Static type checking is a compile-time abstraction of the runtime behavior of your program, and hence it is necessarily only partially sound and incomplete. This means that programs can still go wrong because of properties that are not tracked by the type-checker, and that there are programs that while they cannot go wrong cannot be type-checked. The impulse for making static typing less partial and more complete causes type systems to become overly complicated and exotic as witnessed by concepts such as “phantom types” [11] and “wobbly types” [10]. 
This is like trying to run a marathon with a ball and chain tied to your leg and triumphantly shouting that you nearly made it even though you bailed out after the first mile. Advocates of dynamically typed languages argue that static typing is too rigid, and that the softness of dynamically languages makes them ideally suited for prototyping systems with changing or unknown requirements, or that interact with other systems that change unpredictably (data and application integration). Of course, dynamically typed languages are indispensable for dealing with truly dynamic program behavior such as method interception, dynamic loading, mobile code, runtime reflection, etc. In the mother of all papers on scripting [16], John Ousterhout argues that statically typed systems programming languages make code less reusable, more verbose, not more safe, and less expressive than dynamically typed scripting languages. This argument is parroted literally by many proponents of dynamically typed scripting languages. We argue that this is a fallacy and falls into the same category as arguing that the essence of declarative programming is eliminating assignment. Or as John Hughes says [8], it is a logical impossibility to make a language more powerful by omitting features. Defending the fact that delaying all type-checking to runtime is a good thing, is playing ostrich tactics with the fact that errors should be caught as early in the development process as possible.
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/125367", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11034/" ] }
125,399
I have code that references a web service, and I'd like the address of that web service to be dynamic (read from a database, config file, etc.) so that it is easily changed. One major use of this will be to deploy to multiple environments where machine names and IP addresses are different. The web service signature will be the same across all deployments, just located elsewhere. Maybe I've just been spoiled by the Visual Studio "Add Web Reference" wizard - seems like this should be something relatively easy, though.
When you generate a web reference and click on it in the Solution Explorer, the Properties pane shows a "URL Behavior" property. Changing the value to Dynamic will put an entry in your app.config. Here is the CodePlex article that has more information.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/125399", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7623/" ] }
125,400
Not sure if this is possible or if I'm expressing correctly what I'm looking for, but I have the following piece of code in my library repeatedly and would like to practice some DRY. I have set of SQL Server tables that I'm querying based on a simple user-supplied search field ala Google. I'm using LINQ to compose the final query based on what's in the search string. I'm looking for a way to use generics and passed in lambda functions to create a reusable routine out of this: string[] arrayOfQueryTerms = getsTheArray();var somequery = from q in dataContext.MyTable select q;if (arrayOfQueryTerms.Length == 1){ somequery = somequery.Where<MyTableEntity>( e => e.FieldName.StartsWith(arrayOfQueryTerms[0]));}else{ foreach(string queryTerm in arrayOfQueryTerms) { if (!String.IsNullOrEmpty(queryTerm)) { somequery = somequery .Where<MyTableEntity>( e => e.FieldName.Contains(queryTerm)); } }} I was hoping to create a generic method with signature that looks something like: private IQueryable<T> getQuery( T MyTableEntity, string[] arrayOfQueryTerms, Func<T, bool> predicate) I'm using the same search strategy across all my tables, so the only thing that really differs from usage to usage is the MyTable & MyTableEntity searched and the FieldName searched. Does this make sense? Is there a way with LINQ to dynamically pass in the name of the field to query in the where clause? Or can I pass in this as a predicate lambda? e => e.FieldName.Contains(queryTerm) I realize there a million and a half ways to do this in SQL, probably easier, but I'd love to keep everything in the LINQ family for this one. Also, I feel that generics should be handy for a problem like this. Any ideas?
It sounds like you're looking for Dynamic Linq. Take a look here . This allows you to pass strings as arguments to the query methods, like: var query = dataSource.Where("CategoryID == 2 && UnitPrice > 3") .OrderBy("SupplierID"); Edit: Another set of posts on this subject, using C# 4's Dynamic support: Part 1 and Part 2 .
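The core idea behind Dynamic Linq - building a filter from a field *name* decided at runtime instead of a hard-coded lambda - can be illustrated language-neutrally. Below is a minimal Python sketch of that idea (not LINQ, and the `Entity`/`search` names are purely illustrative):

```python
# Sketch: filter rows on a field chosen at runtime, mirroring the
# "pass the field name dynamically" idea from the question.
class Entity:
    def __init__(self, **fields):
        self.__dict__.update(fields)

def search(rows, field_name, terms):
    """Keep rows whose `field_name` attribute contains every non-empty term."""
    terms = [t for t in terms if t]  # drop empty search terms, as in the question
    return [r for r in rows
            if all(term in getattr(r, field_name) for term in terms)]

rows = [Entity(FieldName="red apple"), Entity(FieldName="green pear")]
print([r.FieldName for r in search(rows, "FieldName", ["apple"])])  # → ['red apple']
```

The `getattr` lookup plays the role that the parsed string expression plays in Dynamic Linq: the field being queried is data, not code.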
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/125400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17729/" ] }
125,449
I am using Excel where certain fields are allowed for user input and other cells are to be protected. I have used Tools Protect sheet, however after doing this I am not able to change the values in the VBA script. I need to restrict the sheet to stop user input, at the same time allow the VBA code to change the cell values based on certain computations.
Try using Worksheet.Protect "Password", UserInterfaceOnly := True If the UserInterfaceOnly parameter is set to true, VBA code can modify protected cells.
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/125449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17266/" ] }
125,457
If I need to copy a stored procedure (SP) from one SQL Server to another I right click on the SP in SSMS and select Script Stored Procedure as > CREATE to > New Query Editor Window. I then change the connection by right clicking on that window and selecting Connection > Change Connection... and then selecting the new server and F5 to run the create on the new server. So my question is "What is the T-SQL syntax to connect to another SQL Server?" so that I can just paste that in the top of the create script and F5 to run it and it would switch to the new server and run the create script. While typing the question I realized that if I gave you the back ground to what I'm trying to do that you might come up with a faster and better way from me to accomplish this.
Also, make sure when you write the query involving the linked server, you include brackets like this: SELECT * FROM [LinkedServer].[RemoteDatabase].[User].[Table] I've found that at least on 2000/2005 the [] brackets are necessary, at least around the server name.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/125457", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1463/" ] }
125,512
In my rails app I use the validation helpers in my active record objects and they are great. When there is a problem I see the standard "3 errors prohibited this foobar from being saved" on my web page along with the individual problems. Is there any way I can override this default message with my own?
The error_messages_for helper that you are using to display the errors accepts a :header_message option that allows you to change that default header text. As in: error_messages_for 'model', :header_message => "You have some errors that prevented saving this model" The RubyOnRails API is your friend.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/125512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16779/" ] }
125,536
I want to add a constant value onto an incoming bound integer. In fact I have several places where I want to bind to the same source value but add different constants. So the ideal solution would be something like this... <TextBox Canvas.Top="{Binding ElementName=mySource, Path=myInt, Constant=5}"/><TextBox Canvas.Top="{Binding ElementName=mySource, Path=myInt, Constant=8}"/><TextBox Canvas.Top="{Binding ElementName=mySource, Path=myInt, Constant=24}"/> (NOTE: This is an example to show the idea, my actual binding scenario is not to the canvas property of a TextBox. But this shows the idea more clearly) At the moment the only solution I can think of is to expose many different source properties each of which adds on a different constant to the same internal value. So I could do something like this... <TextBox Canvas.Top="{Binding ElementName=mySource, Path=myIntPlus5}"/><TextBox Canvas.Top="{Binding ElementName=mySource, Path=myIntPlus8}"/><TextBox Canvas.Top="{Binding ElementName=mySource, Path=myIntPlus24}"/> But this is pretty grim because in the future I might need to keep adding new properties for new constants. Also if I need to change the value added I need to go an alter the source object which is pretty naff. There must be a more generic way than this? Any WPF experts got any ideas?
I use a MathConverter that I created to do all simple arithmatic operations with. The code for the converter is here and it can be used like this: <TextBox Canvas.Top="{Binding SomeValue, Converter={StaticResource MathConverter}, ConverterParameter=@VALUE+5}" /> You can even use it with more advanced arithmatic operations such as Width="{Binding ElementName=RootWindow, Path=ActualWidth, Converter={StaticResource MathConverter}, ConverterParameter=((@VALUE-200)*.3)}"
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/125536", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6276/" ] }
125,557
In a tic-tac-toe implementation I guess that the challenging part is to determine the best move to be played by the machine. What are the algorithms that can pursued? I'm looking into implementations from simple to complex. How would I go about tackling this part of the problem?
The strategy from Wikipedia for playing a perfect game (win or tie every time) seems like straightforward pseudo-code:

Quote from Wikipedia (Tic Tac Toe#Strategy)

A player can play a perfect game of Tic-tac-toe (to win or, at least, draw) if they choose the first available move from the following list, each turn, as used in Newell and Simon's 1972 tic-tac-toe program.[6]

1. Win: If you have two in a row, play the third to get three in a row.
2. Block: If the opponent has two in a row, play the third to block them.
3. Fork: Create an opportunity where you can win in two ways.
4. Block Opponent's Fork:
   Option 1: Create two in a row to force the opponent into defending, as long as it doesn't result in them creating a fork or winning. For example, if "X" has a corner, "O" has the center, and "X" has the opposite corner as well, "O" must not play a corner in order to win. (Playing a corner in this scenario creates a fork for "X" to win.)
   Option 2: If there is a configuration where the opponent can fork, block that fork.
5. Center: Play the center.
6. Opposite Corner: If the opponent is in the corner, play the opposite corner.
7. Empty Corner: Play an empty corner.
8. Empty Side: Play an empty side.

Recognizing what a "fork" situation looks like could be done in a brute-force manner as suggested.

Note: A "perfect" opponent is a nice exercise but ultimately not worth 'playing' against. You could, however, alter the priorities above to give characteristic weaknesses to opponent personalities.
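The first priorities of that list (Win, then Block, then a simple center/corner/side fallback) can be sketched in a few lines of Python. This is a minimal sketch only - fork detection and fork blocking are omitted:

```python
# Board is a list of 9 cells: "X", "O", or "" for empty, indexed row-major.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winning_move(board, player):
    """Return a cell index that completes three in a row for `player`, or None."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count("") == 1:
            return (a, b, c)[cells.index("")]
    return None

def best_move(board, me, opponent):
    # Priority 1: win if possible; priority 2: block the opponent's win;
    # otherwise fall back to center, then corners, then sides.
    move = winning_move(board, me)
    if move is None:
        move = winning_move(board, opponent)
    if move is None:
        for cell in (4, 0, 2, 6, 8, 1, 3, 5, 7):
            if board[cell] == "":
                return cell
    return move
```

Adding the fork rules on top of this skeleton is the brute-force part: try each empty cell and count how many lines it would turn into two-in-a-row threats.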
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19790/" ] }
125,580
So, I've been reading through and it appears that the Boost libraries get used a lot in practice (not at my shop, though). Why is this? and what makes it so wonderful?
Boost is used so extensively because:

- It is open-source and peer-reviewed.
- It provides a wide range of platform agnostic functionality that STL missed.
- It is a complement to STL rather than a replacement.
- Many of Boost's developers are on the C++ standard committee. In fact, many parts of Boost are considered to be included in the next C++ standard library.
- It is documented nicely.
- Its license allows inclusion in open-source and closed-source projects.
- Its features are not usually dependent on each other so you can link only the parts you require. [ Luc Hermitte 's comment]
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/125580", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10774/" ] }
125,597
Boost is meant to be the standard non-standard C++ library that every C++ user can use. Is it reasonable to assume it's available for an open source C++ project, or is it a large dependency too far?
Basically your question boils down to “is it reasonable to have [free library xyz] as a dependency for a C++ open source project.” Now consider the following quote from Stroustrup and the answer is really a no-brainer: Without a good library, most interesting tasks are hard to do in C++; but given a good library, almost any task can be made easy Assuming that this is correct (and in my experience, it is) then writing a reasonably-sized C++ project without dependencies is downright unreasonable. Developing this argument further, the one C++ dependency (apart from system libraries) that can reasonably be expected on a (developer's) client system is the Boost libraries.I know that they aren't but it's not an unreasonable presumption for a software to make. If a software can't even rely on Boost, it can't rely on any library.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/125597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11177/" ] }
125,619
I'm working on an app that requires no user input, but I don't want the iPhone to enter the power saving mode. Is it possible to disable power saving from an app?
Objective-C [[UIApplication sharedApplication] setIdleTimerDisabled:YES]; Swift UIApplication.shared.isIdleTimerDisabled = true
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/125619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3740/" ] }
125,632
When providing a link to a PDF file on a website, is it possible to include information in the URL (request parameters) which will make the PDF browser plugin (if used) jump to a particular bookmark instead of just opening at the beginning? Something like: http://www.somehost.com/user-guide.pdf?bookmark=chapter3 ? If not a bookmark, would it be possible to go to a particular page? I'm assuming that if there is an answer it may be specific to Adobe's PDF reader plugin or something, and may have version limitations, but I'm mostly interested in whether the technique exists at all.
Yes, you can link to specific pages by number or named locations and that will always work if the user's browser uses Adobe Reader as plugin for viewing PDF files .

For a specific page by number:

<a href="http://www.domain.com/file.pdf#page=3">Link text</a>

For a named location (destination):

<a href="http://www.domain.com/file.pdf#nameddest=TOC">Link text</a>

To create destinations within a PDF with Acrobat:

1. Manually navigate through the PDF for the desired location
2. Go to View > Navigation Tabs > Destinations
3. Under Options, choose Scan Document
4. Once this is completed, select New Destination from the Options menu and enter an appropriate name
125,677
So I'm writing a framework on which I want to base a few apps that I'm working on (the framework is there so I have an environment to work with, and a system that will let me, for example, use a single sign-on) I want to make this framework, and the apps it has use a Resource Oriented Architecture. Now, I want to create a URL routing class that is expandable by APP writers (and possibly also by CMS App users, but that's WAYYYY ahead in the future) and I'm trying to figure out the best way to do it by looking at how other apps do it.
I prefer to use reg ex over making my own format since it is common knowledge. I wrote a small class that I use which allows me to nest these reg ex routing tables. I used to use something similar that was implemented by inheritance but it didn't need inheritance so I rewrote it. I do a reg ex on a key and map to my own control string. Take the below example. I visit /api/related/joe and my router class creates a new object ApiController and calls its method relatedDocuments(array('tags' => 'joe'));

// the 12 strips the subdirectory my app is running in
$index = urldecode(substr($_SERVER["REQUEST_URI"], 12));

Route::process($index, array(
    "#^api/related/(.*)$#Di"    => "ApiController/relatedDocuments/tags",
    "#^thread/(.*)/post$#Di"    => "ThreadController/post/title",
    "#^thread/(.*)/reply$#Di"   => "ThreadController/reply/title",
    "#^thread/(.*)$#Di"         => "ThreadController/thread/title",
    "#^ajax/tag/(.*)/(.*)$#Di"  => "TagController/add/id/tags",
    "#^ajax/reply/(.*)/post$#Di"=> "ThreadController/ajaxPost/id",
    "#^ajax/reply/(.*)$#Di"     => "ArticleController/newReply/id",
    "#^ajax/toggle/(.*)$#Di"    => "ApiController/toggle/toggle",
    "#^$#Di"                    => "HomeController",
));

In order to keep errors down and simplicity up you can subdivide your table. This way you can put the routing table into the class that it controls. Taking the above example you can combine the three thread calls into a single one.

Route::process($index, array(
    "#^api/related/(.*)$#Di"    => "ApiController/relatedDocuments/tags",
    "#^thread/(.*)$#Di"         => "ThreadController/route/uri",
    "#^ajax/tag/(.*)/(.*)$#Di"  => "TagController/add/id/tags",
    "#^ajax/reply/(.*)/post$#Di"=> "ThreadController/ajaxPost/id",
    "#^ajax/reply/(.*)$#Di"     => "ArticleController/newReply/id",
    "#^ajax/toggle/(.*)$#Di"    => "ApiController/toggle/toggle",
    "#^$#Di"                    => "HomeController",
));

Then you define ThreadController::route to be like this.

function route($args) {
    Route::process($args['uri'], array(
        "#^(.*)/post$#Di"  => "ThreadController/post/title",
        "#^(.*)/reply$#Di" => "ThreadController/reply/title",
        "#^(.*)$#Di"       => "ThreadController/thread/title",
    ));
}

Also you can define whatever defaults you want for your routing string on the right. Just don't forget to document them or you will confuse people. I'm currently calling index if you don't include a function name on the right. Here is my current code. You may want to change it to handle errors how you like and/or default actions.
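The dispatch mechanism described above - walk an ordered table of regex-to-"Controller/method/params" entries and bind captured groups to the named parameters - can be sketched language-neutrally. A minimal Python version (names here are illustrative, not from the answer's class):

```python
# Sketch of a regex routing table: first matching pattern wins, and the
# regex capture groups are zipped with the parameter names on the right.
import re

def route(path, table, handlers):
    for pattern, target in table.items():
        m = re.match(pattern, path)
        if m:
            controller, method, *params = target.split("/")
            args = dict(zip(params, m.groups()))
            return handlers[controller](method, args)
    raise LookupError("no route for " + path)

table = {
    r"^api/related/(.*)$": "ApiController/relatedDocuments/tags",
    r"^thread/(.*)$":      "ThreadController/thread/title",
}

handlers = {
    "ApiController":    lambda method, args: ("api", method, args),
    "ThreadController": lambda method, args: ("thread", method, args),
}

print(route("api/related/joe", table, handlers))
# → ('api', 'relatedDocuments', {'tags': 'joe'})
```

Ordering matters just as in the PHP version: more specific patterns must come before catch-alls like `^thread/(.*)$`.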
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/125677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20010/" ] }
125,703
I'm using Python, and would like to insert a string into a text file without deleting or copying the file. How can I do that?
Depends on what you want to do. To append you can open it with "a":

with open("foo.txt", "a") as f:
    f.write("new line\n")

If you want to prepend something you have to read from the file first:

with open("foo.txt", "r+") as f:
    old = f.read()  # read everything in the file
    f.seek(0)  # rewind
    f.write("new line\n" + old)  # write the new line before
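The same read/seek/write pattern generalizes to inserting at an arbitrary line, still rewriting in place rather than copying to a second file. A small sketch (the function name is illustrative):

```python
# Insert `text` before line `line_no` (0-based), rewriting the file in place.
def insert_line(path, line_no, text):
    with open(path, "r+") as f:
        lines = f.readlines()
        lines.insert(line_no, text if text.endswith("\n") else text + "\n")
        f.seek(0)
        f.writelines(lines)
        f.truncate()  # needed in the general case, if the rewrite is shorter
```

Note the `truncate()`: when prepending or inserting the file only grows, but the call makes the rewrite-in-place pattern safe for any edit.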
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/125703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
125,719
Is there any way to edit column names in a DataGridView?
You can also change the column name by using: myDataGrid.Columns[0].HeaderText = "My Header" but the myDataGrid will need to have been bound to a DataSource .
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/125719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11137/" ] }
125,726
Has anyone any resources for learning how to implement SVG with php/mysql (and possibly with php-gtk)? I am thinking of making a top-down garden designer, with drag and drop predefined elements (such as trees/bushes) and definable areas of planting (circles/squares). Gardeners could then track over time how well planting did in a certain area. I don't really want to get into flash...
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/125726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
125,785
I need a function written in Excel VBA that will hash passwords using a standard algorithm such as SHA-1. Something with a simple interface like: Public Function CreateHash(Value As String) As String...End Function The function needs to work on an XP workstation with Excel 2003 installed, but otherwise must use no third party components. It can reference and use DLLs that are available with XP, such as CryptoAPI. Does anyone know of a sample to achieve this hashing functionality?
Here's a module for calculating SHA1 hashes that is usable for Excel formulas eg. '=SHA1HASH("test")'. To use it, make a new module called 'module_sha1' and copy and paste it all in.This is based on some VBA code from http://vb.wikia.com/wiki/SHA-1.bas , with changes to support passing it a string, and executable from formulas in Excel cells.

' Based on: http://vb.wikia.com/wiki/SHA-1.bas
Option Explicit

Private Type FourBytes
    A As Byte
    B As Byte
    C As Byte
    D As Byte
End Type

Private Type OneLong
    L As Long
End Type

Function HexDefaultSHA1(Message() As Byte) As String
    Dim H1 As Long, H2 As Long, H3 As Long, H4 As Long, H5 As Long
    DefaultSHA1 Message, H1, H2, H3, H4, H5
    HexDefaultSHA1 = DecToHex5(H1, H2, H3, H4, H5)
End Function

Function HexSHA1(Message() As Byte, ByVal Key1 As Long, ByVal Key2 As Long, ByVal Key3 As Long, ByVal Key4 As Long) As String
    Dim H1 As Long, H2 As Long, H3 As Long, H4 As Long, H5 As Long
    xSHA1 Message, Key1, Key2, Key3, Key4, H1, H2, H3, H4, H5
    HexSHA1 = DecToHex5(H1, H2, H3, H4, H5)
End Function

Sub DefaultSHA1(Message() As Byte, H1 As Long, H2 As Long, H3 As Long, H4 As Long, H5 As Long)
    xSHA1 Message, &H5A827999, &H6ED9EBA1, &H8F1BBCDC, &HCA62C1D6, H1, H2, H3, H4, H5
End Sub

Sub xSHA1(Message() As Byte, ByVal Key1 As Long, ByVal Key2 As Long, ByVal Key3 As Long, ByVal Key4 As Long, H1 As Long, H2 As Long, H3 As Long, H4 As Long, H5 As Long)
    'CA62C1D68F1BBCDC6ED9EBA15A827999 + "abc" = "A9993E36 4706816A BA3E2571 7850C26C 9CD0D89D"
    '"abc" = "A9993E36 4706816A BA3E2571 7850C26C 9CD0D89D"
    Dim U As Long, P As Long
    Dim FB As FourBytes, OL As OneLong
    Dim i As Integer
    Dim W(80) As Long
    Dim A As Long, B As Long, C As Long, D As Long, E As Long
    Dim T As Long

    H1 = &H67452301: H2 = &HEFCDAB89: H3 = &H98BADCFE: H4 = &H10325476: H5 = &HC3D2E1F0

    U = UBound(Message) + 1: OL.L = U32ShiftLeft3(U): A = U \ &H20000000: LSet FB = OL 'U32ShiftRight29(U)
    ReDim Preserve Message(0 To (U + 8 And -64) + 63)
    Message(U) = 128
    U = UBound(Message)
    Message(U - 4) = A
    Message(U - 3) = FB.D
    Message(U - 2) = FB.C
    Message(U - 1) = FB.B
    Message(U) = FB.A

    While P < U
        For i = 0 To 15
            FB.D = Message(P)
            FB.C = Message(P + 1)
            FB.B = Message(P + 2)
            FB.A = Message(P + 3)
            LSet OL = FB
            W(i) = OL.L
            P = P + 4
        Next i
        For i = 16 To 79
            W(i) = U32RotateLeft1(W(i - 3) Xor W(i - 8) Xor W(i - 14) Xor W(i - 16))
        Next i
        A = H1: B = H2: C = H3: D = H4: E = H5
        For i = 0 To 19
            T = U32Add(U32Add(U32Add(U32Add(U32RotateLeft5(A), E), W(i)), Key1), ((B And C) Or ((Not B) And D)))
            E = D: D = C: C = U32RotateLeft30(B): B = A: A = T
        Next i
        For i = 20 To 39
            T = U32Add(U32Add(U32Add(U32Add(U32RotateLeft5(A), E), W(i)), Key2), (B Xor C Xor D))
            E = D: D = C: C = U32RotateLeft30(B): B = A: A = T
        Next i
        For i = 40 To 59
            T = U32Add(U32Add(U32Add(U32Add(U32RotateLeft5(A), E), W(i)), Key3), ((B And C) Or (B And D) Or (C And D)))
            E = D: D = C: C = U32RotateLeft30(B): B = A: A = T
        Next i
        For i = 60 To 79
            T = U32Add(U32Add(U32Add(U32Add(U32RotateLeft5(A), E), W(i)), Key4), (B Xor C Xor D))
            E = D: D = C: C = U32RotateLeft30(B): B = A: A = T
        Next i
        H1 = U32Add(H1, A): H2 = U32Add(H2, B): H3 = U32Add(H3, C): H4 = U32Add(H4, D): H5 = U32Add(H5, E)
    Wend
End Sub

Function U32Add(ByVal A As Long, ByVal B As Long) As Long
    If (A Xor B) < 0 Then
        U32Add = A + B
    Else
        U32Add = (A Xor &H80000000) + B Xor &H80000000
    End If
End Function

Function U32ShiftLeft3(ByVal A As Long) As Long
    U32ShiftLeft3 = (A And &HFFFFFFF) * 8
    If A And &H10000000 Then U32ShiftLeft3 = U32ShiftLeft3 Or &H80000000
End Function

Function U32ShiftRight29(ByVal A As Long) As Long
    U32ShiftRight29 = (A And &HE0000000) \ &H20000000 And 7
End Function

Function U32RotateLeft1(ByVal A As Long) As Long
    U32RotateLeft1 = (A And &H3FFFFFFF) * 2
    If A And &H40000000 Then U32RotateLeft1 = U32RotateLeft1 Or &H80000000
    If A And &H80000000 Then U32RotateLeft1 = U32RotateLeft1 Or 1
End Function

Function U32RotateLeft5(ByVal A As Long) As Long
    U32RotateLeft5 = (A And &H3FFFFFF) * 32 Or (A And &HF8000000) \ &H8000000 And 31
    If A And &H4000000 Then U32RotateLeft5 = U32RotateLeft5 Or &H80000000
End Function

Function U32RotateLeft30(ByVal A As Long) As Long
    U32RotateLeft30 = (A And 1) * &H40000000 Or (A And &HFFFC) \ 4 And &H3FFFFFFF
    If A And 2 Then U32RotateLeft30 = U32RotateLeft30 Or &H80000000
End Function

Function DecToHex5(ByVal H1 As Long, ByVal H2 As Long, ByVal H3 As Long, ByVal H4 As Long, ByVal H5 As Long) As String
    Dim H As String, L As Long
    DecToHex5 = "00000000 00000000 00000000 00000000 00000000"
    H = Hex(H1): L = Len(H): Mid(DecToHex5, 9 - L, L) = H
    H = Hex(H2): L = Len(H): Mid(DecToHex5, 18 - L, L) = H
    H = Hex(H3): L = Len(H): Mid(DecToHex5, 27 - L, L) = H
    H = Hex(H4): L = Len(H): Mid(DecToHex5, 36 - L, L) = H
    H = Hex(H5): L = Len(H): Mid(DecToHex5, 45 - L, L) = H
End Function

' Convert the string into bytes so we can use the above functions
' From Chris Hulbert: http://splinter.com.au/blog
Public Function SHA1HASH(str)
    Dim i As Integer
    Dim arr() As Byte
    ReDim arr(0 To Len(str) - 1) As Byte
    For i = 0 To Len(str) - 1
        arr(i) = Asc(Mid(str, i + 1, 1))
    Next i
    SHA1HASH = Replace(LCase(HexDefaultSHA1(arr)), " ", "")
End Function
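A practical way to sanity-check a hand-rolled port like the VBA module above is to compare its output against a known-good implementation. Python's standard hashlib produces the same lowercase, space-free hex digest that SHA1HASH returns - at least for ASCII input (VBA's Asc() maps one character to one byte, so non-ASCII strings may differ; that caveat is an assumption to verify for your data):

```python
# Reference digests to compare against the VBA SHA1HASH output.
import hashlib

def sha1_hex(s):
    """SHA-1 digest of an ASCII string, as lowercase hex."""
    return hashlib.sha1(s.encode("utf-8")).hexdigest()

# Matches the test vector quoted in the VBA comments:
# "abc" -> A9993E36 4706816A BA3E2571 7850C26C 9CD0D89D
print(sha1_hex("abc"))  # → a9993e364706816aba3e25717850c26c9cd0d89d
```

If '=SHA1HASH("abc")' in a cell matches this value, the port's padding and rotation logic are almost certainly correct.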
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13087/" ] }
125,791
When Joel Spolsky and Jeff Atwood began the disagreement in their podcast over whether programmers should learn C, regardless of their industry and platform of delivery, it sparked quite an explosive debate within the developer community that probably still rages amongst certain groups today. I have been reading a number of passages from a number of programmer bloggers with their take on the matter. The arguments from both sides certainly carry weight, but what I did not find was a perspective uniquely angled from the standpoint of developers focused on just the .NET Framework. Practically all of them were commenting from a general programmer standpoint. What am I trying to get at? Recall Jeff Atwood's opinion that most of the time developers at such high levels spend would be on learning the business/domain, on top of whatever is needed to learn the technologies to achieve those domain requirements. In my working experience that is a very accurate description of the work life of many. Now supposing that .NET developers can fork the time for "extra-curricular" learning, should that be C? For the record, I have learnt C back in school myself, and I can absolutely understand and appreciate what the proponents are reasoning for. But, when thinking things through, I personally feel .NET developers should not dive straight into C. Because, the thing I wish more developers would take some time to learn is - MSIL and CLR. Maybe I am stuck with an unusual bunch of colleagues, I don't know, but it seems to me many people do not keep a conscious awareness that their C# or VB code compiles to IL first before JIT comes in and makes it raw machine code. Most do not know IL, and have no interest in how exactly the CLR handles the code they write. Reading Jeffrey Richter's CLR via C# was quite a shocker for me in so many areas; glad I read it despite colleagues dismissing it as "too low level". 
I am no expert in IL, but with knowledge of the basics I found myself following his text more easily, as I was already familiar with the stack behaviour of IL. I find myself disassembling assemblies to have a look at how the IL turns out when I write certain code. I learn the CLR and MSIL because I know that is the direct layer below me. The layer that allows me to carry out my own layer of work. C is actually further down. Closer to our "reality" is the CLR and MSIL. That is why I would recommend others to have a go at those, because I do not see enough folks delving into that layer. Or, is your team already all conversant with MSIL?
Of course you should. The greatest way to become overly specialized and single-minded (and, correspondingly, have limited marketable skills) is to only work with a single type of language and eschew all others as "not related to your current task." Every programmer should have some experience with a modern JIT'd OO language (C#/Java), a lower-level simpler language (C, FORTRAN, etc), a very high level interpreted language (Python, Ruby, etc), and a functional language (Scheme, Lisp, Haskell, etc). Even if you don't use all of them on a day-to-day basis, the broadening of your thought process that such knowledge grants is quite useful.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/125791", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2663/" ] }
125,813
How can I tell in JavaScript what path separator is used in the OS where the script is running?
Use the path module in Node.js; path.sep returns the platform-specific file separator. Example:

path.sep // on *nix evaluates to a string equal to "/"

Edit: As per Sebas's comment below, to use this, you need to add this at the top of your js file:

const path = require('path')
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/125813", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
125,828
I'm trying to start an external application through system() - for example, system("ls"). I would like to capture its output as it happens so I can send it to another function for further processing. What's the best way to do that in C/C++?
From the popen manual:

#include <stdio.h>

FILE *popen(const char *command, const char *type);
int pclose(FILE *stream);
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/125828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10010/" ] }
125,838
Does anyone know the keyboard shortcut in Visual Studio to open the context menu? i.e The equivalent of right clicking. Thanks.
Shift + F10 works in most Windows applications, but I don't have Visual Studio.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125838", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1127460/" ] }
125,849
We started some overseas merge replication 1 year ago and everything has been going fine till now. My problem is that we now have so much data in our system that any crash on one of the subscriber's servers will be a disaster: reinitialising a subscription the standard way will take days (our connections are definitely slow, but already very very expensive)! Among the ideas I have been following up are the following:

- make a copy of the original database, freeze it, send the files by plane to the subscriber, and initiate replication without snapshot: this is something that was done traditionally with older versions of SQL, but it sounds a little bit messy to me: I would have to put my publisher's data in read-only mode and stop all replications until the operation is completed.
- make a snapshot of the data, send the snapshot files abroad, install them on the subscriber, and indicate the new snapshot location as an alternate location in the replication properties. This one sounds fair to me (no necessity to suspend ongoing replications, no data freeze), but, on this point, Microsoft help does not ... help.

I am sure some of you have already experienced such a situation. What was your choice? EDIT: of course, one could say "Why don't you just give a try to your ideas", but it will take hours (multiple instances of SQL Server, virtual machines, and all that stuff...), and I was thinking that the guy who did it will need only 2 minutes to explain his idea. And I'd be the happiest man if someone accepted to lose 2 minutes of his time to spare me hours of hard work ...
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125849", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11436/" ] }
125,877
(Not related to versioning the database schema) Applications that interface with databases often have domain objects that are composed of data from many tables. Suppose the application were to support versioning, in the sense of CVS, for these domain objects. For some arbitrary domain object, how would you design a database schema to handle this requirement? Any experience to share?
Think carefully about the requirements for revisions. Once your code-base has pervasive history tracking built into the operational system it will get very complex. Insurance underwriting systems are particularly bad for this, with schemas often running in excess of 1000 tables. Queries also tend to be quite complex and this can lead to performance issues. If the historical state is really only required for reporting, consider implementing a 'current state' transactional system with a data warehouse structure hanging off the back for tracking history. Slowly Changing Dimensions are a much simpler structure for tracking historical state than trying to embed an ad-hoc history tracking mechanism directly into your operational system. Also, Changed Data Capture is simpler for a 'current state' system with changes being done to the records in place - the primary keys of the records don't change so you don't have to match records holding different versions of the same entity together. An effective CDC mechanism will make an incremental warehouse load process fairly lightweight and possible to run quite frequently. If you don't need up-to-the-minute tracking of historical state (almost, but not quite, an oxymoron) this can be an effective solution with a much simpler code base than a full history tracking mechanism built directly into the application.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/125877", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13251/" ] }
125,951
What is a good command line tool to create screenshots of websites on Linux? I need to automatically generate screenshots of websites without human interaction. The only tool that I found was khtml2png , but I wonder if there are others that aren't based on khtml (i.e. have good JavaScript support, ...).
A little more detail might be useful... Start a firefox (or other browser) in an X session, either on your console or using a vncserver. You can use the --height and --width options to set the size of the window to full screen. Another firefox command can be used to set the URL being displayed in the first firefox window. Now you can grab the screen image with one of several commands, such as the "import" command from the Imagemagick package, or using gimp, or fbgrab, or xv.

#!/bin/sh
# start a server with a specific DISPLAY
vncserver :11 -geometry 1024x768
# start firefox in this vnc session
firefox --display :11
# read URLs from a data file in a loop
count=1
while read url
do
    # send URL to the firefox session
    firefox --display :11 $url
    # take a picture after waiting a bit for the load to finish
    sleep 5
    import -window root image$count.jpg
    count=`expr $count + 1`
done < url_list.txt
# clean up when done
vncserver -kill :11
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/125951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4936/" ] }
125,964
Is there an easier way to step through the code than to start the service through the Windows Service Control Manager and then attaching the debugger to the thread? It's kind of cumbersome and I'm wondering if there is a more straightforward approach.
If I want to quickly debug the service, I just drop in a Debugger.Break() in there. When that line is reached, it will drop me back to VS. Don't forget to remove that line when you are done. UPDATE: As an alternative to #if DEBUG pragmas, you can also use the Conditional("DEBUG_SERVICE") attribute.

[Conditional("DEBUG_SERVICE")]
private static void DebugMode()
{
    Debugger.Break();
}

On your OnStart, just call this method:

public override void OnStart()
{
    DebugMode();
    /* ... do the rest */
}

There, the code will only be enabled during Debug builds. While you're at it, it might be useful to create a separate Build Configuration for service debugging.
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/125964", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16440/" ] }
126,002
I need to find out what ports are attached to which processes on a Unix machine (HP Itanium). Unfortunately, lsof is not installed and I have no way of installing it. Does anyone know an alternative method? A fairly lengthy Googling session hasn't turned up anything.
netstat -l (assuming it comes with that version of UNIX)
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/126002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21544/" ] }
126,012
I have a stored procedure in SQL 2005. The stored procedure creates temporary tables at the beginning of the SP and deletes them at the end. I am now debugging the SP in VS 2005. Partway through the SP I would want to know the contents of the temporary table. Can anybody help me in viewing the contents of the temporary table at run time? Thanks Vinod T
There are several kinds of temporary tables, I think you could use the table which is not dropped after SP used it. Just make sure you don't call the same SP twice or you'll get an error trying to create an existing table. Or just drop the temp table after you see it's content. So instead of using a table variable ( @table ) just use #table or ##table From http://arplis.com/temporary-tables-in-microsoft-sql-server/ : Local Temporary Tables Local temporary tables prefix with single number sign (#) as the first character of their names, like (#table_name). Local temporary tables are visible only in the current session OR you can say that they are visible only to the current connection for the user.They are deleted when the user disconnects from instances of Microsoft SQL Server. Global temporary tables Global temporary tables prefix with double number sign (##) as the first character of their names, like (##table_name). Global temporary tables are visible to all sessions OR you can say that they are visible to any user after they are created. They are deleted when all users referencing the table disconnect from Microsoft SQL Server.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/126012", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20951/" ] }
126,036
Is there a way to know and output the stack size needed by a function at compile time in C? Here is what I would like to know: Let's take some function:

void foo(int a) {
    char c[5];
    char * s;
    //do something
    return;
}

When compiling this function, I would like to know how much stack space it will consume when it is called. This might be useful to detect the on-stack declaration of a structure hiding a big buffer. I am looking for something that would print something like this:

file foo.c : function foo stack usage is n bytes

Is there a way not to look at the generated assembly to know that? Or a limit that can be set for the compiler? Update: I am not trying to avoid runtime stack overflow for a given process; I am looking for a way to find out, before runtime, if a function's stack usage, as determined by the compiler, is available as an output of the compilation process. Let's put it another way: is it possible to know the size of all the objects local to a function? I guess compiler optimization won't be my friend, because some variables will disappear, but an upper limit is fine.
Linux kernel code runs on a 4K stack on x86. Hence they care. What they use to check that, is a perl script they wrote, which you may find as scripts/checkstack.pl in a recent kernel tarball (2.6.25 has got it). It runs on the output of objdump, usage documentation is in the initial comment. I think I already used it for user-space binaries ages ago, and if you know a bit of perl programming, it's easy to fix that if it is broken. Anyway, what it basically does is to look automatically at GCC's output. And the fact that kernel hackers wrote such a tool means that there is no static way to do it with GCC (or maybe that it was added very recently, but I doubt so). Btw, with objdump from the mingw project and ActivePerl, or with Cygwin, you should be able to do that also on Windows and also on binaries obtained with other compilers.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/126036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11589/" ] }
126,070
I have an email subject of the form: =?utf-8?B?T3.....?= The body of the email is utf-8 base64 encoded - and has decoded fine. I am currently using Perl's Email::MIME module to decode the email. What is the meaning of the =?utf-8 delimiter and how do I extract information from this string?
The encoded-word tokens (as per RFC 2047) can occur in values of some headers. They are parsed as follows:

=?<charset>?<encoding>?<data>?=

Charset is UTF-8 in this case, the encoding is B which means base64 (the other option is Q which means Quoted-Printable). To read it, first decode the base64, then treat it as UTF-8 characters. Also read the various Internet Mail RFCs for more detail, mainly RFC 2047. Since you are using Perl, Encode::MIME::Header could be of use:

SYNOPSIS

use Encode qw/encode decode/;
$utf8 = decode('MIME-Header', $header);
$header = encode('MIME-Header', $utf8);

ABSTRACT

This module implements RFC 2047 MIME Header Encoding. There are 3 variant encoding names; MIME-Header, MIME-B and MIME-Q. The difference is described below:

              decode()          encode()
MIME-Header   Both B and Q      =?UTF-8?B?....?=
MIME-B        B only; Q croaks  =?UTF-8?B?....?=
MIME-Q        Q only; B croaks  =?UTF-8?Q?....?=
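As a cross-check of the decoding rule (base64-decode the data, then interpret the bytes in the declared charset), here is a small illustration of my own using Python's standard library rather than the Perl module discussed above:

```python
# Decode an RFC 2047 encoded-word header with the stdlib email package
from email.header import decode_header

raw = "=?utf-8?B?aGVsbG8gd29ybGQ=?="  # base64 for "hello world"

# decode_header returns a list of (bytes-or-str, charset) pairs
parts = decode_header(raw)
decoded = "".join(
    b.decode(cs or "ascii") if isinstance(b, bytes) else b
    for b, cs in parts
)
print(decoded)  # hello world
```

The same helper handles the Q (Quoted-Printable) variant and headers that mix plain text with several encoded words.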
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/126070", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21553/" ] }
126,073
I've been looking into OSGi recently and think it looks like a really good idea for modular Java apps. However, I was wondering how OSGi would work in a web application, where you don't just have code to worry about - also HTML, images, CSS, that sort of thing. At work we're building an application which has multiple 'tabs', each tab being one part of the app. I think this could really benefit from taking an OSGi approach - however I'm really not sure what would be the best way to handle all the usual web app resources. I'm not sure whether it makes any difference, but we're using JSF and IceFaces (which adds another layer of problems because you have navigation rules and you have to specify all faces config files in your web.xml... doh!) Edit: according to this thread , faces-config.xml files can be loaded up from JAR files - so it is actually possible to have multiple faces-config.xml files included without modifying web.xml, provided you split up into JAR files. Any suggestions would be greatly appreciated :-)
You are very right in thinking there are synergies here, we have a modular web app where the app itself is assembled automatically from independent components (OSGi bundles) where each bundle contributes its own pages, resources, css and optionally javascript. We don't use JSF (Spring MVC here) so I can't comment on the added complexity of that framework in an OSGi context. Most frameworks or approaches out there still adhere to the "old" way of thinking: one WAR file representing your webapp and then many OSGi bundles and services but almost none concern themselves with the modularisation of the GUI itself.

Prerequisites for a Design

With OSGi the first question to solve is: what is your deployment scenario and who is the primary container? What I mean is that you can deploy your application on an OSGi runtime and use its infrastructure for everything. Alternatively, you can embed an OSGi runtime in a traditional app server and then you will need to re-use some infrastructure, specifically you want to use the AppServer's servlet engine. Our design is currently based on OSGi as the container and we use the HTTPService offered by OSGi as our servlet container. We are looking into providing some sort of transparent bridge between an external servlet container and the OSGi HTTPService but that work is ongoing.

Architectural Sketch of a Spring MVC + OSGi modular webapp

So the goal is not to just serve a web application over OSGi but to also apply OSGi's component model to the web UI itself, to make it composable, re-usable, dynamic. These are the components in the system:

1 central bundle that takes care of bridging Spring MVC with OSGi, specifically it uses code by Bernd Kolb to allow you to register the Spring DispatcherServlet with OSGi as a servlet.

1 custom URL Mapper that is injected into the DispatcherServlet and that provides the mapping of incoming HTTP requests to the correct controller.
1 central Sitemesh based decorator JSP that defines the global layout of the site, as well as the central CSS and Javascript libraries that we want to offer as defaults.

Each bundle that wants to contribute pages to our web UI has to publish 1 or more Controllers as OSGi Services and make sure to register its own servlet and its own resources (CSS, JSP, images, etc) with the OSGi HTTPService. The registering is done with the HTTPService and the key methods are:

httpService.registerResources()
httpService.registerServlet()

When a web ui contributing bundle activates and publishes its controllers, they are automatically picked up by our central web ui bundle and the aforementioned custom URL Mapper gathers these Controller services and keeps an up to date map of URLs to Controller instances. Then when an HTTP request comes in for a certain URL, it finds the associated controller and dispatches the request there. The Controller does its business and then returns any data that should be rendered and the name of the view (a JSP in our case). This JSP is located in the Controller's bundle and can be accessed and rendered by the central web ui bundle exactly because we went and registered the resource location with the HTTPService. Our central view resolver then merges this JSP with our central Sitemesh decorator and spits out the resulting HTML to the client. I know this is rather high level but without providing the complete implementation it's hard to fully explain. Our key learning point for this was to look at what Bernd Kolb did with his example JPetstore conversion to OSGi and to use that information to design our own architecture. IMHO there is currently way too much hype and focus on getting OSGi somehow embedded in traditional Java EE based apps and very little thought being put into actually making use of OSGi idioms and its excellent component model to really allow the design of componentized web applications.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/126073", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
126,100
What's the fastest way to count the number of keys/properties of an object? Is it possible to do this without iterating over the object? I.e., without doing: var count = 0;for (k in myobj) if (myobj.hasOwnProperty(k)) ++count; (Firefox did provide a magic __count__ property, but this was removed somewhere around version 4.)
To do this in any ES5 -compatible environment , such as Node.js , Chrome, Internet Explorer 9+ , Firefox 4+, or Safari 5+: Object.keys(obj).length Browser compatibility Object.keys documentation (includes a method you can add to non-ES5 browsers)
{ "score": 13, "source": [ "https://Stackoverflow.com/questions/126100", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11543/" ] }
126,141
I need to determine which version of GTK+ is installed on Ubuntu. man does not seem to help.
This suggestion will tell you which minor version of 2.0 is installed. Different major versions will have different package names because they can co-exist on the system (in order to support applications built with older versions). Even for development files, which normally would only let you have one version on the system, you can have a version of gtk 1.x and a version of gtk 2.0 on the same system (the include files are in directories called gtk-1.2 or gtk-2.0). So in short there isn't a simple answer to "what version of GTK is on the system". But... Try something like: dpkg -l libgtk* | grep -e '^i' | grep -e 'libgtk-*[0-9]' to list all the libgtk packages, including -dev ones, that are on your system. dpkg -l will list all the packages that dpkg knows about, including ones that aren't currently installed, so I've used grep to list only ones that are installed (line starts with i). Alternatively, and probably better if it's the version of the headers etc that you're interested in, use pkg-config: pkg-config --modversion gtk+ will tell you what version of GTK 1.x development files are installed, and pkg-config --modversion gtk+-2.0 will tell you what version of GTK 2.0. The old 1.x version also has its own gtk-config program that does the same thing. Similarly, for GTK+ 3: pkg-config --modversion gtk+-3.0
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/126141", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15352/" ] }
126,179
If I have a query like, DELETE FROM table WHERE datetime_field < '2008-01-01 00:00:00' does having the datetime_field column indexed help? i.e. is the index only useful when using equality (or inequality) testing, or is it useful when doing an ordered comparison as well? (Suggestions for better executing this query, without recreating the table, would also be ok!)
Maybe. In general, if there is such an index, it will use a range scan on that index if there is no "better" index on the query. However, if the optimiser decides that the range would end up being too big (i.e. include more than, say 1/3 of the rows), it probably won't use the index at all, as a table scan would be likely to be faster. Use EXPLAIN (on a SELECT; you can't EXPLAIN a delete) to determine its decision in a specific case. This is likely to depend upon How many rows there are in the table What the range is that you're specifying What else is specified in the WHERE clause. It won't use a range scan of one index if there is another index which "looks better".
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/126179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4966/" ] }
126,207
Are there any best practices (or even standards) to store addresses in a consistent and comprehensive way in a database? To be more specific, I believe at this stage that there are two cases for address storage:

- you just need to associate an address to a person, a building or any item (the most common case). Then a flat table with text columns (address1, address2, zip, city) is probably enough. This is not the case I'm interested in.
- you want to run statistics on your addresses: how many items in a specific street, or city or... Then you want to avoid misspellings of any sort, and ensure consistency.

My question is about best practices in this specific case: what are the best ways to model a consistent address database? A country-specific design/solution would be an excellent start. ANSWER: There does not seem to exist a perfect answer to this question yet, but:

- xAL, as suggested by Hank, is the closest thing to a global standard that popped up. It seems to be quite overkill though, and I am not sure many people would want to implement it in their database...
- To start one's own design (for a specific country), Dave's link to the Universal Postal Union (UPU) site is a very good starting point.
- As for France, there is a norm (non-official, but de facto standard) for addresses, which bears the lovely name of AFNOR XP Z10-011 (French only), and has to be paid for. The UPU description for France is based on this norm.
- I happened to find the equivalent norm for Sweden: SS 613401.
- At the European level, some effort has been made, resulting in the norm EN 14142-1. It is obtainable via CEN national members.
I've been thinking about this myself as well. Here are my loose thoughts so far, and I'm wondering what other people think. xAL (and its sister that includes personal names, XNAL) is used by both Google and Yahoo's geocoding services, giving it some weight. But since the same address can be described in xAL in many different ways--some more specific than others--then I don't see how xAL itself is an acceptable format for data storage. Some of its field names could be used, however, but in reality the only basic format that can be used among the 16 countries that my company ships to is the following:

enum address-fields {
    name,
    company-name,
    street-lines[], // up to 4 free-type street lines
    county/sublocality,
    city/town/district,
    state/province/region/territory,
    postal-code,
    country
}

That's easy enough to map into a single database table, just allowing for NULLs on most of the columns. And it seems that this is how Amazon and a lot of organizations actually store address data. So the question that remains is how should I model this in an object model that is easily used by programmers and by any GUI code. Do we have a base Address type with subclasses for each type of address, such as AmericanAddress, CanadianAddress, GermanAddress, and so forth? Each of these address types would know how to format themselves and optionally would know a little bit about the validation of the fields. 
They could also return some type of metadata about each of the fields, such as the following pseudocode data structure:

structure address-field-metadata {
    field-number,     // corresponds to the enumeration above
    field-index,      // the order in which the field is usually displayed
    field-name,       // a "localized" name; US == "State", CA == "Province", etc
    is-applicable,    // whether or not the field is even looked at / valid
    is-required,      // whether or not the field is required
    validation-regex, // an optional regex to apply against the field
    allowed-values[]  // an optional array of specific values the field can be set to
}

In fact, instead of having individual address objects for each country, we could take the slightly less object-oriented approach of having an Address object that eschews .NET properties and uses an AddressStrategy to determine formatting and validation rules:

object address
{
    set-field(field-number, field-value),
    address-strategy
}

object address-strategy
{
    validate-field(field-number, field-value),
    cleanse-address(address),
    format-address(address, formatting-options)
}

When setting a field, that Address object would invoke the appropriate method on its internal AddressStrategy object. The reason for using a SetField() method approach rather than properties with getters and setters is so that it is easier for code to actually set these fields in a generic way without resorting to reflection or switch statements. You can imagine the process going something like this: GUI code calls a factory method or some such to create an address based on a country. (The country dropdown, then, is the first thing that the customer selects, or has a good guess pre-selected for them based on culture info or IP address.) GUI calls address.GetMetadata() or a similar method and receives a list of the AddressFieldMetadata structures as described above. 
It can use this metadata to determine what fields to display (ignoring those with is-applicable set to false ), what to label those fields (using the field-name member), display those fields in a particular order, and perform cursory, presentation-level validation on that data (using the is-required , validation-regex , and allowed-values members). GUI calls the address.SetField() method using the field-number (which corresponds to the enumeration above) and its given values. The Address object or its strategy can then perform some advanced address validation on those fields, invoke address cleaners, etc. There could be slight variations on the above if we want to make the Address object itself behave like an immutable object once it is created. (Which I will probably try to do, since the Address object is really more like a data structure, and probably will never have any true behavior associated with itself.) Does any of this make sense? Am I straying too far off of the OOP path? To me, this represents a pretty sensible compromise between being so abstract that implementation is nigh-impossible (xAL) versus being strictly US-biased. Update 2 years later: I eventually ended up with a system similar to this and wrote about it at my defunct blog . I feel like this solution is the right balance between legacy data and relational data storage, at least for the e-commerce world.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/126207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8696/" ] }
126,242
This is probably explained more easily with an example. I'm trying to find a way of turning a relative URL, e.g. "/Foo.aspx" or "~/Foo.aspx" into a full URL, e.g. http://localhost/Foo.aspx . That way when I deploy to test or stage, where the domain under which the site runs is different, I will get http://test/Foo.aspx and http://stage/Foo.aspx . Any ideas?
Have a play with this (modified from here):

public string ConvertRelativeUrlToAbsoluteUrl(string relativeUrl)
{
    return string.Format("http{0}://{1}{2}",
        (Request.IsSecureConnection) ? "s" : "",
        Request.Url.Host,
        Page.ResolveUrl(relativeUrl)
    );
}
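Outside ASP.NET, the same relative-to-absolute resolution can be sketched with Python's standard urllib. This is a rough analogy only — the tilde handling is an assumption that the application root maps to the site root, which Page.ResolveUrl handles properly in the real answer:

```python
from urllib.parse import urljoin

def to_absolute(base_url, relative_url):
    """Resolve "/Foo.aspx"-style and "~/Foo.aspx"-style paths against a base URL."""
    if relative_url.startswith("~/"):
        # "~" marks the application root in ASP.NET; here we simply
        # treat the app root as the site root.
        relative_url = relative_url[1:]
    return urljoin(base_url, relative_url)

local = to_absolute("http://localhost/", "~/Foo.aspx")
test = to_absolute("http://test/", "/Foo.aspx")
```

Because the base URL is a parameter, the same relative path resolves to the right host on localhost, test, and stage.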
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/126242", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12277/" ] }
126,260
Since the release of Adobe AIR I am wondering why Java Web Start has not gained more attention in the past as to me it seems to be very similar, but web start is available for a much longer time. Is it mainly because of bad marketing from Sun, or are there more technical concerns other than the need of having the right JVM installed? Do you have bad experiences using Web Start? If yes, which? What are you recommendations when using Web Start for distributing applications?
In my company we used Java Web Start to deploy Eclipse RCP applications. It was a pain to set up, but it works very well once in place. So the only recommendation I could make is to start small, to get the hang of it. Deploy one simple application first. Trying to deploy a complete, already-made product without experience with JWS gets complicated rather quickly.

Also, learning how to pass arguments to the JWS application was invaluable for debugging. Setting the environment variable JAVAWS_VM_ARGS allows setting any arbitrary property on the Java virtual machine. In my case:

-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=4144

Helpful when you need to check problems during start-up (suspend=y).

I think the main problem for the acceptance of Java Web Start is that it is relatively difficult to set up. Also, somehow there is this dissonance: when you have a desktop application, people expect an installer they can double-click; when you have a web application, people expect to use it right from the browser. Java Web Start is neither here nor there... It is widely used in intranets, though.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/126260", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7524/" ] }
126,271
Does someone have experience with storing key-value pairs in a database? I've been using this type of table:

CREATE TABLE key_value_pairs (
    itemid varchar(32) NOT NULL,
    itemkey varchar(32) NOT NULL,
    itemvalue varchar(32) NOT NULL,
    CONSTRAINT ct_primarykey PRIMARY KEY(itemid, itemkey)
)

Then for example the following rows could exist:

itemid           itemkey       itemvalue
---------------- ------------- ------------
123              Colour        Red
123              Size          Medium
123              Fabric        Cotton

The trouble with this scheme is that the SQL syntax required to extract data is quite complex. Would it be better to just create a series of key/value columns?

CREATE TABLE key_value_pairs (
    itemid varchar(32) NOT NULL,
    itemkey1 varchar(32) NOT NULL,
    itemvalue1 varchar(32) NOT NULL,
    itemkey2 varchar(32) NOT NULL,
    itemvalue2 varchar(32) NOT NULL,
    . . . etc . . .
)

This will be easier and faster to query but lacks the extensibility of the first approach. Any advice?
Before you continue on your approach, I would humbly suggest you step back and consider if you really want to store this data in a "key-value pair" table. I don't know your application, but my experience has shown that every time I have done what you are doing, later on I wish I had created a color table, a fabric table and a size table.

Think about referential integrity constraints: if you take the key-value pair approach, the database can't tell you when you are trying to store a color id in a size field.

Think about the performance benefits of joining on a table with 10 values versus a generic table that may have thousands of values across multiple domains. How useful is an index on itemkey really going to be?

Usually the reasoning behind doing what you are doing is because the domains need to be "user definable". If that is the case, then even I am not going to push you towards creating tables on the fly (although that is a feasible approach). However, if your reasoning is because you think it will be easier to manage than multiple tables, or because you are envisioning a maintenance user interface that is generic for all domains, then stop and think really hard before you continue.
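The query-complexity cost being warned about is easy to demonstrate with an in-memory sqlite3 sketch using the table and rows from the question: reassembling a single logical row from the key-value table takes one self-join per attribute.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE key_value_pairs (
    itemid TEXT NOT NULL,
    itemkey TEXT NOT NULL,
    itemvalue TEXT NOT NULL,
    PRIMARY KEY (itemid, itemkey))""")
conn.executemany(
    "INSERT INTO key_value_pairs VALUES (?, ?, ?)",
    [("123", "Colour", "Red"),
     ("123", "Size", "Medium"),
     ("123", "Fabric", "Cotton")],
)

# One self-join per attribute just to rebuild what a normal table
# would return with "SELECT colour, size, fabric FROM items".
row = conn.execute("""
    SELECT c.itemvalue, s.itemvalue, f.itemvalue
    FROM key_value_pairs c
    JOIN key_value_pairs s ON s.itemid = c.itemid AND s.itemkey = 'Size'
    JOIN key_value_pairs f ON f.itemid = c.itemid AND f.itemkey = 'Fabric'
    WHERE c.itemid = '123' AND c.itemkey = 'Colour'
""").fetchone()
```

Every new attribute you want in the result set adds another join, which is exactly the "quite complex SQL" the question complains about.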
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/126271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2887/" ] }
126,277
Say you want to generate a matched list of identifiers and strings:

enum {
    NAME_ONE,
    NAME_TWO,
    NAME_THREE
};

myFunction(NAME_ONE, "NAME_ONE");
myFunction(NAME_TWO, "NAME_TWO");
myFunction(NAME_THREE, "NAME_THREE");

..without repeating yourself, and without auto-generating the code, using C/C++ macros.

Initial guess: You could add an #include file containing

myDefine(NAME_ONE)
myDefine(NAME_TWO)
myDefine(NAME_THREE)

Then use it twice like:

#define myDefine(a) a,
enum {
#include "definitions"
}
#undef myDefine

#define myDefine(a) myFunc(a, "a");
#include "definitions"
#undef myDefine

but #define doesn't let you put parameters within a string?
For your second #define, you need to use the # preprocessor operator, like this:

#define myDefine(a) myFunc(a, #a);

That converts the argument to a string.
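Most other languages have no preprocessor, but the same single-source-of-truth idea translates directly: keep one list of names and derive both the numeric constants and the string registrations from it. A Python sketch, with a hypothetical my_function standing in for the C myFunction:

```python
NAMES = ["NAME_ONE", "NAME_TWO", "NAME_THREE"]

# Build the "enum": NAME_ONE == 0, NAME_TWO == 1, ...
constants = {name: value for value, name in enumerate(NAMES)}

calls = []
def my_function(value, name):
    # Stand-in for the C myFunction; just records what it was called with.
    calls.append((value, name))

# One list drives both the constants and the matching string registrations,
# so adding a name in NAMES updates everything at once.
for name in NAMES:
    my_function(constants[name], name)
```

Adding NAME_FOUR to the list is a one-line change, which is the whole point of the X-macro trick in the answer.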
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/126277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/46478/" ] }
126,279
To my amazement I just discovered that the C99 stdint.h is missing from MS Visual Studio 2003 upwards. I'm sure they have their reasons, but does anyone know where I can download a copy? Without this header I have no definitions for useful types such as uint32_t, etc.
Turns out you can download a MS version of this header from: https://github.com/mattn/gntp-send/blob/master/include/msinttypes/stdint.h

A portable one can be found here: http://www.azillionmonkeys.com/qed/pstdint.h

Thanks to the Software Ramblings blog.

NB: The public domain version of the header, mentioned by Michael Burr in a comment, can be found as an archived copy here. An updated version can be found in the Android source tree for libusb_aah.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/126279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9236/" ] }
126,320
Suppose I have a set of commits in a repository folder...

123 (250 new files, 137 changed files, 14 deleted files)
122 (150 changed files)
121 (renamed folder)
120 (90 changed files)
119 (115 changed files, 14 deleted files, 12 added files)
118 (113 changed files)
117 (10 changed files)

I want to get a working copy that includes all changes from revision 117 onward but does NOT include the changes for revisions 118 and 120.

EDIT: To perhaps make the problem clearer, I want to undo the changes that were made in 118 and 120 while retaining all other changes. The folder contains thousands of files in hundreds of subfolders.

What is the best way to achieve this?

The answer, thanks to Bruno and Bert, is the command (in this case, for removing 120 after the full merge was performed)

svn merge -c -120 .

Note that the revision number must be specified with a leading minus: '-120', not '120'.
To undo revisions 118 and 120:

svn up -r HEAD       # get latest revision
svn merge -c -120 .  # undo revision 120
svn merge -c -118 .  # undo revision 118
svn commit           # after solving problems (if any)

Also see the description in Undoing changes. Note the minus in the -c -120 argument. The -c (or --change) switch is supported since Subversion 1.4; older versions can use -r 120:119.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/126320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4200/" ] }
126,406
I would like to change the database files location of MySQL Administrator to another drive of my computer. (I run Windows XP SP2 and MySQL Administrator 1.2.8.)

Under Startup Variables --> General Parameters, I changed Data directory from C:/Program Files/MySQL/MySQL Server 5.0/data to D:/....., but after I stopped the service and restarted it, the following error appeared:

Could not re-connect to the MySQL Server.
Server could not be started.
Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist

Has anyone else had this problem?
Normally it works like this:

1. shut down MySQL
2. change the [mysqld] and [mysqld_safe] datadir variable in the MySQL configuration
3. change the basedir variable in the same section
4. move the location over
5. restart MySQL

If that doesn't work I have no idea. On Linux you can try to move the socket to a new location too, but that shouldn't affect Windows. Alternatively you can use a symbolic link on *nix, which is what most people do, I guess.
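As a toy illustration of the configuration edit in steps 2-3, Python's configparser can rewrite the datadir entry programmatically. The file contents and the D: path below are invented for the example, and a real my.ini edit would also touch the [mysqld_safe] section:

```python
import configparser
import io

# Hypothetical my.ini contents; a real file has many more settings.
original = io.StringIO("""[mysqld]
basedir=C:/Program Files/MySQL/MySQL Server 5.0
datadir=C:/Program Files/MySQL/MySQL Server 5.0/data
""")

config = configparser.ConfigParser()
config.read_file(original)

# Step 2: point datadir at the new drive (path made up for the example).
config["mysqld"]["datadir"] = "D:/mysql/data"

new_datadir = config["mysqld"]["datadir"]
```

The key point the error in the question illustrates: changing this setting only tells the server where to look — the existing data files (including the mysql privilege tables) must actually be moved there before restarting.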
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/126406", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21608/" ] }
126,409
What are the ways to eliminate the use of switch in code?
Switch statements are not an antipattern per se, but if you're coding object-oriented you should consider whether the use of a switch is better solved with polymorphism instead of using a switch statement.

With polymorphism, this:

foreach (var animal in zoo) {
    switch (typeof(animal)) {
        case "dog":
            echo animal.bark();
            break;
        case "cat":
            echo animal.meow();
            break;
    }
}

becomes this:

foreach (var animal in zoo) {
    echo animal.speak();
}
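The same dispatch as a runnable Python sketch (the class and sound names are invented for illustration):

```python
class Animal:
    def speak(self):
        raise NotImplementedError  # every subclass supplies its own sound

class Dog(Animal):
    def speak(self):
        return "woof"

class Cat(Animal):
    def speak(self):
        return "meow"

# No switch on the concrete type: each subclass carries its own
# behaviour, and the loop just dispatches through speak().
zoo = [Dog(), Cat()]
sounds = [animal.speak() for animal in zoo]
```

Adding a new animal means adding a class, not editing every switch statement that branches on the type — which is the extensibility argument for replacing the switch.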
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/126409", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12514/" ] }
126,430
Is it possible to change the natural order of columns in Postgres 8.1? I know that you shouldn't rely on column order - it's not essential to what I am doing - I only need it to make some auto-generated stuff come out in a way that is more pleasing, so that the field order matches all the way from pgadmin through the back end and out to the front end.
You can actually just straight up change the column order, but I'd hardly recommend it, and you should be very careful if you decide to do it.

eg.

# CREATE TABLE test (a int, b int, c int);
# INSERT INTO test VALUES (1,2,3);
# SELECT * FROM test;
 a | b | c
---+---+---
 1 | 2 | 3
(1 row)

Now for the tricky bit: you need to connect to your database using the postgres user so you can modify the system tables.

# SELECT relname, relfilenode FROM pg_class WHERE relname='test';
 relname | relfilenode
---------+-------------
 test_t  |       27666
(1 row)

# SELECT attrelid, attname, attnum FROM pg_attribute WHERE attrelid=27666;
 attrelid | attname  | attnum
----------+----------+--------
    27666 | tableoid |     -7
    27666 | cmax     |     -6
    27666 | xmax     |     -5
    27666 | cmin     |     -4
    27666 | xmin     |     -3
    27666 | ctid     |     -1
    27666 | b        |      1
    27666 | a        |      2
    27666 | c        |      3
(9 rows)

attnum is a unique column, so you need to use a temporary value when you're modifying the column numbers, as such:

# UPDATE pg_attribute SET attnum=4 WHERE attname='a' AND attrelid=27666;
UPDATE 1
# UPDATE pg_attribute SET attnum=1 WHERE attname='b' AND attrelid=27666;
UPDATE 1
# UPDATE pg_attribute SET attnum=2 WHERE attname='a' AND attrelid=27666;
UPDATE 1
# SELECT * FROM test;
 b | a | c
---+---+---
 1 | 2 | 3
(1 row)

Again, because this is playing around with database system tables, use extreme caution if you feel you really need to do this. This is working as of Postgres 8.3; with prior versions, your mileage may vary.
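The safer route usually recommended instead of touching pg_attribute is to recreate the table with the columns in the desired order and copy the data across. A sqlite3 sketch of that idea (in Postgres you would additionally have to recreate indexes, constraints, and permissions, and note that unlike the catalog hack this moves each value along with its column name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (a INTEGER, b INTEGER, c INTEGER)")
conn.execute("INSERT INTO test VALUES (1, 2, 3)")

# Recreate the table with the columns in the desired order,
# then drop the original and rename the copy into place.
conn.execute("CREATE TABLE test_new AS SELECT b, a, c FROM test")
conn.execute("DROP TABLE test")
conn.execute("ALTER TABLE test_new RENAME TO test")

cursor = conn.execute("SELECT * FROM test")
columns = [d[0] for d in cursor.description]
row = cursor.fetchone()
```

This works on any SQL engine and keeps each value attached to its column, at the cost of rewriting the table.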
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/126430", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3408/" ] }
126,445
I am implementing a design that uses custom styled submit buttons. They are quite simply light grey buttons with a slightly darker outer border:

input.button {
    background: #eee;
    border: 1px solid #ccc;
}

This looks just right in Firefox, Safari and Opera. The problem is with Internet Explorer, both 6 and 7.

Since the form is the first one on the page, it's counted as the main form - and thus active from the get-go. The first submit button in the active form receives a solid black border in IE, to mark it as the main action. If I turn off borders, then the black extra border in IE goes away too. I am looking for a way to keep my normal borders, but remove the outline.
Well this works here:

<html>
  <head>
    <style type="text/css">
      span.button {
        background: #eee;
        border: 1px solid #ccc;
      }
      span.button input {
        background: none;
        border: 0;
        margin: 0;
        padding: 0;
      }
    </style>
  </head>
  <body>
    <span class="button"><input type="button" name="..." value="Button"/></span>
  </body>
</html>
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/126445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1123/" ] }