Q: PHP with SQL Server 2005+ Currently we have a hybrid ASP/PHP setup connecting to a SQL Server 2005 database, but all the query work is done on the client side. I'd like to move some of this to PHP. What driver and/or connection string is needed to connect to SQL Server, and what is the syntax to use in PHP?
Update: OK, so I was definitely trying to avoid anything that involves copying DLLs and the like. I'll look into the SQL2K5PHP driver (thanks Vincent). @jcarrascal: for the sake of clarity, by "client side" I mean our application is an internal web app that runs as an HTA, with all queries done via JavaScript calls to an ASP page which actually submits the DB request.

A: You have two options:
1) php_mssql extension: if you'd like something that has the same API as mysql and mysqli, use the php_mssql extension. But there is a catch: the ntwdblib.dll file bundled with PHP does not work. You have to take this file from a SQL Server 2000 installation, or find it on the Internet. This API is supposedly not very reliable, but I have been using it without problems for about a year. http://ca.php.net/mssql
2) Microsoft SQL Server 2005 PHP Driver: if you'd like something more modern. It does not have the same API and is missing some important functions (mssql_num_rows), but the big plus is that it is supported by Microsoft and is likely to work with future versions. http://msdn.microsoft.com/en-us/data/cc299381.aspx

A: Just use the mssql_connect() function like this:

    $conn = mssql_connect('localhost', 'sa', '123456') or die('Can\'t connect.');
    mssql_select_db('database', $conn) or die('Can\'t select the database.');

Functions relating to SQL Server are defined in the PHP manual for the MSSQL driver. One question though: "all the query work is done on the client side"? WTF? :D

A: PHP provides an extension for accessing Microsoft SQL Server databases. To use the SQL Server extension, all that is required is to activate the extension in the PHP configuration file. Details are on the MSDN page.
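For reference, the Microsoft driver mentioned in option 2 exposes a sqlsrv_* API rather than mssql_*. A minimal sketch, assuming the extension is loaded; the server, database, credentials, and table are illustrative values, not from the question:

    <?php
    $conn = sqlsrv_connect('localhost', array(
        'Database' => 'mydb',   // illustrative connection values
        'UID'      => 'sa',
        'PWD'      => '123456',
    ));
    if ($conn === false) {
        die(print_r(sqlsrv_errors(), true));
    }

    $stmt = sqlsrv_query($conn, 'SELECT id, name FROM mytable');
    while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
        echo $row['name'], "\n";
    }
    sqlsrv_close($conn);

Note that with the default forward-only cursor there is no direct equivalent of mssql_num_rows; you count rows as you fetch them.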
{ "language": "en", "url": "https://stackoverflow.com/questions/19837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Ruby blocks/Java closures in C I've been trying to understand how Ruby blocks work, and to do that I've been trying to implement them in C. One easy way to implement closures is to pass a void* to the enclosing stack frame into the closure/function, but Ruby blocks also seem to handle returns and break statements from the scope that uses the block.

    loop do
      break i if (i >= 4000)
      i *= 2
    end

I think one of the closures proposals for Java works like this also. So, how would you implement Ruby blocks/Java closures in C?

A: I haven't actually implemented any of this, so take it with a sack of salt. There are two parts to a closure: the data environment and the code environment. Like you said, you can probably pass a void* to handle references to data. You could probably use setjmp and longjmp to implement the non-linear control flow jumps that the Ruby break requires. If you want closures you should probably be programming in a language that actually supports them. :-) UPDATE: Interesting things are happening in Clang. They've prototyped a closure for C. http://lists.cs.uiuc.edu/pipermail/cfe-dev/2008-August/002670.html might prove to be interesting reading.

A: There's a good set of slides on Ruby Blocks as part of the "Rails with Passion" course: Ruby_Blocks.pdf This covers representing a block, how they get passed arguments and executed, and even further into things like Proc objects. It's very clearly explained. It might then be of interest to look at how the JRuby guys handled these in their parsing to Java. Take a look at the source at codehaus.

A: The concept of closures requires the concept of contexts. C's context is based on the stack and the registers of the CPU, so to create a block/closure, you need to be able to manipulate the stack pointer in a correct (and reentrant) way, and store/restore registers as needed. The way this is done by interpreters or virtual machines is to have a context structure or something similar, and not use the stack and registers directly. This structure keeps track of a stack and optionally some registers, if you're designing a register-based VM. At least, that's the simplest way to do it (though slightly less performant than actually mapping things correctly).
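Building on the first answer, here is a minimal sketch in C of the two pieces it describes: a struct pairing a function pointer with a captured environment (the data side), and setjmp/longjmp to emulate Ruby's break (the control-flow side). All names are illustrative, and this hand-rolls just enough machinery for the loop in the question:

    #include <setjmp.h>
    #include <stdio.h>

    /* A closure is a code pointer plus a captured environment. */
    typedef struct {
        void (*fn)(void *env);
        void *env;
    } block_t;

    /* Environment captured by this particular block. */
    typedef struct {
        int i;
        jmp_buf *break_target;   /* where `break` jumps to */
    } loop_env_t;

    static void body(void *raw) {
        loop_env_t *env = raw;
        if (env->i >= 4000)
            longjmp(*env->break_target, 1);   /* Ruby's `break` */
        env->i *= 2;
    }

    static void run_loop(block_t blk) {       /* `loop do ... end` */
        for (;;)
            blk.fn(blk.env);
    }

    int main(void) {
        jmp_buf brk;
        loop_env_t env = { 1, &brk };
        block_t blk = { body, &env };

        if (setjmp(brk) == 0)
            run_loop(blk);                    /* longjmp lands back here */
        printf("%d\n", env.i);                /* prints 4096 */
        return 0;
    }

A non-local `return` from the defining method could be handled the same way, with a second jmp_buf recorded when the enclosing function is entered.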
{ "language": "en", "url": "https://stackoverflow.com/questions/19838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Generics in C# & accessing the static members of T My question concerns C# and how to access static members... Well, I don't really know how to explain it (which is kind of bad for a question, isn't it?). I will just give you some sample code:

    class test<T>
    {
        int method1(Obj Parameter1)
        {
            // In here I want to do something which I would explain as
            // T.TryParse(Parameter1);
            // My problem is that it does not work ... I get an error.
            // Just to explain: if I declare test<int> (with type Integer)
            // I want my sample code to call int.TryParse(). If it were String
            // it should have been String.TryParse().
        }
    }

So thank you guys for your answers (by the way, the question is: how would I solve this problem without getting an error). This is probably quite an easy question for you!
Edit: Thank you all for your answers! Though I think the try-catch phrase is the most elegant, I know from my experience with VB that it can really be a bummer. I used it once and it took about 30 minutes to run a program, which later on only took 2 minutes to compute just because I avoided try-catch. This is why I chose the switch statement as the best answer. It makes the code more complicated, but on the other hand I imagine it to be relatively fast and relatively easy to read. (Though I still think there should be a more elegant way ... maybe in the next language I learn.) Though if you have some other suggestion, I am still waiting (and willing to participate).

A: The problem is that TryParse isn't defined on an interface or base class anywhere, so you can't make an assumption that the type passed into your class will have that function. Unless you can constrain T in some way, you'll run into this a lot. Constraints on Type Parameters

A: To access a member of a specific class or interface you need to use a where constraint and specify the interface or base class that has the method. In the above instance TryParse does not come from an interface or base class, so what you are trying to do above is not possible. Best just use Convert.ChangeType and a try/catch statement.

    class test<T>
    {
        T Method(object P)
        {
            try
            {
                return (T)Convert.ChangeType(P, typeof(T));
            }
            catch (Exception e)
            {
                return default(T); // null for reference types
            }
        }
    }

A: Short answer, you can't. Long answer, you can cheat:

    public class Example
    {
        internal static class Support
        {
            private delegate bool GenericParser<T>(string s, out T o);
            private static Dictionary<Type, object> parsers = MakeStandardParsers();

            private static Dictionary<Type, object> MakeStandardParsers()
            {
                Dictionary<Type, object> d = new Dictionary<Type, object>();
                // You need to add an entry for every type you want to cope with.
                d[typeof(int)] = new GenericParser<int>(int.TryParse);
                d[typeof(long)] = new GenericParser<long>(long.TryParse);
                d[typeof(float)] = new GenericParser<float>(float.TryParse);
                return d;
            }

            public static bool TryParse<T>(string s, out T result)
            {
                return ((GenericParser<T>)parsers[typeof(T)])(s, out result);
            }
        }

        public class Test<T>
        {
            public static T method1(string s)
            {
                T value;
                bool success = Support.TryParse(s, out value);
                return value;
            }
        }

        public static void Main()
        {
            Console.WriteLine(Test<int>.method1("23"));
            Console.WriteLine(Test<float>.method1("23.4"));
            Console.WriteLine(Test<long>.method1("99999999999999"));
            Console.ReadLine();
        }
    }

I made a static dictionary holding a delegate for the TryParse method of every type I might want to use. I then wrote a generic method to look up the dictionary and pass on the call to the appropriate delegate. Since every delegate has a different type, I just store them as object references and cast them back to the appropriate generic type when I retrieve them. Note that for the sake of a simple example I have omitted error checking, such as checking whether we have an entry in the dictionary for the given type.

A: One more way to do it, this time with some reflection in the mix:

    static class Parser
    {
        public static bool TryParse<TType>(string str, out TType x)
        {
            // Get the type on which TryParse shall be called
            Type objType = typeof(TType);

            // Enumerate the methods of TType
            foreach (MethodInfo mi in objType.GetMethods())
            {
                if (mi.Name == "TryParse")
                {
                    // We found a TryParse method, check for the 2-parameter signature
                    ParameterInfo[] pi = mi.GetParameters();
                    if (pi.Length == 2) // Find TryParse(String, TType)
                    {
                        // Build a parameter list for the call
                        object[] paramList = new object[2] { str, default(TType) };

                        // Invoke the static method
                        object ret = objType.InvokeMember("TryParse", BindingFlags.InvokeMethod, null, null, paramList);

                        // Get the output value from the parameter list
                        x = (TType)paramList[1];
                        return (bool)ret;
                    }
                }
            }

            // Maybe we should throw an exception here, because we were unable to find the
            // TryParse method; this is not just an unable-to-parse error.
            x = default(TType);
            return false;
        }
    }

The next step would be trying to implement

    public static TRet CallStaticMethod<TRet>(object obj, string methodName, params object[] args);

with full parameter type matching etc.

A: This isn't really a solution, but in certain scenarios it could be a good alternative: we can pass an additional delegate to the generic method. To clarify what I mean, let's use an example. Let's say we have some generic factory method that should create an instance of T, and we want it to then call another method, for notification or additional initialization. Consider the following simple class:

    public class Example
    {
        // ...
        public static void PostInitCallback(Example example)
        {
            // Do something with the object...
        }
    }

And the following static method:

    public static T CreateAndInit<T>() where T : new()
    {
        var t = new T();
        // Some initialization code...
        return t;
    }

So right now we would have to do:

    var example = CreateAndInit<Example>();
    Example.PostInitCallback(example);

However, we could change our method to take an additional delegate:

    public delegate void PostInitCallback<T>(T t);

    public static T CreateAndInit<T>(PostInitCallback<T> callback) where T : new()
    {
        var t = new T();
        // Some initialization code...
        callback(t);
        return t;
    }

And now we can change the call to:

    var example = CreateAndInit<Example>(Example.PostInitCallback);

Obviously this is only useful in very specific scenarios. But this is the cleanest solution in the sense that we get compile-time safety, there is no "hacking" involved, and the code is dead simple.

A: Do you mean to do something like this:

    class test<T>
    {
        T method1(object Parameter1)
        {
            if (Parameter1 is T)
            {
                T value = (T)Parameter1;
                // do something with value
                return value;
            }
            else
            {
                // Parameter1 is not a T
                return default(T); // or throw exception
            }
        }
    }

Unfortunately you can't check for the TryParse pattern as it is static - which unfortunately means that it isn't particularly well suited to generics.

A: The only way to do exactly what you're looking for would be to use reflection to check if the method exists for T. Another option is to ensure that the object you send in is a convertible object by restraining the type to IConvertible (all primitive types implement IConvertible). This would allow you to convert your parameter to the given type very flexibly.

    class test<T>
    {
        int method1(IConvertible Parameter1)
        {
            IFormatProvider provider = System.Globalization.CultureInfo.CurrentCulture;
            T temp = (T)Parameter1.ToType(typeof(T), provider);
            // ...
        }
    }

You could also do a variation on this by using an 'object' type instead, like you had originally:

    class test<T>
    {
        int method1(object Parameter1)
        {
            if (Parameter1 is IConvertible)
            {
                IFormatProvider provider = System.Globalization.CultureInfo.CurrentCulture;
                T temp = (T)((IConvertible)Parameter1).ToType(typeof(T), provider);
                // ...
            }
            else
            {
                // Do something else
            }
        }
    }

A: OK guys, thanks for all the fish. Now, with your answers and my research (especially the article on limiting generic types to primitives), I will present you my solution:

    class a<T>
    {
        private static bool CheckWhetherTypeIsOK()
        {
            // ... add any other types you want to be allowed
            if (typeof(T) == typeof(int) || typeof(T) == typeof(float))
                return true;
            else
                throw new Exception();
        }

        static a()
        {
            CheckWhetherTypeIsOK();
        }
    }

A: You probably can't do it. First of all, if it were possible you would need a tighter bound on T so the type checker could be sure that all possible substitutions for T actually had a static method called TryParse.

A: You may want to read my previous post on limiting generic types to primitives. This may give you some pointers in limiting the type that can be passed to the generic (since TryParse is obviously only available to a set number of primitives; string.TryParse obviously being the exception, which doesn't make sense). Once you have more of a handle on the type, you can then work on trying to parse it. You may need a bit of an ugly switch in there (to call the correct TryParse) but I think you can achieve the desired functionality. If you need me to explain any of the above further, then please ask :)

A: Best code: restrict T to ValueType this way:

    class test1<T> where T : struct

A "struct" here means a value type. String is a class, not a value type. int, float, and enums are all value types. By the way, the compiler does not accept calling static methods or accessing static members on 'type parameters', as in the following example, which will not compile :(

    class MyStatic { public static int MyValue = 0; }
    class Test<T> where T : MyStatic
    {
        public void TheTest() { T.MyValue++; }
    }

    => Error 1: 'T' is a 'type parameter', which is not valid in the given context

SL.

A: That is not how statics work. You have to think of statics as sort of living in a global class, even if they are spread across a whole bunch of types. My recommendation is to make it a property inside the T instance that can access the necessary static method. Also, T is an actual instance of something, and just like any other instance you are not able to access the statics for that type through the instantiated value. Here is an example of what to do:

    class a
    {
        public static string StaticMethod1() { return "ABC"; }
        public virtual string Method1() { return null; }
    }

    class b : a
    {
        public override string Method1() { return StaticMethod1(); }
    }

    class c : a
    {
        public override string Method1() { return "XYZ"; }
    }

    class generic<T> where T : a
    {
        void DoSomething(T t)
        {
            t.Method1();
        }
    }
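For what it's worth, newer versions of the language solve this directly: C# 11 / .NET 7 added static abstract interface members, and the built-in numeric types implement System.IParsable<TSelf>, so the question's code works with just a constraint. A minimal sketch (requires .NET 7 or later; the null provider means current-culture parsing):

    using System;

    class Test<T> where T : IParsable<T>
    {
        // T.TryParse resolves at compile time via the static abstract
        // members declared on IParsable<T>.
        public T Method1(string s)
        {
            T.TryParse(s, null, out T value);
            return value; // default(T) if the parse failed
        }
    }

    class Program
    {
        static void Main()
        {
            Console.WriteLine(new Test<int>().Method1("23"));     // 23
            Console.WriteLine(new Test<float>().Method1("23.4")); // 23.4
        }
    }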
{ "language": "en", "url": "https://stackoverflow.com/questions/19843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Maximum length of a MIME Content-Type header field? I'm just designing the schema for a database table which will hold details of email attachments - their size in bytes, filename and content-type (i.e. "image/jpg", "audio/mp3", etc). Does anybody know the maximum length that I can expect a content-type to be?

A: In RFC 6838, which is the latest standard and obsoletes RFC 4288, there is the following statement: "Also note that while this syntax allows names of up to 127 characters, implementation limits may make such long names problematic. For this reason, <type-name> and <subtype-name> SHOULD be limited to 64 characters." 64+1+64 = 129, but I suspect the standard really means 63+1+63 = 127. Link: https://www.rfc-editor.org/rfc/rfc6838#section-4.2

A: I hope I haven't misread, but it looks like the length is at most 127/127, or 255 total. RFC 4288 has a reference in section 4.2 (page 6):

    Type and subtype names MUST conform to the following ABNF:

        type-name = reg-name
        subtype-name = reg-name
        reg-name = 1*127reg-name-chars
        reg-name-chars = ALPHA / DIGIT / "!" /
                         "#" / "$" / "&" / "." /
                         "+" / "-" / "^" / "_"

It is not clear to me whether a +suffix can add past the 127, but it appears not.

A: We run a SaaS system that allows users to upload files. We'd originally designed it to store MIME types up to 50 characters. In the last several days we've seen several attempts to upload 71-byte types. So, we're changing to 250. 100 seemed "good" but it's only a few more than the max we're seeing now. 500 seems silly, so 250 is the selected one.
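For the schema itself, a minimal sketch of what this might look like in SQL. The table and column names are illustrative, and 255 is chosen to cover the theoretical 127-character type, the "/" separator, and the 127-character subtype:

    CREATE TABLE email_attachment (
        id           INTEGER PRIMARY KEY,
        filename     VARCHAR(255) NOT NULL,
        size_bytes   BIGINT       NOT NULL,
        content_type VARCHAR(255) NOT NULL  -- e.g. 'image/jpeg', 'audio/mpeg'
    );

If you also store parameters alongside the bare type (e.g. "; charset=utf-8"), the field can get longer than 255, so store those separately or widen the column.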
{ "language": "en", "url": "https://stackoverflow.com/questions/19852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Is there a bug/issue tracking system which integrates with Mercurial? I've used Trac/Subversion before and really like the integration. My current project is using Mercurial for distributed development, and it'd be nice to be able to track issues/bugs and have this be integrated with Mercurial. I realized this could be tricky with the nature of DVCS.

A: BugTracker.NET now supports Mercurial integration in the same way it supports Subversion and git. BugTracker.NET is a free, open source, ASP.NET bug tracking system. Other free, open source bug trackers that support Mercurial:

    * Trac - http://trac.edgewall.org/wiki/TracMercurial
    * Redmine - http://www.redmine.org/wiki/1/RedmineRepositories
    * Roundup - https://www.mercurial-scm.org/wiki/Hook. The Mercurial development team themselves use Roundup.

A: There is also a plugin to integrate Mercurial with Jira. See the webpage for the plugin.

A: Mantis has a beta integration for Mercurial: blog post and code.

A: Bugs Everywhere is a distributed bug-tracking system that supports Mercurial.

A: I'd also like to add Redmine to the list. I started with Trac, but I found the Mercurial support (and the administrative interface for everything) to be much better in Redmine.

A: FogBugz has tight integration with Mercurial through their Kiln product.

A: TracMercurial integrates Trac with Mercurial. Assembla provides free Mercurial hosting with Trac integration. The idea is that you have a central repository as your master and upload all the subsidiary changes from local repositories into the main one.

A: Jira integrates using a plugin. It's a great tool. http://www.atlassian.com

A: I just put together a command-line bug tracker called b for Mercurial which, although it's not as powerful as Trac and the like, is exactly what a lot of situations call for. Its best feature is how easy it is to set up - install the Mercurial extension, and all your repos have a bug tracker at their disposal. I find this incredibly useful on smaller projects that I can't/don't want to set up with a fully fledged tracker living on a server somewhere - just hg b and go.

A: If you're open to another suggestion, you can try Artemis. Though I haven't used it yet, it looks easy enough.

A: There's a BugzillaExtension for adding a comment to a Bugzilla bug each time you mention its number.

A: I recently developed a Trac plugin that integrates some Mercurial functionality that the TracMercurial plugin doesn't support yet; it's called TracMercurialChangesetPlugin. It allows you to search in your changesets, to have the cache synced, to view a changelog in your related tickets... You can read about it at http://tumblr.com/x8tg5xbsh
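As a concrete illustration of the Bugzilla integration mentioned above: the hook ships with Mercurial itself as the bugzilla extension and is enabled from hgrc. A rough sketch follows - the [bugzilla] keys vary by Mercurial and Bugzilla version, so treat these values as placeholders and check hg help bugzilla for your setup:

    [extensions]
    bugzilla =

    [hooks]
    ; run the extension's hook on each incoming changeset
    incoming.bugzilla = python:hgext.bugzilla.hook

    [bugzilla]
    ; illustrative values; exact keys depend on the access method in use
    bzurl = http://bugzilla.example.com/
    user = bugmail@example.com
    password = secret

With this in place, pushing a changeset whose message mentions a bug number gets a comment added to that bug automatically.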
{ "language": "en", "url": "https://stackoverflow.com/questions/19883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How do you embed binary data in XML? I have two applications written in Java that communicate with each other using XML messages over the network. I'm using a SAX parser at the receiving end to get the data back out of the messages. One of the requirements is to embed binary data in an XML message, but SAX doesn't like this. Does anyone know how to do this? UPDATE: I got this working with the Base64 class from the Apache Commons Codec library, in case anyone else is trying something similar.

A: I usually encode the binary data with MIME Base64 or URL encoding.

A: Try Base64 encoding/decoding your binary data. Also look into CDATA sections.

A: Any binary-to-text encoding will do the trick. I use something like this:

    <data encoding="yEnc">
      <![CDATA[ encoded binary data ]]>
    </data>

A: Maybe encode them into a known set - something like base 64 is a popular choice.

A: While the other answers are mostly fine, you could try another, more space-efficient, encoding method like yEnc (see the yEnc Wikipedia link below). With yEnc you also get checksum capability right "out of the box". Of course, because XML does not have a native yEnc type, your XML schema should be updated to properly describe the encoded node. Why: due to their encoding strategies, base64, uuencode, et al. increase the amount of data (overhead) you need to store and transfer by roughly 40% (vs. yEnc's 1-2%). Depending on what you're encoding, 40% overhead could be/become an issue. yEnc - Wikipedia abstract: https://en.wikipedia.org/wiki/YEnc "yEnc is a binary-to-text encoding scheme for transferring binary files in messages on Usenet or via e-mail. ... An additional advantage of yEnc over previous encoding methods, such as uuencode and Base64, is the inclusion of a CRC checksum to verify that the decoded file has been delivered intact."

A: Base64 overhead is 33%. BaseXML for XML 1.0 overhead is only 20%, but it's not a standard and only has a C implementation so far. Check it out if you're concerned with data size. Note, however, that browsers tend to implement compression, so it is less needed. I developed it after the discussion in this thread: Encoding binary data within XML: alternatives to base64.

A: Base64 is indeed the right answer, but CDATA is not; that's basically saying "this could be anything". However, it must not be just anything: it has to be Base64-encoded binary data. XML Schema defines base64Binary as a primitive datatype which you can use in your XSD.

A: You could encode the binary data using base64 and put it into a Base64 element; the below article is a pretty good one on the subject. Handling Binary Data in XML Documents

A: XML is so versatile...

    <DATA>
      <BINARY>
        <BIT index="0">0</BIT>
        <BIT index="1">0</BIT>
        <BIT index="2">1</BIT>
        ...
        <BIT index="n">1</BIT>
      </BINARY>
    </DATA>

XML is like violence - if it doesn't solve your problem, you're not using enough of it. EDIT: BTW: Base64 + CDATA is probably the best solution. (EDIT2: Whoever upmods me, please also upmod the real answer. We don't want any poor soul to come here and actually implement my method because it was the highest ranked on SO, right?)

A: You can also uuencode your original binary data. This format is a bit older, but it does the same thing as base64 encoding.

A: I had this problem just last week. I had to serialize a PDF file and send it, inside an XML file, to a server. If you're using .NET, you can convert a binary file directly to a base64 string and stick it inside an XML element:

    string base64 = Convert.ToBase64String(File.ReadAllBytes(fileName));

Or, there is a method built right into the XmlWriter object. In my particular case, I had to include Microsoft's datatype namespace:

    StringBuilder sb = new StringBuilder();
    System.Xml.XmlWriter xw = XmlWriter.Create(sb);
    xw.WriteStartElement("doc");
    xw.WriteStartElement("serialized_binary");
    xw.WriteAttributeString("types", "dt", "urn:schemas-microsoft-com:datatypes", "bin.base64");
    byte[] b = File.ReadAllBytes(fileName);
    xw.WriteBase64(b, 0, b.Length);
    xw.WriteEndElement();
    xw.WriteEndElement();
    string abc = sb.ToString();

The string abc looks something like this:

    <?xml version="1.0" encoding="utf-16"?>
    <doc>
      <serialized_binary types:dt="bin.base64" xmlns:types="urn:schemas-microsoft-com:datatypes">
        JVBERi0xLjMKJaqrrK0KNCAwIG9iago8PCAvVHlwZSAvSW5mbw...(plus lots more)
      </serialized_binary>
    </doc>

A: If you have control over the XML format, you should turn the problem inside out. Rather than attaching the binary to the XML, you should think about how to enclose a document that has multiple parts, one of which contains XML. The traditional solution to this is an archive (e.g. tar). But if you want to keep your enclosing document in a text-based format, or if you don't have access to a file-archiving library, there is also a standardized scheme that is used heavily in email and HTTP, which is multipart/* MIME with Content-Transfer-Encoding: binary. For example, if your servers communicate through HTTP and you want to send a multipart document, the primary part being an XML document which refers to binary data, the HTTP communication might look something like this:

    POST / HTTP/1.1
    Content-Type: multipart/related; boundary="qd43hdi34udh34id344"
    ... other headers elided ...

    --qd43hdi34udh34id344
    Content-Type: application/xml

    <myxml>
      <data href="cid:data.bin"/>
    </myxml>
    --qd43hdi34udh34id344
    Content-Id: <data.bin>
    Content-type: application/octet-stream
    Content-Transfer-Encoding: binary

    ... binary data ...

    --qd43hdi34udh34id344--

As in the above example, the XML refers to the binary data in the enclosing multipart by using a cid URI scheme, which is an identifier pointing to the Content-Id header. The overhead of this scheme would be just the MIME header. A similar scheme can also be used for the HTTP response. Of course, in the HTTP protocol you also have the option of sending a multipart document as separate requests/responses. If you want to avoid wrapping your data in a multipart, use a data URI:

    <myxml>
      <data href="data:application/something;charset=utf-8;base64,dGVzdGRhdGE="/>
    </myxml>

But this has the base64 overhead.
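Since the questioner's UPDATE says the final approach was the Base64 class from Apache Commons Codec, here is a minimal sketch of what that looks like on the Java side. The element name and byte values are illustrative, and encodeBase64String requires commons-codec 1.4 or later:

    import org.apache.commons.codec.binary.Base64;

    public class EmbedBinary {
        public static void main(String[] args) {
            byte[] binary = { 0x00, (byte) 0xFF, 0x10, 0x42 };

            // Encode for embedding in an XML element.
            String encoded = Base64.encodeBase64String(binary);
            String xml = "<attachment encoding=\"base64\">" + encoded + "</attachment>";
            System.out.println(xml);

            // Decode on the receiving side, e.g. with the text collected
            // from the SAX characters() callback.
            byte[] decoded = Base64.decodeBase64(encoded);
            System.out.println(decoded.length + " bytes recovered");
        }
    }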
{ "language": "en", "url": "https://stackoverflow.com/questions/19893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: How to copy a file in C# I want to copy a file from A to B in C#. How do I do that?

A: Use the FileInfo class.

    FileInfo fi = new FileInfo("a.txt");
    fi.CopyTo("b.txt");

A: This should work!

    using System.IO;
    ...
    var path = ...;       // your current file path
    var outputPath = ...; // where you want your (.txt) file to go
    File.Copy(path, outputPath);

A: Without any error handling code:

    File.Copy(path, path2);

A: The File.Copy method: MSDN Link

A: System.IO.File.Copy
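Since one answer explicitly skips error handling, here is a sketch of the same call with the pieces it left out; the paths are illustrative:

    using System;
    using System.IO;

    class CopyExample
    {
        static void Main()
        {
            string source = @"C:\temp\a.txt";       // illustrative paths
            string destination = @"C:\temp\b.txt";

            try
            {
                // The third argument allows overwriting an existing destination;
                // without it, File.Copy throws IOException if b.txt already exists.
                File.Copy(source, destination, true);
            }
            catch (IOException e)
            {
                Console.Error.WriteLine("Copy failed: " + e.Message);
            }
        }
    }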
{ "language": "en", "url": "https://stackoverflow.com/questions/19933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do I redirect a user to a custom 404 page in ASP.NET MVC instead of throwing an exception? I want to be able to capture the exception that is thrown when a user requests a non-existent controller and redirect it to a 404 page. How can I do this? For example, the user requests http://www.nosite.com/paeges/1 (should be /pages/). How do I make it so they get redirected to the 404 rather than the exception screen?

A: Take a look at this page for routing your 404 errors to a specified page.

A: Just use a route:

    // We couldn't find a route to handle the request. Show the 404 page.
    routes.MapRoute(
        "Error",
        "{*url}",
        new { controller = "Error", action = "404" }
    );

Since this will be a global handler, put it all the way at the bottom, under the Default route.

A: Found this on the same site - Strategies for Resource based 404s
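A sketch of the controller the catch-all route above could point to. The names match the route's { controller = "Error", action = "404" } defaults but are otherwise illustrative; because "404" is not a valid C# method name, ActionName maps it:

    public class ErrorController : Controller
    {
        [ActionName("404")]
        public ActionResult PageNotFound()
        {
            Response.StatusCode = 404;   // keep the proper HTTP status for crawlers/logs
            return View("NotFound");     // e.g. Views/Error/NotFound view
        }
    }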
{ "language": "en", "url": "https://stackoverflow.com/questions/19941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Algorithm to perform RFC calculation in Java The RFC for a Java class is the set of all methods that can be invoked in response to a message to an object of the class, or by some method in the class. RFC = M + R, where M = number of methods in the class, and R = total number of other methods directly invoked from the M. Think of C as the .class and J as the .java file for which we need to calculate RFC.

    class J {
        a() {}
        b() {}
        c() {
            e1.e();
            e1.f();
            e1.g();
        }
        h() {
            i.k();
            i.j();
        }
        m() {}
        n() {
            i.o();
            i.p();
            i.p();
            i.p();
        }
    }

Here M = 6 and R = 9. (Don't worry about calls inside a loop; they are considered a single call.) Calculating M is easy: load C using a classloader and use reflection to get the count of methods. Calculating R is not direct: we need to count the number of method calls from the class, first level only. For calculating R I must use regex. Usually the format would be (calls without using . are not counted):

    [variable_name].[method_name]([zero or more parameters]);

or

    [variable_name].[method_name]([zero or more parameters])

without a semicolon, when the call's return value directly becomes a parameter to another method, or

    [variable_name].[method_name]([zero or more parameters]).method2();

which becomes two method calls. What other method-call patterns can you think of? Is there any way other than using regex that can be used to calculate R? UPDATE: @McDowell Looks like using BCEL I can simplify the whole process. Let me try it.

A: You could use the Byte Code Engineering Library with binaries. You can use a DescendingVisitor to visit a class' members and references. I've used it to find class dependencies. Alternatively, you could reuse some model of the source files. I'm pretty sure the Java editor in the Eclipse JDT is backed by some form of model.

A: You should find your answer in the Java language specification. You have forgotten static method calls, method calls inside parameters...

A: Calling a method using reflection (the name of the method is in a string).

A: Does R include calls to the class's own methods? Or calls to inner classes? For instance:

    class J {
        a() { }
        b() { this.a(); }
        c() { jj.aa(); }
        d() { i.k(); }
        e() { this.f().a(); }
        f() { return this; }
        g() { i.m().n(); }

        class JJ {
            aa() { a(); }
        }
    }

What would the R value of this be? There are only three function calls to a method not defined in this class (the calls in the d() and g() functions). Do you want to include calls to inner classes, or calls to the main class made in the inner class? Do you want to include calls to other methods on the same class? If you're looking at any method calls, regardless of the source, then a regex could probably work, but would be tricky to get right (does your regex properly ignore strings that contain method-call-like contents? Does it handle constructor calls properly?). If you care about the source of the method call, then regexes probably won't get you what you want. You'd need to use reflection (though unfortunately I don't know enough about reflection to be helpful there).
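Following up on the BCEL suggestion and the questioner's UPDATE: counting call sites from the compiled class sidesteps the regex patterns entirely. A rough sketch, assuming BCEL is on the classpath; it counts every invoke instruction, so filtering (e.g. excluding calls to the class's own methods, per the "calls without using ." rule) is left to taste:

    import org.apache.bcel.classfile.ClassParser;
    import org.apache.bcel.classfile.JavaClass;
    import org.apache.bcel.classfile.Method;
    import org.apache.bcel.generic.ConstantPoolGen;
    import org.apache.bcel.generic.InstructionHandle;
    import org.apache.bcel.generic.InvokeInstruction;
    import org.apache.bcel.generic.MethodGen;

    public class RfcCalculator {
        public static void main(String[] args) throws Exception {
            JavaClass jc = new ClassParser("J.class").parse();
            ConstantPoolGen cp = new ConstantPoolGen(jc.getConstantPool());

            int m = jc.getMethods().length;  // M: methods in the class
            int r = 0;                       // R: invoke instructions found in bodies
            for (Method method : jc.getMethods()) {
                if (method.getCode() == null) continue; // abstract/native: no body
                MethodGen mg = new MethodGen(method, jc.getClassName(), cp);
                for (InstructionHandle ih : mg.getInstructionList().getInstructionHandles()) {
                    if (ih.getInstruction() instanceof InvokeInstruction) {
                        r++;
                    }
                }
            }
            System.out.println("RFC = " + (m + r));
        }
    }

Note that calls duplicated by the compiler (or inside loops) appear once per call site in bytecode, which matches the "counted as a single call" rule in the question.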
{ "language": "en", "url": "https://stackoverflow.com/questions/19952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I stop MS Graph component popping up during Interop? When using Office Interop in C#, if you insert a chart object into a MS Word document, the Graph application loads up very briefly and then goes away. Is there a way to prevent this from happening? I have tried setting the Visible property of the application instance to false, to no effect. EDIT: The Visible property does take effect when used against Word when interopping, and it does not pop up. I would expect there is a similar way to do this for MS Graph.

A: This is common behaviour for a lot of components hosted in an executable binary. The host application will start up and then do the job. I don't know if there is a surefire way to prevent that, since you have no control over the component nor over the process until the application is started and is responding. A hack I tried in the past (for something totally unrelated) was starting a process and constantly detecting whether its main window was created. As soon as it was created, I hid it. You could do this with the main module of the faulty application and hope it will be fast enough to hide the window before the user notices. Then you instantiate your component; the component will usually recycle an existing process, hopefully the one with the hidden main window. I can't guarantee this will work in your situation, but it's worth a try if the issue is that important, or if you don't find a better way, of course.
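For completeness, a sketch of that "hide the main window as soon as it appears" hack in C#. The Win32 calls (FindWindow, ShowWindow) are real user32 functions, but the window title to look for is an assumption - inspect the actual MS Graph window with a tool like Spy++ first:

    using System;
    using System.Runtime.InteropServices;
    using System.Threading;

    class GraphHider
    {
        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

        [DllImport("user32.dll")]
        static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);

        const int SW_HIDE = 0;

        public static void HideGraphWindow(TimeSpan timeout)
        {
            DateTime deadline = DateTime.UtcNow + timeout;
            while (DateTime.UtcNow < deadline)
            {
                // "Microsoft Graph" is a guessed window title; adjust as needed.
                IntPtr hwnd = FindWindow(null, "Microsoft Graph");
                if (hwnd != IntPtr.Zero)
                {
                    ShowWindow(hwnd, SW_HIDE);
                    return;
                }
                Thread.Sleep(50); // poll until the window shows up or we give up
            }
        }
    }

You would kick this off (e.g. on a background thread) just before inserting the chart object.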
{ "language": "en", "url": "https://stackoverflow.com/questions/19953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What to use Windows CardSpace for? I'm doing some funky authentication work (and yes, I know, OpenID is awesome, but then again my OpenID doesn't work right at this moment!). Stumbling across Windows CardSpace, I was wondering if anyone has used this in a real production system. If you have used it, what were the pros and cons for you? And how can I use it with my OpenID?

A: Umm, no you don't; you can accept information cards on a web site using a cheap and cheerful certificate (but not self-signed), or no certificate at all. And yes, I've used it as part of a production system which grew out of a proof of concept I did at Microsoft.
Cons: if you don't have an EV SSL certificate you get warnings. The code for parsing a card is incomplete at best (you have to hack it around for no-SSL), and you have to explain to users what one is.
Pros: well, that's more interesting. I was using managed cards, issuing them, and then having 3rd parties use those to check claims; but for self-issued cards - well, it's stronger than username/password and doesn't have the same vulnerabilities OpenID has.
{ "language": "en", "url": "https://stackoverflow.com/questions/19956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the best way of diffing Crystal Reports? If you have two versions of the same report (.rpt) and you want to establish what the exact differences are, what is the best way to go about this? I've seen some commercial tools to do this, but I'm not too interested in forking out cash for something that should be relatively straightforward. Can I hook into the Crystal API and simply list all of the properties of every field or something? Please someone tell me that there's an Open Source project somewhere that does this... @:-) @Kogus, wouldn't diffing the outputs as text hide any formatting differences? @ladoucep, I don't seem to be able to export the report without data.

A: "Can I hook into the Crystal API...?" There is, in fact, such an API. I wrote a VB6 application to do just what you asked and more. I think I even migrated it to VB.Net. As it was for my own use, I didn't spend much time making it 'polished'. I've been intending to release it, but I haven't had the time... Another approach that I've used in the past is to create an Access application to help manage large report-development projects. One of its many features is the ability to extract the tables that are used by the report, and the SQL statements used by its Commands and SQL Expressions. Its intent is to give one a global perspective of which reports use which tables. I probably still have it somewhere...
** edit 1 ** BusinessObjects Enterprise XI (R?) has a feature named 'Meta Manager'. It will periodically examine the contents of the Repository and save the results to a database. It uses the Report-Application Service (RAS) to generate the metadata. It's an additional, 5-figure license, of course.
** edit 2 ** Consider using PowerShell to do the work: PsCrystal.

A: One helpful technique is to output both versions of the report to plain text, then diff those outputs. You could write something using the Crystal Report component to describe every property of the report, like you described. Then you could output that to text, and diff those. I'm not aware of any open source tool that does it for you, but it would not be terribly hard to write it.
@question in the post: diffing the outputs would only show formatting changes if the relative positions had changed. For example, if I had this - before: First name, last name, address; after: Last Name, First Name, Address - then that would show up as a difference. But if I had just bumped the address column over a few pixels, or changed it from plain text to bold, then you are right, that would not show up.

A: One technique I have used to great effect in the past is to print out reports from both versions based on the same data. I then take the first page from each version, lay one on top of the other (it is important not to mix them up) and hold them up to a window. It is generally quite easy to see any differences, and these differences can be manually annotated with a suitable writing instrument such as a pencil. Repeat for each page in the report. Admittedly, for large reports this can be quite time consuming and error prone, but these limitations can be overcome with patience and care.
{ "language": "en", "url": "https://stackoverflow.com/questions/19963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Best way to keep an ordered list of windows (from most-recently created to oldest)? What is the best way to manage a list of windows (keeping them in order) to be able to promote the next window to the top level when the current top-level window is closed? This is for a web application, so we're using jQuery JavaScript. We'd talked through a few simplistic solutions, such as using an array and just treating the [0] index as the top-most window. I'm wondering if there's any potentially more efficient or useful alternative to what we had brainstormed.

A: I don't really know JavaScript, but couldn't you create a stack of windows?

A: A stack if you want to just close the window on top. A queue if you also need to open windows at the end.

A: A stack/queue in JS is a simple array, which can be manipulated with .push(val), .pop(), .shift() and .unshift(val).
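A minimal sketch of the array-as-stack approach described above, using jQuery's $.inArray for old-browser compatibility; promoteToTop is a hypothetical app-specific function, not part of any library:

    var windowStack = []; // index 0 is the top-most (most recently created) window

    function windowOpened(win) {
        windowStack.unshift(win); // newest window goes on top
    }

    function windowClosed(win) {
        var i = $.inArray(win, windowStack);
        if (i !== -1) {
            windowStack.splice(i, 1); // remove it wherever it sits
        }
        if (i === 0 && windowStack.length > 0) {
            promoteToTop(windowStack[0]); // hypothetical: give the next window focus/z-order
        }
    }

splice handles the general case of a window closed from the middle of the stack, so this stays correct even when users don't close windows in LIFO order.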
{ "language": "en", "url": "https://stackoverflow.com/questions/19970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Being as DRY as possible in a Ruby on Rails App I'm currently using the awesome attachment-fu plugin for a Rails app, but as a novice developer, I've never encountered a scenario like the one I've found myself in. Essentially, I'm using the attachment-fu plugin on two levels:

    1. For user avatars in the user class.
    2. To allow file attachments (PDFs, etc.) in a messaging system.

My question is what the best practice would be in these situations to remain DRY, clear, and consistent. Clearly it would make no sense to define and execute the plugin in both classes, but there's something deeply strange to me (possibly unfounded) about just going ahead and setting it all up in the godly Application class. Is there something in between, or is the parent class the way to go? Thanks!

A: What's the DRY issue with defining the attachment_fu settings twice? Unless the files are of the same type and being stored in the same place, you're not going to be repeating anything in the configuration. Sure, you'll have two has_attachment declarations, but the options will mostly differ (one declaration for your avatars and the other for your PDFs, etc.). 99.99% of the code to handle attachments will be buried in the attachment_fu libs; your configuration code should be pretty DRY by default =)

A: Is "outsourcing" avatar support entirely to Gravatar an option? There are some Rails plugins that will display avatars hosted by Gravatar. You might not need to re-invent the wheel there.

A: What wfarr is describing would be single table inheritance, which is what I currently do in this situation. I have one table for Assets which contains all the necessary attachment_fu columns, plus an extra column called type, which will hold the actual model name. I have a model for assets and additional models for specific upload types that inherit from assets:

asset.rb:

    class Asset < ActiveRecord::Base
      # ... attachment_fu logic ...
    end

avatar.rb:

    class Avatar < Asset
      # ... avatar-specific attachment_fu logic ...
    end

pdf.rb:

    class PDF < Asset
      # ... PDF-specific attachment_fu logic ...
    end

A: I would lean towards using a parent class, with subclassing for the different ways you intend to actually use the attachments in your application. It may not be the DRYest solution available; however, it lends itself to a logical pattern rather well.

A: Couldn't you use Polymorphic Associations? I'm about to hit this in my app with attachment_fu, so I'm not exactly sure about attachment_fu, but for the old-school File Column plugin I would use Polymorphic Associations. My "file" model would be:

    class FileUpload < ActiveRecord::Base
      belongs_to :fileable, :polymorphic => true
      file_column :name
    end

and then any models that needed a file attachment would be like:

    class Company < ActiveRecord::Base
      has_many :file_uploads, :as => :fileable
    end

File Column is no good anymore, as it borks on Safari 3.x and is no longer maintained. It was nice and simple though... Ah, the good old days...

A: For what it's worth, I think Patrick Berkeley has done a good job with handling multiple attachments through the Paperclip plugin. He's outlined his work here: http://gist.github.com/33011
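To make the first answer's point concrete - that the two has_attachment declarations mostly won't repeat each other - here is a rough sketch of what the two configurations might look like. The option values are illustrative, based on attachment_fu's documented options, and the MessageAttachment model name is made up:

    # user.rb - avatars: images only, small size cap, resized with a thumbnail
    class User < ActiveRecord::Base
      has_attachment :content_type => :image,
                     :storage      => :file_system,
                     :max_size     => 500.kilobytes,
                     :resize_to    => '320x200>',
                     :thumbnails   => { :thumb => '100x100>' }
    end

    # message_attachment.rb - documents: larger limit, no image processing
    class MessageAttachment < ActiveRecord::Base
      has_attachment :content_type => 'application/pdf',
                     :storage      => :file_system,
                     :max_size     => 5.megabytes
    end

Since only the option hashes differ, there is little shared configuration for a parent class to absorb.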
{ "language": "en", "url": "https://stackoverflow.com/questions/19988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there some way to PUSH data from web server to browser? Of course I am aware of Ajax, but the problem with Ajax is that the browser has to poll the server frequently to find out whether there is new data. This increases server load. Is there any better method (even using Ajax) than polling the server frequently?

A: Yes, what you're looking for is COMET http://en.wikipedia.org/wiki/Comet_(programming). Other good Google terms to search for are AJAX-push and reverse-ajax.

A: Comet is definitely what you want. Depending on your language/framework requirements, there are different server libraries available. For example, WebSync is an IIS-integrated comet server for ASP.NET/C#/IIS developers, and there are a bunch of other standalone servers as well if you need tighter integration with other languages.

A: I would strongly suggest investing some time in Comet, but I don't know of an actual implementation or library you could use. For a sort of "call center control panel" of a web app that involved updating agent and call-queue status for a live call center, we developed an in-house solution that works, but is far away from a library you could use. What we did was to implement a small service on the server that talks to the phone system, waits for new events, and maintains a photograph of the situation. This service provides a small webserver. Our web clients connect over HTTP to this webserver and ask for the last photo (coded in XML), display it, and then go again, asking for a new photo. The webserver at this point can:

    * Return the new photo, if there is one
    * Block the client for some seconds (30 in our setup), waiting for some event to occur and change the photograph. If no event was generated at that point, it returns the same photo, only to allow the connection to stay alive and not time out the client.

This way, when a client polls, it gets a response in 0 to 30 seconds max (if a new event was already generated, it gets it immediately; otherwise it blocks until a new event is generated). It's basically polling, but somewhat smart polling, so as not to overheat the webserver. If Comet is not your answer, I'm sure this could be implemented using the same idea but using AJAX more extensively, or coding in JSON for better results. This was designed in the pre-AJAX era, so there is lots of room for improvement. If someone can provide an actual lightweight implementation of this, great!

A: An interesting alternative to Comet is to use sockets in Flash.

A: Yet another, standard, way is SSE (Server-Sent Events, also known as EventSource, after the JavaScript object); see the sketch after this thread's answers.

A: Comet was actually coined by Alex Russell from Dojo Toolkit (http://www.dojotoolkit.org). Here is a link to more information: http://cometdproject.dojotoolkit.org/

A: Yes, it's called Reverse Ajax or Comet. Comet is basically an umbrella term for different ways of opening long-lived HTTP requests in order to push data in real time to a web browser. I'd recommend StreamHub Push Server; they have some cool demos and it's much easier to get started with than any of the other servers. Check out the Getting Started with Comet and StreamHub Tutorial for a quick intro. You can use the Community Edition, which is available to download for free but is limited to 20 concurrent users. The commercial version is well worth it for the support alone, plus you get SSL and desktop .NET & Java client adapters. Help is available via the Google Group, there's a good bunch of tutorials on the net, and there's a GWT Comet adapter too.
A: Nowadays you should use WebSockets. This is a 2011 standard that allows you to initiate a connection with HTTP and then upgrade it to two-directional, message-based client-server communication. You can easily initiate the connection from JavaScript:

    var ws = new WebSocket("ws://your.domain.com/somePathIfYouNeed?args=any");
    ws.onmessage = function (evt) {
        var message = evt.data;
        // decode the message (with JSON or something) and do what's needed
    };

The server-side handling depends on your technology stack.

A: There are other methods. Not sure if they are "better" in your situation. You could have a Java applet that connects to the server on page load and waits for stuff to be sent by the server. It would be quite a bit slower on start-up, but would allow the browser to receive data from the server on an infrequent basis, without polling.

A: It's possible to achieve what you're aiming at through the use of persistent HTTP connections. Check out the Comet article over at Wikipedia; that's a good place to start. You're not providing much info, but if you're looking at building some kind of event-driven site (à la Digg Spy) or something along those lines, you'll probably be looking at implementing a hidden IFRAME that connects to a URL where the connection never closes, and then you'll push script tags from the server to the client in order to perform the updates.

A: You can use a Flash/Flex application on the client with BlazeDS or LiveCycle on the server side. Data can be pushed to the client using an RTMP connection. Be aware that RTMP uses a non-standard port, but you can easily fall back to polling if the port is blocked.

A: Might be worth checking out Meteor Server, which is a web server designed for COMET. Nice demo, and it is also used by Twitterfall.

A: Look into Comet (a spoof on the fact that Ajax is a cleaning agent and so is Comet), which is basically "reverse Ajax." Be aware that this requires a long-lived server connection for each user to receive notifications, so be aware of the performance implications when writing your app. http://en.wikipedia.org/wiki/Comet_(programming)

A: Once a connection is opened to the server, it can be kept open and the server can push content. A long while ago I did this using multipart/x-mixed-replace, but it didn't work in IE. I think you can do clever stuff with polling that makes it work more like push, by not sending unchanged content but leaving the connection open, but I've never done this.

A: You could try out our Comet Component - though it's extremely experimental...!

A: Please check out this library, https://github.com/SignalR/SignalR, to see how to push data to clients dynamically as it becomes available.

A: You can also look into Java Pushlets if you are using JSP pages.

A: Might want to look at ReverseHTTP also.
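To make the SSE answer above concrete: on the browser side, Server-Sent Events is just the EventSource object. A minimal sketch - the /updates endpoint and the JSON payload are illustrative assumptions:

    var source = new EventSource("/updates");
    source.onmessage = function (evt) {
        var message = JSON.parse(evt.data); // assuming the server sends JSON
        // update the page with `message`
    };

On the wire, the server keeps the response open with Content-Type: text/event-stream and writes "data: ..." lines, each event terminated by a blank line; unlike WebSockets, it is one-directional (server to client) but reconnects automatically.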
{ "language": "en", "url": "https://stackoverflow.com/questions/19995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "138" }
Q: Repository layout for large Maven projects I have a large application (~50 modules) using a structure similar to the following:

    * Application
      * Communication modules
        * Color communication module
        * SSN communication module
        * etc. communication module
      * Router module
      * Service modules
        * Voting service module
          * Web interface submodule for voting
          * Vote collector submodule for voting
          * etc. for voting
        * Quiz service module
        * etc. module

I would like to import the application to Maven and Subversion. After some research I found that two practical approaches exist for this. One is using a tree structure just like the one above. The drawback of this structure is that you need a ton of tweaking/hacks to get multi-module reporting to work well with Maven. Another downside is that in Subversion the standard trunk/tags/branches approach adds even more complexity to the repository. The other approach uses a flat structure, where there is only one parent project and all the modules, submodules, and parts-of-the-submodules are direct children of the parent project. This approach works well for reporting and is easier in Subversion; however, I feel I lose a bit of the structure this way. Which way would you choose in the long term and why?

A: I think you're better off flattening your directory structure. Perhaps you want to come up with a naming convention for the directories such that they sort nicely when viewing all of the projects, but ultimately I don't think all of that extra hierarchy is necessary. Assuming you're using Eclipse as your IDE, all of the projects are going to end up in a flat list once you import them anyway, so you don't really gain anything from the additional subdirectories. That, in addition to the fact that the configuration is so much simpler without all the extra hierarchy, makes the choice pretty clear in my mind. You might also want to consider combining some of the modules. I know nothing about your app or domain, but it seems like a lot of those leaf-level modules might be better suited as just packages or sets of packages inside another top-level module. I'm all for keeping jars cohesive, but it can be taken too far sometimes.

A: We have a largish application (160+ OSGi bundles, where each bundle is a Maven module) and the lesson we learned, and continue to learn, is that flat is better. The problem with encoding semantics in your hierarchy is that you lose flexibility. A module that is 100%, say, "communication" today may be partly "service" tomorrow, and then you'll need to be moving things around in your repository, and that will break all sorts of scripts, documentation, references, etc. So I would recommend a flat structure and encoding the semantics in another place (say, for example, an IDE workspace or documentation). I've answered a question about version control layout in some detail with examples at another question; it may be relevant to your situation.
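For illustration, the flat layout boils down to a single aggregator POM whose modules are all direct child directories; a naming convention keeps related modules together when sorted. A minimal sketch - the group/artifact IDs and module names are made up to mirror the example above:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example.app</groupId>
      <artifactId>application-parent</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>pom</packaging>

      <modules>
        <!-- every module is a direct child directory; the prefix encodes
             the grouping that the tree layout used to express -->
        <module>comm-color</module>
        <module>comm-ssn</module>
        <module>router</module>
        <module>service-voting-web</module>
        <module>service-voting-collector</module>
        <module>service-quiz</module>
      </modules>
    </project>

In Subversion this gives one trunk/tags/branches triple for the whole aggregate instead of one per subtree.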
{ "language": "en", "url": "https://stackoverflow.com/questions/20003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Version track, automate DB schema changes with django I'm currently looking at the Python framework Django for future DB-based web apps, as well as for a port of some apps currently written in PHP. One of the nastier issues during my last years was keeping track of database schema changes and deploying these changes to production systems. I haven't dared ask about being able to undo them too, but of course for testing and debugging that would be a great feature. From other questions here (such as this one or this one), I can see that I'm not alone and that this is not a trivial problem. Also, I found many inspirations in the answers there. Now, as Django seems to be very powerful, does it have any tools to help with the above? Maybe it's even in their docs and I missed it?

A: Last time I checked (version 0.97), syncdb will be able to add tables to sync your DB schema with your models.py file, but it cannot:

    * Rename or add a column on a populated DB. You need to do that by hand.
    * Refactor your model (like splitting a table into two) and repopulate your DB accordingly.

It might be possible, though, to write a Django script to make the migration by playing with the two different managers, but that might take ages if your DB is large.

A: There are at least two third-party utilities to handle DB schema migrations: South and Django Evolution. I haven't tried either one, but I have heard some good things about South, though Evolution has been around a little longer. Also, look at SchemaEvolution on the Django wiki. It is just a wiki page about migrating the db.

A: There was a panel session on DB schema changes at the recent DjangoCon; there is a video of the session (thanks to Google), which should provide some useful information on a number of these utilities.

A: And now there's also dmigrations. From the announcement: "django-evolution attempts to address this problem the clever way, by detecting changes to models that are not yet reflected in the database schema and figuring out what needs to be done to bring the two back in sync. In contrast, dmigrations takes the stupid approach: it requires you to explicitly state the changes in a sequence of migrations, which will be applied in turn to bring a database up to the most recent state that reflects the underlying models. This means extra work for developers who create migrations, but it also makes the whole process completely transparent—for our projects, we decided to go with the simplest system that could possibly work." (My bold)

A: I have heard a lot of good things about the Django Schema Evolution branch, and those were the opinions of actual users. It mostly works out of the box and does what it should do.

A: You should look up dmigrations; it functions a little differently from django-evolution. It shows you everything it is doing, and for complicated things it asks for your intervention. It should be great.
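As an illustration of the South workflow mentioned above, here is a sketch of its documented manage.py commands (the app name is illustrative), including the undo that the questioner didn't dare ask for:

    ./manage.py schemamigration myapp --initial   # create the first migration
    ./manage.py migrate myapp                     # apply it

    # later, after editing models.py:
    ./manage.py schemamigration myapp --auto      # record the schema change
    ./manage.py migrate myapp                     # deploy it to the database

    # roll back to an earlier schema state for testing/debugging:
    ./manage.py migrate myapp 0001

Because each migration is a numbered Python file checked into version control, schema history travels with the code, which is exactly the version-tracking the question asks about. (In Django 1.7+ this workflow was absorbed into core as makemigrations/migrate.)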
{ "language": "en", "url": "https://stackoverflow.com/questions/20021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Is Project Darkstar Realistic? Project Darkstar was the topic of the monthly JavaSIG meeting down at the Google offices in NYC last night. For those that don't know (probably everyone), Project Darkstar is a framework for massively multiplayer online games that attempts to take care of all of the "hard stuff." The basic idea is that you write your game server logic in such a way that all operations are broken up into tiny tasks. You pass these tasks to the Project Darkstar framework, which handles distributing them to a specific node in the cluster, any concurrency issues, and finally persisting the data. Apparently doing this kind of thing is a much different problem for video games than it is for enterprise applications. Jim Waldo, who gave the lecture, claims that MMO games have a DB read/write ratio of 50/50, whereas enterprise apps are more like 90% read, 10% write. He also claims that most existing MMOs keep everything in memory exclusively, and only dump to a DB every 6 hours or so. This means if a server goes down, you would lose all of the work since the last DB dump. Now, the project itself sounds really cool, but I don't think the industry will accept it. First, you have to write your server code in Java. The client code can be written in anything (Jim claims ActionScript 3 is the most popular, followed by C++), but the server stuff has to be Java. Sounds good to me, but I really get the impression that everyone in the games industry hates Java. Second, unlike other industries where developers prefer to use existing frameworks and libraries, the guys in the games industry seem to like to write everything themselves. Not only that, they like to rewrite everything for every new game they produce. Things are starting to change, where developers are using Havok for physics, Unreal Engine 3 as their platform, etc., but for the most part it looks like everything is still proprietary. So, are the guys at Project Darkstar just wasting their time? Can a general framework like this really work for complex games with the performance that is required? Even if it does work, are game companies willing to use it?

A: Sounds like useless tech to me. The MMO world is controlled by a few big game companies that already have their own tech in place. Indie game developers love trying to build MMOs, and sometimes they do, but those games rarely gain traction. Larger companies breaking into the MMO world would probably license "proven" technology, or extend their own. Game companies reuse vast quantities of code from game to game. Most/many game companies have developed their own tech internally, and use it on every game they produce. Occasionally, they will do something like replace their physics code with a 3rd-party physics engine. If their internal code base (game engine, design tools, internal pipeline) starts to age too much, or becomes unwieldy, they might switch to one of the big game engines like Unreal. Even then, major chunks of code will continue to be reused from game to game.

A: Edit: This was written before Oracle bought Sun and started a rampage to kill everything that does not make them a billion $ per day. See the comments for an OSS fork. I still stand by my opinion that stuff like that (MMO middleware) is realistic; you just need a company that doesn't suck behind it. The market may be dominated by a few large games, but that does not mean that there is not a lot of room for more niche games.
Let's face it: If you want to reach 100,000+ players, you'll end up building your own technology stack, at least for the critical core. That's what CCP did for EVE Online (StacklessIO), that's what Blizzard did for World of Warcraft (although they do use many third-party libraries), that's what Mythic did for Warhammer Online (although they are based on Gamebryo). However, if you aim to be a small, niche MMO (like the dozens of Free-to-Play/Itemshop MMOs), then getting the Network stuff right is just insanely hard, data consistency is even harder and scalability is the biggest b*tch. But game technology is not your only problem - you also need to tackle Billing. Credit Card only? Have fun selling in Germany then, people there want ELV. That's where you need a reliable billing provider, but you still need to wire in the billing application with your accounts to make sure that accounts are blocked/reactivated when the billing fails. There are some companies already offering "MMO Infrastructure Services" (i.e. Arvato's EEIS), but the bottom line is: Stuff like Project Darkstar IS realistic, but assuming that you can build a Multi-Billion-MMO entirely on a Third Party Stack is optimistic, possibly idealistic. But then again, entirely inventing all of the technology is even more stupid - use the Third Party stuff that you need (i.e. Billing, Font Rendering, Audio Output...), but write the stuff that really makes or breaks your business (i.e. Network stack, User interface etc.) on your own. (Note: Jeff's posting may be a bit flawed, but the overall direction is correct IMHO.) Addendum: Also, the game industry does license and reuse engines a lot. The most prominent game engines are the Unreal Engine, Source Engine and id Tech, which fuel dozens, if not hundreds of games. But there are some lesser-known (outside of the industry) engines. There is Gamebryo, the Middleware behind games like Civilization 4 and Fallout 3, there was RenderWare that is now only EA-in-House, but used in games like Battlefield 2 or The Sims 3. There is the open source Ogre3d, which was used in some commercial titles. If you're just looking for Sound, there's stuff like FMOD or if you want to do font-rendering, why not give FreeType a spin? What I'm saying is: Third-Party Engines/Middleware do exist, and they HAVE been successfully used for more than a decade (I know for sure that id's Wolfenstein Engine was licensed to other companies, and that was 1992), even by big companies in multi-million-dollar titles. The important thing is the support, because a good engine with no help in case of an issue is pretty much worthless or at least very expensive if the developer has to spend their game-development-time with unnecessary debugging of the Engine. If the Darkstar folks manage to get the support side right and 2 or 3 higher profile titles out, I do believe it could succeed in opening the MMO market to a lot more small developers and indies. A: From what I can tell, video game companies do not reuse most of their code, because if they do it implies that their new game is just a rehash of an old one. Um... if you're referring to the long tail of video game companies, maybe. Within a company that has had a series of successful games, there is usually some modicum of reuse. Major hardware changes can result in ditching a lot of work, but it really depends on the company. A: It sounds like fun to design and code, but I think it ultimately comes down to useless abstractions (to steal from Joel).
A: It's very common for games to reuse "game engines," even those from third parties. This sounds like another step in that direction. A: I think it's a great thing to do. Developers don't have to worry about all the things Project Darkstar takes care of, and it's very easy to use. But it's not all about just getting it to work and not having to learn everything about internet communication; it's also about performance. Project Darkstar has been under development for over 2 years and it keeps getting better, faster and more robust. I think it will be hard and probably not worth the time to write these things yourself when aiming at a specific game, when technologies like this can be used instead. And you also get nice information during runtime telling you where in an application there's a cause of slowdown or deadlocks, so you can improve that. A: I don't work in the games industry, but it sounds to me like this will do the same thing for video games as the Quake and Half-Life engines did. That is, they will promote getting young developers interested in the industry and promote development of indie games. From what I can tell, video game companies do not reuse most of their code, because if they do it implies that their new game is just a rehash of an old one. Everyone wants a cool new physics engine, better graphics, new ways to play the game. Most video game engines and frameworks are made for a specific scenario and thus are not very bendable to other situations. Maybe Darkstar will get it right though, but I kinda doubt it, since generalizing only goes so far.
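As an aside, for readers who haven't seen it: the "tiny tasks" model the question describes looks roughly like this in code. A minimal Java sketch against Darkstar's com.sun.sgs.app API (names recalled from the public docs - treat the details as approximate and untested):

import java.io.Serializable;
import com.sun.sgs.app.AppContext;
import com.sun.sgs.app.Task;

// A tiny unit of game logic. The framework persists it, schedules it on
// some node in the cluster, and retries it if it aborts on a conflict.
public class AwardXpTask implements Task, Serializable {
    private final int amount;

    public AwardXpTask(int amount) {
        this.amount = amount;
    }

    public void run() throws Exception {
        // Real code would look up a ManagedObject (the player) via the
        // DataManager and mutate it here, inside the managed transaction.
    }
}

// Somewhere in your AppListener / game logic:
// AppContext.getTaskManager().scheduleTask(new AwardXpTask(50));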
{ "language": "en", "url": "https://stackoverflow.com/questions/20034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: What are the most important things to learn about .net as a Project Manager? Thinking about getting into .net technology project management I've had plenty of experience with PHP projects: I'm aware of most of the existing frameworks and libraries, and I've written specs and case studies based on this knowledge. What should I know about .net? Which top resources would you recommend so I can rapidly learn and later stay up to date on the technology? Edit (8.24.08): The answers I got so far essentially discuss being a good PM. Thanks, but this is not what I meant. Any .net essentials would be appreciated. A: The number one rule is do NOT just ask for status updates. It is especially annoying when phrases like "where are we on this?" are used. If you aren't directly involved in the details then just make sure you have established communication times or plans so that you know what's going on rather than asking for updates. A: Start with the basics before you get to the higher level stuff like web services (though that is important too). The most important things you need to learn, as a project manager, are the things you're going to be questioning your underlings about later. For example, my PM (also a PHP guy) has absolutely no knowledge of garbage collection and its implications, which makes it incredibly difficult for me to explain to him why our .NET Windows service appears to be taking 80MB of RAM. Remember, you are not the one who needs to know everything. You should be issuing overarching directives, and let the people with the expertise sort out the details. That said, study up on the technicals a bit so that your team can communicate effectively with you. Edit (8/24/08): You should know something about the underlying technicals; not necessarily all .NET stuff either (garbage collection, .config files, pipes and services if you're running services adjacent to your project's main focus, stuff like that). Higher-reaching concepts would probably include WPF (maybe Silverlight as well), LINQ (or your ORM of choice), as well as the Vista bridge and related bridging code if your project includes desktop apps at all. Those three things seem to be the focus for this round of .NET. Something else that's very important to have at least a passing knowledge of is the ways that .NET code can/must interoperate with native code: P/Invoke, Runtime Callable Wrapping and COM Callable Wrapping. There are still a lot of native things that don't have a .NET equivalent. As for resources, I'd highly recommend MSDN Magazine. They tend to preview upcoming technologies and tools well before average developers will ever see them. A: The biggest thing you'll probably want to learn is the differences between Windows and non-Windows programmers. They approach fundamental things differently. Knowing the difference will be key to successfully managing the project. If you listen to the Stack Overflow podcast, you'll hear Jeff and Joel discuss this topic multiple times. Understanding the details of the underlying technology is mostly irrelevant, and you'll never know it well enough to go toe to toe with someone who works in it day in and day out. You can probably pick it up as you go. A: The #1 thing you need to be aware of (and I'm guessing you probably already are) is that the guys doing the coding should know what they are doing. Depending on the personalities of the members of your team, you should be able to find someone who is willing and able to explain any of the intricacies to you on an as-required basis.
In my experience, the biggest hindrance to a project is the PM who understands the project, but not how to accomplish it (not in itself a problem), but who is also unwilling to listen to what his team tells him. As with any project management, accept that you can't know everything, and be humble enough to ask for explanations where needed. A: This may be old, but should get you started on the high-level overview of the .NET Framework. http://news.zdnet.co.uk/software/0,1000000121,2134207,00.htm
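To make the native-interop point from the earlier answer concrete, here is a minimal C# sketch of what P/Invoke looks like in practice. It uses the classic MessageBox example; nothing here is project-specific:

using System;
using System.Runtime.InteropServices;

class NativeDemo
{
    // Declares a managed entry point for a native Win32 function.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        // The CLR marshals the strings and calls into user32.dll directly.
        MessageBox(IntPtr.Zero, "Hello from native code", "P/Invoke", 0);
    }
}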
{ "language": "en", "url": "https://stackoverflow.com/questions/20040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Diagnosing Deadlocks in SQL Server 2005 We're seeing some pernicious, but rare, deadlock conditions in the Stack Overflow SQL Server 2005 database. I attached the profiler, set up a trace profile using this excellent article on troubleshooting deadlocks, and captured a bunch of examples. The weird thing is that the deadlocking write is always the same:

UPDATE [dbo].[Posts]
SET [AnswerCount] = @p1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0

The other deadlocking statement varies, but it's usually some kind of trivial, simple read of the posts table. This one always gets killed in the deadlock. Here's an example:

SELECT [t0].[Id], [t0].[PostTypeId], [t0].[Score], [t0].[Views], [t0].[AnswerCount],
       [t0].[AcceptedAnswerId], [t0].[IsLocked], [t0].[IsLockedEdit], [t0].[ParentId],
       [t0].[CurrentRevisionId], [t0].[FirstRevisionId], [t0].[LockedReason],
       [t0].[LastActivityDate], [t0].[LastActivityUserId]
FROM [dbo].[Posts] AS [t0]
WHERE [t0].[ParentId] = @p0

To be perfectly clear, we are not seeing write / write deadlocks, but read / write. We have a mixture of LINQ and parameterized SQL queries at the moment. We have added with (nolock) to all the SQL queries. This may have helped some. We also had a single (very) poorly-written badge query that I fixed yesterday, which was taking upwards of 20 seconds to run every time, and was running every minute on top of that. I was hoping this was the source of some of the locking problems! Unfortunately, I got another deadlock error about 2 hours ago. Same exact symptoms, same exact culprit write. The truly strange thing is that the locking write SQL statement you see above is part of a very specific code path. It's only executed when a new answer is added to a question -- it updates the parent question with the new answer count and last date/user. This is, obviously, not that common relative to the massive number of reads we are doing! As far as I can tell, we're not doing huge numbers of writes anywhere in the app. I realize that NOLOCK is sort of a giant hammer, but most of the queries we run here don't need to be that accurate. Will you care if your user profile is a few seconds out of date? Using NOLOCK with Linq is a bit more difficult as Scott Hanselman discusses here. We are flirting with the idea of using SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED on the base database context so that all our LINQ queries have this set. Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly. I guess I'm a little frustrated that trivial reads in SQL 2005 can deadlock on writes. I could see write/write deadlocks being a huge issue, but reads? We're not running a banking site here, we don't need perfect accuracy every time. Ideas? Thoughts? Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls? Jeremy, we are sharing one static datacontext in the base Controller for the most part:

private DBContext _db;

/// <summary>
/// Gets the DataContext to be used by a Request's controllers.
/// </summary>
public DBContext DB
{
    get
    {
        if (_db == null)
        {
            _db = new DBContext() { SessionName = GetType().Name };
            //_db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
        }
        return _db;
    }
}

Do you recommend we create a new context for every Controller, or per Page, or .. more often?
A: Will you care if your user profile is a few seconds out of date? Nope - that's perfectly acceptable. Setting the base transaction isolation level is probably the best/cleanest way to go. A: Typical read/write deadlock comes from index order access. Read (T1) locates the row on index A and then looks up projected column on index B (usually clustered). Write (T2) changes index B (the cluster) then has to update the index A. T1 has S-Lck on A, wants S-Lck on B, T2 has X-Lck on B, wants U-Lck on A. Deadlock, puff. T1 is killed. This is prevalent in environments with heavy OLTP traffic and just a tad too many indexes :). Solution is to make either the read not have to jump from A to B (ie. included column in A, or remove column from projected list) or T2 not have to jump from B to A (don't update indexed column). Unfortunately, linq is not your friend here... A: According to MSDN: http://msdn.microsoft.com/en-us/library/ms191242.aspx When either the READ COMMITTED SNAPSHOT or ALLOW SNAPSHOT ISOLATION database options are ON, logical copies (versions) are maintained for all data modifications performed in the database. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb. Each version is marked with the transaction sequence number of the transaction that made the change. The versions of modified rows are chained using a link list. The newest row value is always stored in the current database and chained to the versioned rows stored in tempdb. For short-running transactions, a version of a modified row may get cached in the buffer pool without getting written into the disk files of the tempdb database. If the need for the versioned row is short-lived, it will simply get dropped from the buffer pool and may not necessarily incur I/O overhead. There appears to be a slight performance penalty for the extra overhead, but it may be negligible. We should test to make sure. Try setting this option and REMOVE all NOLOCKs from code queries unless it’s really necessary. NOLOCKs or using global methods in the database context handler to combat database transaction isolation levels are Band-Aids to the problem. NOLOCKS will mask fundamental issues with our data layer and possibly lead to selecting unreliable data, where automatic select / update row versioning appears to be the solution. ALTER Database [StackOverflow.Beta] SET READ_COMMITTED_SNAPSHOT ON A: NOLOCK and READ UNCOMMITTED are a slippery slope. You should never use them unless you understand why the deadlock is happening first. It would worry me that you say, "We have added with (nolock) to all the SQL queries". Needing to add WITH NOLOCK everywhere is a sure sign that you have problems in your data layer. The update statement itself looks a bit problematic. Do you determine the count earlier in the transaction, or just pull it from an object? AnswerCount = AnswerCount+1 when a question is added is probably a better way to handle this. Then you don't need a transaction to get the correct count and you don't have to worry about the concurrency issue that you are potentially exposing yourself to. One easy way to get around this type of deadlock issue without a lot of work and without enabling dirty reads is to use "Snapshot Isolation Mode" (new in SQL 2005) which will always give you a clean read of the last unmodified data. You can also catch and retry deadlocked statements fairly easily if you want to handle them gracefully. 
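To make the increment-in-place suggestion above concrete, here is a minimal T-SQL sketch (table and column names taken from the question; @postId and @userId are illustrative parameter names, and the surrounding transaction handling is up to you):

-- Let the database do the arithmetic atomically instead of reading the
-- count into the application and writing it back.
UPDATE [dbo].[Posts]
SET [AnswerCount] = [AnswerCount] + 1,
    [LastActivityDate] = GETUTCDATE(),
    [LastActivityUserId] = @userId
WHERE [Id] = @postId;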
A: @Jeff - I am definitely not an expert on this, but I have had good results with instantiating a new context on almost every call. I think it's similar to creating a new Connection object on every call with ADO. The overhead isn't as bad as you would think, since connection pooling will still be used anyway. I just use a global static helper like this:

public static class AppData
{
    /// <summary>
    /// Gets a new database context
    /// </summary>
    public static CoreDataContext DB
    {
        get
        {
            var dataContext = new CoreDataContext
            {
                DeferredLoadingEnabled = true
            };
            return dataContext;
        }
    }
}

and then I do something like this:

var db = AppData.DB;
var results = from p in db.Posts where p.ID == id select p;

And I would do the same thing for updates. Anyway, I don't have nearly as much traffic as you, but I was definitely getting some locking when I used a shared DataContext early on with just a handful of users. No guarantees, but it might be worth giving a try. Update: Then again, looking at your code, you are only sharing the data context for the lifetime of that particular controller instance, which basically seems fine unless it is somehow getting used concurrently by multiple calls within the controller. In a thread on the topic, ScottGu said: Controllers only live for a single request - so at the end of processing a request they are garbage collected (which means the DataContext is collected)... So anyway, that might not be it, but again it's probably worth a try, perhaps in conjunction with some load testing. A: Setting your default to read uncommitted is not a good idea. You will undoubtedly introduce inconsistencies and end up with a problem that is worse than what you have now. Snapshot isolation might work well, but it is a drastic change to the way Sql Server works and puts a huge load on tempdb. Here is what you should do: use try-catch (in T-SQL) to detect the deadlock condition. When it happens, just re-run the query. This is standard database programming practice. There are good examples of this technique in Paul Nielson's Sql Server 2005 Bible. Here is a quick template that I use:

-- Deadlock retry template
declare @lastError int;
declare @numErrors int;
set @numErrors = 0;
LockTimeoutRetry:
begin try;
    -- The query goes here

    return; -- this is the normal end of the procedure
end try
begin catch
    set @lastError = @@error
    if @lastError = 1222 or @lastError = 1205 -- Lock timeout or deadlock
    begin;
        if @numErrors >= 3 -- We hit the retry limit
        begin;
            raiserror('Could not get a lock after 3 attempts', 16, 1);
            return -100;
        end;

        -- Wait and then try the transaction again
        waitfor delay '00:00:00.25';
        set @numErrors = @numErrors + 1;
        goto LockTimeoutRetry;
    end;

    -- Some other error occurred
    declare @errorMessage nvarchar(4000), @errorSeverity int
    select @errorMessage = error_message(), @errorSeverity = error_severity()
    raiserror(@errorMessage, @errorSeverity, 1)
    return -100
end catch;

A: Q. Why are you storing the AnswerCount in the Posts table in the first place? An alternative approach is to eliminate the "write back" to the Posts table by not storing the AnswerCount in the table but to dynamically calculate the number of answers to the post as required.
Yes, this will mean you're running an additional query:

SELECT COUNT(*) FROM Answers WHERE post_id = @id

or more typically (if you're displaying this for the home page):

SELECT p.post_id, p.<additional post fields>, a.AnswerCount
FROM Posts p
INNER JOIN AnswersCount_view a ON <join criteria>
WHERE <home page criteria>

but this typically results in an INDEX SCAN and may be more efficient in the use of resources than using READ ISOLATION. There's more than one way to skin a cat. Premature de-normalisation of a database schema can introduce scalability issues. A: You definitely want READ_COMMITTED_SNAPSHOT set to on, which it is not by default. That gives you MVCC semantics. It's the same thing Oracle uses by default. Having an MVCC database is so incredibly useful, NOT using one is insane. This allows you to run the following inside a transaction:

Update USERS Set FirstName = 'foobar';
//decide to sleep for a year.

meanwhile, without committing the above, everyone can continue to select from that table just fine. If you are not familiar with MVCC, you will be shocked that you were ever able to live without it. Seriously. A: The OP question was to ask why this problem occurred. This post hopes to answer that while leaving possible solutions to be worked out by others. This is probably an index related issue. For example, let's say the table Posts has a non-clustered index X which contains the ParentID and one (or more) of the field(s) being updated (AnswerCount, LastActivityDate, LastActivityUserId). A deadlock would occur if the SELECT cmd does a shared-read lock on index X to search by the ParentId and then needs to do a shared-read lock on the clustered index to get the remaining columns while the UPDATE cmd does a write-exclusive lock on the clustered index and needs to get a write-exclusive lock on index X to update it. You now have a situation where A locked X and is trying to get Y whereas B locked Y and is trying to get X. Of course, we'll need the OP to update his posting with more information regarding what indexes are in play to confirm if this is actually the cause. A: One thing that has worked for me in the past is making sure all my queries and updates access resources (tables) in the same order. That is, if one query updates in order Table1, Table2 and a different query updates it in order of Table2, Table1 then you might see deadlocks. Not sure if it's possible for you to change the order of updates since you're using LINQ. But it's something to look at. A: I'm pretty uncomfortable about this question and the attendant answers. There's a lot of "try this magic dust! No that magic dust!" I can't see anywhere that you've analyzed the locks that are taken, and determined what exact type of locks are deadlocked. All you've indicated is that some locks occur -- not what is deadlocking. In SQL 2005 you can get more info about what locks are being taken out by using:

DBCC TRACEON (1222, -1)

so that when the deadlock occurs you'll have better diagnostics. A: Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls? I originally tried the latter approach, and from what I remember, it caused unwanted locking in the DB. I now create a new context for every atomic operation. A: Before burning the house down to catch a fly with NOLOCK all over, you may want to take a look at that deadlock graph you should've captured with Profiler. Remember that a deadlock requires (at least) 2 locks.
Connection 1 has Lock A, wants Lock B - and vice-versa for Connection 2. This is an unsolvable situation, and someone has to give. What you've shown so far is solved by simple locking, which Sql Server is happy to do all day long. I suspect you (or LINQ) are starting a transaction with that UPDATE statement in it, and SELECTing some other piece of info beforehand. But, you really need to backtrack through the deadlock graph to find the locks held by each thread, and then backtrack through Profiler to find the statements that caused those locks to be granted. I expect that there's at least 4 statements to complete this puzzle (or a statement that takes multiple locks - perhaps there's a trigger on the Posts table?). A: Will you care if your user profile is a few seconds out of date? A few seconds would definitely be acceptable. It doesn't seem like it would be that long, anyways, unless a huge number of people are submitting answers at the same time. A: I agree with Jeremy on this one. You ask if you should create a new data context for each controller or per page - I tend to create a new one for every independent query. I'm building a solution at present which used to implement the static context like you do, and when I threw tons of requests at the beast of a server (million+) during stress tests, I was also getting read/write locks randomly. As soon as I changed my strategy to use a different data context at LINQ level per query, and trusted that SQL server could work its connection pooling magic, the locks seemed to disappear. Of course I was under some time pressure, so trying a number of things all around the same time, so I can't be 100% sure that is what fixed it, but I have a high level of confidence - let's put it that way. A: Now that I see Jeremy's answer, I think I remember hearing that the best practice is to use a new DataContext for each data operation. Rob Conery's written several posts about DataContext, and he always news them up rather than using a singleton.

* http://blog.wekeroad.com/2007/08/17/linqtosql-ranch-dressing-for-your-database-pizza/
* http://blog.wekeroad.com/mvc-storefront/mvcstore-part-9/ (see comments)

Here's the pattern we used for Video.Show (link to source view in CodePlex):

using System.Configuration;

namespace VideoShow.Data
{
    public class DataContextFactory
    {
        public static VideoShowDataContext DataContext()
        {
            return new VideoShowDataContext(ConfigurationManager.ConnectionStrings["VideoShowConnectionString"].ConnectionString);
        }

        public static VideoShowDataContext DataContext(string connectionString)
        {
            return new VideoShowDataContext(connectionString);
        }
    }
}

Then at the service level (or even more granular, for updates):

private VideoShowDataContext dataContext = DataContextFactory.DataContext();

public VideoSearchResult GetVideos(int pageSize, int pageNumber, string sortType)
{
    var videos = from video in dataContext.Videos
                 where video.StatusId == (int)VideoServices.VideoStatus.Complete
                 orderby video.DatePublished descending
                 select video;

    return GetSearchResult(videos, pageSize, pageNumber);
}

A: You should implement dirty reads. SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED If you don't absolutely require perfect transactional integrity with your queries, you should be using dirty reads when accessing tables with high concurrency. I assume your Posts table would be one of those. This may give you so-called "phantom reads", which is when your query acts upon data from a transaction that hasn't been committed.
We're not running a banking site here, we don't need perfect accuracy every time. Use dirty reads. You're right in that they won't give you perfect accuracy, but they should clear up your deadlocking issues. Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly. If you implement dirty reads on "the base database context", you can always wrap your individual calls using a higher isolation level if you need the transactional integrity. A: So what's the problem with implementing a retry mechanism? There will always be the possibility of a deadlock occurring so why not have some logic to identify it and just try again? Won't at least some of the other options introduce performance penalties that are taken all the time when a retry system will kick in rarely? Also, don't forget some sort of logging when a retry happens so that you don't get into that situation of rare becoming often. A: I would have to agree with Greg so long as setting the isolation level to read uncommitted doesn't have any ill effects on other queries. I'd be interested to know, Jeff, how setting it at the database level would affect a query such as the following:

Begin Tran
  Insert into Table (Columns) Values (Values)
  Select Max(ID) From Table
Commit Tran

A: It's fine with me if my profile is even several minutes out of date. Are you re-trying the read after it fails? It's certainly possible when firing a ton of random reads that a few will hit when they can't read. Most of the applications that I work with have very few writes compared to the number of reads, and I'm sure the reads are nowhere near the number you are getting. If implementing "READ UNCOMMITTED" doesn't solve your problem, then it's tough to help without knowing a lot more about the processing. There may be some other tuning option that would help this behavior. Unless some MSSQL guru comes to the rescue, I recommend submitting the problem to the vendor. A: I would continue to tune everything; how is the disk subsystem performing? What is the average disk queue length? If I/O's are backing up, the real problem might not be these two queries that are deadlocking, it might be another query that is bottlenecking the system; you mentioned a query taking 20 seconds that has been tuned, are there others? Focus on shortening the long-running queries, I'll bet the deadlock problems will disappear. A: Had the same problem, and cannot use the "IsolationLevel = IsolationLevel.ReadUncommitted" on TransactionScope because the server doesn't have DTS enabled (!). That's what I did with an extension method:

public static void SetNoLock(this MyDataContext myDS)
{
    myDS.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
}

So, for selects that use critical concurrency tables, we enable the "nolock" like this:

using (MyDataContext myDS = new MyDataContext())
{
    myDS.SetNoLock();
    // var query = from ...my dirty queries here...
}

Suggestions are welcome!
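Following on from the retry suggestions earlier in the thread, the same pattern can also live on the application side. A minimal C# sketch (error number 1205 is SQL Server's deadlock-victim code; the delegate, retry count and delay are illustrative choices, not a tested recipe):

using System;
using System.Data.SqlClient;
using System.Threading;

static T RunWithDeadlockRetry<T>(Func<T> query, int maxAttempts)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return query(); // e.g. a LINQ query materialized with ToList()
        }
        catch (SqlException ex)
        {
            // 1205 = "Transaction ... was deadlocked ... and has been
            // chosen as the deadlock victim. Rerun the transaction."
            if (ex.Number != 1205 || attempt >= maxAttempts)
                throw;
            Thread.Sleep(250); // brief pause before rerunning
        }
    }
}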
{ "language": "en", "url": "https://stackoverflow.com/questions/20047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "82" }
Q: Any tips on getting Rails to run with an Access back-end? I shudder to ask, but my client might offer no other SQL (or SQL-like) solution. I know Access has some SQL hooks; are they enough for basic ActiveRecord? Later: I appreciate all the suggestions to use other databases, but trust me: I've tried convincing them. There is an "approved" list, and no SQL databases are on it. Getting something onto the list could take more than a year, and this project will be done in three weeks. A: It's a long shot but there's an ODBC adapter for ActiveRecord that might work. A: There seems to be something of an Access connection adapter here: http://svn.behindlogic.com/public/rails/activerecord/lib/active_record/connection_adapters/msaccess_adapter.rb The database.yml file would look like this:

development:
  adapter: msaccess
  database: C:\path\to\access_file.mdb

I'll post more after I've tried it out with Rails 2.1 A: Another option that is more complicated but could work if you were forced to do it, is to write a layer of RESTful web services that will expose Access to rails. If you are careful in your design, those RESTful web services can be consumed directly by ActiveResource, which will give you a lot of the functionality of ActiveRecord. A: There are some weird things in Access that might cause issues, and I don't know if ODBC takes care of them. If it does, @John Topley is right: ODBC would be your only chance.

* True in Access = -1 not 1
* Access treats dates differently than regular TSQL.
* You might run into trouble creating relations.

If you go with Access, you'll probably learn more about debugging ActiveRecord than you ever cared to (which might not be a bad thing). A: Maudite wrote: True in access = -1 not 1 Not correct. True is defined as not being false. So, if you want to use True in a WHERE clause, use Not False instead. This will provide complete cross-platform compatibility with all SQL engines. All that said, it's hardly an issue, since whatever driver you're using to connect to your back end will properly translate True in WHERE clauses to the appropriate value. The only exception might be in passthrough queries, but in that case, you should be writing the SQL outside Access and testing it against your back end and just pasting the working SQL into the SQL view of your passthrough query in Access. Maudite wrote: Access treats dates differently than regular TSQL. Again, this is only going to be an issue if you don't go through the ODBC or OLEDB drivers, which will take care of translating Jet SQL into TSQL for you. Maudite wrote: You might run into trouble creating relations. I'm not sure why you'd want an Access application to be altering the schema of your back end, so this seems to me like a non-issue. A: You should really talk them into allowing SQLite. It is super-simple to set up, and operates like Access would (as a file sitting next to the app on the same server). A: Firstly, you really want to be using sqlite. In my experience Access itself is a pile of [redacted], but the Jet database engine it uses is actually pretty fast and can handle some pretty complex SQL queries. If you can find a rails adapter that actually works I'd say you'll be fine. Just don't open the DB with the access frontend while your rails app is running :-) If your client is anal enough to only allow you to develop with an approved list of databases, they may be more concerned by the fact that Jet is deprecated and will get no more support from MS.
This might give you some ammunition in your quest to use a real database. Good luck!
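If one of those adapters does load, the payoff is that ordinary ActiveRecord code should work unchanged. A hedged sketch of what that looked like in the Rails 2.x era (model, table and column names here are made up for illustration):

# app/models/post.rb -- maps by convention to a "posts" table in the .mdb file
class Post < ActiveRecord::Base
end

# Rails 2.x-style finders; the adapter is responsible for translating
# these into SQL that Jet will accept.
recent = Post.find(:all, :order => 'created_at DESC', :limit => 10)
post = Post.find_by_title('Hello Access')
post.update_attribute(:title, 'Hello Jet') if post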
{ "language": "en", "url": "https://stackoverflow.com/questions/20054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Suggestions on starting a child programming What languages and tools do you consider a youngster starting out in programming should use in the modern era? Lots of us started with proprietary Basics and they didn't do all of us long term harm :) but given the experiences you have had since then and your knowledge of the domain now are there better options? There are related queries to this one such as "Best ways to teach a beginner to program?" and "One piece of advice" about starting adults programming, both of which I submitted answers to, but children might require a different tool. Disclosure: it's bloody hard choosing a 'correct' answer to a question like this so whoever has the best score in a few days will get the 'best answer' mark from me based on the community's choice. A: For a child, I would go with Alice. Any kid is going to like the drag-and-drop interaction that Alice uses better than trying to remember how to spell and punctuate any programming language. He/She will learn the basic programming structures (conditionals, loops, etc.) and will experience the fun of building an animated program they can show off to other family or friends. A beginner CS class at the local community college actually uses Alice to teach programming in a language-independent way. It provides a good foundation for moving into programming in a particular language (or a few languages) down the road. A: I recently saw a presentation about GreenFoot (a java based learning environment for children). It looked awesome. If I had kids, I would give it a try. Link to the presentation It is a very playful environment, where you could start with very basic methods. The kids learn thinking in an object oriented way (you cannot instantiate an animal, but you can instantiate a cat). And the better they get, the more of Java you can uncover for/with them. A: I'd go with Scratch; some points regarding it:

* It's a graphical programming language. It isn't text based (this might be positive or negative). It does make it more intuitive and easy for kids (7 and up).
* It's actually highly object-oriented. The objects you write these graphical scripts for have the code attached to them and can be reused and moved around.
* Very important: quick and impressive results. Kids need to get going fast and get results in order to get hooked.

I'd like to note that although many of us started programming at a young age in BASIC or Logo and became programmers later in life, that doesn't mean those are good languages to start with. I think that kids today have much better options, like Scratch or Alice. Text based languages (python, ruby, basic, c# or even c) are dependent on external libraries and tools (editors, compilers) while something like Alice or Scratch is all inclusive and will teach kids (not aimed at teens) programming concepts. Later they can move on and expand their learning. A: Check out Phrogram (formerly KPL) and Alice. A: I'd say: give the kid a real C64, because that's how I got started. But, today... I'd say Ruby, but Ruby is a bit too chaotic. BASIC would be better in the long run. Processing is easy to learn, and it's basically Java. The reason I recommend a C64 is because it's BASIC, but you still have to learn certain computer-related things, like the memory model, pixels, characters, character maps, newlines, etc. etc., if you want to do more advanced stuff. Also, if your kid finds it boring, you know his heart really isn't into coding. A: I would pitch LOGO. It was something that was taught in my elementary school.
It gives nearly immediate feedback, and will teach really basic programming concepts. Moving that little turtle around can be a lot of fun. A: I'd recommend python, because it's so terse and expressive. Seems less likely to frustrate when getting started, but offers plenty of room to learn more advanced concepts as well. A: For a child, I would go with Alice. Here is another vote for Alice. My 4 kids have had a ton of fun working with it and learning the basic concepts of programming. Of course to them it's all about socializing with fairies and ogres, but heck the darn legacy system I work on could use some fairies and ogres too. A: I would suggest LEGO Mindstorms; it provides an intuitive drag and drop interface for programming and because it comes with hardware it provides something tangible for a child to grasp. Also, because it is "LEGO" they might think of it as more of a game than a programming exercise. A: Game Maker might be another approach. You can start simple with easy drag and drop development, and then introduce more advanced programming as you go. The book The Game Maker's Apprentice: Game Development for Beginners has a number of sample games and takes you through the steps required to make them. A: How old? Lots of us started with BASIC at some point, but before then, I learned the concepts of stringing commands together, variables, and looping with LOGO. Figuring out how to draw a circle with a triangle that can only go in a straight line and turn was my very first programming accomplishment. Edit: This question & its answers make me feel old. A: I think python is a good alternative; it is a very powerful language, and you can easily do a lot of things (not boring at all). A: Check out Squeak, developed by Alan Kay, who thinks programming should be taught at an early age. A: My day job is in a school, and over the past few years I've seen or taught (or attempted to teach) various children, in various numbers, programming lessons. Children are all different - some are quick learners, some aren't. In particular, some have better literacy skills than others, and that definitely makes a difference to the speed at which they'll pick up programming. I bet that most of us here, as professional computer programmers and the kind of people who read and post to forums for fun, learnt to read at a pretty young age. For those kinds of children, and if it's your own child who you can teach one-on-one, you could do worse than JavaScript - it has the advantage that you can do real stuff with it right away, and the edit-test cycle is simply hitting "refresh" in the browser. It gets confusing when you start to run into how JavaScript does everything asynchronously, and is tricky to debug, but for a bright child under close tuition these problems can be overcome. LEGO Mindstorms is definitely up there at the top of the list. Most schools now super-glue the bricks together to create pre-made models that can't have bits nicked off of them, but this shouldn't be a problem at home. Over on the Times Educational Supplement site (website forum for the UK's weekly teaching newspaper), the "what programming language is best for children?" topic comes up pretty regularly. Lots of recommendations over there for Scratch as an alternative to Mindstorms - a bit more freedom than Mindstorms, again probably better for the brighter student who could also be given a soldering iron.
I've found that slower pupils can still have problems with Mindstorms, even though the programming environment is "graphical" - there's still a lot going on on screen, and there's a fair bit to remember (this was an older version, mind - haven't tried the snazzy new one yet). In my experience, the best all-round introduction to programming is probably still LOGO - actually a considerably more powerful language than most people give it credit for. The original Mindstorms book by Seymour Papert (nothing to do with LEGO - they nicked the title of the book for their product), one of the originators of LOGO, is the canonical reference for teaching programming to children as a "thinking skill" and for the concept of Constructionism in learning. We've had classes of 7 or 8 year-olds programming LOGO. Note that we aren't aiming to make them "software developers", that's a career path they can decide on at some point post-16. At a young age we're trying to get them to think of "computer programming" as just another tool - how to set out a problem to be solved by a computer, in the same way they might use a mind map to help them organise and remember stuff for an exam. No poor child should be sat down and drilled in the minutiae and use of a particular language, they should be left to explore and figure stuff out as they like. A: Though _why hasn't given it much love in the past year or so, for a while I was really excited about Hackety Hack. I think the key for most new programmers, especially children who are more than apt to lose interest in things, is instantaneous feedback. That was the really wonderful thing about Hackety Hack: a few lines of code, and suddenly you have something in front of you that does something. There are a few similar applications aimed at things like drawing graphics (one of which, Scribble!, I briefly assisted Nathan Weizenbaum on). Kids simply need positive feedback that they're doing something correct on a regular basis, else there's nothing to keep them interested in the task at hand. What I think the future is for teaching children to program is some sort of DSL built on top of a language with friendly syntax (these would include, arguably, Ruby, Python, and Scheme) whose purpose is to provide an intuitive environment for constructing simple games (say, Tic-Tac-Toe, or Hangman). A: I think you should start them off in C. The sooner they can get the hang of pointers the better. See Understanding Pointers and Should I learn C. A: I think the first question is: what sort of program would it be interesting to create? One of the things that got me started with programming as a kid (in BBC basic and then QBasic) was the ease of writing graphical programs. I could write a couple of lines of code and see my program draw a line on the screen straight away. The closest I've seen to that sort of simplicity recently are the pygame library for python and Processing, a set of java libraries with an IDE. I imagine that hacking on web pages would be another good way to get started: that would entail HTML, Javascript (using a library like jQuery), perhaps PHP or something along those lines. Whatever tools you provide, the crucial thing is for it to be easy to get started straight away. If you have to write twenty lines of correct code and figure out how to invoke the compiler before you see any tangible results, progress is going to be slow. A: There are many good suggestions here already. I really agree with Kronikarz.
Get a retro computer (or emulator) that you are interested in and teach with that. Why a retro computer? Basic is built in. Making sounds and primitive graphics is a trivial task. The real deal might be better than an emulator because it will be a bit more fascinating to a child who is used to seeing only modern devices. A: As I said here, I'd go for Squeakland and the famous Drive a Car example (powered by Squeak). Smalltalk syntax is simple, which is great for children. And later as the child evolves, he can learn more complex and even very advanced concepts that are also in Squeak (e.g. programming stateful webapps with automated refactoring and automated unit tests!). And like @cpuguru and @Rotem said, Scratch (also Squeak based) is great too. A: I'll second Geoff's suggestions of Phrogram (used to be KPL), and Alice. My only other suggestion is Lego Mindstorms NXT. The NXT's programming language is drag-and-drop, is very easy to use, and can do some very complicated tasks once you learn it. Also young boys usually like seeing things move. :) I've used Alice and NXTs with some young kids, and they've taken to it very well. A: Two possibilities are: Scratch - developed at MIT - http://scratch.mit.edu/ and EToys from the One Laptop per Child fame - http://wiki.laptop.org/go/Squeak A: Full disclosure: I'm one of the guys who invented Kid's Programming Language, which is now http://www.Phrogram.com, which others have recommended here. Let me add some programmer-oriented info about it. It's a code IDE, rather than drag-and-drop, or designer-based. This was intentional on our part - we wanted to make it easy and fun to do real text-based programming, particularly programming games and graphics. This is a fundamental difference between us and Alice and Scratch. Which you pick is a matter of the kid, their age and aptitudes, your goals. Using them serially with the same beginner might be a great way to go - if you do that, I would recommend Scratch, Alice, Phrogram as the order. Phrogram has worked best for 12 years and up, but I know dads with 6 year olds who have taught their kids with it, and I know 10 year olds who have taught themselves with it. The language is as much like English as we could make it, and is as minimal as we could make it. The secret sauce is in the class-based object hierarchy, which is again as simple, intuitive and English-like as we could make it. The object hierarchy is optimized for games and graphics. 3D models are available, and 2D sprites. Absolute movement using screen coordinates is supported, or relative movement a la LOGO turtles - Forward(x), TurnLeft(y). The IDE comes with over 100 examples, some language examples (loops), some learning examples (arrays), some fully-functional games and sims (Pong, Missile Command, Game of Life). To give you a sense of how highly leveraged we made the language and the IDE: with 27 instructions you can fly a 3D spaceship model around a 3D skybox, using your keyboard. The same with a 2D sprite is 12 to 15 instructions. We are working on a Blade-compatible release of Phrogram that will allow programs to run on the XBox 360. Yeah, the XBox, on your big TV. Nice motivator for getting a kid started? :) Phrogram includes support for class-based programming, with methods and properties - but that's only encapsulation, not inheritance or polymorphism.
A tutorial and user guide are available. My own ebook is available at Amazon and other places online, "Learn to Program with Phrogram!," and gets a beginner started by programming the classic Pong. Phrogram Programming for the Absolute Beginner, by Jerry Lee Ford, Jr., is also available, as a paperback, at Amazon and elsewhere. A: I think Java might be a good choice simply because you can make GUIs easily, and see "cool things" happening. For the same reason, maybe any of the .NET languages. I've also heard good things about scripting languages (Ruby and Python, especially) for getting kids to learn how to program. A: Well, if they're young and haven't learnt their ABC's you could try them on BF - none of those pesky letters and numbers to deal with. I'll get me' coat. Skizz A: I would go with what I wish I had known first: a simple MS-DOS box and the integrated assembler (debug). It is great to really learn and understand the basics of talking to a computer. If that does not scare away a child, then I would go the "next level up" and introduce C. This shouldn't be hard given that the basic concepts of pointers, registers and instructions in general are well-understood by then. However, I am not entirely sure where to go next. Take the big jump to Lisp, Haskell or similarly abstracted languages, or should some simple object-oriented languages (maybe even C++) be thrown in, or would that hurt more than help? A: Looking at Alice, I see it is "designed for high school and college students". There appears to be another language/version called Story Telling Alice that is "designed for middle-school students". Alice Download Page A: I think Context Free Art might be a good choice; with its graphical output, it makes learning about context-free grammars a lot of fun. A: Try Guido van Robot. It's an excellent introduction to robotics, and it's a great way to introduce kids to the programming side of things (vs the "building the robots" side). A: Wasn't Smalltalk designed for such a purpose? I think Ruby would be a good choice, as a descendant of Smalltalk. A: I know in the first few years of high school we were 'taught' Logo, and strangely, HTML. After that, the progression went to macros in MS Office, followed by basic VBA, followed by Visual Basic. A: There's a good article about this over on familyinternet.about.com. A: Although I have tinkered with LEGO Mindstorms (and enjoyed it) in the past, I would thoroughly recommend XNA Game Studio for the following reasons:

* It involves creating something many children will be interested in (games).
* It's free.
* It's a real language (C#) and a real IDE (Visual Studio).
* You get to learn OOP.
* It's something the parents are going to find as much fun as the kids are.

A: How about AIML? Not so much a programming language, but you get instant fulfillment and because it's all about artificial intelligence it will likely trigger his (her?) sense of excitement. A: I started programming in Flash. "Toy language" - meh, meh, meh - and before that a tiny amount of Logo at school. I have no idea about Mindstorms, but I imagine it would be good. I think that, unless there is a real driving urge to learn, it could get frustrating with just input and output command line driven programs at the start. A bit of instant gratification, had by moving some pictures around on the screen and triggering a few sounds here and there, can be a more appealing result than building a cash register program, making a fizz buzz program etc.
"Look Grandma, I built a web page!" - even starting with HTML and some javascript, with tables and font tags everywhere, and being able to share what is developed with someone who is not technical will probably be more beneficial in the long run than 30 lines of C coded to appease a code crazy father. Which may or may not be the case A: What about Stagecast Creator? I've been using it with my 7 year old daughter (we started when she was 6). Don't be fooled by the kiddie interface. Once you start to use it, you realize it's teaching many complex ideas. It's sequential processing, and it's all graphic driven. You define rules for characters by defining 'if the picture looks like this then make it look like that' type functionality. Characters can change appearances, make sounds, move other characters, respond to the keyboard and mouse etc. It teaches about if..then..else logic. Order of operations (As it processes the first rule that is true). Has a debugger so you can step through your code etc. A very good tool for getting your young one discovering the thought processes behind programming, and a fun and easy way to determine if they're interested in this type of thing. Once you've determined that, you can move onto a 'real' language. A: Python is a great first programming language, and it can be used for exercising concepts of procedural and functional languages. The free book A Byte of Python is an easy introduction, written for beginners, and it's available in several languages. A: When my daughter was about 6 or 7 years old I showed her Logo - should thought it was fun drawing the shapes - but then lost interest. When she was 10 I then tried Squeak - and she thought that was great. She quickly picked up on the Smalltalk syntax and her much fun. I also tried Greenfoot - but with less success. I think Ruby might be worth a go to (I use Ruby from time to time - good stuff!) Now she is more interested in other - non-computing - activities. So these days, I would say that Squeak is worth a try. What about Hackety-Hack. haven't tried that with kids yet but looks interesting. A: There's a new book called "Hello World: Computer Programming for Kids and other Beginners" by Warren and Carter Sande that I bought for my 9 year-old to start out with. He'll learn programming, and I'll learn Python. A: Scratch. Don't let the cartoon-like results fool you. Kids love this thing and it offers most of what you'd expect in a programming language: loops; conditional logic; events; subroutines; and object-oriented programming. Other things to like: * *Excellent documentation *Versatility Some kids like games. Other like to tell stories or create cartoons. Others like making music or graphic effects. All can be done with Scratch. Kids can even post their programs to Websites they create as part of multimedia/web classes. *Environment Graphical development environment in which programming elements are snapped together. Shape and color are used very well as visual cues. *Social coding Large collection of community-created programs with ratings system that kids can use to get new ideas, figure out how to solve particular problems, or share their creations with their peers. *Hacking It's very easy for kids to add their own customized sounds and draw their own characters. Reminds me of digital construction paper. *Approachable The interface is simple enough that kids can start using it with very little in the way of introduction. 
Most importantly, Scratch can be run on Windows, Linux, and OS X, so schools with mixed hardware setups won't be left out. A: Logo is awesome A: Check out PythonTurtle. A: PHP or Visual Basic. I started out with PHP when I was 9 and now I only like a hundred languages lol. My favorite's PHP and C++. A: Brute force "Do it or else!" A: My sons (and I) had good fun using a combination of suggestions already mentioned here:

* Python as a very intuitive language
* Logo turtle graphics
* LEGO Mindstorms

NXTurtle is a little mashup to get started... A: I'm a 6th grader and I have been interested in the concept of programming ever since I saw a computer. I have tried many programming languages (.NET, Python, and Javascript) and I have to say, my favorite so far is Visual Basic (.NET) because the designing is easy and the code itself is easy to understand as well. It is so cool as a kid to see a program that YOU made work and operate, and I think Visual Basic has the best way to do that. -Karl A: Anyone come across BigTrak? This was my first experience of programming. Essentially it is physically a giant logo turtle, in the shape of a battle tank, with a keypad on top of it, to type in a program. Probably suitable from age 5-8, i.e. even before children have the patience/coordination for typing at a screen.
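For a taste of the turtle-graphics style that Logo, PythonTurtle and BigTrak are all built around, Python's standard turtle module gives the same instant feedback in a few lines. A minimal sketch:

import turtle

t = turtle.Turtle()

# Draw a square: the classic first turtle program
for _ in range(4):
    t.forward(100)  # move 100 pixels in the current direction
    t.left(90)      # turn 90 degrees counter-clockwise

turtle.done()  # keep the window open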
{ "language": "en", "url": "https://stackoverflow.com/questions/20059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: Store data from a C# application I've recently taken up learning some C# and wrote a Yahtzee clone. My next step (now that the game logic is in place and functioning correctly) is to integrate some method of keeping stats across all the games played. My question is this, how should I go about storing this information? My first thought would be to use a database and I have a feeling that's the answer I'll get... if that's the case, can you point me to a good resource for creating and accessing a database from a C# application? Storing in an XML file actually makes more sense to me, but I thought if I suggested that I'd get torn apart ;). I'm used to building web applications and for those, text files are generally frowned upon. So, going with an XML file, what classes should I be looking at that would allow for easy manipulation? A: A database may be overkill - have you thought about just storing the scores in a file? If you decide to go with a database, you might consider SQLite, which you can distribute just like a file. There's an open source .NET provider - System.Data.SQLite - that includes everything you need to get started. Accessing and reading from a database in .NET is quite easy - take a look at this question for sample code. A: SQL Express from MS is a great free, lightweight version of their SQL Server database. You could try that if you go the DB route. Alternatively, you could simply create datasets within the application and serialize them to xml, or you could use something like the newly minted Entity Framework that shipped with .NET 3.5 SP1 A: I don't know if a database is necessarily what you want. That may be overkill for storing stats for a simple game like that. Databases are good; but you should not automatically use one in every situation (I'm assuming that this is a client application, not an online game). Personally, for a game that exists only on the user's computer, I would just store the stats in a file (XML or binary - choice depends on whether you want it to be human-readable or not). A: I'd recommend saving your data in simple POCOs and either serializing them to xml or a binary file, like Brian did above. If you're hot for a database, I'd suggest Sql Server Compact Edition, or VistaDB. Both are hosted inproc within your application. A: Here is one idea: use Xml Serialization. Design your GameStats data structure and optionally use Xml attributes to influence the schema as you like. I like to use this method for small data sets because it's quick and easy and all I need to do is design and manipulate the data structure.

using (FileStream fs = new FileStream(....))
{
    // Read in stats
    XmlSerializer xs = new XmlSerializer(typeof(GameStats));
    GameStats stats = (GameStats)xs.Deserialize(fs);

    // Manipulate stats here ...

    // Write out game stats (reusing the same serializer)
    xs.Serialize(fs, stats);
    fs.Close();
}

A: A database would probably be overkill for something like this - start with storing your information in an XML doc (or series of XML docs, if there's a lot of data). You get all that nifty XCopy deployment stuff, you can still use LINQ, and it would be a smooth transition to a database if you decided later you really needed performant relational query logic. A: You can either use the System::Xml namespace or the System::Data namespace. The first gives you raw XML, the latter gives you a handy wrapper to the XML. A: I would recommend just using a database.
I would recommend using LINQ or an ORM tool to interact with the database. For learning LINQ, I would take a look at Scott Guthrie's posts; I think there are nine of them altogether, and I linked part 1 below. If you want to go with an ORM tool, say NHibernate, then I would recommend checking out the Summer of nHibernate screencasts. They are a really good learning resource for NHibernate. I disagree with using XML. When reporting stats on a lot of data, you can't beat using a relational database. Yeah, XML is lightweight, but there are a lot of choices for lightweight relational databases too, besides going with a full-blown, service-based implementation (e.g. SQL Server Compact, SQLite, etc.). * *Scott Guthrie on LINQ *Summer of nHibernate A: For this situation, the [Serializable] attribute on a nicely modelled Stats class and XmlSerializer are the way to go, IMO.
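To make the XmlSerializer suggestions in this thread concrete, here is a minimal sketch of a save/load round trip. The GameStats shape and the StatsStore name are assumptions for illustration; only XmlSerializer, FileStream, and File are standard .NET.

using System.IO;
using System.Xml.Serialization;

// Hypothetical stats POCO - the real fields would come from your Yahtzee logic.
public class GameStats
{
    public int GamesPlayed { get; set; }
    public int HighScore { get; set; }
}

public static class StatsStore
{
    private static readonly XmlSerializer Serializer = new XmlSerializer(typeof(GameStats));

    public static void Save(GameStats stats, string path)
    {
        using (FileStream fs = new FileStream(path, FileMode.Create))
        {
            Serializer.Serialize(fs, stats);   // writes the whole object graph as XML
        }
    }

    public static GameStats Load(string path)
    {
        if (!File.Exists(path))
            return new GameStats();            // first run: start with empty stats

        using (FileStream fs = new FileStream(path, FileMode.Open))
        {
            return (GameStats)Serializer.Deserialize(fs);
        }
    }
}

A typical call site would be var stats = StatsStore.Load("stats.xml"); stats.GamesPlayed++; StatsStore.Save(stats, "stats.xml");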
{ "language": "en", "url": "https://stackoverflow.com/questions/20061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: What's the best way to parse command line arguments? What's the easiest, tersest, and most flexible method or library for parsing Python command line arguments? A: Using docopt Since 2012 there is a very easy, powerful and really cool module for argument parsing called docopt. Here is an example taken from its documentation: """Naval Fate. Usage: naval_fate.py ship new <name>... naval_fate.py ship <name> move <x> <y> [--speed=<kn>] naval_fate.py ship shoot <x> <y> naval_fate.py mine (set|remove) <x> <y> [--moored | --drifting] naval_fate.py (-h | --help) naval_fate.py --version Options: -h --help Show this screen. --version Show version. --speed=<kn> Speed in knots [default: 10]. --moored Moored (anchored) mine. --drifting Drifting mine. """ from docopt import docopt if __name__ == '__main__': arguments = docopt(__doc__, version='Naval Fate 2.0') print(arguments) So this is it: 2 lines of code plus your doc string, which is essential, and you get your arguments parsed and available in your arguments object. Using python-fire Since 2017 there's another cool module called python-fire. It can generate a CLI interface for your code with you doing zero argument parsing. Here's a simple example from the documentation (this small program exposes the function double to the command line): import fire class Calculator(object): def double(self, number): return 2 * number if __name__ == '__main__': fire.Fire(Calculator) From the command line, you can run: > calculator.py double 10 20 > calculator.py double --number=15 30 A: Just in case you need it, this may help you grab Unicode arguments on Win32 (2K, XP, etc.): from ctypes import * def wmain(argc, argv): print argc for i in argv: print i return 0 def startup(): size = c_int() ptr = windll.shell32.CommandLineToArgvW(windll.kernel32.GetCommandLineW(), byref(size)) ref = c_wchar_p * size.value raw = ref.from_address(ptr) args = [arg for arg in raw] windll.kernel32.LocalFree(ptr) exit(wmain(len(args), args)) startup() A: Argparse code can be longer than actual implementation code! That's a problem I find with most popular argument parsing options: if your parameters are only modest, the code to document them becomes disproportionately large relative to the benefit they provide. A relative newcomer to the argument parsing scene (I think) is plac. It makes some acknowledged trade-offs with argparse, but uses inline documentation and simply wraps a main()-type function: def main(excel_file_path: "Path to input training file.", excel_sheet_name:"Name of the excel sheet containing training data including columns 'Label' and 'Description'.", existing_model_path: "Path to an existing model to refine."=None, batch_size_start: "The smallest size of any minibatch."=10., batch_size_stop: "The largest size of any minibatch."=250., batch_size_step: "The step for increase in minibatch size."=1.002, batch_test_steps: "Flag. If True, show minibatch steps."=False): "Train a Spacy (http://spacy.io/) text classification model with gold document and label data until the model nears convergence (LOSS < 0.5)." pass # Implementation code goes here! if __name__ == '__main__': import plac; plac.call(main) A: The new hip way is argparse, for these reasons: argparse > optparse > getopt. Update: as of Python 2.7, argparse is part of the standard library and optparse is deprecated.
Here's an example of its usage: import click @click.command() @click.option('--count', default=1, help='Number of greetings.') @click.option('--name', prompt='Your name', help='The person to greet.') def hello(count, name): """Simple program that greets NAME for a total of COUNT times.""" for x in range(count): click.echo('Hello %s!' % name) if __name__ == '__main__': hello() It also automatically generates nicely formatted help pages: $ python hello.py --help Usage: hello.py [OPTIONS] Simple program that greets NAME for a total of COUNT times. Options: --count INTEGER Number of greetings. --name TEXT The person to greet. --help Show this message and exit. A: argparse is the way to go. Here is a short summary of how to use it: 1) Initialize import argparse # Instantiate the parser parser = argparse.ArgumentParser(description='Optional app description') 2) Add Arguments # Required positional argument parser.add_argument('pos_arg', type=int, help='A required integer positional argument') # Optional positional argument parser.add_argument('opt_pos_arg', type=int, nargs='?', help='An optional integer positional argument') # Optional argument parser.add_argument('--opt_arg', type=int, help='An optional integer argument') # Switch parser.add_argument('--switch', action='store_true', help='A boolean switch') 3) Parse args = parser.parse_args() 4) Access print("Argument values:") print(args.pos_arg) print(args.opt_pos_arg) print(args.opt_arg) print(args.switch) 5) Check Values if args.pos_arg > 10: parser.error("pos_arg cannot be larger than 10") Usage Correct use: $ ./app 1 2 --opt_arg 3 --switch Argument values: 1 2 3 True Incorrect arguments: $ ./app foo 2 --opt_arg 3 --switch usage: app [-h] [--opt_arg OPT_ARG] [--switch] pos_arg [opt_pos_arg] app: error: argument pos_arg: invalid int value: 'foo' $ ./app 11 2 --opt_arg 3 Argument values: 11 2 3 False usage: app [-h] [--opt_arg OPT_ARG] [--switch] pos_arg [opt_pos_arg] app: error: pos_arg cannot be larger than 10 Full help: $ ./app -h usage: app [-h] [--opt_arg OPT_ARG] [--switch] pos_arg [opt_pos_arg] Optional app description positional arguments: pos_arg A required integer positional argument opt_pos_arg An optional integer positional argument optional arguments: -h, --help show this help message and exit --opt_arg OPT_ARG An optional integer argument --switch A boolean switch A: I prefer optparse to getopt. It's very declarative: you tell it the names of the options and the effects they should have (e.g., setting a boolean field), and it hands you back a dictionary populated according to your specifications. http://docs.python.org/lib/module-optparse.html A: I think the best way for larger projects is optparse, but if you are looking for an easy way, maybe http://werkzeug.pocoo.org/documentation/script is something for you. from werkzeug import script # actions go here def action_foo(name=""): """action foo does foo""" pass def action_bar(id=0, title="default title"): """action bar does bar""" pass if __name__ == '__main__': script.run() So basically every function action_* is exposed to the command line and a nice help message is generated for free. python foo.py usage: foo.py <action> [<options>] foo.py --help actions: bar: action bar does bar --id integer 0 --title string default title foo: action foo does foo A: This answer suggests optparse, which is appropriate for older Python versions. For Python 2.7 and above, argparse replaces optparse. See this answer for more information. 
As other people pointed out, you are better off going with optparse over getopt. getopt is pretty much a one-to-one mapping of the standard getopt(3) C library functions, and not very easy to use. optparse, while being a bit more verbose, is much better structured and simpler to extend later on. Here's a typical line to add an option to your parser: parser.add_option('-q', '--query', action="store", dest="query", help="query string", default="spam") It pretty much speaks for itself; at processing time, it will accept -q or --query as options, store the argument in an attribute called query, and use a default value if you don't specify it. It is also self-documenting in that you declare the help argument (which will be used when run with -h/--help) right there with the option. Usually you parse your arguments with: options, args = parser.parse_args() This will, by default, parse the standard arguments passed to the script (sys.argv[1:]). options.query will then be set to the value you passed to the script. You create a parser simply by doing parser = optparse.OptionParser() These are all the basics you need. Here's a complete Python script that shows this: import optparse parser = optparse.OptionParser() parser.add_option('-q', '--query', action="store", dest="query", help="query string", default="spam") options, args = parser.parse_args() print 'Query string:', options.query 5 lines of Python that show you the basics. Save it in sample.py, and run it once with python sample.py and once with python sample.py --query myquery Beyond that, you will find that optparse is very easy to extend. In one of my projects, I created a Command class which allows you to nest subcommands in a command tree easily. It uses optparse heavily to chain commands together. It's not something I can easily explain in a few lines, but feel free to browse around in my repository for the main class, as well as a class that uses it and the option parser. A: Pretty much everybody is using getopt. Here is the example code from the docs: import getopt, sys def main(): try: opts, args = getopt.getopt(sys.argv[1:], "ho:v", ["help", "output="]) except getopt.GetoptError: # print help information and exit: usage() sys.exit(2) output = None verbose = False for o, a in opts: if o == "-v": verbose = True if o in ("-h", "--help"): usage() sys.exit() if o in ("-o", "--output"): output = a In a word, here is how it works: you've got two types of options, those that receive arguments and those that act like switches. sys.argv is pretty much your char** argv in C. Like in C, you skip the first element, which is the name of your program, and parse only the arguments: sys.argv[1:] getopt.getopt will parse it according to the rules you give as arguments. "ho:v" here describes the short arguments: -ONELETTER. The : means that -o accepts one argument. Finally, ["help", "output="] describes the long arguments (--MORETHANONELETTER). The = after output once again means that output accepts one argument. The result is a list of (option, argument) pairs. If an option doesn't accept any argument (like --help here), the arg part is an empty string. You then usually want to loop over this list and test the option name as in the example. I hope this helped you. A: Use optparse, which comes with the standard library. 
For example: #!/usr/bin/env python import optparse def main(): p = optparse.OptionParser() p.add_option('--person', '-p', default="world") options, arguments = p.parse_args() print 'Hello %s' % options.person if __name__ == '__main__': main() Source: Using Python to create UNIX command line tools However, as of Python 2.7, optparse is deprecated; see: Why use argparse rather than optparse? A: Lightweight command line argument defaults Although argparse is great and is the right answer for fully documented command line switches and advanced features, you can use function argument defaults to handle straightforward positional arguments very simply. import sys def get_args(name='default', first='a', second=2): return first, int(second) first, second = get_args(*sys.argv) print first, second The 'name' argument captures the script name and is not used. Test output looks like this: > ./test.py a 2 > ./test.py A A 2 > ./test.py A 20 A 20 For simple scripts where I just want some default values, I find this quite sufficient. You might also want to include some type coercion in the return values, or all command-line values will be strings. A: consoleargs deserves to be mentioned here. It is very easy to use. Check it out: from consoleargs import command @command def main(url, name=None): """ :param url: Remote URL :param name: File name """ print """Downloading url '%r' into file '%r'""" % (url, name) if __name__ == '__main__': main() Now in console: % python demo.py --help Usage: demo.py URL [OPTIONS] URL: Remote URL Options: --name -n File name % python demo.py http://www.google.com/ Downloading url ''http://www.google.com/'' into file 'None' % python demo.py http://www.google.com/ --name=index.html Downloading url ''http://www.google.com/'' into file ''index.html'' A: Here's a method, not a library, which seems to work for me. The goals here are to be terse (each argument parsed by a single line), to line the args up for readability, to keep the code simple without depending on any special modules (only os + sys), to warn about missing or unknown arguments gracefully, to use a simple for/range() loop, and to work across Python 2.x and 3.x. Shown are two toggle flags (-d, -v), and two values controlled by arguments (-i xxx and -o xxx). import os,sys def HelpAndExit(): print("<<your help output goes here>>") sys.exit(1) def Fatal(msg): sys.stderr.write("%s: %s\n" % (os.path.basename(sys.argv[0]), msg)) sys.exit(1) def NextArg(i): '''Return the next command line argument (if there is one)''' if ((i+1) >= len(sys.argv)): Fatal("'%s' expected an argument" % sys.argv[i]) return(1, sys.argv[i+1]) ### MAIN if __name__=='__main__': verbose = 0 debug = 0 infile = "infile" outfile = "outfile" # Parse command line skip = 0 for i in range(1, len(sys.argv)): if not skip: if sys.argv[i][:2] == "-d": debug ^= 1 elif sys.argv[i][:2] == "-v": verbose ^= 1 elif sys.argv[i][:2] == "-i": (skip,infile) = NextArg(i) elif sys.argv[i][:2] == "-o": (skip,outfile) = NextArg(i) elif sys.argv[i][:2] == "-h": HelpAndExit() elif sys.argv[i][:1] == "-": Fatal("'%s' unknown argument" % sys.argv[i]) else: Fatal("'%s' unexpected" % sys.argv[i]) else: skip = 0 print("%d,%d,%s,%s" % (debug,verbose,infile,outfile)) The goal of NextArg() is to return the next argument while checking for missing data, and 'skip' skips the loop iteration when NextArg() is used, keeping the flag parsing down to one-liners. A: I extended Erco's approach to allow for required positional arguments and for optional arguments. These should precede the -d, -v, etc. arguments. 
Positional and optional arguments can be retrieved with PosArg(i) and OptArg(i, default) respectively. When an optional argument is found, the start position for searching for options (e.g. -i) is moved 1 ahead to avoid causing an 'unexpected' fatal. import os,sys def HelpAndExit(): print("<<your help output goes here>>") sys.exit(1) def Fatal(msg): sys.stderr.write("%s: %s\n" % (os.path.basename(sys.argv[0]), msg)) sys.exit(1) def NextArg(i): '''Return the next command line argument (if there is one)''' if ((i+1) >= len(sys.argv)): Fatal("'%s' expected an argument" % sys.argv[i]) return(1, sys.argv[i+1]) def PosArg(i): '''Return positional argument''' if i >= len(sys.argv): Fatal("expected a positional argument at position %d" % i) # don't index past the end of sys.argv return sys.argv[i] def OptArg(i, default): '''Return optional argument (if there is one)''' if i >= len(sys.argv): return False, default # nothing left on the command line: fall back to the default if sys.argv[i][:1] != '-': return True, sys.argv[i] else: return False, default ### MAIN if __name__=='__main__': verbose = 0 debug = 0 infile = "infile" outfile = "outfile" options_start = 3 # --- Parse two positional parameters --- n1 = int(PosArg(1)) n2 = int(PosArg(2)) # --- Parse an optional parameter --- present, a3 = OptArg(3,50) n3 = int(a3) options_start += int(present) # --- Parse rest of command line --- skip = 0 for i in range(options_start, len(sys.argv)): if not skip: if sys.argv[i][:2] == "-d": debug ^= 1 elif sys.argv[i][:2] == "-v": verbose ^= 1 elif sys.argv[i][:2] == "-i": (skip,infile) = NextArg(i) elif sys.argv[i][:2] == "-o": (skip,outfile) = NextArg(i) elif sys.argv[i][:2] == "-h": HelpAndExit() elif sys.argv[i][:1] == "-": Fatal("'%s' unknown argument" % sys.argv[i]) else: Fatal("'%s' unexpected" % sys.argv[i]) else: skip = 0 print("Number 1 = %d" % n1) print("Number 2 = %d" % n2) print("Number 3 = %d" % n3) print("Debug = %d" % debug) print("verbose = %d" % verbose) print("infile = %s" % infile) print("outfile = %s" % outfile)
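One argparse feature the answers above don't show is sub-commands (git-style interfaces), which argparse handles with add_subparsers. A minimal sketch using only the standard library; the command names are made up for illustration:

import argparse

parser = argparse.ArgumentParser(prog='app')
subparsers = parser.add_subparsers(dest='command')

# 'add' sub-command with its own positional arguments
add_parser = subparsers.add_parser('add', help='add two integers')
add_parser.add_argument('x', type=int)
add_parser.add_argument('y', type=int)

# 'greet' sub-command with an option
greet_parser = subparsers.add_parser('greet', help='print a greeting')
greet_parser.add_argument('--name', default='world')

args = parser.parse_args()
if args.command == 'add':
    print(args.x + args.y)        # e.g. "app add 2 3" prints 5
elif args.command == 'greet':
    print('Hello %s!' % args.name)

Each sub-parser gets its own help page for free (app add -h), just like the top-level parser.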
{ "language": "en", "url": "https://stackoverflow.com/questions/20063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "340" }
Q: Best way to encapsulate complex Oracle PL/SQL cursor logic as a view? I've written PL/SQL code to denormalize a table into a much-easier-to-query form. The code uses a temporary table to do some of its work, merging some rows from the original table together. The logic is written as a pipelined table function, following the pattern from the linked article. The table function uses a PRAGMA AUTONOMOUS_TRANSACTION declaration to permit the temporary table manipulation, and also accepts a cursor input parameter to restrict the denormalization to certain ID values. I then created a view to query the table function, passing in all possible ID values as a cursor (other uses of the function will be more restrictive). My question: is this all really necessary? Have I completely missed a much simpler way of accomplishing the same thing? Every time I touch PL/SQL I get the impression that I'm typing way too much. Update: I'll add a sketch of the table I'm dealing with to give everyone an idea of the denormalization that I'm talking about. The table stores a history of employee jobs, each with an activation row, and (possibly) a termination row. It's possible for an employee to have multiple simultaneous jobs, as well as the same job over and over again in non-contiguous date ranges. For example:
| EMP_ID | JOB_ID | STATUS | EFF_DATE    | other columns...
|      1 |     10 | A      | 10-JAN-2008 |
|      2 |     11 | A      | 13-JAN-2008 |
|      1 |     12 | A      | 20-JAN-2008 |
|      2 |     11 | T      | 01-FEB-2008 |
|      1 |     10 | T      | 02-FEB-2008 |
|      2 |     11 | A      | 20-FEB-2008 |
Querying that to figure out who is working when in what job is non-trivial. So, my denormalization function populates the temporary table with just the date ranges for each job, for any EMP_IDs passed in through the cursor. Passing in EMP_IDs 1 and 2 would produce the following:
| EMP_ID | JOB_ID | START_DATE  | END_DATE    |
|      1 |     10 | 10-JAN-2008 | 02-FEB-2008 |
|      2 |     11 | 13-JAN-2008 | 01-FEB-2008 |
|      1 |     12 | 20-JAN-2008 |             |
|      2 |     11 | 20-FEB-2008 |             |
(END_DATE allows NULLs for jobs that don't have a predetermined termination date.) As you can imagine, this denormalized form is much, much easier to query, but creating it--so far as I can tell--requires a temporary table to store the intermediate results (e.g., job records for which the activation row has been found, but not the termination...yet). Using the pipelined table function to populate the temporary table and then return its rows is the only way I've figured out how to do it. A: I think a way to approach this is to use analytic functions... I set up your test case using: create table employee_job ( emp_id integer, job_id integer, status varchar2(1 char), eff_date date ); insert into employee_job values (1,10,'A',to_date('10-JAN-2008','DD-MON-YYYY')); insert into employee_job values (2,11,'A',to_date('13-JAN-2008','DD-MON-YYYY')); insert into employee_job values (1,12,'A',to_date('20-JAN-2008','DD-MON-YYYY')); insert into employee_job values (2,11,'T',to_date('01-FEB-2008','DD-MON-YYYY')); insert into employee_job values (1,10,'T',to_date('02-FEB-2008','DD-MON-YYYY')); insert into employee_job values (2,11,'A',to_date('20-FEB-2008','DD-MON-YYYY')); commit; I've used the lead function to get the next date and then wrapped it all as a sub-query just to get the "A" records and add the end date if there is one. 
select emp_id, job_id, eff_date start_date, decode(next_status,'T',next_eff_date,null) end_date from ( select emp_id, job_id, eff_date, status, lead(eff_date,1,null) over (partition by emp_id, job_id order by eff_date, status) next_eff_date, lead(status,1,null) over (partition by emp_id, job_id order by eff_date, status) next_status from employee_job ) where status = 'A' order by start_date, emp_id, job_id I'm sure there are some use cases I've missed but you get the idea. Analytic functions are your friend :)
EMP_ID  JOB_ID  START_DATE   END_DATE
     1      10  10-JAN-2008  02-FEB-2008
     2      11  13-JAN-2008  01-FEB-2008
     2      11  20-FEB-2008
     1      12  20-JAN-2008
A: Rather than having the input parameter as a cursor, I would have a table variable (I don't know if Oracle has such a thing; I'm a TSQL guy) or populate another temp table with the ID values and join on it in the view/function or wherever you need to. The only time for cursors, in my honest opinion, is when you have to loop. And when you have to loop I always recommend doing that outside of the database in the application logic. A: It sounds like you are giving away some read consistency here, i.e. it will be possible for the contents of your temporary table to be out of sync with the source data if you have concurrent data modification. Without knowing the requirements or the complexity of what you want to achieve, I would attempt: * *To define a view, containing (possibly complex) logic in SQL; else I'd add some PL/SQL to the mix with *A pipelined table function, but using an SQL collection type (instead of the temporary table). A simple example is here: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4447489221109 Number 2 would give you fewer moving parts and solve your consistency issue. Mathew Butler A: The real problem here is the "write-only" table design - by which I mean, it's easy to insert data into it, but tricky and inefficient to get useful information out of it! Your "temporary" table has the structure the "permanent" table should have had in the first place. Could you perhaps do this: * *Create a permanent table with the better structure *Populate it to match the data in the first table *Define a database trigger on the original table to keep the new table in sync from now on Then you can just select from the new table to perform your reporting. A: I couldn't agree with you more, HollyStyles. I also used to be a TSQL guy, and find some of Oracle's idiosyncrasies more than a little perplexing. Unfortunately, temp tables aren't as convenient in Oracle, and in this case, other existing SQL logic is expecting to directly query a table, so I give it this view instead. There's really no application logic that exists outside of the database in this system. Oracle developers do seem to use cursors much more eagerly than I would have thought. Given the bondage & discipline nature of PL/SQL, that's all the more surprising.
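To flesh out the pipelined-function-with-a-collection-type suggestion above, here is a minimal sketch that pairs each 'A' row with its following 'T' row without any temporary table (and therefore without PRAGMA AUTONOMOUS_TRANSACTION). The type and function names are made up, and the cursor is assumed to be ordered by EFF_DATE:

CREATE TYPE job_span_t AS OBJECT (
  emp_id     INTEGER,
  job_id     INTEGER,
  start_date DATE,
  end_date   DATE
);
/
CREATE TYPE job_span_tab AS TABLE OF job_span_t;
/
CREATE OR REPLACE FUNCTION denorm_jobs(p_rows SYS_REFCURSOR)
  RETURN job_span_tab PIPELINED
IS
  TYPE date_map IS TABLE OF DATE INDEX BY VARCHAR2(80);
  l_open   date_map;   -- start date of each still-open (emp, job) span
  l_key    VARCHAR2(80);
  l_emp    INTEGER;
  l_job    INTEGER;
  l_status VARCHAR2(1);
  l_date   DATE;
BEGIN
  LOOP
    FETCH p_rows INTO l_emp, l_job, l_status, l_date;
    EXIT WHEN p_rows%NOTFOUND;
    l_key := l_emp || ':' || l_job;
    IF l_status = 'A' THEN
      l_open(l_key) := l_date;                          -- span opens
    ELSIF l_open.EXISTS(l_key) THEN
      PIPE ROW (job_span_t(l_emp, l_job, l_open(l_key), l_date));
      l_open.DELETE(l_key);                             -- span closed
    END IF;
  END LOOP;
  l_key := l_open.FIRST;                                -- emit still-open jobs with NULL end dates
  WHILE l_key IS NOT NULL LOOP
    PIPE ROW (job_span_t(TO_NUMBER(SUBSTR(l_key, 1, INSTR(l_key, ':') - 1)),
                         TO_NUMBER(SUBSTR(l_key, INSTR(l_key, ':') + 1)),
                         l_open(l_key), NULL));
    l_key := l_open.NEXT(l_key);
  END LOOP;
  RETURN;
END;
/

It would then be queried as SELECT * FROM TABLE(denorm_jobs(CURSOR(SELECT emp_id, job_id, status, eff_date FROM employee_job ORDER BY eff_date))); and can be wrapped in a view exactly as before.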
Want to know more - ask me how :-) But it's got to be a separate question.
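For the comma-separated-string approach mentioned just above, one common single-SELECT idiom (an assumption about what the author had in mind; REGEXP_SUBSTR requires Oracle 10g or later):

-- turns '101,205,307' into three numeric rows
SELECT TO_NUMBER(REGEXP_SUBSTR('101,205,307', '[^,]+', 1, LEVEL)) AS id
FROM   dual
CONNECT BY REGEXP_SUBSTR('101,205,307', '[^,]+', 1, LEVEL) IS NOT NULL;

The result can then be joined to, or used in the EXISTS subquery shown above, in place of the temporary table.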
{ "language": "en", "url": "https://stackoverflow.com/questions/20081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: XML Serialization and Inherited Types Following on from my previous question I have been working on getting my object model to serialize to XML. But I have now run into a problem (quelle surprise!). The problem I have is that I have a collection, which is of an abstract base class type, which is populated by the concrete derived types. I thought it would be fine to just add the XML attributes to all of the classes involved and everything would be peachy. Sadly, that's not the case! So I have done some digging on Google and I now understand why it's not working: the XmlSerializer is in fact doing some clever reflection in order to serialize objects to/from XML, and since it's based on the abstract type, it cannot figure out what the hell it's talking to. Fine. I did come across this page on CodeProject, which looks like it may well help a lot (yet to read/consume fully), but I thought I would like to bring this problem to the StackOverflow table too, to see if you have any neat hacks/tricks in order to get this up and running in the quickest/lightest way possible. One thing I should also add is that I DO NOT want to go down the XmlInclude route. There is simply too much coupling with it, and this area of the system is under heavy development, so it would be a real maintenance headache! A: One thing to look at is the fact that in the XmlSerialiser constructor you can pass an array of types that the serialiser might be having difficulty resolving. I've had to use that quite a few times where a collection or complex set of datastructures needed to be serialised and those types lived in different assemblies etc. XmlSerialiser Constructor with extraTypes param EDIT: I would add that this approach has the benefit over XmlInclude attributes etc that you can work out a way of discovering and compiling a list of your possible concrete types at runtime and stuff them in. A: Problem Solved! OK, so I finally got there (admittedly with a lot of help from here!). To summarise: Goals: * *I didn't want to go down the XmlInclude route due to the maintenance headache. *Once a solution was found, I wanted it to be quick to implement in other applications. *Collections of abstract types may be used, as well as individual abstract properties. *I didn't really want to bother with having to do "special" things in the concrete classes. Identified Issues/Points to Note: * *XmlSerializer does some pretty cool reflection, but it is very limited when it comes to abstract types (i.e. it will only work with instances of the abstract type itself, not subclasses). *The XML attribute decorators define how the XmlSerializer treats the properties it finds. The physical type can also be specified, but this creates a tight coupling between the class and the serializer (not good). *We can implement our own XmlSerializer by creating a class that implements IXmlSerializable. The Solution I created a generic class, in which you specify the generic type as the abstract type you will be working with. This gives the class the ability to "translate" between the abstract type and the concrete type since we can hard-code the casting (i.e. we can get more info than the XmlSerializer can). I then implemented the IXmlSerializable interface; this is pretty straightforward, but when serializing we need to ensure we write the type of the concrete class to the XML, so we can cast it back when de-serializing. 
It is also important to note it must be fully qualified as the assemblies that the two classes are in are likely to differ. There is of course a little type checking and stuff that needs to happen here. Since the XmlSerializer cannot cast, we need to provide the code to do that, so the implicit operator is then overloaded (I never even knew you could do this!). The code for the AbstractXmlSerializer is this: using System; using System.Collections.Generic; using System.Text; using System.Xml.Serialization; namespace Utility.Xml { public class AbstractXmlSerializer<AbstractType> : IXmlSerializable { // Override the Implicit Conversions Since the XmlSerializer // Casts to/from the required types implicitly. public static implicit operator AbstractType(AbstractXmlSerializer<AbstractType> o) { return o.Data; } public static implicit operator AbstractXmlSerializer<AbstractType>(AbstractType o) { return o == null ? null : new AbstractXmlSerializer<AbstractType>(o); } private AbstractType _data; /// <summary> /// [Concrete] Data to be stored/is stored as XML. /// </summary> public AbstractType Data { get { return _data; } set { _data = value; } } /// <summary> /// **DO NOT USE** This is only added to enable XML Serialization. /// </summary> /// <remarks>DO NOT USE THIS CONSTRUCTOR</remarks> public AbstractXmlSerializer() { // Default Ctor (Required for Xml Serialization - DO NOT USE) } /// <summary> /// Initialises the Serializer to work with the given data. /// </summary> /// <param name="data">Concrete Object of the AbstractType Specified.</param> public AbstractXmlSerializer(AbstractType data) { _data = data; } #region IXmlSerializable Members public System.Xml.Schema.XmlSchema GetSchema() { return null; // this is fine as schema is unknown. } public void ReadXml(System.Xml.XmlReader reader) { // Cast the Data back from the Abstract Type. string typeAttrib = reader.GetAttribute("type"); // Ensure the Type was Specified if (typeAttrib == null) throw new ArgumentNullException("Unable to Read Xml Data for Abstract Type '" + typeof(AbstractType).Name + "' because no 'type' attribute was specified in the XML."); Type type = Type.GetType(typeAttrib); // Check the Type is Found. if (type == null) throw new InvalidCastException("Unable to Read Xml Data for Abstract Type '" + typeof(AbstractType).Name + "' because the type specified in the XML was not found."); // Check the Type is a Subclass of the AbstractType. if (!type.IsSubclassOf(typeof(AbstractType))) throw new InvalidCastException("Unable to Read Xml Data for Abstract Type '" + typeof(AbstractType).Name + "' because the Type specified in the XML differs ('" + type.Name + "')."); // Read the Data, Deserializing based on the (now known) concrete type. reader.ReadStartElement(); this.Data = (AbstractType)new XmlSerializer(type).Deserialize(reader); reader.ReadEndElement(); } public void WriteXml(System.Xml.XmlWriter writer) { // Write the Type Name to the XML Element as an Attrib and Serialize Type type = _data.GetType(); // BugFix: Assembly must be FQN since Types can/are external to current. writer.WriteAttributeString("type", type.AssemblyQualifiedName); new XmlSerializer(type).Serialize(writer, _data); } #endregion } } So, from there, how do we tell the XmlSerializer to work with our serializer rather than the default? 
We must pass our type within the XML attribute's Type property, for example: [XmlRoot("ClassWithAbstractCollection")] public class ClassWithAbstractCollection { private List<AbstractType> _list; [XmlArray("ListItems")] [XmlArrayItem("ListItem", Type = typeof(AbstractXmlSerializer<AbstractType>))] public List<AbstractType> List { get { return _list; } set { _list = value; } } private AbstractType _prop; [XmlElement("MyProperty", Type=typeof(AbstractXmlSerializer<AbstractType>))] public AbstractType MyProperty { get { return _prop; } set { _prop = value; } } public ClassWithAbstractCollection() { _list = new List<AbstractType>(); } } Here you can see we have a collection and a single property being exposed, and all we need to do is add the Type named parameter to the XML declaration, easy! :D NOTE: If you use this code, I would really appreciate a shout-out. It will also help drive more people to the community :) Now, I'm unsure as to what to do with the answers here since they all had their pros and cons. I'll upmod those that I feel were useful (no offence to those that weren't) and close this off once I have the rep :) Interesting problem and good fun to solve! :) A: Seriously, an extensible framework of POCOs will never serialize to XML reliably. I say this because I can guarantee someone will come along, extend your class, and botch it up. You should look into using XAML for serializing your object graphs. It is designed to do this, whereas XML serialization isn't. The XAML serializer and deserializer handle generics without a problem, as well as collections of base classes and interfaces (as long as the collections themselves implement IList or IDictionary). There are some caveats, such as marking your read-only collection properties with the DesignerSerializationAttribute, but reworking your code to handle these corner cases isn't that hard. A: Just a quick update on this, I have not forgotten! Just doing some more research; looks like I am on to a winner, just need to get the code sorted. So far, I have the following: * *The XmlSerializer is basically a class that does some nifty reflection on the classes it is serializing. It determines the properties that are serialized based on the Type. *The reason the problem occurs is because a type mismatch is occurring: it is expecting the BaseType but in fact receives the DerivedType. While you may think that it would treat it polymorphically, it doesn't, since it would involve a whole extra load of reflection and type-checking, which it is not designed to do. This behaviour appears to be able to be overridden (code pending) by creating a proxy class to act as the go-between for the serializer. This will basically determine the type of the derived class and then serialize that as normal. This proxy class then will feed that XML back up the line to the main serializer. Watch this space! ^_^ A: It's certainly a solution to your problem, but there is another problem, which somewhat undermines your intention to use a "portable" XML format. Bad things happen when you decide to change classes in the next version of your program and you need to support both formats of serialization -- the new one and the old one (because your clients still use their old files/databases, or they connect to your server using an old version of your product). 
But you can't use this serializer anymore, because you used type.AssemblyQualifiedName, which looks like TopNamespace.SubNameSpace.ContainingClass+NestedClass, MyAssembly, Version=1.3.0.0, Culture=neutral, PublicKeyToken=b17a5c561934e089 - that is, it contains your assembly attributes and version... Now if you try to change your assembly version, or you decide to sign it, this deserialization is not going to work... A: I've done things similar to this. What I normally do is make sure all the XML serialization attributes are on the concrete class, and just have the properties on that class call through to the base classes (where required) to retrieve information that will be de/serialized when the serializer calls on those properties. It's a bit more coding work, but it does work much better than attempting to force the serializer to just do the right thing. A: Even better, using attribute notation: [XmlRoot] public class MyClass { public abstract class MyAbstract {} public class MyInherited : MyAbstract {} [XmlArray(), XmlArrayItem(typeof(MyInherited))] public MyAbstract[] Items {get; set; } }
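One way to address that versioning concern is to write a stable identifier to the XML instead of type.AssemblyQualifiedName, and resolve it through an explicit map when reading. A minimal sketch of the idea - the registry contents and ConcreteListItem are hypothetical stand-ins for your real derived types:

using System;
using System.Collections.Generic;

public class ConcreteListItem { }   // placeholder for one of your real derived types

// Maps stable names written to the XML onto whatever the types are called today.
public static class TypeRegistry
{
    private static readonly Dictionary<string, Type> Map =
        new Dictionary<string, Type>
        {
            // "ListItem.v1" stays valid even if the class moves assemblies or the version changes
            { "ListItem.v1", typeof(ConcreteListItem) }
        };

    public static string NameFor(Type type)
    {
        foreach (KeyValuePair<string, Type> pair in Map)
            if (pair.Value == type) return pair.Key;
        throw new ArgumentException("No stable name registered for " + type.FullName);
    }

    public static Type Resolve(string name)
    {
        Type type;
        if (!Map.TryGetValue(name, out type))
            throw new ArgumentException("Unknown stable type name: " + name);
        return type;
    }
}

WriteXml would then call TypeRegistry.NameFor(type) where the earlier code wrote type.AssemblyQualifiedName, and ReadXml would call TypeRegistry.Resolve(typeAttrib) instead of Type.GetType(typeAttrib); old files keep deserializing no matter how the assemblies are renamed, re-versioned or signed.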
{ "language": "en", "url": "https://stackoverflow.com/questions/20084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: Is there a way to make Firefox ignore invalid ssl-certificates? I am maintaining a few web applications. The development and qa environments use invalid/outdated ssl-certificates. Although it is generally a good thing that Firefox makes me click like a dozen times to accept the certificate, this is pretty annoying. Is there a configuration parameter to make Firefox (and possibly IE too) accept any ssl-certificate? EDIT: I have accepted the solution that worked. But thanks to all the people that have advised using self-signed certificates. I am totally aware that the accepted solution leaves me with a gaping security hole. Nonetheless I am too lazy to change the certificate for all the applications and all the environments... But I also strongly advise anybody to leave validation enabled! A: Try Add Exception: FireFox -> Tools -> Advanced -> View Certificates -> Servers -> Add Exception. A: I ran into this issue when trying to get to one of my company's intranet sites. Here is the solution I used: * *enter about:config into the Firefox address bar and agree to continue. *search for the preference named security.ssl.enable_ocsp_stapling. *double-click this item to change its value to false. This will lower your security, as you will be able to view sites with invalid certs. Firefox will still prompt you that the cert is invalid and you have the choice to proceed forward, so it was worth the risk for me. A: Go to Tools > Options > Advanced "Tab"(?) > Encryption Tab. Click the "Validation" button, and uncheck the checkbox for checking validity. Be advised though that this is pretty insecure, as it leaves you wide open to accept any invalid certificate. I'd only do this if using the browser on an intranet where the validity of the cert isn't a concern to you, or you aren't concerned in general. A: Instead of using invalid/outdated SSL certificates, why not use self-signed SSL certificates? Then you can add an exception in Firefox for just that site. A: Using a free certificate is a better idea if your developers use Firefox 3. Firefox 3 complains loudly about self-signed certificates, and it is a major annoyance. A: If you have a valid but untrusted ssl-certificate, you can import it in Extras/Properties/Advanced/Encryption --> View Certificates. After importing it under "Servers", you have to "Edit trust" to "Trust the authenticity of this certificate", and that's it. I always have trouble with recording secure websites with HP VuGen and Performance Center. A: In the current Firefox browser (v. 99.0.1) I was getting this error when looking at the Web Developer Tools \ Network tab: MOZILLA_PKIX_ERROR_SELF_SIGNED_CERT I was trying to debug an Angular app which is served at https://localhost:4200... however the real port it's pointing to and being debugged from in Visual Studio 2022 is 44322. I had to follow these steps to fix the issue: * *Open Firefox Settings; *Look for the Privacy & Security tab on the left; *Scroll down to the bottom and look for Certificates; *View Certificates; *In this window you must click Add Exception and enter the location. In my case it was: https://localhost:44322 *Click the Get Certificate button; *Click the Confirm Security Exception button. After that, try reloading your page. A: For a secure alternative, try the Perspectives Firefox add-on. If this link doesn't work, try this one: https://addons.mozilla.org/en-US/firefox/addon/perspectives/ A: Create some nice new 10-year certificates and install them. The procedure is fairly easy. 
Start at (1B) Generate your own CA (Certificate Authority) on this web page: Creating Certificate Authorities and self-signed SSL certificates, and generate your CA Certificate and Key. Once you have these, generate your Server Certificate and Key. Create a Certificate Signing Request (CSR) and then sign the Server Key with the CA Certificate. Now install your Server Certificate and Key on the web server as usual, and import the CA Certificate into Internet Explorer's Trusted Root Certification Authority Store (used by the Flex uploader and Chrome as well) and into Firefox's Certificate Manager Authorities Store on each workstation that needs to access the server using the self-signed, CA-signed server key/certificate pair. You now should not see any warning about using self-signed certificates, as the browsers will find the CA certificate in the trust store and verify the server key has been signed by this trusted certificate. Also, in e-commerce applications like Magento, the Flex image uploader will now function in Firefox without the dreaded "Self-signed certificate" error message. A: The MitM Me addon will do this - but I think self-signed certificates are probably a better solution.
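For reference, the CA-and-server-certificate procedure described above can be scripted with OpenSSL along these lines. This is a sketch: the file names and the roughly-10-year lifetime are illustrative, and you would fill in your own subject details at the prompts:

# 1. Create the CA key and a self-signed CA certificate
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -days 3650 -out ca.crt

# 2. Create the server key and a certificate signing request (CSR)
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr

# 3. Sign the server CSR with the CA certificate and key
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -out server.crt

ca.crt is what you import into the browsers' trust stores; server.key and server.crt go on the web server.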
{ "language": "en", "url": "https://stackoverflow.com/questions/20088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }
Q: YUI Reset CSS Makes this not work This line in YUI's Reset CSS is causing trouble for me: address,caption,cite,code,dfn,em,strong,th,var { font-style: normal; font-weight: normal; } It makes my em not italic and my strong not bold. Which is okay. I know how to override that in my own stylesheet. strong, b { font-weight: bold; } em, i { font-style: italic; } The problem comes in when I have text that's both em and strong. <strong>This is bold, <em>and this is italic, but not bold</em></strong> My rule for strong makes it bold, but YUI's rule for em makes it normal again. How do I fix that? A: I would use this rule to override the YUI reset: strong, b, strong *, b * { font-weight: bold; } em, i, em *, i * { font-style: italic; } A: If in addition to using YUI reset.css, you also use YUI base.css, then you will be all set with a standard set of cross-browser base styles. LINK: http://developer.yahoo.com/yui/base/ A: I had a similar problem when I added the YUI Reset to the top of my stock CSS file. I found that the best thing for me was to simply remove all of the font-weight: normal; declarations from the YUI Reset. I haven't noticed that this has affected anything "cross-browser." All my declarations were after the YUI Reset, so I'm not sure why they weren't taking effect. A: As long as your styles are loaded after the reset ones they should work. What browser is this? Because I work in a similar way myself and I've not hit this problem, I wonder if something in my testing is at fault. A: Reset stylesheets are best used as a base. If you don't want to reset em or strong, remove them from the stylesheet. A: As Chris said, you don't have to use the exact CSS they provide religiously. I would just save a copy to your server, and edit it to your needs. A: If your strong declaration comes after YUI's, yours should override it. You can force it like this: strong, b, strong *, b * { font-weight: bold; } em, i, em *, i * { font-style: italic; } If you still support IE7 you'll need to add !important. strong, b, strong *, b * { font-weight: bold !important; } em, i, em *, i * { font-style: italic !important; } This works - see for yourself: /*YUI styles*/ address,caption,cite,code,dfn,em,strong,th,var { font-style: normal; font-weight: normal; } /*End YUI styles*/ strong, b, strong *, b * { font-weight: bold; } em, i, em *, i * { font-style: italic; } <strong>Bold</strong> - <em>Italic</em> - <strong>Bold and <em>Italic</em></strong> A: I would suggest avoiding anything which involves hacking the YUI files. You need to be able to update external libraries in the future, and if your site relies on edited versions there is a good chance it will get cocked up. I think this is general good practice for any 3rd party library you use. So I thought this answer was amongst the better ones: "If in addition to using YUI reset.css, you also use YUI base.css, then you will be all set with a standard set of cross-browser base styles." A: I thought I had an ideal solution: strong, b { font-weight: bold; font-style: inherit; } em, i { font-style: italic; font-weight: inherit; } Unfortunately, Internet Explorer doesn't support "inherit." :-( A: I see what you are saying. I guess you can add a CSS rule like this: strong em { font-weight: bold; } or: strong * { font-weight: bold; }
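Another way to handle the nested case directly, without the wildcard descendant selectors suggested above, is to spell out the combinations; the two-element selectors are more specific than YUI's single-element reset rules, so no !important is needed. A sketch of the idea:

/* restore the base behaviour */
strong, b { font-weight: bold; }
em, i { font-style: italic; }

/* nested combinations: keep both properties when one element sits inside the other */
strong em, b em, strong i, b i,
em strong, i strong, em b, i b {
  font-weight: bold;
  font-style: italic;
}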
{ "language": "en", "url": "https://stackoverflow.com/questions/20107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Virtual Machine Optimization I am messing around with a toy interpreter in Java and I was considering trying to write a simple compiler that can generate bytecode for the Java Virtual Machine. Which got me thinking: how much optimization needs to be done by compilers that target virtual machines such as the JVM and CLI? Do Just In Time (JIT) compilers do constant folding, peephole optimizations, etc.? A: I'm just gonna add two links which explain Java's bytecode pretty well and some of the various optimizations the JVM performs at runtime. A: Optimisation is what makes JVMs viable as environments for long running applications; you can bet that Sun, IBM and friends are doing their best to ensure they can optimise your bytecode and JIT-compiled code in as efficient a manner as possible. With that being said, if you think you can pre-optimise your bytecode then it probably won't do much harm. It is worth being aware, however, that JVMs can tend towards performing better (and not crashing) when presented with just the sort of bytecode the Java compiler tends to construct. It is not unknown for optimisations to be missed or even for the JVM to crash when permutations of bytecode occur that are correct but unlike what would be produced by javac. Hopefully that sort of thing is more in the past now, but may be something to be aware of. A: "Optimising bytecode is probably an oxymoron in most cases" - I don't think that's true. Optimizations like hoisting loop invariants and propagating constants can never hurt, even if the JVM is smart enough to do them on its own, by simple virtue of making the code do less work. A: Obfuscators such as ProGuard will perform many static optimisations on your bytecode for you. A: The HotSpot compiler will optimize your code at runtime better than is possible at compile-time - it has more information to work with, after all. The only time you should be optimizing the bytecode instead of just your algorithm is when you are targeting mobile devices, such as the Blackberry, where the JVM for that platform is not powerful enough to optimize code at runtime and just executes the bytecode. A: Optimising bytecode is probably an oxymoron in most cases. Unless you control the VM, you have no idea what it does to speed up code execution, if anything. The compiler would need to know the details of the VM in order to generate optimised code. A: Note to Aseraphim: It can also be useful to optimise bytecode for non-embedded applications in some limited cases: * *When delivering code over the wire, e.g. for WebStart apps, to minimise deliverable/cache size and because you don't necessarily know the capability/speed of the client. *For code that you know is performance critical and used at start-up before (say) HotSpot has had time to gather any stats. Again, the transformations that a good optimiser/obfuscator performs can be very helpful.
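On the constant-folding point specifically: for compile-time constant expressions, javac itself already folds them before any JIT is involved, which is easy to verify with javap. A small sketch (the folded value and instruction are stated as an assumption you can check yourself):

public class Fold {
    public static void main(String[] args) {
        // A constant expression: javac folds this at compile time, so the
        // bytecode simply loads 360 (e.g. via sipush) - no imul/idiv appears.
        int value = 60 * 60 / 10;
        System.out.println(value);
    }
}

Compile with javac Fold.java and inspect the bytecode with javap -c Fold to see that no arithmetic instructions were emitted for the expression.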
{ "language": "en", "url": "https://stackoverflow.com/questions/20127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to create a temporary file (for writing to) in C#? I'm looking for something like the tempfile module in Python: a (preferably) secure way to open a file for writing to. This should be easy to delete when I'm done too... It seems .NET does not have the "batteries included" features of the tempfile module, which not only creates the file, but returns the file descriptor (old school, I know...) to it along with the path. At the same time, it makes sure only the creating user can access the file and whatnot (mkstemp() I think): https://docs.python.org/library/tempfile.html Ah, yes, I can see that. But GetTempFileName does have a drawback: there is a race condition between when the file was created (upon call to GetTempFileName a 0-byte file gets created) and when I get to open it (after return of GetTempFileName). This might be a security issue, although not for my current application... A: Path.GetTempFileName and Path.GetTempPath. Then you can use this link to read/write encrypted data to the file. Note, .NET isn't the best platform for critical security apps. You have to be well versed in how the CLR works in order to avoid some of the pitfalls that might expose your critical data to hackers. Edit: About the race condition... You could use GetTempPath, then create a temporary filename by using Path.Combine(Path.GetTempPath(), Path.ChangeExtension(Guid.NewGuid().ToString(), ".TMP")) A: I've also had the same requirement before, and I've created a small class to solve it: public sealed class TemporaryFile : IDisposable { public TemporaryFile() : this(Path.GetTempPath()) { } public TemporaryFile(string directory) { Create(Path.Combine(directory, Path.GetRandomFileName())); } ~TemporaryFile() { Delete(); } public void Dispose() { Delete(); GC.SuppressFinalize(this); } public string FilePath { get; private set; } private void Create(string path) { FilePath = path; using (File.Create(FilePath)) { } } private void Delete() { if (FilePath == null) return; File.Delete(FilePath); FilePath = null; } } It creates a temporary file in a folder you specify or in the system temporary folder. It's a disposable class, so at the end of its life (either Dispose or the destructor), it deletes the file. You get the name of the file created (and its path) through the FilePath property. You can certainly extend it to also open the file for writing and return its associated FileStream. An example usage: using (var tempFile = new TemporaryFile()) { // use the file through tempFile.FilePath... } A: I don't know of any built-in (within the framework) classes to do this, but I imagine it wouldn't be too much of an issue to roll your own. Obviously it depends on the type of data you want to write to it, and the "security" required. This article on DevFusion may be a good place to start?
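On the race condition raised in the question: one way to avoid the create-then-reopen gap entirely is to create and open the file in a single call, combining FileMode.CreateNew (which throws if the file already exists, so a pre-planted file can't be handed to you) with FileOptions.DeleteOnClose for automatic cleanup. A minimal sketch:

using System.IO;

public static class TempFiles
{
    public static FileStream CreateSecureTempStream()
    {
        string path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

        // CreateNew: fails rather than reusing an existing file.
        // FileShare.None: no other handle can be opened while ours is live.
        // DeleteOnClose: the OS removes the file when the stream is closed.
        return new FileStream(path, FileMode.CreateNew, FileAccess.ReadWrite,
                              FileShare.None, 4096, FileOptions.DeleteOnClose);
    }
}

Used as using (var fs = TempFiles.CreateSecureTempStream()) { /* write... */ }, the file disappears when the handle closes, even if the process exits abnormally.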
{ "language": "en", "url": "https://stackoverflow.com/questions/20146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: MyISAM versus InnoDB I'm working on a project which involves a lot of database writes, I'd say (70% inserts and 30% reads). This ratio would also include updates which I consider to be one read and one write. The reads can be dirty (e.g. I don't need 100% accurate information at the time of read). The task in question will be doing over 1 million database transactions an hour. I've read a bunch of stuff on the web about the differences between MyISAM and InnoDB, and MyISAM seems like the obvious choice to me for the particular database/tables that I'll be using for this task. From what I seem to be reading, InnoDB is good if transactions are needed since row level locking is supported. Does anybody have any experience with this type of load (or higher)? Is MyISAM the way to go? A: A bit late to the game...but here's a quite comprehensive post I wrote a few months back, detailing the major differences between MyISAM and InnoDB. Grab a cuppa (and maybe a biscuit), and enjoy. The major difference between MyISAM and InnoDB is in referential integrity and transactions. There are also other differences such as locking, rollbacks, and full-text searches. Referential Integrity Referential integrity ensures that relationships between tables remain consistent. More specifically, this means when a table (e.g. Listings) has a foreign key (e.g. Product ID) pointing to a different table (e.g. Products), when updates or deletes occur to the pointed-to table, these changes are cascaded to the linking table. In our example, if a product is renamed, the linking table's foreign keys will also update; if a product is deleted from the 'Products' table, any listings which point to the deleted entry will also be deleted. Furthermore, any new listing must have that foreign key pointing to a valid, existing entry. InnoDB is a relational DBMS (RDBMS) and thus has referential integrity, while MyISAM does not. Transactions & Atomicity Data in a table is managed using Data Manipulation Language (DML) statements, such as SELECT, INSERT, UPDATE and DELETE. A transaction groups two or more DML statements together into a single unit of work, so either the entire unit is applied, or none of it is. MyISAM does not support transactions whereas InnoDB does. If an operation is interrupted while using a MyISAM table, the operation is aborted immediately, and the rows (or even data within each row) that are affected remain affected, even if the operation did not go to completion. If an operation is interrupted while using an InnoDB table, because it uses transactions, which have atomicity, any transaction which did not go to completion will not take effect, since no commit is made. Table-locking vs Row-locking When a query runs against a MyISAM table, the entire table it is querying will be locked. This means subsequent queries will only be executed after the current one is finished. If you are reading a large table, and/or there are frequent read and write operations, this can mean a huge backlog of queries. When a query runs against an InnoDB table, only the row(s) which are involved are locked; the rest of the table remains available for CRUD operations. This means queries can run simultaneously on the same table, provided they do not use the same row. This feature in InnoDB is known as concurrency. 
As great as concurrency is, there is a major drawback that applies to a select range of tables, in that there is an overhead in switching between kernel threads, and you should set a limit on the kernel threads to prevent the server coming to a halt. Transactions & Rollbacks When you run an operation in MyISAM, the changes are set; in InnoDB, those changes can be rolled back. The most common commands used to control transactions are COMMIT, ROLLBACK and SAVEPOINT. 1. COMMIT - you can write multiple DML operations, but the changes will only be saved when a COMMIT is made 2. ROLLBACK - you can discard any operations that have not been committed yet 3. SAVEPOINT - sets a point in the list of operations to which a ROLLBACK operation can roll back Reliability MyISAM offers no data integrity - hardware failures, unclean shutdowns and canceled operations can cause the data to become corrupt. This would require full repair or rebuilds of the indexes and tables. InnoDB, on the other hand, uses a transactional log, a double-write buffer and automatic checksumming and validation to prevent corruption. Before InnoDB makes any changes, it records the data before the transactions into a system tablespace file called ibdata1. If there is a crash, InnoDB would auto-recover through the replay of those logs. FULLTEXT Indexing InnoDB does not support FULLTEXT indexing before MySQL version 5.6.4. As of the writing of this post, many shared hosting providers' MySQL version is still below 5.6.4, which means FULLTEXT indexing is not supported for InnoDB tables. However, this is not a valid reason to use MyISAM. It's best to change to a hosting provider that supports up-to-date versions of MySQL. (That is not to say a MyISAM table that uses FULLTEXT indexing cannot be converted to an InnoDB table.) Conclusion In conclusion, InnoDB should be your default storage engine of choice. Choose MyISAM or other storage engines when they serve a specific need. A: For a load with more writes and reads, you will benefit from InnoDB. Because InnoDB provides row-locking rather than table-locking, your SELECTs can be concurrent, not just with each other but also with many INSERTs. However, unless you are intending to use SQL transactions, set the InnoDB commit flush to 2 (innodb_flush_log_at_trx_commit). This gives you back a lot of raw performance that you would otherwise lose when moving tables from MyISAM to InnoDB. Also, consider adding replication. This gives you some read scaling, and since you stated your reads don't have to be up-to-date, you can let the replication fall behind a little. Just be sure that it can catch up under anything but the heaviest traffic, or it will always be behind and will never catch up. If you go this way, however, I strongly recommend you isolate reading from the slaves and replication lag management in your database handler. It is so much simpler if the application code does not know about this. Finally, be aware of different table loads. You will not have the same read/write ratio on all tables. Some smaller tables with near 100% reads could afford to stay MyISAM. Likewise, if you have some tables that are near 100% writes, you may benefit from INSERT DELAYED, but that is only supported in MyISAM (the DELAYED clause is ignored for an InnoDB table). But benchmark to be sure. A: To add to the wide selection of responses here covering the mechanical differences between the two engines, I present an empirical speed comparison study. 
In terms of pure speed, it is not always the case that MyISAM is faster than InnoDB, but in my experience it tends to be faster for PURE READ working environments by a factor of about 2.0-2.5 times. Clearly this isn't appropriate for all environments - as others have written, MyISAM lacks such things as transactions and foreign keys. I've done a bit of benchmarking below - I've used Python for looping and the timeit library for timing comparisons. For interest I've also included the memory engine; this gives the best performance across the board, although it is only suitable for smaller tables (you continually encounter "The table 'tbl' is full" when you exceed the MySQL memory limit). The four types of select I look at are: * *vanilla SELECTs *counts *conditional SELECTs *indexed and non-indexed sub-selects Firstly, I created three tables using the following SQL: CREATE TABLE data_interrogation.test_table_myisam ( index_col BIGINT NOT NULL AUTO_INCREMENT, value1 DOUBLE, value2 DOUBLE, value3 DOUBLE, value4 DOUBLE, PRIMARY KEY (index_col) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 with 'MyISAM' substituted for 'InnoDB' and 'memory' in the second and third tables.   1) Vanilla selects Query: SELECT * FROM tbl WHERE index_col = xx Result: draw The speed of these is all broadly the same, and as expected is linear in the number of records to be selected. InnoDB seems slightly faster than MyISAM but this is really marginal. Code: import timeit import MySQLdb import MySQLdb.cursors import random from random import randint db = MySQLdb.connect(host="...", user="...", passwd="...", db="...", cursorclass=MySQLdb.cursors.DictCursor) cur = db.cursor() lengthOfTable = 100000 # Fill up the tables with random data for x in xrange(lengthOfTable): rand1 = random.random() rand2 = random.random() rand3 = random.random() rand4 = random.random() insertString = "INSERT INTO test_table_innodb (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")" insertString2 = "INSERT INTO test_table_myisam (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")" insertString3 = "INSERT INTO test_table_memory (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")" cur.execute(insertString) cur.execute(insertString2) cur.execute(insertString3) db.commit() # Define a function to pull a certain number of records from these tables def selectRandomRecords(testTable,numberOfRecords): for x in xrange(numberOfRecords): rand1 = randint(0,lengthOfTable) selectString = "SELECT * FROM " + testTable + " WHERE index_col = " + str(rand1) cur.execute(selectString) setupString = "from __main__ import selectRandomRecords" # Test time taken using timeit myisam_times = [] innodb_times = [] memory_times = [] for theLength in [3,10,30,100,300,1000,3000,10000]: innodb_times.append( timeit.timeit('selectRandomRecords("test_table_innodb",' + str(theLength) + ')', number=100, setup=setupString) ) myisam_times.append( timeit.timeit('selectRandomRecords("test_table_myisam",' + str(theLength) + ')', number=100, setup=setupString) ) memory_times.append( timeit.timeit('selectRandomRecords("test_table_memory",' + str(theLength) + ')', number=100, setup=setupString) )   2) Counts Query: SELECT count(*) FROM tbl Result: MyISAM wins This one demonstrates a big difference between MyISAM and InnoDB - MyISAM (and memory) keeps track of the number of records in the table, so this operation is 
fast and O(1). The amount of time required for InnoDB to count increases super-linearly with table size in the range I investigated. I suspect many of the speed-ups from MyISAM queries that are observed in practice are due to similar effects. Code: myisam_times = [] innodb_times = [] memory_times = [] # Define a function to count the records def countRecords(testTable): selectString = "SELECT count(*) FROM " + testTable cur.execute(selectString) setupString = "from __main__ import countRecords" # Truncate the tables and re-fill with a set amount of data for theLength in [3,10,30,100,300,1000,3000,10000,30000,100000]: truncateString = "TRUNCATE test_table_innodb" truncateString2 = "TRUNCATE test_table_myisam" truncateString3 = "TRUNCATE test_table_memory" cur.execute(truncateString) cur.execute(truncateString2) cur.execute(truncateString3) for x in xrange(theLength): rand1 = random.random() rand2 = random.random() rand3 = random.random() rand4 = random.random() insertString = "INSERT INTO test_table_innodb (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")" insertString2 = "INSERT INTO test_table_myisam (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")" insertString3 = "INSERT INTO test_table_memory (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")" cur.execute(insertString) cur.execute(insertString2) cur.execute(insertString3) db.commit() # Count and time the query innodb_times.append( timeit.timeit('countRecords("test_table_innodb")', number=100, setup=setupString) ) myisam_times.append( timeit.timeit('countRecords("test_table_myisam")', number=100, setup=setupString) ) memory_times.append( timeit.timeit('countRecords("test_table_memory")', number=100, setup=setupString) )   3) Conditional selects Query: SELECT * FROM tbl WHERE value1<0.5 AND value2<0.5 AND value3<0.5 AND value4<0.5 Result: MyISAM wins Here, MyISAM and memory perform approximately the same, and beat InnoDB by about 50% for larger tables. This is the sort of query for which the benefits of MyISAM seem to be maximised. 
Code:

myisam_times = []
innodb_times = []
memory_times = []

# Define a function to perform conditional selects
def conditionalSelect(testTable):
    selectString = "SELECT * FROM " + testTable + " WHERE value1 < 0.5 AND value2 < 0.5 AND value3 < 0.5 AND value4 < 0.5"
    cur.execute(selectString)

setupString = "from __main__ import conditionalSelect"

# Truncate the tables and re-fill with a set amount of data
for theLength in [3,10,30,100,300,1000,3000,10000,30000,100000]:

    truncateString = "TRUNCATE test_table_innodb"
    truncateString2 = "TRUNCATE test_table_myisam"
    truncateString3 = "TRUNCATE test_table_memory"

    cur.execute(truncateString)
    cur.execute(truncateString2)
    cur.execute(truncateString3)

    for x in xrange(theLength):
        rand1 = random.random()
        rand2 = random.random()
        rand3 = random.random()
        rand4 = random.random()

        insertString = "INSERT INTO test_table_innodb (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"
        insertString2 = "INSERT INTO test_table_myisam (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"
        insertString3 = "INSERT INTO test_table_memory (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"

        cur.execute(insertString)
        cur.execute(insertString2)
        cur.execute(insertString3)

    db.commit()

    # Count and time the query
    innodb_times.append( timeit.timeit('conditionalSelect("test_table_innodb")', number=100, setup=setupString) )
    myisam_times.append( timeit.timeit('conditionalSelect("test_table_myisam")', number=100, setup=setupString) )
    memory_times.append( timeit.timeit('conditionalSelect("test_table_memory")', number=100, setup=setupString) )

4) Sub-selects

Result: InnoDB wins

For this query, I created an additional set of tables for the sub-select. Each is simply two columns of BIGINTs, one with a primary key index and one without any index. Due to the large table size, I didn't test the memory engine. The SQL table creation command was

CREATE TABLE subselect_myisam
(
    index_col bigint NOT NULL,
    non_index_col bigint,
    PRIMARY KEY (index_col)
)
ENGINE=MyISAM DEFAULT CHARSET=utf8;

where once again, 'InnoDB' is substituted for 'MyISAM' in the second table.

In this query, I leave the size of the selection table at 1000000 and instead vary the size of the sub-selected table. Here InnoDB wins easily. After we get to a reasonably sized table, both engines scale linearly with the size of the sub-select. The index speeds up the MyISAM command but interestingly has little effect on the InnoDB speed.
[Figure: sub-select timing comparison - subSelect.png]

Code:

myisam_times = []
innodb_times = []
myisam_times_2 = []
innodb_times_2 = []

def subSelectRecordsIndexed(testTable,testSubSelect):
    selectString = "SELECT * FROM " + testTable + " WHERE index_col in ( SELECT index_col FROM " + testSubSelect + " )"
    cur.execute(selectString)

setupString = "from __main__ import subSelectRecordsIndexed"

def subSelectRecordsNotIndexed(testTable,testSubSelect):
    selectString = "SELECT * FROM " + testTable + " WHERE index_col in ( SELECT non_index_col FROM " + testSubSelect + " )"
    cur.execute(selectString)

setupString2 = "from __main__ import subSelectRecordsNotIndexed"

# Truncate the old tables, and re-fill with 1000000 records
truncateString = "TRUNCATE test_table_innodb"
truncateString2 = "TRUNCATE test_table_myisam"

cur.execute(truncateString)
cur.execute(truncateString2)

lengthOfTable = 1000000

# Fill up the tables with random data
for x in xrange(lengthOfTable):
    rand1 = random.random()
    rand2 = random.random()
    rand3 = random.random()
    rand4 = random.random()

    insertString = "INSERT INTO test_table_innodb (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"
    insertString2 = "INSERT INTO test_table_myisam (value1,value2,value3,value4) VALUES (" + str(rand1) + "," + str(rand2) + "," + str(rand3) + "," + str(rand4) + ")"

    cur.execute(insertString)
    cur.execute(insertString2)

for theLength in [3,10,30,100,300,1000,3000,10000,30000,100000]:

    truncateString = "TRUNCATE subselect_innodb"
    truncateString2 = "TRUNCATE subselect_myisam"

    cur.execute(truncateString)
    cur.execute(truncateString2)

    # For each length, empty the table and re-fill it with random data
    rand_sample = sorted(random.sample(xrange(lengthOfTable), theLength))
    rand_sample_2 = random.sample(xrange(lengthOfTable), theLength)

    for (the_value_1,the_value_2) in zip(rand_sample,rand_sample_2):
        insertString = "INSERT INTO subselect_innodb (index_col,non_index_col) VALUES (" + str(the_value_1) + "," + str(the_value_2) + ")"
        insertString2 = "INSERT INTO subselect_myisam (index_col,non_index_col) VALUES (" + str(the_value_1) + "," + str(the_value_2) + ")"

        cur.execute(insertString)
        cur.execute(insertString2)

    db.commit()

    # Finally, time the queries
    innodb_times.append( timeit.timeit('subSelectRecordsIndexed("test_table_innodb","subselect_innodb")', number=100, setup=setupString) )
    myisam_times.append( timeit.timeit('subSelectRecordsIndexed("test_table_myisam","subselect_myisam")', number=100, setup=setupString) )
    innodb_times_2.append( timeit.timeit('subSelectRecordsNotIndexed("test_table_innodb","subselect_innodb")', number=100, setup=setupString2) )
    myisam_times_2.append( timeit.timeit('subSelectRecordsNotIndexed("test_table_myisam","subselect_myisam")', number=100, setup=setupString2) )

I think the take-home message of all of this is that if you are really concerned about speed, you need to benchmark the queries that you're doing rather than make any assumptions about which engine will be more suitable.

A: I have briefly summarised the differences in a table so you can decide whether to go with InnoDB or MyISAM.
Here is a small overview of which db storage engine you should use in which situation:

                                                  MyISAM     InnoDB
----------------------------------------------------------------------
Required full-text search                         Yes        >= 5.6.4
----------------------------------------------------------------------
Require transactions                                         Yes
----------------------------------------------------------------------
Frequent select queries                           Yes
----------------------------------------------------------------------
Frequent insert, update, delete                              Yes
----------------------------------------------------------------------
Row locking (multi processing on single table)               Yes
----------------------------------------------------------------------
Relational base design                                       Yes

Summary

* In almost all circumstances, InnoDB is the best way to go
* But with frequent reading and almost no writing, use MyISAM
* For full-text search in MySQL <= 5.5, use MyISAM

A: In my experience, MyISAM was a better choice as long as you don't do DELETEs, UPDATEs, a whole lot of single INSERTs, transactions, or full-text indexing. BTW, CHECK TABLE is horrible. As the table gets older in terms of the number of rows, you don't know when it will end.

A: I've found that even though MyISAM has locking contention, it's still faster than InnoDB in most scenarios because of the rapid lock acquisition scheme it uses. I've tried InnoDB several times and always fall back to MyISAM for one reason or another. Also, InnoDB can be very CPU intensive under huge write loads.

A: Every application has its own performance profile for using a database, and chances are it will change over time. The best thing you can do is to test your options. Switching between MyISAM and InnoDB is trivial, so load some test data and fire jmeter against your site and see what happens.

A: I tried to run insertion of random data into MyISAM and InnoDB tables. The result was quite shocking: MyISAM needed a few seconds less to insert 1 million rows than InnoDB needed for just 10 thousand!

A: Slightly off-topic, but for documentation purposes and completeness, I would like to add the following.

In general, using InnoDB will result in a much LESS complex application, and probably also a more bug-free one. Because you can put all referential integrity (foreign key constraints) into the data model, you don't need anywhere near as much application code as you will need with MyISAM.

Every time you insert, delete or replace a record, you will HAVE to check and maintain the relationships. E.g. if you delete a parent, all children should be deleted too. For instance, even in a simple blogging system, if you delete a blogposting record, you will have to delete the comment records, the likes, etc. In InnoDB this is done automatically by the database engine (if you specified the constraints in the model) and requires no application code. In MyISAM this will have to be coded into the application, which is very difficult in web servers. Web servers are by nature very concurrent / parallel, and because these actions should be atomic and MyISAM supports no real transactions, using MyISAM for web servers is risky / error-prone.

Also, in most general cases, InnoDB will perform much better, for a multitude of reasons, one of them being able to use record-level locking as opposed to table-level locking. Not only in situations where writes are more frequent than reads, but also in situations with complex joins on large datasets. We noticed a 3-fold performance increase just by using InnoDB tables over MyISAM tables for very large joins (taking several minutes).
I would say that in general InnoDB (using a 3NF data model complete with referential integrity) should be the default choice when using MySQL. MyISAM should only be used in very specific cases. It will most likely perform worse and result in a bigger, more buggy application.

Having said this, data modelling is an art seldom found among web designers / programmers. No offence, but it does explain why MyISAM is used so much.

A: InnoDB offers:

* ACID transactions
* row-level locking
* foreign key constraints
* automatic crash recovery
* table compression (read/write)
* spatial data types (no spatial indexes)

In InnoDB all data in a row except for TEXT and BLOB can occupy 8,000 bytes at most. No full-text indexing is available for InnoDB. In InnoDB, COUNT(*)s (when WHERE, GROUP BY, or JOIN is not used) execute slower than in MyISAM because the row count is not stored internally. InnoDB stores both data and indexes in one file. InnoDB uses a buffer pool to cache both data and indexes.

MyISAM offers:

* fast COUNT(*)s (when WHERE, GROUP BY, or JOIN is not used)
* full-text indexing
* smaller disk footprint
* very high table compression (read only)
* spatial data types and indexes (R-tree)

MyISAM has table-level locking, but no row-level locking. No transactions. No automatic crash recovery, but it does offer repair-table functionality. No foreign key constraints. MyISAM tables are generally more compact in size on disk when compared to InnoDB tables. MyISAM tables can be further reduced in size by compressing with myisampack if needed, but become read-only. MyISAM stores indexes in one file and data in another. MyISAM uses key buffers for caching indexes and leaves the data caching management to the operating system.

Overall I would recommend InnoDB for most purposes and MyISAM for specialized uses only. InnoDB is now the default engine in new MySQL versions.

A: MyISAM is a NOGO for that type of workload (high-concurrency writes). I don't have that much experience with InnoDB (I tested it 3 times and found in each case that the performance sucked, but it's been a while since the last test). If you're not forced to run MySQL, consider giving Postgres a try, as it handles concurrent writes MUCH better.

A: In short, InnoDB is good if you are working on something that needs a reliable database that can handle a lot of INSERT and UPDATE instructions, and MyISAM is good if you need a database that will mostly be taking a lot of read (SELECT) instructions rather than writes (INSERT and UPDATE), considering its drawback on the table-lock thing. You may want to check out: Pros and Cons of InnoDB, Pros and Cons of MyISAM.

A: I'm not a database expert, and I do not speak from experience. However: MyISAM tables use table-level locking. Based on your traffic estimates, you have close to 200 writes per second. With MyISAM, only one of these could be in progress at any time. You have to make sure that your hardware can keep up with these transactions to avoid being overrun, i.e., a single query can take no more than 5 ms. That suggests to me you would need a storage engine which supports row-level locking, i.e., InnoDB. On the other hand, it should be fairly trivial to write a few simple scripts to simulate the load with each storage engine, then compare the results.

A: If you use MyISAM, you won't be doing any transactions per hour, unless you consider each DML statement to be a transaction (which in any case won't be durable or atomic in the event of a crash). Therefore I think you have to use InnoDB.
300 transactions per second sounds like quite a lot. If you absolutely need these transactions to be durable across power failures, make sure your I/O subsystem can handle this many writes per second easily. You will need at least a RAID controller with a battery-backed cache. If you can take a small durability hit, you could use InnoDB with innodb_flush_log_at_trx_commit set to 0 or 2 (see the docs for details) to improve performance. There are a number of patches from Google and others which can increase concurrency - these may be of interest if you still can't get enough performance without them.

A: The Question and most of the Answers are out of date.

Yes, it is an old wives' tale that MyISAM is faster than InnoDB. Notice the Question's date: 2008; it is now almost a decade later. InnoDB has made significant performance strides since then.

The dramatic graph was for the one case where MyISAM wins: COUNT(*) without a WHERE clause. But is that really what you spend your time doing?

If you run a concurrency test, InnoDB is very likely to win, even against MEMORY. If you do any writes while benchmarking SELECTs, MyISAM and MEMORY are likely to lose because of table-level locking.

In fact, Oracle is so sure that InnoDB is better that they have all but removed MyISAM from 8.0.

The Question was written early in the days of 5.1. Since then, these major versions were marked "General Availability":

* 2010: 5.5 (.8 in Dec.)
* 2013: 5.6 (.10 in Feb.)
* 2015: 5.7 (.9 in Oct.)
* 2018: 8.0 (.11 in Apr.)

Bottom line: Don't use MyISAM

A: People often talk about performance, reads vs. writes, foreign keys, etc., but there's one other must-have feature for a storage engine in my opinion: atomic updates.

Try this:

* Issue an UPDATE against your MyISAM table that takes 5 seconds.
* While the UPDATE is in progress, say 2.5 seconds in, hit Ctrl-C to interrupt it.
* Observe the effects on the table. How many rows were updated? How many were not updated? Is the table even readable, or was it corrupted when you hit Ctrl-C?
* Try the same experiment with UPDATE against an InnoDB table, interrupting the query in progress.
* Observe the InnoDB table. Zero rows were updated. InnoDB ensures you have atomic updates, and if the full update could not be committed, it rolls back the whole change. Also, the table is not corrupt. This works even if you use killall -9 mysqld to simulate a crash.

Performance is desirable of course, but not losing data should trump that.

A: I know this won't be popular, but here goes:

myISAM lacks support for database essentials like transactions and referential integrity, which often results in glitchy / buggy applications. You cannot learn proper database design fundamentals if they are not even supported by your db engine.

Not using referential integrity or transactions in the database world is like not using object-oriented programming in the software world.

InnoDB exists now, use that instead! Even the MySQL developers have finally conceded and changed the default engine to InnoDB in newer versions, despite myISAM being the original engine that was the default in all legacy systems.

No, it does not matter whether you are reading or writing, or what performance considerations you have; using myISAM can result in a variety of problems, such as this one I just ran into: I was performing a database sync and at the same time someone else accessed an application that accessed a table set to myISAM.
Due to the lack of transaction support and the generally poor reliability of this engine, this crashed the entire database and I had to manually restart mysql! Over the past 15 years of development I have used many databases and engines. myISAM crashed on me about a dozen times during this period, other databases, only once! And that was a microsoft SQL database where some developer wrote faulty CLR code (common language runtime - basically C# code that executes inside the database) by the way, it was not the database engine's fault exactly. I agree with the other answers here that say that quality high-availability, high-performance applications should not use myISAM as it will not work, it is not robust or stable enough to result in a frustration-free experience. See Bill Karwin's answer for more details. P.S. Gotta love it when myISAM fanboys downvote but can't tell you which part of this answer is incorrect. A: I've worked on a high-volume system using MySQL and I've tried both MyISAM and InnoDB. I found that the table-level locking in MyISAM caused serious performance problems for our workload which sounds similar to yours. Unfortunately I also found that performance under InnoDB was also worse than I'd hoped. In the end I resolved the contention issue by fragmenting the data such that inserts went into a "hot" table and selects never queried the hot table. This also allowed deletes (the data was time-sensitive and we only retained X days worth) to occur on "stale" tables that again weren't touched by select queries. InnoDB seems to have poor performance on bulk deletes so if you're planning on purging data you might want to structure it in such a way that the old data is in a stale table which can simply be dropped instead of running deletes on it. Of course I have no idea what your application is but hopefully this gives you some insight into some of the issues with MyISAM and InnoDB. A: Also check out some drop-in replacements for MySQL itself: MariaDB http://mariadb.org/ MariaDB is a database server that offers drop-in replacement functionality for MySQL. MariaDB is built by some of the original authors of MySQL, with assistance from the broader community of Free and open source software developers. In addition to the core functionality of MySQL, MariaDB offers a rich set of feature enhancements including alternate storage engines, server optimizations, and patches. Percona Server https://launchpad.net/percona-server An enhanced drop-in replacement for MySQL, with better performance, improved diagnostics, and added features. A: Please note that my formal education and experience is with Oracle, while my work with MySQL has been entirely personal and on my own time, so if I say things that are true for Oracle but are not true for MySQL, I apologize. While the two systems share a lot, the relational theory/algebra is the same, and relational databases are still relational databases, there are still plenty of differences!! I particularly like (as well as row-level locking) that InnoDB is transaction-based, meaning that you may be updating/inserting/creating/altering/dropping/etc several times for one "operation" of your web application. The problem that arises is that if only some of those changes/operations end up being committed, but others do not, you will most times (depending on the specific design of the database) end up with a database with conflicting data/structure. 
Note: With Oracle, create/alter/drop statements are called "DDL" (Data Definition) statements, and implicitly trigger a commit. Insert/update/delete statements, called "DML" (Data Manipulation), are not committed automatically, but only when a DDL, commit, or exit/quit is performed (or if you set your session to "auto-commit", or if your client auto-commits). It's imperative to be aware of that when working with Oracle, but I am not sure how MySQL handles the two types of statements. Because of this, I want to make it clear that I'm not sure of this when it comes to MySQL; only with Oracle. An example of when transaction-based engines excel: Let's say that I or you are on a web-page to sign up to attend a free event, and one of the main purposes of the system is to only allow up to 100 people to sign up, since that is the limit of the seating for the event. Once 100 sign-ups are reached, the system would disable further signups, at least until others cancel. In this case, there may be a table for guests (name, phone, email, etc.), and a second table which tracks the number of guests that have signed up. We thus have two operations for one "transaction". Now suppose that after the guest info is added to the GUESTS table, there is a connection loss, or an error with the same impact. The GUESTS table was updated (inserted into), but the connection was lost before the "available seats" could be updated. Now we have a guest added to the guest table, but the number of available seats is now incorrect (for example, value is 85 when it's actually 84). Of course there are many ways to handle this, such as tracking available seats with "100 minus number of rows in guests table," or some code that checks that the info is consistent, etc.... But with a transaction-based database engine such as InnoDB, either ALL of the operations are committed, or NONE of them are. This can be helpful in many cases, but like I said, it's not the ONLY way to be safe, no (a nice way, however, handled by the database, not the programmer/script-writer). That's all "transaction-based" essentially means in this context, unless I'm missing something -- that either the whole transaction succeeds as it should, or nothing is changed, since making only partial changes could make a minor to SEVERE mess of the database, perhaps even corrupting it... But I'll say it one more time, it's not the only way to avoid making a mess. But it is one of the methods that the engine itself handles, leaving you to code/script with only needing to worry about "was the transaction successful or not, and what do I do if not (such as retry)," instead of manually writing code to check it "manually" from outside of the database, and doing a lot more work for such events. Lastly, a note about table-locking vs row-locking: DISCLAIMER: I may be wrong in all that follows in regard to MySQL, and the hypothetical/example situations are things to look into, but I may be wrong in what exactly is possible to cause corruption with MySQL. The examples are however very real in general programming, even if MySQL has more mechanisms to avoid such things... Anyway, I am fairly confident in agreeing with those who have argued that how many connections are allowed at a time does not work around a locked table. In fact, multiple connections are the entire point of locking a table!! So that other processes/users/apps are not able to corrupt the database by making changes at the same time. 
How would two or more connections working on the same row make a REALLY BAD DAY for you?? Suppose there are two processes that both want/need to update the same value in the same row - let's say because the row is a record of a bus tour, and each of the two processes simultaneously wants to update the "riders" or "available_seats" field as "the current value plus 1."

Let's do this hypothetically, step by step:

* Process one reads the current value; let's say it's empty, thus '0' so far.
* Process two reads the current value as well, which is still 0.
* Process one writes (current + 1), which is 1.
* Process two should be writing 2, but since it read the current value before process one wrote the new value, it too writes 1 to the table.

I'm not certain that two connections could intermingle like that, both reading before the first one writes... But if not, then I would still see a problem with:

* Process one reads the current value, which is 0.
* Process one writes (current + 1), which is 1.
* Process two reads the current value now. But while process one DID write (update), it has not committed the data; thus only that same process can read the new value that it updated, while all others see the older value, until there is a commit.

Also, at least with Oracle databases, there are isolation levels, which I will not waste our time trying to paraphrase. Here is a good article on that subject, with each isolation level having its pros and cons, which would go along with how important transaction-based engines may be in a database...

Lastly, there may likely be different safeguards in place within MyISAM, instead of foreign keys and transaction-based interaction. Well, for one, there is the fact that an entire table is locked, which makes it less likely that transactions/FKs are needed.

And alas, if you are aware of these concurrency issues, yes, you can play it less safe and just write your applications and set up your systems so that such errors are not possible (your code is then responsible, rather than the database itself). However, in my opinion, I would say that it is always best to use as many safeguards as possible, programming defensively, and always being aware that human error is impossible to completely avoid. It happens to everyone, and anyone who says they are immune to it must be lying, or hasn't done more than write a "Hello World" application/script. ;-)

I hope that SOME of that is helpful to someone, and even more so, I hope that I have not just now been a culprit of assumptions and being a human in error!! My apologies if so, but the examples are good to think about, research the risk of, and so on, even if they are not potential in this specific context. Feel free to correct me, edit this "answer," even vote it down. Just please try to improve, rather than correcting a bad assumption of mine with another. ;-)

This is my first response, so please forgive the length due to all the disclaimers, etc... I just don't want to sound arrogant when I am not absolutely certain!

A: I think this is an excellent article explaining the differences and when you should use one over the other: http://tag1consulting.com/MySQL_Engines_MyISAM_vs_InnoDB

A: For that ratio of reads/writes I would guess InnoDB will perform better. Since you are fine with dirty reads, you might (if you can afford it) replicate to a slave and let all your reads go to the slave. Also, consider inserting in bulk, rather than one record at a time.
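On that last point about inserting in bulk, here is a minimal SQL sketch (the readings table and its columns are invented purely for illustration): a multi-row INSERT, or many single-row INSERTs wrapped in one explicit transaction, avoids paying the per-statement commit overhead of autocommitted row-at-a-time loading:

-- One statement and one commit instead of three:
INSERT INTO readings (sensor_id, value) VALUES
    (1, 0.42),
    (1, 0.57),
    (2, 0.13);

-- Or batch single-row INSERTs inside one transaction (InnoDB):
START TRANSACTION;
INSERT INTO readings (sensor_id, value) VALUES (3, 0.99);
INSERT INTO readings (sensor_id, value) VALUES (3, 1.01);
COMMIT;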
A: Almost every time I start a new project I Google this same question to see if I come up with any new answers. It eventually boils down to - I take the latest version of MySQL and run tests. I have tables where I want to do key/value lookups... and that's all. I need to get the value (0-512 bytes) for a hash key. There is not a lot of transactions on this DB. The table gets updates occasionally (in it's entirety), but 0 transactions. So we're not talking about a complex system here, we are talking about a simple lookup,.. and how (other than making the table RAM resident) we can optimize performance. I also do tests on other databases (ie NoSQL) to see if there is anywhere I can get an advantage. The biggest advantage I have found is in key mapping but as far as the lookup goes, MyISAM is currently topping them all. Albeit, I wouldn't perform financial transactions with MyISAM tables but for simple lookups, you should test it out.. typically 2x to 5x the queries/sec. Test it, I welcome debate. A: If it is 70% inserts and 30% reads then it is more like on the InnoDB side. A: bottomline: if you are working offline with selects on large chunks of data, MyISAM will probably give you better (much better) speeds. there are some situations when MyISAM is infinitely more efficient than InnoDB: when manipulating large data dumps offline (because of table lock). example: I was converting a csv file (15M records) from NOAA which uses VARCHAR fields as keys. InnoDB was taking forever, even with large chunks of memory available. this an example of the csv (first and third fields are keys). USC00178998,20130101,TMAX,-22,,,7,0700 USC00178998,20130101,TMIN,-117,,,7,0700 USC00178998,20130101,TOBS,-28,,,7,0700 USC00178998,20130101,PRCP,0,T,,7,0700 USC00178998,20130101,SNOW,0,T,,7, since what i need to do is run a batch offline update of observed weather phenomena, i use MyISAM table for receiving data and run JOINS on the keys so that i can clean the incoming file and replace VARCHAR fields with INT keys (which are related to external tables where the original VARCHAR values are stored).
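A rough sketch of the offline workflow that last answer describes (the table, column and file names here are hypothetical, since the answer doesn't give its exact schema): load the dump into a MyISAM staging table, then JOIN against lookup tables to swap the VARCHAR keys for INT ids:

-- MyISAM staging table matching the raw CSV layout
CREATE TABLE staging_obs (
    station  VARCHAR(11),
    obs_date CHAR(8),
    element  VARCHAR(4),
    value    INT
) ENGINE=MyISAM;

-- Load the CSV, discarding the trailing flag columns into user variables
LOAD DATA INFILE '/tmp/noaa.csv'
INTO TABLE staging_obs
FIELDS TERMINATED BY ','
(station, obs_date, element, value, @mflag, @qflag, @sflag, @obstime);

-- Swap the VARCHAR keys for INT ids via JOINs on the lookup tables
INSERT INTO observations (station_id, obs_date, element_id, value)
SELECT st.id, s.obs_date, e.id, s.value
FROM staging_obs s
JOIN stations st ON st.code = s.station
JOIN elements e ON e.code = s.element;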
{ "language": "en", "url": "https://stackoverflow.com/questions/20148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "891" }
Q: Is there an easy way to create ordinals in C#? Is there an easy way in C# to create Ordinals for a number? For example: * *1 returns 1st *2 returns 2nd *3 returns 3rd *...etc Can this be done through String.Format() or are there any functions available to do this? A: While I haven't benchmarked this yet, you should be able to get better performance by avoiding all the conditional case statements. This is java, but a port to C# is trivial: public class NumberUtil { final static String[] ORDINAL_SUFFIXES = { "th", "st", "nd", "rd", "th", "th", "th", "th", "th", "th" }; public static String ordinalSuffix(int value) { int n = Math.abs(value); int lastTwoDigits = n % 100; int lastDigit = n % 10; int index = (lastTwoDigits >= 11 && lastTwoDigits <= 13) ? 0 : lastDigit; return ORDINAL_SUFFIXES[index]; } public static String toOrdinal(int n) { return new StringBuffer().append(n).append(ordinalSuffix(n)).toString(); } } Note, the reduction of conditionals and the use of the array lookup should speed up performance if generating a lot of ordinals in a tight loop. However, I also concede that this isn't as readable as the case statement solution. A: Remember internationalisation! The solutions here only work for English. Things get a lot more complex if you need to support other languages. For example, in Spanish "1st" would be written as "1.o", "1.a", "1.os" or "1.as" depending on whether the thing you're counting is masculine, feminine or plural! So if your software needs to support different languages, try to avoid ordinals. A: This page gives you a complete listing of all custom numerical formatting rules: Custom numeric format strings As you can see, there is nothing in there about ordinals, so it can't be done using String.Format. However its not really that hard to write a function to do it. public static string AddOrdinal(int num) { if( num <= 0 ) return num.ToString(); switch(num % 100) { case 11: case 12: case 13: return num + "th"; } switch(num % 10) { case 1: return num + "st"; case 2: return num + "nd"; case 3: return num + "rd"; default: return num + "th"; } } Update: Technically Ordinals don't exist for <= 0, so I've updated the code above. Also removed the redundant ToString() methods. Also note, this is not internationalized. I've no idea what ordinals look like in other languages. A: Similar to Ryan's solution, but even more basic, I just use a plain array and use the day to look up the correct ordinal: private string[] ordinals = new string[] {"","st","nd","rd","th","th","th","th","th","th","th","th","th","th","th","th","th","th","th","th","th","st","nd","rd","th","th","th","th","th","th","th","st" }; DateTime D = DateTime.Now; String date = "Today's day is: "+ D.Day.ToString() + ordinals[D.Day]; I have not had the need, but I would assume you could use a multidimensional array if you wanted to have multiple language support. From what I can remember from my Uni days, this method requires minimal effort from the server. A: private static string GetOrd(int num) => $"{num}{(!(Range(11, 3).Any(n => n == num % 100) ^ Range(1, 3).All(n => n != num % 10)) ? new[] { "ˢᵗ", "ⁿᵈ", "ʳᵈ" }[num % 10 - 1] : "ᵗʰ")}"; If anyone is looking for one-liner. 
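For quick sanity-checking of any of these implementations, a small console harness helps exercise the awkward cases (this sketch simply inlines the accepted answer's AddOrdinal method so it is self-contained):

using System;

class OrdinalDemo
{
    // The accepted answer's method, copied here so the harness compiles on its own.
    static string AddOrdinal(int num)
    {
        if (num <= 0) return num.ToString();

        switch (num % 100)
        {
            case 11:
            case 12:
            case 13:
                return num + "th";
        }

        switch (num % 10)
        {
            case 1: return num + "st";
            case 2: return num + "nd";
            case 3: return num + "rd";
            default: return num + "th";
        }
    }

    static void Main()
    {
        // The awkward cases: 1-3, the 11-13 exceptions, and the 100+ variants.
        foreach (int n in new[] { 1, 2, 3, 4, 11, 12, 13, 21, 101, 111, 112, 113, 121 })
            Console.WriteLine(AddOrdinal(n)); // 1st, 2nd, 3rd, 4th, 11th, 12th, 13th, 21st, 101st, 111th, 112th, 113th, 121st
    }
}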
A: Simple, clean, quick private static string GetOrdinalSuffix(int num) { string number = num.ToString(); if (number.EndsWith("11")) return "th"; if (number.EndsWith("12")) return "th"; if (number.EndsWith("13")) return "th"; if (number.EndsWith("1")) return "st"; if (number.EndsWith("2")) return "nd"; if (number.EndsWith("3")) return "rd"; return "th"; } Or better yet, as an extension method public static class IntegerExtensions { public static string DisplayWithSuffix(this int num) { string number = num.ToString(); if (number.EndsWith("11")) return number + "th"; if (number.EndsWith("12")) return number + "th"; if (number.EndsWith("13")) return number + "th"; if (number.EndsWith("1")) return number + "st"; if (number.EndsWith("2")) return number + "nd"; if (number.EndsWith("3")) return number + "rd"; return number + "th"; } } Now you can just call int a = 1; a.DisplayWithSuffix(); or even as direct as 1.DisplayWithSuffix(); A: My version of Jesse's version of Stu's and samjudson's versions :) Included unit test to show that the accepted answer is incorrect when number < 1 /// <summary> /// Get the ordinal value of positive integers. /// </summary> /// <remarks> /// Only works for english-based cultures. /// Code from: http://stackoverflow.com/questions/20156/is-there-a-quick-way-to-create-ordinals-in-c/31066#31066 /// With help: http://www.wisegeek.com/what-is-an-ordinal-number.htm /// </remarks> /// <param name="number">The number.</param> /// <returns>Ordinal value of positive integers, or <see cref="int.ToString"/> if less than 1.</returns> public static string Ordinal(this int number) { const string TH = "th"; string s = number.ToString(); // Negative and zero have no ordinal representation if (number < 1) { return s; } number %= 100; if ((number >= 11) && (number <= 13)) { return s + TH; } switch (number % 10) { case 1: return s + "st"; case 2: return s + "nd"; case 3: return s + "rd"; default: return s + TH; } } [Test] public void Ordinal_ReturnsExpectedResults() { Assert.AreEqual("-1", (1-2).Ordinal()); Assert.AreEqual("0", 0.Ordinal()); Assert.AreEqual("1st", 1.Ordinal()); Assert.AreEqual("2nd", 2.Ordinal()); Assert.AreEqual("3rd", 3.Ordinal()); Assert.AreEqual("4th", 4.Ordinal()); Assert.AreEqual("5th", 5.Ordinal()); Assert.AreEqual("6th", 6.Ordinal()); Assert.AreEqual("7th", 7.Ordinal()); Assert.AreEqual("8th", 8.Ordinal()); Assert.AreEqual("9th", 9.Ordinal()); Assert.AreEqual("10th", 10.Ordinal()); Assert.AreEqual("11th", 11.Ordinal()); Assert.AreEqual("12th", 12.Ordinal()); Assert.AreEqual("13th", 13.Ordinal()); Assert.AreEqual("14th", 14.Ordinal()); Assert.AreEqual("20th", 20.Ordinal()); Assert.AreEqual("21st", 21.Ordinal()); Assert.AreEqual("22nd", 22.Ordinal()); Assert.AreEqual("23rd", 23.Ordinal()); Assert.AreEqual("24th", 24.Ordinal()); Assert.AreEqual("100th", 100.Ordinal()); Assert.AreEqual("101st", 101.Ordinal()); Assert.AreEqual("102nd", 102.Ordinal()); Assert.AreEqual("103rd", 103.Ordinal()); Assert.AreEqual("104th", 104.Ordinal()); Assert.AreEqual("110th", 110.Ordinal()); Assert.AreEqual("111th", 111.Ordinal()); Assert.AreEqual("112th", 112.Ordinal()); Assert.AreEqual("113th", 113.Ordinal()); Assert.AreEqual("114th", 114.Ordinal()); Assert.AreEqual("120th", 120.Ordinal()); Assert.AreEqual("121st", 121.Ordinal()); Assert.AreEqual("122nd", 122.Ordinal()); Assert.AreEqual("123rd", 123.Ordinal()); Assert.AreEqual("124th", 124.Ordinal()); } A: I use this extension class: public static class Int32Extensions { public static string ToOrdinal(this int i) { return (i + "th") 
.Replace("1th", "1st") .Replace("2th", "2nd") .Replace("3th", "3rd"); } } A: Requested "less redundancy" version of samjudson's answer... public static string AddOrdinal(int number) { if (number <= 0) return number.ToString(); string GetIndicator(int num) { switch (num % 100) { case 11: case 12: case 13: return "th"; } switch (num % 10) { case 1: return "st"; case 2: return "nd"; case 3: return "rd"; default: return "th"; } } return number + GetIndicator(number); } A: You'll have to roll your own. From the top of my head: public static string Ordinal(this int number) { var work = number.ToString(); if ((number % 100) == 11 || (number % 100) == 12 || (number % 100) == 13) return work + "th"; switch (number % 10) { case 1: work += "st"; break; case 2: work += "nd"; break; case 3: work += "rd"; break; default: work += "th"; break; } return work; } You can then do Console.WriteLine(432.Ordinal()); Edited for 11/12/13 exceptions. I DID say from the top of my head :-) Edited for 1011 -- others have fixed this already, just want to make sure others don't grab this incorrect version. A: I rather liked elements from both Stu's and samjudson's solutions and worked them together into what I think is a usable combo: public static string Ordinal(this int number) { const string TH = "th"; var s = number.ToString(); number %= 100; if ((number >= 11) && (number <= 13)) { return s + TH; } switch (number % 10) { case 1: return s + "st"; case 2: return s + "nd"; case 3: return s + "rd"; default: return s + TH; } } A: public static string OrdinalSuffix(int ordinal) { //Because negatives won't work with modular division as expected: var abs = Math.Abs(ordinal); var lastdigit = abs % 10; return //Catch 60% of cases (to infinity) in the first conditional: lastdigit > 3 || lastdigit == 0 || (abs % 100) - lastdigit == 10 ? "th" : lastdigit == 1 ? "st" : lastdigit == 2 ? "nd" : "rd"; } A: EDIT: As YM_Industries points out in the comment, samjudson's answer DOES work for numbers over 1000, nickf's comment seems to have gone, and I can't remember what the problem I saw was. Left this answer here for the comparison timings. An awful lot of these don't work for numbers > 999, as nickf pointed out in a comment (EDIT: now missing). Here is a version based off a modified version of samjudson's accepted answer that does. public static String GetOrdinal(int i) { String res = ""; if (i > 0) { int j = (i - ((i / 100) * 100)); if ((j == 11) || (j == 12) || (j == 13)) res = "th"; else { int k = i % 10; if (k == 1) res = "st"; else if (k == 2) res = "nd"; else if (k == 3) res = "rd"; else res = "th"; } } return i.ToString() + res; } Also Shahzad Qureshi's answer using string manipulation works fine, however it does have a performance penalty. For generating a lot of these, a LINQPad example program makes the string version 6-7 times slower than this integer one (although you'd have to be generating a lot to notice). 
LINQPad example: void Main() { "Examples:".Dump(); foreach(int i in new int[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 22, 113, 122, 201, 202, 211, 212, 2013, 1000003, 10000013 }) Stuff.GetOrdinal(i).Dump(); String s; System.Diagnostics.Stopwatch sw = System.Diagnostics.Stopwatch.StartNew(); for(int iter = 0; iter < 100000; iter++) foreach(int i in new int[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 22, 113, 122, 201, 202, 211, 212, 2013, 1000003, 1000013 }) s = Stuff.GetOrdinal(i); "Integer manipulation".Dump(); sw.Elapsed.Dump(); sw.Restart(); for(int iter = 0; iter < 100000; iter++) foreach(int i in new int[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 22, 113, 122, 201, 202, 211, 212, 2013, 1000003, 1000013 }) s = (i.ToString() + Stuff.GetOrdinalSuffix(i)); "String manipulation".Dump(); sw.Elapsed.Dump(); } public class Stuff { // Use integer manipulation public static String GetOrdinal(int i) { String res = ""; if (i > 0) { int j = (i - ((i / 100) * 100)); if ((j == 11) || (j == 12) || (j == 13)) res = "th"; else { int k = i % 10; if (k == 1) res = "st"; else if (k == 2) res = "nd"; else if (k == 3) res = "rd"; else res = "th"; } } return i.ToString() + res; } // Use string manipulation public static string GetOrdinalSuffix(int num) { if (num.ToString().EndsWith("11")) return "th"; if (num.ToString().EndsWith("12")) return "th"; if (num.ToString().EndsWith("13")) return "th"; if (num.ToString().EndsWith("1")) return "st"; if (num.ToString().EndsWith("2")) return "nd"; if (num.ToString().EndsWith("3")) return "rd"; return "th"; } } A: Based off the other answers: public static string Ordinal(int n) { int r = n % 100, m = n % 10; return (r<4 || r>20) && (m>0 && m<4) ? n+" stndrd".Substring(m*2,2) : n+"th"; } A: While there are plenty of good answers in here, I guess there is room for another one, this time based on pattern matching, if not for anything else, then at least for debatable readability public static string Ordinals1(this int number) { switch (number) { case int p when p % 100 == 11: case int q when q % 100 == 12: case int r when r % 100 == 13: return $"{number}th"; case int p when p % 10 == 1: return $"{number}st"; case int p when p % 10 == 2: return $"{number}nd"; case int p when p % 10 == 3: return $"{number}rd"; default: return $"{number}th"; } } and what makes this solution special? nothing but the fact that I'm adding some performance considerations for various other solutions frankly I doubt performance really matters for this particular scenario (who really needs the ordinals of millions of numbers) but at least it surfaces some comparisons to be taken into account... 
1 million items for reference (your mileage may vary based on machine specs, of course):

with pattern matching and divisions (this answer) ~622 ms
with pattern matching and strings (this answer) ~1967 ms
with two switches and divisions (accepted answer) ~637 ms
with one switch and divisions (another answer) ~725 ms

void Main()
{
    var timer = new Stopwatch();
    var numbers = Enumerable.Range(1, 1000000).ToList();

    // 1
    timer.Reset();
    timer.Start();
    var results1 = numbers.Select(p => p.Ordinals1()).ToList();
    timer.Stop();
    timer.Elapsed.TotalMilliseconds.Dump("with pattern matching and divisions");

    // 2
    timer.Reset();
    timer.Start();
    var results2 = numbers.Select(p => p.Ordinals2()).ToList();
    timer.Stop();
    timer.Elapsed.TotalMilliseconds.Dump("with pattern matching and strings");

    // 3
    timer.Reset();
    timer.Start();
    var results3 = numbers.Select(p => p.Ordinals3()).ToList();
    timer.Stop();
    timer.Elapsed.TotalMilliseconds.Dump("with two switches and divisions");

    // 4
    timer.Reset();
    timer.Start();
    var results4 = numbers.Select(p => p.Ordinals4()).ToList();
    timer.Stop();
    timer.Elapsed.TotalMilliseconds.Dump("with one switch and divisions");
}

public static class Extensions
{
    public static string Ordinals1(this int number)
    {
        switch (number)
        {
            case int p when p % 100 == 11:
            case int q when q % 100 == 12:
            case int r when r % 100 == 13:
                return $"{number}th";
            case int p when p % 10 == 1:
                return $"{number}st";
            case int p when p % 10 == 2:
                return $"{number}nd";
            case int p when p % 10 == 3:
                return $"{number}rd";
            default:
                return $"{number}th";
        }
    }

    public static string Ordinals2(this int number)
    {
        var text = number.ToString();
        switch (text)
        {
            case string p when p.EndsWith("11"):
                return $"{number}th";
            case string p when p.EndsWith("12"):
                return $"{number}th";
            case string p when p.EndsWith("13"):
                return $"{number}th";
            case string p when p.EndsWith("1"):
                return $"{number}st";
            case string p when p.EndsWith("2"):
                return $"{number}nd";
            case string p when p.EndsWith("3"):
                return $"{number}rd";
            default:
                return $"{number}th";
        }
    }

    public static string Ordinals3(this int number)
    {
        switch (number % 100)
        {
            case 11:
            case 12:
            case 13:
                return $"{number}th";
        }
        switch (number % 10)
        {
            case 1:
                return $"{number}st";
            case 2:
                return $"{number}nd";
            case 3:
                return $"{number}rd";
            default:
                return $"{number}th";
        }
    }

    public static string Ordinals4(this int number)
    {
        var ones = number % 10;
        var tens = Math.Floor(number / 10f) % 10;
        if (tens == 1)
        {
            return $"{number}th";
        }
        switch (ones)
        {
            case 1:
                return $"{number}st";
            case 2:
                return $"{number}nd";
            case 3:
                return $"{number}rd";
            default:
                return $"{number}th";
        }
    }
}

A: The Humanizer nuget package will provide helper methods for you. Disclaimer: I am a contributor to this project.

Ordinalize turns a number into an ordinal string used to denote the position in an ordered sequence such as 1st, 2nd, 3rd, 4th:

1.Ordinalize() => "1st"
5.Ordinalize() => "5th"

You can also call Ordinalize on a numeric string and achieve the same result:

"21".Ordinalize() => "21st"

Ordinalize also supports grammatical gender for both forms. You can pass an argument to Ordinalize to specify which gender the number should be outputted in.
The possible values are GrammaticalGender.Masculine, GrammaticalGender.Feminine and GrammaticalGender.Neuter: // for Brazilian Portuguese locale 1.Ordinalize(GrammaticalGender.Masculine) => "1º" 1.Ordinalize(GrammaticalGender.Feminine) => "1ª" 1.Ordinalize(GrammaticalGender.Neuter) => "1º" "2".Ordinalize(GrammaticalGender.Masculine) => "2º" "2".Ordinalize(GrammaticalGender.Feminine) => "2ª" "2".Ordinalize(GrammaticalGender.Neuter) => "2º" Obviously this only applies to some cultures. For others passing gender in or not passing at all doesn't make any difference in the result. In addition, Ordinalize supports variations some cultures apply depending on the position of the ordinalized number in a sentence. Use the argument wordForm to get one result or another. Possible values are WordForm.Abbreviation and WordForm.Normal. You can combine wordForm argument with gender but passing this argument in when it is not applicable will not make any difference in the result. // Spanish locale 1.Ordinalize(WordForm.Abbreviation) => "1.er" // As in "Vivo en el 1.er piso" 1.Ordinalize(WordForm.Normal) => "1.º" // As in "He llegado el 1º" "3".Ordinalize(GrammaticalGender.Feminine, WordForm.Abbreviation) => "3.ª" "3".Ordinalize(GrammaticalGender.Feminine, WordForm.Normal) => "3.ª" "3".Ordinalize(GrammaticalGender.Masculine, WordForm.Abbreviation) => "3.er" "3".Ordinalize(GrammaticalGender.Masculine, WordForm.Normal) => "3.º" If you want to go deeper, check those test cases: OrdinalizeTests.cs A: The accepted answer with switch-expressions and pattern-matching from c# 8 and 9. No unneeded string conversion or allocations. string.Concat(number, number < 0 ? "" : (number % 100) switch { 11 or 12 or 13 => "th", int n => (n % 10) switch { 1 => "st", 2 => "nd", 3 => "rd", _ => "th", } }) Or as unfriendly one-liner: $"{number}{(number < 0 ? "" : (number % 100) switch { 11 or 12 or 13 => "th", int n => (n % 10) switch { 1 => "st", 2 => "nd", 3 => "rd", _ => "th" }})}" A: FWIW, for MS-SQL, this expression will do the job. Keep the first WHEN (WHEN num % 100 IN (11, 12, 13) THEN 'th') as the first one in the list, as this relies upon being tried before the others. CASE WHEN num % 100 IN (11, 12, 13) THEN 'th' -- must be tried first WHEN num % 10 = 1 THEN 'st' WHEN num % 10 = 2 THEN 'nd' WHEN num % 10 = 3 THEN 'rd' ELSE 'th' END AS Ordinal For Excel : =MID("thstndrdth",MIN(9,2*RIGHT(A1)*(MOD(A1-11,100)>2)+1),2) The expression (MOD(A1-11,100)>2) is TRUE (1) for all numbers except any ending in 11,12,13 (FALSE = 0). So 2 * RIGHT(A1) * (MOD(A1-11,100)>2) +1) ends up as 1 for 11/12/13, otherwise : 1 evaluates to 3 2 to 5, 3 to 7 others : 9 - and the required 2 characters are selected from "thstndrdth" starting from that position. If you really want to convert that fairly directly to SQL, this worked for me for a handful of test values : DECLARE @n as int SET @n=13 SELECT SubString( 'thstndrdth' , (SELECT MIN(value) FROM (SELECT 9 as value UNION SELECT 1+ (2* (ABS(@n) % 10) * CASE WHEN ((ABS(@n)+89) % 100)>2 THEN 1 ELSE 0 END) ) AS Mins ) , 2 ) A: This is the implementation in dart and can be modified according to the language. 
String getOrdinalSuffix(int num){ if (num.toString().endsWith("11")) return "th"; if (num.toString().endsWith("12")) return "th"; if (num.toString().endsWith("13")) return "th"; if (num.toString().endsWith("1")) return "st"; if (num.toString().endsWith("2")) return "nd"; if (num.toString().endsWith("3")) return "rd"; return "th"; } A: Another one-liner, but without comparisons by only indexing the regex result into an array. public static string GetOrdinalSuffix(int input) { return new []{"th", "st", "nd", "rd"}[Convert.ToInt32("0" + Regex.Match(input.ToString(), "(?<!1)[1-3]$").Value)]; } The PowerShell version can be shortened further: function ord($num) { return ('th','st','nd','rd')[[int]($num -match '(?<!1)[1-3]$') * $matches[0]] } A: Another 1 liner. public static string Ordinal(this int n) { return n + (new [] {"st","nd","rd" }.ElementAtOrDefault((((n + 90) % 100 - 10) % 10 - 1)) ?? "th"); } A: Another alternative that I used based on all the other suggestions, but requires no special casing: public static string DateSuffix(int day) { if (day == 11 | day == 12 | day == 13) return "th"; Math.DivRem(day, 10, out day); switch (day) { case 1: return "st"; case 2: return "nd"; case 3: return "rd"; default: return "th"; } } A: Here is the DateTime Extension class. Copy, Paste & Enjoy public static class DateTimeExtensions { public static string ToStringWithOrdinal(this DateTime d) { var result = ""; bool bReturn = false; switch (d.Day % 100) { case 11: case 12: case 13: result = d.ToString("dd'th' MMMM yyyy"); bReturn = true; break; } if (!bReturn) { switch (d.Day % 10) { case 1: result = d.ToString("dd'st' MMMM yyyy"); break; case 2: result = d.ToString("dd'nd' MMMM yyyy"); break; case 3: result = d.ToString("dd'rd' MMMM yyyy"); break; default: result = d.ToString("dd'th' MMMM yyyy"); break; } } if (result.StartsWith("0")) result = result.Substring(1); return result; } } Result : 9th October 2014
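A quick usage sketch for the extension above (assuming the DateTimeExtensions class is referenced):

using System;

class Demo
{
    static void Main()
    {
        var d = new DateTime(2014, 10, 9);
        Console.WriteLine(d.ToStringWithOrdinal()); // 9th October 2014 (in an English culture)
    }
}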
{ "language": "en", "url": "https://stackoverflow.com/questions/20156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "225" }
Q: C# application detected as a virus Regarding the same program as my question a few minutes ago... I added a setup project and built an MSI for the program (just to see if I could figure it out) and it works great except for one thing. When I tried to install it on my parents' laptop, their antivirus (the free Avast Home Edition) set off an alarm and accused my setup.exe of being a Trojan. Does anyone have any idea why this would be happening and how I can fix it?

A: Indeed, boot from a clean CD (use a known good machine to build BartPE or something similar) and scan your machine thoroughly. Another good thing to check, though, would be exactly which virus Avast! thinks your program is. Once you know that, you should be able to look it up in one of the virus databases and ensure that your software can't contain it. The odds are that Avast! is just getting a false positive for some reason, and I don't know that there's much you can do about that other than contacting Avast! and hoping for a reply.

A: I would do what jsight suggested and make sure that your machine did not have a virus. I would also submit the .msi file to Avast's online scanner and see what they identified as being in your package. If that reports your file as containing a trojan, contact Avast and ask them to verify that your .msi package does contain a trojan. If it doesn't contain a trojan, find out from Avast what triggered their scanner. There may be something in your code that matches a pattern that Avast looks for. They may be able to adjust their pattern to ignore your file, or you could tweak your code so that it doesn't trigger their scanner.

A: I don't know Avast, but in Kaspersky, if the configuration is set to high, almost every installer fires an alarm (iTunes, Windows Update, everything), especially if the installer modifies a registry key or opens a port. If Avast checks for behaviour and your program opens a port, that's probably the cause.

A: Rebuild the setup file and check the exact file size. Then check the exact file size of the "suspected" setup file. If the source code hasn't changed and the two file sizes are different, there's a pretty good chance it got contaminated in transit. I'd do that as a bit of a sanity check first.

A: The very first thing to do would be to scan your build PC for viruses.
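Following on from the file-size suggestion above, a more reliable way to rule out in-transit corruption is to compare a cryptographic hash of the installer on the build machine and on the target machine. A minimal C# sketch (the path is just a placeholder):

using System;
using System.IO;
using System.Security.Cryptography;

class InstallerCheck
{
    static void Main(string[] args)
    {
        // Path to the setup file to verify; pass it on the command line.
        string path = args.Length > 0 ? args[0] : "setup.msi";

        using (var stream = File.OpenRead(path))
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(stream);
            Console.WriteLine("Size:   {0} bytes", new FileInfo(path).Length);
            Console.WriteLine("SHA256: {0}", BitConverter.ToString(hash).Replace("-", ""));
        }
    }
}

If the two hashes match, the file was not corrupted in transit and the alarm is almost certainly a false positive to take up with the vendor.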
{ "language": "en", "url": "https://stackoverflow.com/questions/20168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: MSTest and NHibernate Does anyone have any experience getting MSTest to copy hibernate.cfg.xml properly to the output directory? All my MSTest tests fail with a "cannot find hibernate.cfg.xml" error (I have it set to Copy Always), but my MbUnit tests pass.

A: Edit localtestrun.testrunconfig (in your solution items folder). Select the deployment option and add the hibernate.cfg.xml file to the list of additional files to deploy. The file should then get copied to the output directory where the test gets run.

A: Ran into the same thing a few weeks ago - this is actually a bug with MSTest - I believe this was corrected with the recent Service Pack release (even though it still says "Active"). If not, all I had to do was reference my hibernate.cfg.xml directly (sloppy, but works for testing - this is referencing the hibernate.cfg.xml file in my tests project from the "TestResults" folder):

try
{
    sessionFactory = new Configuration()
        .Configure()
        .BuildSessionFactory();
}
// Assume we are in "MSTest mode"
catch (Exception)
{
    sessionFactory = new Configuration()
        .Configure(@"..\..\..\Program.Tests\hibernate.cfg.xml")
        .BuildSessionFactory();
}

A: You can try adding the DeploymentItemAttribute to one of your tests, or edit your .testrunconfig file and add the file to the Deployment list.

A: A workaround rather than an answer: NHibernate supports programmatic configuration, so you can write your own properties/config file and parse it into an NHibernate configuration on startup.

A: I like to mark my NHibernate config files as Embedded Resources, and use the Configuration.Configure() overload which reads config files from the assembly's resources.
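A combined sketch of the last two suggestions - deploying the file per-test with DeploymentItem, or skipping file deployment entirely by embedding the config. The resource name MyApp.Tests.hibernate.cfg.xml is hypothetical; it is the test assembly's default namespace plus the file name, which you would adjust to your project:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using NHibernate;
using NHibernate.Cfg;

[TestClass]
public class PersistenceTests
{
    // Option 1: ask MSTest to copy the file next to the test run.
    [TestMethod]
    [DeploymentItem("hibernate.cfg.xml")]
    public void CanBuildSessionFactoryFromDeployedFile()
    {
        ISessionFactory factory = new Configuration()
            .Configure() // looks for hibernate.cfg.xml in the run directory
            .BuildSessionFactory();
        Assert.IsNotNull(factory);
    }

    // Option 2: read the config from an embedded resource, so nothing needs copying.
    [TestMethod]
    public void CanBuildSessionFactoryFromEmbeddedResource()
    {
        ISessionFactory factory = new Configuration()
            .Configure(typeof(PersistenceTests).Assembly, "MyApp.Tests.hibernate.cfg.xml")
            .BuildSessionFactory();
        Assert.IsNotNull(factory);
    }
}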
{ "language": "en", "url": "https://stackoverflow.com/questions/20173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is there a way to make a constructor only visible to a parent class in C#? I have a collection of classes that inherit from an abstract class I created. I'd like to use the abstract class as a factory for creating instances of concrete implementations of my abstract class. Is there any way to hide a constructor from all code except a parent class. I'd like to do this basically public abstract class AbstractClass { public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return new ConcreteClassA(); if (args == "b") return new ConcreteClassB(); } } public class ConcreteClassA : AbstractClass { } public class ConcreteClassB : AbstractClass { } But I want to prevent anyone from directly instantiating the 2 concrete classes. I want to ensure that only the MakeAbstractClass() method can instantiate the base classes. Is there any way to do this? UPDATE I don't need to access any specific methods of ConcreteClassA or B from outside of the Abstract class. I only need the public methods my Abstract class provides. I don't really need to prevent the Concrete classes from being instantiated, I'm just trying to avoid it since they provide no new public interfaces, just different implementations of some very specific things internal to the abstract class. To me, the simplest solution is to make child classes as samjudson mentioned. I'd like to avoid this however since it would make my abstract class' file a lot bigger than I'd like it to be. I'd rather keep classes split out over a few files for organization. I guess there's no easy solution to this... A: To me, the simplest solution is to make child classes as samjudson mentioned. I'd like to avoid this however since it would make my abstract class' file a lot bigger than I'd like it to be. I'd rather keep classes split out over a few files for organization. No problem, just use partial keyword and you can split your inner classes into as many files as you wish. You don't have to keep it in the same file. Previous answer: It's possible but only with reflection public abstract class AbstractClass { public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return (AbstractClass)Activator.CreateInstance(typeof(ConcreteClassA), true); if (args == "b") return (AbstractClass)Activator.CreateInstance(typeof(ConcreteClassB), true); } } public class ConcreteClassA : AbstractClass { private ConcreteClassA() { } } public class ConcreteClassB : AbstractClass { private ConcreteClassB() { } } and here is another pattern, without ugly MakeAbstractClass(string args) public abstract class AbstractClass<T> where T : AbstractClass<T> { public static T MakeAbstractClass() { T value = (T)Activator.CreateInstance(typeof(T), true); // your processing logic return value; } } public class ConcreteClassA : AbstractClass<ConcreteClassA> { private ConcreteClassA() { } } public class ConcreteClassB : AbstractClass<ConcreteClassB> { private ConcreteClassB() { } } A: If the classes are in the same assembly, can you not make the constructors internal? A: You can make the sub classes child classes, something like this: public abstract class AbstractClass { public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return new ConcreteClassA(); if (args == "b") return new ConcreteClassB(); } private class ConcreteClassA : AbstractClass { } private class ConcreteClassB : AbstractClass { } } @Vaibhav This does indeed mean that the classes are also hidden. 
But this is as far as I am aware the only way to completely hide the constructor. Edit: As others have mentioned, the same thing can be accomplished using Reflection, which might actually be closer to what you would like to be the case - for example, the above method relies on the concrete classes being inside the same file as the Abstract class, which probably isn't very convenient. Having said that, this way is a nice 'Hack', and good if the number and complexity of the concrete classes is low. A: No, I don't think we can do that. A: Following on from the accepted answer, if you had a public interface and made the private classes implement the interface, you could then return a pointer to the interface and anyone outside of your parent abstract class could then use them (whilst still hiding the child classes). A: Do you actually need to do this? If you're using some kind of pseudo factory pattern without a true design need for it, you're only going to make your code harder to understand, maintain and extend. If you don't need to do this, just implement a true factory pattern. Or, in more ALT.NET style, use a DI/IoC framework. A: Can't you use the keyword partial for splitting the code for a class into many files? A: If you are using this class in a separate service assembly, you can use the internal keyword. public class AbstractClass { public AbstractClass ClassFactory(string args) { switch (args) { case "A": return new ConcreteClassA(); case "B": return new ConcreteClassB(); default: return null; } } } public class ConcreteClassA : AbstractClass { internal ConcreteClassA(){ } } public class ConcreteClassB : AbstractClass { internal ConcreteClassB() {} } A: What you need to do is this to prevent the default constructor from being created. The internal can be changed to public if the classes are not in the same assembly. public abstract class AbstractClass{ public static AbstractClass MakeAbstractClass(string args) { if (args == "a") return ConcreteClassA.GetConcreteClassA(); if (args == "b") return ConcreteClassB.GetConcreteClassB(); } } public class ConcreteClassA : AbstractClass { private ConcreteClassA(){} internal static ConcreteClassA GetConcreteClassA(){ return new ConcreteClassA(); } } public class ConcreteClassB : AbstractClass { private ConcreteClassB(){} internal static ConcreteClassB GetConcreteClassB(){ return new ConcreteClassB(); } }
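For illustration, here is a minimal sketch of how the accepted answer's two ideas combine: the concrete classes stay nested (a private nested class cannot even be named by outside code, so nothing outside the parent can construct it), while the partial keyword spreads them across separate files. The file names are only a suggestion, and usings are omitted.

// File: AbstractClass.cs
public abstract partial class AbstractClass
{
    public static AbstractClass MakeAbstractClass(string args)
    {
        if (args == "a") return new ConcreteClassA();
        if (args == "b") return new ConcreteClassB();
        throw new System.ArgumentException("Unknown concrete type", "args");
    }
}

// File: ConcreteClassA.cs
public abstract partial class AbstractClass
{
    // private nested class: only members of AbstractClass can refer to it
    private class ConcreteClassA : AbstractClass { }
}

// File: ConcreteClassB.cs
public abstract partial class AbstractClass
{
    private class ConcreteClassB : AbstractClass { }
}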
{ "language": "en", "url": "https://stackoverflow.com/questions/20185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How does the ASP.NET "Yellow Screen of Death" display code? I thought .NET code gets compiled into MSIL, so I always wondered how Yellow Screens produce the faulty code. If it's executing the compiled code, how is the compiler able to produce code from the source files in the error message? Feel free to edit this question/title, I know it doesn't really make sense. A: A .NET assembly is compiled with metadata about the bytecode included that allows easy decompilation of the code - that's how tools like .NET Reflector work. The PDB files are debug symbols only - the difference in the Yellow Screen Of Death is that you'll get line numbers in the stack trace. In other words, you'd get the code, even if the PDB files were missing. A: Like this. I've made a few changes, but it's pretty close to exactly what MS is doing. // reverse the stack private static Stack<Exception> GenerateExceptionStack(Exception exception) { var exceptionStack = new Stack<Exception>(); // create exception stack for (Exception e = exception; e != null; e = e.InnerException) { exceptionStack.Push(e); } return exceptionStack; } // render stack private static string GenerateFormattedStackTrace(Stack<Exception> exceptionStack) { StringBuilder trace = new StringBuilder(); try { // loop through exception stack while (exceptionStack.Count != 0) { trace.Append("\r\n"); // render exception type and message Exception ex = exceptionStack.Pop(); trace.Append("[" + ex.GetType().Name); if (!string.IsNullOrEmpty(ex.Message)) { trace.Append(":" + ex.Message); } trace.Append("]\r\n"); // Load stack trace StackTrace stackTrace = new StackTrace(ex, true); for (int frame = 0; frame < stackTrace.FrameCount; frame++) { StackFrame stackFrame = stackTrace.GetFrame(frame); MethodBase method = stackFrame.GetMethod(); Type declaringType = method.DeclaringType; string declaringNamespace = ""; // get declaring type information if (declaringType != null) { declaringNamespace = declaringType.Namespace ?? ""; } // add namespace if (!string.IsNullOrEmpty(declaringNamespace)) { declaringNamespace += "."; } // add method if (declaringType == null) { trace.Append(" " + method.Name + "("); } else { trace.Append(" " + declaringNamespace + declaringType.Name + "." + method.Name + "("); } // get parameter information ParameterInfo[] parameters = method.GetParameters(); for (int paramIndex = 0; paramIndex < parameters.Length; paramIndex++) { trace.Append(((paramIndex != 0) ? "," : "") + parameters[paramIndex].ParameterType.Name + " " + parameters[paramIndex].Name); } trace.Append(")"); // get file information (present only when a PDB is available) string fileName = stackFrame.GetFileName() ?? ""; if (!string.IsNullOrEmpty(fileName)) { trace.Append(string.Concat(new object[] { " in ", fileName, ":", stackFrame.GetFileLineNumber() })); } else { trace.Append(" + " + stackFrame.GetNativeOffset()); } trace.Append("\r\n"); } } } catch { } if (trace.Length == 0) { trace.Append("[stack trace unavailable]"); } // return html safe stack trace return HttpUtility.HtmlEncode(trace.ToString()).Replace(Environment.NewLine, "<br>"); } A: I believe the PDB files that are output when you do a debug build contain a reference to the location of the source code files. A: I think this is down to the debug information that can be included with the compiled assemblies... (although I could definitely be wrong) A: I believe the information that maps the source to the MSIL is stored in the PDB file. If this is not present then that mapping won't happen.
It is this lookup that makes an exception such an expensive operation ("exceptions are for exceptional situations").
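As a small illustration of the PDB point (this is a sketch, not the framework's actual code): you can see for yourself whether source mappings are available by asking for file info on a caught exception.

using System;
using System.Diagnostics;

class PdbDemo
{
    static void Main()
    {
        try
        {
            throw new InvalidOperationException("demo");
        }
        catch (Exception ex)
        {
            // 'true' requests file name / line / column info, which needs the PDB
            StackTrace st = new StackTrace(ex, true);
            StackFrame frame = st.GetFrame(0);
            // GetFileName() returns null when no PDB was found for the assembly
            Console.WriteLine(frame.GetFileName() ?? "(no PDB - no source mapping)");
            Console.WriteLine(frame.GetFileLineNumber()); // 0 without a PDB
        }
    }
}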
{ "language": "en", "url": "https://stackoverflow.com/questions/20198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Broken chart images in Crystal Reports in web application I have a collection of crystal reports that contains charts. They look fine locally and when printed, but when viewing them through a web application using a CrystalReportViewer the charts display as broken images. Viewing the properties of the broken image shows the URL as ...CrystalImageHandler.aspx?dynamicimage=cr_tmp_image_8d12a01f-b336-4b8b-b0c7-83d9571d87e4.png. I have tried adding <httpHandlers> <add verb="GET" path="CrystalImageHandler.aspx" type="CrystalDecisions.Web.CrystalImageHandler,CrystalDecisions.Web, Version=10.5.3700.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/> </httpHandlers> to the web.config as suggested via a Google search but that has not resolved my issue. A: Maybe a permissions issue on the Crystal libraries?? I've run into that before with Crystal, not specifically the ImageHandler though.
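One more thing worth checking, offered as an assumption rather than a confirmed fix: if the application runs under IIS 7 in integrated pipeline mode, handlers registered only under <httpHandlers> are ignored, and the registration would also need to go under <system.webServer>, something like:

<system.webServer>
  <handlers>
    <add name="CrystalImageHandler" verb="GET" path="CrystalImageHandler.aspx"
         type="CrystalDecisions.Web.CrystalImageHandler, CrystalDecisions.Web, Version=10.5.3700.0, Culture=neutral, PublicKeyToken=692fbea5521e1304" />
  </handlers>
</system.webServer>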
{ "language": "en", "url": "https://stackoverflow.com/questions/20201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Where can I find decent Visio templates/diagrams for software architecture? Anyone have any good URLs for templates or diagram examples in Visio 2007 to be used in software architecture? A: Beautiful set of stencils from Microsoft here. A: Here is a link to a Visio Stencil and Template for UML 2.0. A: There should be templates already included in Visio 2007 for software architecture but you might want to check out Visio 2007 templates. A: For SOA system architecture, I use the SOACP Visio stencil. It provides the symbols that are used in Thomas Erl's SOA book series. I use the Visio Network and Database stencils to model most other requirements.
{ "language": "en", "url": "https://stackoverflow.com/questions/20206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Does the Microsoft Report Viewer Redistributable 2008 really require .NET Framework version 3.5? I'm packaging up a .NET 2.0 based web app for deployment through a Windows Installer based package. Our app uses Report Viewer 2008 and I'm including the Microsoft Report Viewer Redistributable 2008 installer. When I check the download page for Report Viewer 2008, it lists .NET 3.5 as a requirement. Is having .NET 3.5 installed really needed for Report Viewer 2008? We targeted .NET 2.0 for our app; there isn't anything in our code that would use the 3.0 or 3.5 Frameworks. We are in the middle of testing and everything seems to be working without 3.5, but I don't want to miss an edge condition and cause an error for a customer because he was missing a prerequisite runtime package. A: Keep in mind that MSFT might be requiring the 3.5 Framework so they can write against it in future updates/releases, which might place your app in an unsupported (by MSFT) state. A: Using Reflector you can see that Microsoft.ReportViewer.Common.dll has a dependency on "Microsoft.Build.Framework, Version=3.5.0.0" and "Microsoft.Build.Utilities.v3.5, Version=3.5.0.0". So strictly speaking it does have a 3.5 requirement. But if the reporting functionality you use never executes the code that uses/loads these, then you might just be OK :-) A: So far testing with or without the .NET 3.5 Framework works as expected. My installer has the user install version 2.0 of the Framework and everything works as expected. My concern is that 3.5 is listed as a prerequisite on the Report Viewer download page. A: If it works without a hitch then you don't need the .NET 3.5 Framework for now. Installing the .NET 3.5 Framework is easy enough to do along with later versions of your software if and only if your software stops working at that point. A: We have deployed ReportViewer 2008 with only .NET v2, no problems so far.
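If you want to repeat the Reflector check from the answer above against the exact binaries you ship, a small sketch like this lists the references of the ReportViewer assembly. The file path is hypothetical - point it at whatever your installer actually deploys.

using System;
using System.Reflection;

class ReferenceDump
{
    static void Main()
    {
        // Path is an assumption - use the copy your installer deploys
        Assembly asm = Assembly.LoadFrom(@"bin\Microsoft.ReportViewer.Common.dll");
        foreach (AssemblyName dep in asm.GetReferencedAssemblies())
        {
            // Look for Microsoft.Build.* entries with Version=3.5.0.0
            Console.WriteLine(dep.FullName);
        }
    }
}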
{ "language": "en", "url": "https://stackoverflow.com/questions/20207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I use 3DES encryption/decryption in Java? Every method I write to encode a string in Java using 3DES can't be decrypted back to the original string. Does anyone have a simple code snippet that can just encode and then decode the string back to the original string? I know I'm making a very silly mistake somewhere in this code. Here's what I've been working with so far: ** note, I am not returning the BASE64 text from the encrypt method, and I am not base64 un-encoding in the decrypt method because I was trying to see if I was making a mistake in the BASE64 part of the puzzle. public class TripleDESTest { public static void main(String[] args) { String text = "kyle boon"; byte[] codedtext = new TripleDESTest().encrypt(text); String decodedtext = new TripleDESTest().decrypt(codedtext); System.out.println(codedtext); System.out.println(decodedtext); } public byte[] encrypt(String message) { try { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9".getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); cipher.init(Cipher.ENCRYPT_MODE, key, iv); final byte[] plainTextBytes = message.getBytes("utf-8"); final byte[] cipherText = cipher.doFinal(plainTextBytes); final String encodedCipherText = new sun.misc.BASE64Encoder().encode(cipherText); return cipherText; } catch (java.security.InvalidAlgorithmParameterException e) { System.out.println("Invalid Algorithm"); } catch (javax.crypto.NoSuchPaddingException e) { System.out.println("No Such Padding"); } catch (java.security.NoSuchAlgorithmException e) { System.out.println("No Such Algorithm"); } catch (java.security.InvalidKeyException e) { System.out.println("Invalid Key"); } catch (BadPaddingException e) { System.out.println("Invalid Key");} catch (IllegalBlockSizeException e) { System.out.println("Invalid Key");} catch (UnsupportedEncodingException e) { System.out.println("Invalid Key");} return null; } public String decrypt(byte[] message) { try { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9".getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher decipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); decipher.init(Cipher.DECRYPT_MODE, key, iv); //final byte[] encData = new sun.misc.BASE64Decoder().decodeBuffer(message); final byte[] plainText = decipher.doFinal(message); return plainText.toString(); } catch (java.security.InvalidAlgorithmParameterException e) { System.out.println("Invalid Algorithm"); } catch (javax.crypto.NoSuchPaddingException e) { System.out.println("No Such Padding"); } catch (java.security.NoSuchAlgorithmException e) { System.out.println("No Such Algorithm"); } catch (java.security.InvalidKeyException e) { System.out.println("Invalid Key"); } catch (BadPaddingException e) { System.out.println("Invalid Key");} catch (IllegalBlockSizeException e) { System.out.println("Invalid Key");} catch (UnsupportedEncodingException e) { System.out.println("Invalid Key");} catch (IOException e) { 
// TODO Auto-generated catch block e.printStackTrace(); } return null; } } A: Your code was fine except for the Base 64 encoding bit (which you mentioned was a test), the reason the output may not have made sense is that you were displaying a raw byte array (doing toString() on a byte array returns its internal Java reference, not the String representation of the contents). Here's a version that's just a teeny bit cleaned up and which prints "kyle boon" as the decoded string: import java.security.MessageDigest; import java.util.Arrays; import javax.crypto.Cipher; import javax.crypto.SecretKey; import javax.crypto.spec.IvParameterSpec; import javax.crypto.spec.SecretKeySpec; public class TripleDESTest { public static void main(String[] args) throws Exception { String text = "kyle boon"; byte[] codedtext = new TripleDESTest().encrypt(text); String decodedtext = new TripleDESTest().decrypt(codedtext); System.out.println(codedtext); // this is a byte array, you'll just see a reference to an array System.out.println(decodedtext); // This correctly shows "kyle boon" } public byte[] encrypt(String message) throws Exception { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9" .getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); cipher.init(Cipher.ENCRYPT_MODE, key, iv); final byte[] plainTextBytes = message.getBytes("utf-8"); final byte[] cipherText = cipher.doFinal(plainTextBytes); // final String encodedCipherText = new sun.misc.BASE64Encoder() // .encode(cipherText); return cipherText; } public String decrypt(byte[] message) throws Exception { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9" .getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher decipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); decipher.init(Cipher.DECRYPT_MODE, key, iv); // final byte[] encData = new // sun.misc.BASE64Decoder().decodeBuffer(message); final byte[] plainText = decipher.doFinal(message); return new String(plainText, "UTF-8"); } } A: Here's a very simply static encrypt/decrypt class biased on the Bouncy Castle no padding example by Jose Luis Montes de Oca. This one is using "DESede/ECB/PKCS7Padding" so I don't have to bother manually padding. package com.zenimax.encryption; import java.security.InvalidKeyException; import java.security.NoSuchAlgorithmException; import java.security.NoSuchProviderException; import java.security.Security; import javax.crypto.BadPaddingException; import javax.crypto.Cipher; import javax.crypto.IllegalBlockSizeException; import javax.crypto.NoSuchPaddingException; import javax.crypto.SecretKey; import javax.crypto.spec.SecretKeySpec; import org.bouncycastle.jce.provider.BouncyCastleProvider; /** * * @author Matthew H. 
Wagner */ public class TripleDesBouncyCastle { private static String TRIPLE_DES_TRANSFORMATION = "DESede/ECB/PKCS7Padding"; private static String ALGORITHM = "DESede"; private static String BOUNCY_CASTLE_PROVIDER = "BC"; private static void init() { Security.addProvider(new BouncyCastleProvider()); } public static byte[] encode(byte[] input, byte[] key) throws IllegalBlockSizeException, BadPaddingException, NoSuchAlgorithmException, NoSuchProviderException, NoSuchPaddingException, InvalidKeyException { init(); SecretKey keySpec = new SecretKeySpec(key, ALGORITHM); Cipher encrypter = Cipher.getInstance(TRIPLE_DES_TRANSFORMATION, BOUNCY_CASTLE_PROVIDER); encrypter.init(Cipher.ENCRYPT_MODE, keySpec); return encrypter.doFinal(input); } public static byte[] decode(byte[] input, byte[] key) throws IllegalBlockSizeException, BadPaddingException, NoSuchAlgorithmException, NoSuchProviderException, NoSuchPaddingException, InvalidKeyException { init(); SecretKey keySpec = new SecretKeySpec(key, ALGORITHM); Cipher decrypter = Cipher.getInstance(TRIPLE_DES_TRANSFORMATION, BOUNCY_CASTLE_PROVIDER); decrypter.init(Cipher.DECRYPT_MODE, keySpec); return decrypter.doFinal(input); } } A: Here is a solution using the javax.crypto library and the apache commons codec library for encoding and decoding in Base64: import java.security.spec.KeySpec; import javax.crypto.Cipher; import javax.crypto.SecretKey; import javax.crypto.SecretKeyFactory; import javax.crypto.spec.DESedeKeySpec; import org.apache.commons.codec.binary.Base64; public class TrippleDes { private static final String UNICODE_FORMAT = "UTF8"; public static final String DESEDE_ENCRYPTION_SCHEME = "DESede"; private KeySpec ks; private SecretKeyFactory skf; private Cipher cipher; byte[] arrayBytes; private String myEncryptionKey; private String myEncryptionScheme; SecretKey key; public TrippleDes() throws Exception { myEncryptionKey = "ThisIsSpartaThisIsSparta"; myEncryptionScheme = DESEDE_ENCRYPTION_SCHEME; arrayBytes = myEncryptionKey.getBytes(UNICODE_FORMAT); ks = new DESedeKeySpec(arrayBytes); skf = SecretKeyFactory.getInstance(myEncryptionScheme); cipher = Cipher.getInstance(myEncryptionScheme); key = skf.generateSecret(ks); } public String encrypt(String unencryptedString) { String encryptedString = null; try { cipher.init(Cipher.ENCRYPT_MODE, key); byte[] plainText = unencryptedString.getBytes(UNICODE_FORMAT); byte[] encryptedText = cipher.doFinal(plainText); encryptedString = new String(Base64.encodeBase64(encryptedText)); } catch (Exception e) { e.printStackTrace(); } return encryptedString; } public String decrypt(String encryptedString) { String decryptedText=null; try { cipher.init(Cipher.DECRYPT_MODE, key); byte[] encryptedText = Base64.decodeBase64(encryptedString); byte[] plainText = cipher.doFinal(encryptedText); decryptedText= new String(plainText); } catch (Exception e) { e.printStackTrace(); } return decryptedText; } public static void main(String args []) throws Exception { TrippleDes td= new TrippleDes(); String target="imparator"; String encrypted=td.encrypt(target); String decrypted=td.decrypt(encrypted); System.out.println("String To Encrypt: "+ target); System.out.println("Encrypted String:" + encrypted); System.out.println("Decrypted String:" + decrypted); } } Running the above program results with the following output: String To Encrypt: imparator Encrypted String:FdBNaYWfjpWN9eYghMpbRA== Decrypted String:imparator A: I had hard times figuring it out myself and this post helped me to find the right answer for my case. 
When working with financial messaging as ISO-8583 the 3DES requirements are quite specific, so for my especial case the "DESede/CBC/PKCS5Padding" combinations wasn't solving the problem. After some comparative testing of my results against some 3DES calculators designed for the financial world I found the the value "DESede/ECB/Nopadding" is more suited for the the specific task. Here is a demo implementation of my TripleDes class (using the Bouncy Castle provider) import java.security.InvalidKeyException; import java.security.NoSuchAlgorithmException; import java.security.NoSuchProviderException; import java.security.Security; import javax.crypto.BadPaddingException; import javax.crypto.Cipher; import javax.crypto.IllegalBlockSizeException; import javax.crypto.NoSuchPaddingException; import javax.crypto.SecretKey; import javax.crypto.spec.SecretKeySpec; import org.bouncycastle.jce.provider.BouncyCastleProvider; /** * * @author Jose Luis Montes de Oca */ public class TripleDesCipher { private static String TRIPLE_DES_TRANSFORMATION = "DESede/ECB/Nopadding"; private static String ALGORITHM = "DESede"; private static String BOUNCY_CASTLE_PROVIDER = "BC"; private Cipher encrypter; private Cipher decrypter; public TripleDesCipher(byte[] key) throws NoSuchAlgorithmException, NoSuchProviderException, NoSuchPaddingException, InvalidKeyException { Security.addProvider(new BouncyCastleProvider()); SecretKey keySpec = new SecretKeySpec(key, ALGORITHM); encrypter = Cipher.getInstance(TRIPLE_DES_TRANSFORMATION, BOUNCY_CASTLE_PROVIDER); encrypter.init(Cipher.ENCRYPT_MODE, keySpec); decrypter = Cipher.getInstance(TRIPLE_DES_TRANSFORMATION, BOUNCY_CASTLE_PROVIDER); decrypter.init(Cipher.DECRYPT_MODE, keySpec); } public byte[] encode(byte[] input) throws IllegalBlockSizeException, BadPaddingException { return encrypter.doFinal(input); } public byte[] decode(byte[] input) throws IllegalBlockSizeException, BadPaddingException { return decrypter.doFinal(input); } } A: private static final String UNICODE_FORMAT = "UTF8"; private static final String DESEDE_ENCRYPTION_SCHEME = "DESede"; private KeySpec ks; private SecretKeyFactory skf; private Cipher cipher; byte[] arrayBytes; private String encryptionSecretKey = "ThisIsSpartaThisIsSparta"; SecretKey key; public TripleDesEncryptDecrypt() throws Exception { convertStringToSecretKey(encryptionSecretKey); } public TripleDesEncryptDecrypt(String encryptionSecretKey) throws Exception { convertStringToSecretKey(encryptionSecretKey); } public SecretKey convertStringToSecretKey (String encryptionSecretKey) throws Exception { arrayBytes = encryptionSecretKey.getBytes(UNICODE_FORMAT); ks = new DESedeKeySpec(arrayBytes); skf = SecretKeyFactory.getInstance(DESEDE_ENCRYPTION_SCHEME); cipher = Cipher.getInstance(DESEDE_ENCRYPTION_SCHEME); key = skf.generateSecret(ks); return key; } /** * Encrypt without specifying secret key * * @param unencryptedString * @return String */ public String encrypt(String unencryptedString) { String encryptedString = null; try { cipher.init(Cipher.ENCRYPT_MODE, key); byte[] plainText = unencryptedString.getBytes(UNICODE_FORMAT); byte[] encryptedText = cipher.doFinal(plainText); encryptedString = new String(Base64.encodeBase64(encryptedText)); } catch (Exception e) { e.printStackTrace(); } return encryptedString; } /** * Encrypt with specified secret key * * @param unencryptedString * @return String */ public String encrypt(String encryptionSecretKey, String unencryptedString) { String encryptedString = null; try { key = 
convertStringToSecretKey(encryptionSecretKey); cipher.init(Cipher.ENCRYPT_MODE, key); byte[] plainText = unencryptedString.getBytes(UNICODE_FORMAT); byte[] encryptedText = cipher.doFinal(plainText); encryptedString = new String(Base64.encodeBase64(encryptedText)); } catch (Exception e) { e.printStackTrace(); } return encryptedString; } /** * Decrypt without specifying secret key * @param encryptedString * @return */ public String decrypt(String encryptedString) { String decryptedText=null; try { cipher.init(Cipher.DECRYPT_MODE, key); byte[] encryptedText = Base64.decodeBase64(encryptedString); byte[] plainText = cipher.doFinal(encryptedText); decryptedText= new String(plainText); } catch (Exception e) { e.printStackTrace(); } return decryptedText; } /** * Decrypt with specified secret key * @param encryptedString * @return */ public String decrypt(String encryptionSecretKey, String encryptedString) { String decryptedText=null; try { key = convertStringToSecretKey(encryptionSecretKey); cipher.init(Cipher.DECRYPT_MODE, key); byte[] encryptedText = Base64.decodeBase64(encryptedString); byte[] plainText = cipher.doFinal(encryptedText); decryptedText= new String(plainText); } catch (Exception e) { e.printStackTrace(); } return decryptedText; } A: import java.io.IOException; import java.io.UnsupportedEncodingException; import java.security.Key; import javax.crypto.Cipher; import javax.crypto.SecretKeyFactory; import javax.crypto.spec.DESedeKeySpec; import javax.crypto.spec.IvParameterSpec; import java.util.Base64; import java.util.Base64.Encoder; /** * * @author shivshankar pal * * this code is working properly. doing proper encription and decription note:- it will work only with jdk8 * * */ public class TDes { private static byte[] key = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02 }; private static byte[] keyiv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; public static String encode(String args) { System.out.println("plain data==> " + args); byte[] encoding; try { encoding = Base64.getEncoder().encode(args.getBytes("UTF-8")); System.out.println("Base64.encodeBase64==>" + new String(encoding)); byte[] str5 = des3EncodeCBC(key, keyiv, encoding); System.out.println("des3EncodeCBC==> " + new String(str5)); byte[] encoding1 = Base64.getEncoder().encode(str5); System.out.println("Base64.encodeBase64==> " + new String(encoding1)); return new String(encoding1); } catch (UnsupportedEncodingException e) { // TODO Auto-generated catch block e.printStackTrace(); } return null; } public static String decode(String args) { try { System.out.println("encrypted data==>" + new String(args.getBytes("UTF-8"))); byte[] decode = Base64.getDecoder().decode(args.getBytes("UTF-8")); System.out.println("Base64.decodeBase64(main encription)==>" + new String(decode)); byte[] str6 = des3DecodeCBC(key, keyiv, decode); System.out.println("des3DecodeCBC==>" + new String(str6)); String data=new String(str6); byte[] decode1 = Base64.getDecoder().decode(data.trim().getBytes("UTF-8")); System.out.println("plaintext==> " + new String(decode1)); return new String(decode1); } catch (UnsupportedEncodingException e) { // TODO Auto-generated catch block e.printStackTrace(); } return "u mistaken in try block"; } private static byte[] des3EncodeCBC(byte[] key, byte[] keyiv, byte[] data) { try { Key deskey = null; DESedeKeySpec spec = new DESedeKeySpec(key); SecretKeyFactory keyfactory = SecretKeyFactory.getInstance("desede"); 
deskey = keyfactory.generateSecret(spec); Cipher cipher = Cipher.getInstance("desede/CBC/PKCS5Padding"); IvParameterSpec ips = new IvParameterSpec(keyiv); cipher.init(Cipher.ENCRYPT_MODE, deskey, ips); byte[] bout = cipher.doFinal(data); return bout; } catch (Exception e) { System.out.println("methods qualified name" + e); } return null; } private static byte[] des3DecodeCBC(byte[] key, byte[] keyiv, byte[] data) { try { Key deskey = null; DESedeKeySpec spec = new DESedeKeySpec(key); SecretKeyFactory keyfactory = SecretKeyFactory.getInstance("desede"); deskey = keyfactory.generateSecret(spec); Cipher cipher = Cipher.getInstance("desede/CBC/NoPadding"); // PKCS5Padding or NoPadding IvParameterSpec ips = new IvParameterSpec(keyiv); cipher.init(Cipher.DECRYPT_MODE, deskey, ips); byte[] bout = cipher.doFinal(data); return bout; } catch (Exception e) { System.out.println("methods qualified name" + e); } return null; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/20227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: Audio player on Windows Mobile I'm trying to develop a specialized audio player for Windows Mobile devices (Professional ones), and I've run into a problem at once: there are no compressed-audio APIs on WM, or I was unable to find them in the documentation. Yes, there is the WM6 Sound API, but it cannot even pause playback or seek to a specified position. There is always Windows Media Player on a WM device, but I've not found its API documentation. So the question is: is there a simple way to play, pause, forward, rewind, get the playback position and get the audio file length for compressed audio in several popular formats? Any library? Platform APIs? Anything? A: This might be of no help at all, but the (very good) podcast player BeyondPod has a built-in player, based on Windows Media Player, and it's open source - so you could have a look at what API they are using. Obviously if they've written their own custom player, you won't be able to just copy it if you're writing a commercial app. But you could use it for API documentation if they're just calling through to some Media Player API. A: I've found quite a sufficient compressed audio playback library: FMOD. There is a WM version of it. And I've found a sample application on CodeProject to start with.
{ "language": "en", "url": "https://stackoverflow.com/questions/20233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Process raw HTTP request content I am doing an e-commerce solution in ASP.NET which uses PayPal's Website Payments Standard service. Together with that I use a service they offer (Payment Data Transfer) that sends you back order information after a user has completed a payment. The final thing I need to do is to parse the POST request from them and persist the info in it. The HTTP request's content is in this form: SUCCESS first_name=Jane+Doe last_name=Smith payment_status=Completed payer_email=janedoesmith%40hotmail.com payment_gross=3.99 mc_currency=USD custom=For+the+purchase+of+the+rare+book+Green+Eggs+%26+Ham Basically I want to parse this information and do something meaningful, like send it through e-mail or save it in a DB. My question is what the right approach is to parsing raw HTTP data in ASP.NET, not how the parsing itself is done. A: Use an IHttpHandler and avoid the Page model overhead (which you don't need), but use Request.Form to get the values so you don't have to parse name/value pairs yourself. Just pretend you're in PHP or Classic ASP (or ASP.NET MVC, for that matter). ;) A: I'd strongly recommend saving each request to some file. This way, you can always go back to the actual contents of it later. You can thank me later, when you find that hostile-endian, koi-8 encoded, [...], whatever it was that stumped your parser... A: Something like this placed in your onload event. if (Request.RequestType == "POST") { using (StreamReader sr = new StreamReader(Request.InputStream)) { if (sr.ReadLine() == "SUCCESS") { /* Do your parsing here */ } } } Mind you, they might want some special sort of response (i.e. not your full web page), so you might do something like this after you're done parsing. Response.Clear(); Response.ContentType = "text/plain"; Response.Write("Thanks!"); Response.End(); Update: this should be done in a Generic Handler (.ashx) file in order to avoid a great deal of overhead from the page model. Check out this article for more information about .ashx files A: Well, if the incoming data is in a standard form-encoded POST format, then using the Request.Form collection will give you all the data in an easy-to-handle manner. If not then I can't see any way other than using Request.InputStream. A: If I'm reading your question right, I think you're looking for the InputStream property on the Request object. Keep in mind that this is a firehose stream, so you can't reset it.
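Putting the answers above together, a minimal .ashx-style handler might look like the sketch below. The class name is made up and error handling is omitted; the field names come from the sample request in the question.

using System.Collections.Specialized;
using System.IO;
using System.Web;

public class PdtHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        NameValueCollection values = new NameValueCollection();
        using (StreamReader sr = new StreamReader(context.Request.InputStream))
        {
            if (sr.ReadLine() == "SUCCESS")
            {
                string line;
                while ((line = sr.ReadLine()) != null)
                {
                    int eq = line.IndexOf('=');
                    if (eq > 0)
                    {
                        // UrlDecode turns '+' back into spaces and %40 into '@'
                        values.Add(HttpUtility.UrlDecode(line.Substring(0, eq)),
                                   HttpUtility.UrlDecode(line.Substring(eq + 1)));
                    }
                }
                // values["payer_email"], values["payment_status"], etc. are now
                // available to persist to the database or send through e-mail.
            }
        }
        context.Response.ContentType = "text/plain";
        context.Response.Write("Thanks!");
    }
}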
{ "language": "en", "url": "https://stackoverflow.com/questions/20245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: ILMerge and Web Resources We're attempting to merge our DLLs into one for deployment, thus ILMerge. Almost everything seems to work great. We have a couple of web controls that use ClientScript.RegisterClientScriptResource and these are 404-ing after the merge (these worked before the merge). For example one of our controls would look like namespace Company.WebControls { public class ControlA: CompositeControl, INamingContainer { protected override void OnPreRender(EventArgs e) { base.OnPreRender(e); this.Page.ClientScript.RegisterClientScriptResource(typeof(ControlA), "Company.WebControls.ControlA.js"); } } } It would be located in Project WebControls, assembly Company.WebControls. Underneath would be ControlA.cs and ControlA.js. ControlA.js is marked as an embedded resource. In the AssemblyInfo.cs I include the following: [assembly: System.Web.UI.WebResource("Company.WebControls.ControlA.js", "application/x-javascript")] After this is merged into Company.dll, what is the proper way to reference this web resource? The ILMerge command line is as follows (from the bin directory after the build): "C:\Program Files\Microsoft\ILMerge\ILMerge.exe" /keyfile:../../CompanySK.snk /wildcards:True /copyattrs:True /out:Company.dll Company.*.dll A: OK - I got this working. It looks like the primary assembly was the only one whose assembly attributes were being copied. With copyattrs set, the last one in would win, not a merge (as far as I can tell). I created a dummy project to reference the other DLLs and included all the web resources from those projects in the dummy assembly info - now multiple resources from multiple projects are all loading correctly. Final post-build command line for dummy project: "C:\Program Files\Microsoft\ILMerge\ILMerge.exe" /keyfile:../../Company.snk /wildcards:True /out:Company.dll Company.Merge.dll Company.*.dll A: You need to set /allowMultiple along with /copyattrs. It is only then that ILMerge will merge the embedded resources from all assemblies.
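Combining the two answers, the command line would presumably look something like this (a sketch, untested here; paths as in the question):

"C:\Program Files\Microsoft\ILMerge\ILMerge.exe" /keyfile:../../Company.snk /wildcards:True /copyattrs /allowMultiple /out:Company.dll Company.Merge.dll Company.*.dll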
{ "language": "en", "url": "https://stackoverflow.com/questions/20249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Refactoring for Testability on an existing system I've joined a team that works on a product. This product has been around for ~5 years or so, and uses ASP.NET WebForms. Its original architecture has faded over time, and things have become relatively disorganized throughout the solution. It's by no means terrible, but definitely can use some work; you all know what I mean. I've been performing some refactorings since coming on to the project team about 6 months ago. Some of those refactorings are simple, Extract Method, Pull Method Up, etc. Some of the refactorings are more structural. The latter changes make me nervous as there isn't a comprehensive suite of unit tests to accompany every component. The whole team is on board for the need to make structural changes through refactoring, but our Project Manager has expressed some concerns that we don't have adequate tests to make refactorings with the confidence that we aren't introducing regression bugs into the system. He would like us to write more tests first (against the existing architecture), then perform the refactorings. My argument is that the system's class structure is too tightly coupled to write adequate tests, and that using a more Test Driven approach while we perform our refactorings may be better. What I mean by this is not writing tests against the existing components, but writing tests for specific functional requirements, then refactoring existing code to meet those requirements. This will allow us to write tests that will probably have more longevity in the system, rather than writing a bunch of 'throw away' tests. Does anyone have any experience as to what the best course of action is? I have my own thoughts, but would like to hear some input from the community. A: Your PM's concerns are valid - make sure you get your system under test before making any major refactorings. I would strongly recommend getting a copy of Michael Feather's book Working Effectively With Legacy Code (by "Legacy Code" Feathers means any system that isn't adequately covered by unit tests). This is chock full of good ideas for how to break down those couplings and dependencies you speak of, in a safe manner that won't risk introducing regression bugs. Good luck with the refactoring programme; in my experience it's an enjoyable and cathartic process from which you can learn a lot. A: Can you re-factor in parallel? What I mean is re-write the pieces you want to refactor using TDD, but leave the existing code base in place. Then phase out the existing code when your new tests meet the needs for your PM? A: I would also like to throw in a suggestion to visit the Refactoring website by Martin Fowler. He literally wrote the book on this stuff. As far as introducing unit tests into the equation the best method I have found is to find a top level component and identify all the external dependencies it has on concrete objects and replace them with interfaces. Once you've done that it will be a lot easier to write unit tests against your code base and you can do it one component at a time. Even better, you won't have to throw away any unit tests. Unit testing ASP.Net can be tricky, but there are plenty of frameworks that make it easier to do. ASP.Net MVC, and WCSF to name a few. A: Just tossing out a second recommendation for Working Effectively with Legacy Code, an excellent book that really opened my eyes to the fact that almost any old / crappy / untestable code can be wrangled! A: Totally agree with the answer from Ian Nelson. 
Additionally, I would start to get some "high level" tests (functional or component tests) in place to preserve the behaviour from the viewpoint of the user. This point might be the most important concern for your PM.
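To illustrate the "replace concrete dependencies with interfaces" advice from the earlier answer, here is a minimal before/after sketch; all the names are invented for the example.

// Before (shown as a comment): hard to test, because the component
// constructs its own concrete dependency:
//
//   public class OrderService
//   {
//       private readonly OrderRepository _repository = new OrderRepository();
//       ...
//   }

// After: the dependency is expressed as an interface and handed in from
// outside, so a unit test can pass a fake instead of hitting the database.
public interface IOrderRepository
{
    Order Load(int orderId);
}

public class Order
{
    public decimal Total { get; set; }
}

public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public decimal GetTotal(int orderId)
    {
        return _repository.Load(orderId).Total;
    }
}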
{ "language": "en", "url": "https://stackoverflow.com/questions/20262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a Profiler equivalent for MySql? "Microsoft SQL Server Profiler is a graphical user interface to SQL Trace for monitoring an instance of the Database Engine or Analysis Services." I find using SQL Server Profiler extremely useful during development, testing and when I am debugging database application problems. Does anybody know if there is an equivalent program for MySql? A: Try Jet Profiler, a real-time query performance and diagnostics tool! I use it in my work. Excellent software and support. Review Jet Profiler for MySQL A: In my opinion, everything you need is available here in raw form... Find and open your MySQL configuration file, usually /etc/mysql/my.cnf on Ubuntu. Look for the section that says “Logging and Replication” # * Logging and Replication # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. log = /var/log/mysql/mysql.log or, in newer versions of MySQL, use these lines instead: general_log_file = /var/log/mysql/mysql.log general_log = 1 log_error = /var/log/mysql/error.log Just uncomment the “log” variable to turn on logging. Restart MySQL with this command: sudo /etc/init.d/mysql restart Now we’re ready to start monitoring the queries as they come in. Open up a new terminal and run this command to scroll the log file, adjusting the path if necessary. tail -f /var/log/mysql/mysql.log A: Neor Profile SQL is excellent, and the application is free for all users. https://www.profilesql.com/download/ A: Not sure about a graphical user interface, but there is a command that has helped me profile stored procedures a lot in MySQL using Workbench: SET profiling = 1; call your_procedure; SHOW PROFILES; SET profiling = 0; A: Jet Profiler is good if you get the paid version. For LogMonitor, just point it to the MySQL log file. A: Something cool that is in version 5.0.37 of the community server is MySQL's new profiler. This may give you the info you are looking for. A: If version 5.0.37 isn't available, you might want to look at mytop. It simply outputs the current status of the server, but allows you to run EXPLAIN (as mentioned by mercutio) on particular queries. A: Are you wanting to monitor performance, or just see which queries are executing? If the latter, you can configure MySQL to log all queries it's given. On a RedHat Linux box, you might add log = /var/lib/mysql/query.log to the [mysqld] section of /etc/my.cnf before restarting MySQL. Remember that in a busy database scenario, those logs can grow quite large. A: I don't know about any profiling apps as such, but it's commonplace to use the EXPLAIN syntax for analysing queries. You can use these to figure out the best indexes to create, or you can try changing the overall query and see how it changes the efficiency, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/20263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "77" }
Q: Best way to replace tokens in a large text template I have a large text template which needs tokenized sections replaced by other text. The tokens look something like this: ##USERNAME##. My first instinct is just to use String.Replace(), but is there a better, more efficient way or is Replace() already optimized for this? A: The only situation in which I've had to do this is sending a templated e-mail. In .NET this is provided out of the box by the MailDefinition class. So this is how you create a templated message: MailDefinition md = new MailDefinition(); md.BodyFileName = pathToTemplate; md.From = "[email protected]"; ListDictionary replacements = new ListDictionary(); replacements.Add("<%To%>", someValue); // continue adding replacements MailMessage msg = md.CreateMailMessage("[email protected]", replacements, this); After this, msg.Body would be created by substituting the values in the template. I guess you can take a look at MailDefinition.CreateMailMessage() with Reflector :). Sorry for being a little off-topic, but if this is your scenario I think it's the easiest way. A: Well, depending on how many variables you have in your template, how many templates you have, etc. this might be a work for a full template processor. The only one I've ever used for .NET is NVelocity, but I'm sure there must be scores of others out there, most of them linked to some web framework or another. A: string.Replace is fine. I'd prefer using a Regex, but I'm *** for regular expressions. The thing to keep in mind is how big these templates are. If its real big, and memory is an issue, you might want to create a custom tokenizer that acts on a stream. That way you only hold a small part of the file in memory while you manipulate it. But, for the naiive implementation, string.Replace should be fine. A: If you are doing multiple replaces on large strings then it might be better to use StringBuilder.Replace(), as the usual performance issues with strings will appear. A: Had to do something similar recently. What I did was: * *make a method that takes a dictionary (key = token name, value = the text you need to insert) *Get all matches to your token format (##.+?## in your case I guess, not that good at regular expressions :P) using Regex.Matches(input, regular expression) *foreach over the results, using the dictionary to find the insert value for your token. *return result. Done ;-) If you want to test your regexes I can suggest the regulator. A: Regular expressions would be the quickest solution to code up but if you have many different tokens then it will get slower. If performance is not an issue then use this option. A better approach would be to define token, like your "##" that you can scan for in the text. Then select what to replace from a hash table with the text that follows the token as the key. If this is part of a build script then nAnt has a great feature for doing this called Filter Chains. The code for that is open source so you could look at how its done for a fast implementation. A: FastReplacer implements token replacement in O(n*log(n) + m) time and uses 3x the memory of the original string. FastReplacer is good for executing many Replace operations on a large string when performance is important. The main idea is to avoid modifying existing text or allocating new memory every time a string is replaced. We have designed FastReplacer to help us on a project where we had to generate a large text with a large number of append and replace operations. 
The first version of the application took 20 seconds to generate the text using StringBuilder. The second improved version that used the String class took 10 seconds. Then we implemented FastReplacer and the duration dropped to 0.1 seconds. A: System.Text.RegularExpressions.Regex.Replace() is what you seek - IF your tokens are odd enough that you need a regex to find them. Some kind soul did some performance testing, and between Regex.Replace(), String.Replace(), and StringBuilder.Replace(), String.Replace() actually came out on top. A: If your template is large and you have lots of tokens, you probably don't want to walk it and replace the tokens in the template one by one, as that would result in an O(N * M) operation where N is the size of the template and M is the number of tokens to replace. The following method accepts a template and a dictionary of the key/value pairs you wish to replace. By initializing the StringBuilder to slightly larger than the size of the template, it should result in an O(N) operation (i.e. it shouldn't have to grow itself log N times). Finally, you can move the building of the tokens into a Singleton as it only needs to be generated once. static string SimpleTemplate(string template, Dictionary<string, string> replacements) { // parse the message into an array of tokens Regex regex = new Regex("(##[^#]+##)"); string[] tokens = regex.Split(template); // build the new message from the tokens var sb = new StringBuilder((int)((double)template.Length * 1.1)); foreach (string token in tokens) sb.Append(replacements.ContainsKey(token) ? replacements[token] : token); return sb.ToString(); } A: This is an ideal use of Regular Expressions. Check out this helpful website, the .Net Regular Expressions class, and this very helpful book Mastering Regular Expressions.
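For completeness, the dictionary-driven approach described earlier can also be written with a MatchEvaluator, so the lookup happens in a single pass over the template. This is a sketch; the token pattern is assumed to be ##NAME## as in the question, and unknown tokens are left untouched.

using System.Collections.Generic;
using System.Text.RegularExpressions;

static class TokenReplacer
{
    private static readonly Regex TokenPattern = new Regex("##([^#]+)##");

    public static string Replace(string template, IDictionary<string, string> values)
    {
        return TokenPattern.Replace(template, delegate(Match match)
        {
            string value;
            // Groups[1] is the token name without the ## delimiters
            return values.TryGetValue(match.Groups[1].Value, out value)
                ? value
                : match.Value; // unknown token: leave it as-is
        });
    }
}

// Usage:
// string result = TokenReplacer.Replace(template,
//     new Dictionary<string, string> { { "USERNAME", "jane" } });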
{ "language": "en", "url": "https://stackoverflow.com/questions/20267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Why doesn't 'shell' work in VBscript in VS6? In a macro for Visual Studio 6, I wanted to run an external program, so I typed: shell("p4 open " + ActiveDocument.FullName) Which gave me a type mismatch runtime error. What I ended up having to type was this: Dim wshShell Set wshShell = CreateObject("WScript.Shell") strResult = wshShell.Run("p4 open " + ActiveDocument.FullName) What is going on here? Is that nonsense really necessary or have I missed something? A: VBScript isn't Visual Basic. A: As lassevk pointed out, VBScript is not Visual Basic. I believe the only built-in object in VBScript is the WScript object. WScript.Echo "Hello, World!" From the docs The WScript object is the root object of the Windows Script Host object model hierarchy. It never needs to be instantiated before invoking its properties and methods, and it is always available from any script file. Everything else must be created via the CreateObject call. Some of those objects are listed here. The Shell object is one of the other objects that you need to create if you want to call methods on it. One caveat is that RegExp is sort of built in, in that you can create a RegExp object directly in VBScript: Dim r : Set r = New RegExp A: Give this a try: Shell "p4 open " & ActiveDocument.FullName A: VB6 uses & to concatenate strings rather than +, and you'll want to make sure the file name is encased in quotes in case of spaces. Try it like this: Shell "p4 open """ & ActiveDocument.FullName & """"
{ "language": "en", "url": "https://stackoverflow.com/questions/20272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Serve a form without a web interface Anyone doing any work using "offline" forms? We have an application that requires inputting data from outside our company. I was thinking about sending a form out via email, allowing the form to be filled out then sent back. Obviously a web application would be the best solution, but management doesn't seem ready to build the infrastructure and security to support that. I've read a little about PDF forms is that a good solution or are there other solutions? A: Have you considered InfoPath? These can be created and distributed through email. And then the data can be collated automatically. Also, consider using Google Spreadsheets with Google Forms. It's free and infrastructure is outsourced. PDF forms can work as well. A: Another possibility is to use Microsoft SharePoint. If your company uses Microsoft Office for the people filling the forms you referring to, you could deploy an Office based solution and gather information with Sharepoint Server. Check this link out.
{ "language": "en", "url": "https://stackoverflow.com/questions/20281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Associating source and search keywords with account creation As a part of the signup process for my online application, I'm thinking of tracking the source and/or search keywords used to get to my site. This would allow me to see what advertising is working and from where, with a somewhat finer grain than Google Analytics would. I assume I could set some kind of cookie with this information when people get to my site, but I'm not sure how I would go about getting it. Is it even possible? I'm using Rails, but a language-independent solution (or even just pointers to where to find this information) would be appreciated! A: Your best bet IMO would be to use JavaScript to look for a cookie named "origReferrer" or something like that, and if that cookie doesn't exist you should create one (with an expiry of ~24 hours) and fill it with the current referrer. That way you'll have preserved the original referrer all the way from your user's first visit, and when your users have completed whatever steps you want them to have completed (i.e., account creation) you can read back that cookie on the server and do whatever parsing/analyzing you want. Andy Brice explains the technique in his blog post Cookie tracking for profit and pleasure.
{ "language": "en", "url": "https://stackoverflow.com/questions/20286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to stop an animation in C# / WPF? I have something like this: barProgress.BeginAnimation(RangeBase.ValueProperty, new DoubleAnimation( barProgress.Value, dNextProgressValue, new Duration(TimeSpan.FromSeconds(dDuration)))); Now, how would you stop that animation (the DoubleAnimation)? The reason I want to do this is because I would like to start new animations (this seems to work, but it's hard to tell) and eventually stop the last animation... A: To stop it, call BeginAnimation again with the second argument set to null. A: In my case I had to use two commands: my XAML has a button which fires a trigger, and that trigger fires the storyboard animation. I've added a button to stop the animation with this code-behind: MyBeginStoryboard.Storyboard.Begin(this, true); MyBeginStoryboard.Storyboard.Stop(this); I don't like it but it really works here. Give it a try! A: If you want the base value to become the effective value again, you must stop the animation from influencing the property. There are three ways to do this with storyboard animations: * *Set the animation's FillBehavior property to Stop *Remove the entire Storyboard *Remove the animation from the individual property From MSDN How to: Set a Property After Animating It with a Storyboard A: Place the animation in a Storyboard. Call Begin() and Stop() on the storyboard to start or stop the animations. A: When using storyboards to control an animation, make sure you set the second parameter to true in order to set the animation as controllable: public void Begin( FrameworkContentElement containingObject, bool isControllable ) A: There are two ways to stop a BeginAnimation. The first is to call BeginAnimation again with the second parameter set to null. This will remove all animations on the property and revert the value back to its base value. Depending on how you are using that value this may not be the behavior you want. The second way is to set the animation's BeginTime to null, then call BeginAnimation with it. This will remove that specific animation and leave the value at its current position. DoubleAnimation myAnimation = new DoubleAnimation(); // Initialize animation ... // To start element.BeginAnimation(Property, myAnimation); // To stop and keep the current value of the animated property myAnimation.BeginTime = null; element.BeginAnimation(Property, myAnimation); A: <Trigger.EnterActions> <BeginStoryboard x:Name="myStory"> ......... </BeginStoryboard> </Trigger.EnterActions> <Trigger.ExitActions> <StopStoryboard BeginStoryboardName="myStory"/> </Trigger.ExitActions> A: You can use this code: [StoryBoardName].Remove([StoryBoardOwnerControl]);
{ "language": "en", "url": "https://stackoverflow.com/questions/20298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: What's the best way to manage a classic asp front-end using Visual Studio 2008? I support a third-party system that uses COM, classic ASP, and SQL Server. Our company has gone to using TFS as our source control provider - which pushes things through Visual Studio. So, what's the best way to get a classic ASP front-end into Visual Studio? A: When I had to do this, I created a blank solution in VS and then added the folders from the ASP site one at a time, adding "existing items" to each folder as I created it. This way I'm able to open the solution, which keeps track of what files I had open last time, plus I get the benefits of IntelliSense. A: I have done this frequently over the last 5 years... You could create an Empty Website, or even create a standard (.NET) website and simply delete the default stuff it generates (web.config etc). Note: "create a website" creates a solution and adds a website project within it, which is arguably slightly preferable to simply adding files and folders to a solution. As Geoff says, you get most of the benefits of VS, including IntelliSense. Suck it and see... Have a fiddle - you are not going to break anything! A: Have you heard of Visual Web Developer 2008 Express Edition? It works wonders for me. Most importantly, as soon as you ask it to open a website, it opens each folder of the intended website, and when you save the project it does what you need so you can later open the website in regular Visual Studio. It even has some limited capability to convert simple classic ASP pages into .NET code! A: Create a blank solution in VS and add all the files by choosing "Add Existing Item.."
{ "language": "en", "url": "https://stackoverflow.com/questions/20309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: One database or many? I am developing a website that will manage data for multiple entities. No data is shared between entities, but they may be owned by the same customer. A customer may want to manage all their entities from a single "dashboard". So should I have one database for everything, or keep the data separated into individual databases? Is there a best-practice? What are the positives/negatives for having a: * *database for the entire site (entity has a "customerID", data has "entityID") *database for each customer (data has "entityID") *database for each entity (relation of database to customer is outside of database) Multiple databases seems like it would have better performance (fewer rows and joins) but may eventually become a maintenance nightmare. A: Personally, I prefer separate databases, specifically a database for each entity. I like this approach for the following reasons: * *Smaller = faster regarding the queries. *Queries are simpler. *No risk of ever accidentally displaying one customer's data to another. *One database could pose a performance bottleneck as it gets large (# of entities increases). You get a sort of built-in horizontal scalability with 1 per entity. *Easy data cleanup as customers or entities are removed. Sure it'll take more time to upgrade the schema, but in my experience modifications are fairly uncommon once you deploy and additions are trivial. A: I think this is hard to answer without more information. I lean on the side of one database. Properly coded business objects should prevent you from forgetting clientId in your queries. The type of database you are using and how it scales might help you make your decision. For schema changes down the road, it seems one database would be easier from a maintenance perspective - you have one place to make them. A: What about backup and restore? Could you experience a customer wanting to restore a backup for one of their entities? A: This is a fairly normal scenario in multi-tenant SAAS applications. Both approaches have their pros and cons. Search on best practices for multi-tenant SAAS (software as a service) and you will find tons of stuff to ponder upon. A: Check out this article on Microsoft's site. I think it does a nice job of laying out the different costs and benefits associated with Multi-Tenant designs. Also look at the multi-tenancy article on Wikipedia. There are many trade-offs and your best match greatly depends on what type of product you are developing. A: One good argument for keeping them in separate databases is that it's easier to scale (you can simply have multiple installations of the server, with the client databases distributed across the servers). Another argument is that once you are logged in, you don't need to add an extra where check (for client ID) in each of your queries. So, a master DB backed by multiple DBs for each client may be a better approach. A: If the client would ever need to restore only a single entity from a backup and leave the others in their current state, then the maintenance will be much easier if each entity is in a separate database. If they can be backed up and restored together, then it may be easier to maintain the entities as a single database. A: I think you have to go with the most realistic scenario and not necessarily what a customer "may" want to do in the future. If you are going to market that feature (i.e.
seeing all your entities in one dashboard), then you have to either find a solution (maybe have the dashboard pull from multiple databases) or use a single database for the whole app. IMHO, having the data for multiple clients in the same database just seems like a bad idea to me. You'll have to remember to always filter your queries by clientID. A: It also depends on your RDBMS e.g. With SQL server databases are cheep With Oracle it is easy to partition tables by customer "customerID", so a single large database can run as fast as a small database for each customer. However witch every you choose, try to hide it as a low level in your data access code A: Do you plan to have your code deployed to multiple environments? If so, then try to keep it within one database and have all table references prefixed with a namespace from a configuration file. A: The single database option would make the maintenance much easier.
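To make the single-database option concrete, here is a minimal SQL sketch of a shared schema keyed by "customerID" and "entityID" (the table and column names are illustrative, not from the question):

CREATE TABLE entities (
    entity_id   INT PRIMARY KEY,
    customer_id INT NOT NULL        -- every entity is tagged with its owning customer
);

CREATE TABLE entity_data (
    data_id   INT PRIMARY KEY,
    entity_id INT NOT NULL REFERENCES entities (entity_id),
    payload   VARCHAR(255)
);

-- Every query must filter by the tenant, e.g. for one customer's dashboard:
SELECT d.data_id, d.payload
FROM entity_data d
JOIN entities e ON e.entity_id = d.entity_id
WHERE e.customer_id = 42;           -- forgetting this clause leaks another customer's data

This is exactly the "extra where check" the answers warn about; burying it in a well-tested data access layer is what makes the single-database design safe.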
{ "language": "en", "url": "https://stackoverflow.com/questions/20321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do I log uncaught exceptions in PHP? I've found out how to convert errors into exceptions, and I display them nicely if they aren't caught, but I don't know how to log them in a useful way. Simply writing them to a file won't be useful, will it? And would you risk accessing a database, when you don't know what caused the exception yet? A: I really like log4php for logging, even though it's not yet out of the incubator. I use log4net in just about everything, and have found the style quite natural for me. With regard to system crashes, you can log the error to multiple destinations (e.g., have appenders whose threshold is CRITICAL or ERROR that only come into play when things go wrong). I'm not sure how fail-safe the existing appenders are--if the database is down, how does that appender fail?--but you could quite easily write your own appender that will fail gracefully if it's unable to log. A: Simply writing them to a file won't be useful, will it? But of course it is - that's a great thing to do, much better than displaying them on the screen. You want to show the user a nice screen which says "Sorry, we goofed. Engineers have been notified. Go back and try again" and ABSOLUTELY NO TECHNICAL DETAIL, because to do so would be a security risk. You can send an email to a shared mailbox and log the exception to file or DB for review later. This would be a best practice. A: You could use set_error_handler to install a custom handler that logs your errors. I'd personally consider storing them in the database, as the default exception handler's backtrace can provide information on what caused it - this of course won't be possible if the database handler triggered the exception, however. You could also use error_log to log your errors. It has a choice of message destinations, including (quoted from error_log): * *PHP's system logger, using the Operating System's system logging mechanism or a file, depending on what the error_log configuration directive is set to. This is the default option. *Sent by email to the address in the destination parameter. This is the only message type where the fourth parameter, extra_headers, is used. *Appended to the file destination. A newline is not automatically added to the end of the message string. A: I'd write them to a file - and maybe set a monitoring system up to check for changes to the filesize or last-modified date. Webmin is one easy way, but there are more complete software solutions. If you know it's a one-off error, then emailing a notice can be fine. However, with a many-hits-per-minute website, do not ever email a notification. I've seen a website brought down by having hundreds of emails per minute being generated to say that the system could not connect to the database. The fact that it also had a LoadAvg of > 200 because of the mail server being run for every new message did not help at all. In that instance, the best scenario was, by far and away, the watchdog checking for filesizes and connecting to an external service to send an SMS (maybe an IM), or having an external system look on a webpage for an error message (which doesn't have to be visible on screen - it can be in an HTML comment). A: I think it depends a lot on where your error occurred. If the DB is down, logging to the DB is not a good idea ;) I use the syslog() function for logging the error, but I have no problems writing it to a file when I'm on a system which has no syslog support.
You can easily set up your system to send you an email or a Jabber message using, for example, logwatch or the standard syslogd. A: I second log4php. I typically have it configured to send things like exceptions to ERROR or CRITICAL and have them written to syslog. From there, you can have your syslog feed into Zenoss, Nagios, Splunk or anything else that syslog can talk to. A: You can also catch and record PHP exceptions using Google Forms. There is a tutorial here that explains the process.
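Tying the answers together, here is a minimal sketch of a last-resort handler using PHP's built-in set_exception_handler and error_log (the log path is a placeholder, and the PHP 5-era Exception type is assumed):

<?php
// Runs for any exception that nothing else caught.
function logUncaughtException($e) {
    // Message type 3 appends to a file, so this still works when the DB is down.
    error_log(date('c') . ' ' . get_class($e) . ': ' . $e->getMessage() . "\n" .
              $e->getTraceAsString() . "\n", 3, '/var/log/myapp/exceptions.log');
    // Show the user a generic page with no technical detail.
    echo 'Sorry, we goofed. Engineers have been notified.';
}
set_exception_handler('logUncaughtException');

throw new Exception('Demo: never caught');   // ends up in the log file
?>

On newer PHP versions the handler receives a Throwable rather than an Exception, but the mechanism is the same.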
{ "language": "en", "url": "https://stackoverflow.com/questions/20322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Double postback issue I have an ASP.NET 1.1 application, and I'm trying to find out why, when I change a ComboBox whose value is used to fill another one (parent-child relation), two postbacks are produced. I have checked and checked the code, and I can't find the cause. Here are both call stacks, which end in a Page_Load. First postback (generated by the ComboBox's autopostback): [call stack image missing]. Second postback (this is what I want to find out why it's happening): [call stack image missing]. Any suggestion? What can I check? A: It's a very specific problem with this code, I doubt it will be useful for someone else, but here it goes: A check was added to the combo's onchange with an if; if the condition was met, an explicit call to the postback function was made. If the combo was set to AutoPostBack, ASP.NET added the postback call again, producing the two postbacks... The generated HTML was like this: [select onchange="javascript: if (CustomFunction()){__doPostBack('name','')}; __doPostBack('name','')"] A: First thing I would look for is that you don't have the second ComboBox's AutoPostBack property set to true. If you change the value in the second combo with that property set true, I believe it will generate a postback on that control. A: Do you have any code you could share? Double postbacks plagued me so much in classic ASP back in the day that it was what finally prompted me to switch to .NET once and for all. Whenever I have problems like these for .NET, I go to every CONTROL and every PAGE element like load, init, prerender, click, SelectedIndexChanged, and the like and put a breakpoint. Even if I don't have code there, I'll insert something like: Dim i As Integer i = 0 I am usually able to pinpoint some action that I wasn't expecting and fix as needed. I would suggest you do that here. Good luck. A: Check Request.Form["__EVENTTARGET"] to find the control initiating the postback - that may help you narrow it down. Looking at the call stacks, and some Reflectoring (into ASP.NET 2 - I don't have 1.1 handy) - it looks like SessionStateModule.PollLockedSessionCallback is part of the HttpApplication startup routines. It may be possible that your app is being recycled - I'm pretty sure an event gets written into the Event log for that. My only other suggestion would be Fiddler or something on the client to capture HTTP traffic. A: This is a very old post, but people are still looking at it for a solution, exactly as I did last week. As Grengby said, double events are the primary reason - but removing one of them is not always an option. At least that was my case, and I had to resolve this in a 3rd party's application. I added the following script and amended the ASP form on the master page: <script>var Q = 0;</script> <form id="Form1" runat="server" onsubmit="Q++; if(Q==1){return true;} else { return false;}"> This seems to be working; please forward your comments. Arun http://www.velocityreviews.com/forums/t117900-asp-net-multiple-postback-issue.html
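To act on the __EVENTTARGET tip above, a trace line in Page_Load shows which control kicked off each postback. A minimal C# sketch (the thread mixes VB and C#; the idea is identical in both):

private void Page_Load(object sender, System.EventArgs e)
{
    if (IsPostBack)
    {
        // Name of the control that initiated the postback (empty for plain buttons).
        string target = Request.Form["__EVENTTARGET"];
        System.Diagnostics.Trace.WriteLine("Postback from: " + target);
    }
}

With the misbehaving page this should log two lines per combo change, confirming whether both postbacks originate from the same control.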
{ "language": "en", "url": "https://stackoverflow.com/questions/20326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: State of Registers After Bootup I'm working on a boot loader on an x86 machine. When the BIOS copies the contents of the MBR to 0x7c00 and jumps to that address, is there a standard meaning to the contents of the registers? Do the registers have standard values? I know that the segment registers are typically set to 0, but will sometimes be 0x7c0. What about the other hardware registers? A: This early execution environment is highly implementation-defined, meaning the implementation of your particular BIOS. Never make any assumptions about the contents of registers. They might be initialized to 0, but they might contain a random value just as well. From the OSDev Wiki, which is where I get information when I'm playing with my toy OSes. A: Best option would be to assume nothing. If they have meaning, you will find that from the other side when you need the information they provide. A: Undefined, I believe? I think it depends on the mainboard and CPU, and should be treated as random for your own good. A: You can always initialize them yourself to start with a known state. A: Safest bet is to assume undefined. A: The only thing that I know to be well defined is the processor state immediately after reset. For the record, you can find that in Intel's Software Developer's Manual Vol 3, chapter 8: "PROCESSOR MANAGEMENT AND INITIALIZATION", in the table titled "IA-32 Processor States Following Power-up, Reset, or INIT". A: Always assume undefined, otherwise you'll hit bad problems if you ever try to port across architectures. There is nothing quite like the pain of porting code that assumes everything uninitialized will be set to zero.
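The "initialize them yourself" advice usually becomes a short preamble at the top of the boot sector. A minimal NASM-style sketch, assuming the common convention of ORG 0x7C00 with zeroed segment registers (your memory layout may differ):

[bits 16]
[org 0x7c00]

start:
    cli                     ; no interrupts while the stack is being set up
    xor ax, ax
    mov ds, ax              ; DS = 0, so DS:0x7c00 addresses this code
    mov es, ax
    mov ss, ax
    mov sp, 0x7c00          ; stack grows down from just below the boot sector
    sti
    ; ... rest of the loader; nothing before this point trusted any register

times 510-($-$$) db 0       ; pad to 512 bytes
dw 0xaa55                   ; boot signature

About the only register worth trusting by convention is DL, which most BIOSes set to the boot drive number - and even that is worth verifying.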
{ "language": "en", "url": "https://stackoverflow.com/questions/20336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What are attributes in .NET? What are attributes in .NET, what are they good for, and how do I create my own attributes? A: An attribute is a class that contains some bit of functionality that you can apply to objects in your code. To create one, create a class that inherits from System.Attribute. As for what they're good for... there are almost limitless uses for them. http://www.codeproject.com/KB/cs/dotnetattributes.aspx A: Attributes are like metadata applied to classes, methods or assemblies. They are good for any number of things (debugger visualization, marking things as obsolete, marking things as serializable, the list is endless). Creating your own custom ones is easy as pie. Start here: http://msdn.microsoft.com/en-us/library/sw480ze8(VS.71).aspx A: In the project I'm currently working on, there is a set of UI objects of various flavours and an editor to assemble these objects to create pages for use in the main application, a bit like the form designer in DevStudio. These objects exist in their own assembly and each object is a class derived from UserControl and has a custom attribute. This attribute is defined like this: [AttributeUsage (AttributeTargets::Class)] public ref class ControlDescriptionAttribute : Attribute { public: ControlDescriptionAttribute (String ^name, String ^description) : _name (name), _description (description) { } property String ^Name { String ^get () { return _name; } } property String ^Description { String ^get () { return _description; } } private: String ^ _name, ^ _description; }; and I apply it to a class like this: [ControlDescription ("Pie Chart", "Displays a pie chart")] public ref class PieControl sealed : UserControl { // stuff }; which is what the previous posters have said. To use the attribute, the editor has a Generic::List <Type> containing the control types. There is a list box which the user can drag from and drop onto the page to create an instance of the control. To populate the list box, I get the ControlDescriptionAttribute for the control and fill out an entry in the list: // done for each control type array <Object ^> // get all the custom attributes ^attributes = controltype->GetCustomAttributes (true); Type // this is the one we're interested in ^attributetype = ECMMainPageDisplay::ControlDescriptionAttribute::typeid; // iterate over the custom attributes for each (Object ^attribute in attributes) { if (attributetype->IsInstanceOfType (attribute)) { ECMMainPageDisplay::ControlDescriptionAttribute ^description = safe_cast <ECMMainPageDisplay::ControlDescriptionAttribute ^> (attribute); // get the name and description and create an entry in the list ListViewItem ^item = gcnew ListViewItem (description->Name); item->Tag = controltype->Name; item->SubItems->Add (description->Description); mcontrols->Items->Add (item); break; } } Note: the above is C++/CLI but it's not difficult to convert to C# (yeah, I know, C++/CLI is an abomination but it's what I have to work with :-( ) You can put attributes on most things and there is a whole range of predefined attributes. The editor mentioned above also looks for custom attributes on properties that describe the property and how to edit it. Once you get the whole idea, you'll wonder how you ever lived without them. A: As said, attributes are relatively easy to create. The other part of the work is creating code that uses them. In most cases you will use reflection at runtime to alter behavior based on the presence of an attribute or its properties.
There are also scenarios where you will inspect attributes on compiled code to do some sort of static analysis. For example, parameters might be marked as non-null and the analysis tool can use this as a hint. Using the attributes and knowing the appropriate scenarios for their use is the bulk of the work. A: Many people have answered but no one has mentioned this so far... Attributes are used heavily with reflection. Reflection is already pretty slow. It is very worthwhile marking your custom attributes as sealed classes to improve their runtime performance. It is also a good idea to consider where it would be appropriate to place such an attribute, and to attribute your attribute (!) to indicate this via AttributeUsage. The list of available attribute usages might surprise you: * *Assembly *Module *Class *Struct *Enum *Constructor *Method *Property *Field *Event *Interface *Parameter *Delegate *ReturnValue *GenericParameter *All It's also cool that the AttributeUsage attribute is part of the AttributeUsage attribute's signature. Whoa for circular dependencies! [AttributeUsageAttribute(AttributeTargets.Class, Inherited = true)] public sealed class AttributeUsageAttribute : Attribute A: Attributes are, essentially, bits of data you want to attach to your types (classes, methods, events, enums, etc.). The idea is that at run time some other type/framework/tool will query your type for the information in the attribute and act upon it. So, for example, Visual Studio can query the attributes on a 3rd party control to figure out which properties of the control should appear in the Properties pane at design time. Attributes can also be used in Aspect Oriented Programming to inject/manipulate objects at run time based on the attributes that decorate them and add validation, logging, etc. to the objects without affecting the business logic of the object. A: You can use custom attributes as a simple way to define tag values in subclasses without having to write the same code over and over again for each subclass. I came across a nice concise example by John Waters of how to define and use custom attributes in your own code. There is a tutorial at http://msdn.microsoft.com/en-us/library/aa288454(VS.71).aspx A: To get started creating an attribute, open a C# source file, type attribute and hit [TAB]. It will expand to a template for a new attribute. A: Metadata. Data about your objects/methods/properties. For example, I might declare an attribute called DisplayOrder so I can easily control in what order properties should appear in the UI. I could then append it to a class and write some GUI components that extract the attributes and order the UI elements appropriately. public class DisplayWrapper { private UnderlyingClass underlyingObject; public DisplayWrapper(UnderlyingClass u) { underlyingObject = u; } [DisplayOrder(1)] public int SomeInt { get { return underlyingObject.SomeInt; } } [DisplayOrder(2)] public DateTime SomeDate { get { return underlyingObject.SomeDate; } } } Thereby ensuring that SomeInt is always displayed before SomeDate when working with my custom GUI components. However, you'll see them most commonly used outside of the direct coding environment. For example, the Windows designer uses them extensively so it knows how to deal with custom-made objects.
Using the BrowsableAttribute like so: [Browsable(false)] public SomeCustomType DontShowThisInTheDesigner { get{/*do something*/} } tells the designer not to list this in the available properties in the Properties window at design time, for example. You could also use them for code generation, pre-compile operations (such as PostSharp) or run-time operations such as Reflection.Emit. For example, you could write a bit of code for profiling that transparently wraps every single call your code makes and times it. You could "opt out" of the timing via an attribute that you place on particular methods. public void SomeProfilingMethod(MethodInfo targetMethod, object target, params object[] args) { bool time = true; // look for the opt-out attribute on the method being invoked foreach (Attribute a in targetMethod.GetCustomAttributes(true)) { if (a is NoTimingAttribute) { time = false; break; } } if (time) { Stopwatch stopWatch = new Stopwatch(); stopWatch.Start(); targetMethod.Invoke(target, args); stopWatch.Stop(); HandleTimingOutput(targetMethod, stopWatch.Elapsed); } else { targetMethod.Invoke(target, args); } } Declaring them is easy, just make a class that inherits from Attribute. public class DisplayOrderAttribute : Attribute { private int order; public DisplayOrderAttribute(int order) { this.order = order; } public int Order { get { return order; } } } And remember that when you use the attribute you can omit the suffix "Attribute"; the compiler will add that for you. NOTE: Attributes don't do anything by themselves - there needs to be some other code that uses them. Sometimes that code has been written for you but sometimes you have to write it yourself. For example, the C# compiler cares about some, and certain frameworks use some (e.g. NUnit looks for [TestFixture] on a class and [Test] on a test method when loading an assembly). So when creating your own custom attribute, be aware that it will not impact the behaviour of your code at all. You'll need to write the other part that checks attributes (via reflection) and acts on them. A: Attributes are a kind of metadata for tagging classes. This is often used in WinForms, for example to hide controls from the toolbar, but can be implemented in your own application to enable instances of different classes to behave in specific ways. Start by creating an attribute: [AttributeUsage(AttributeTargets.Class, AllowMultiple=false, Inherited=true)] public class SortOrderAttribute : Attribute { public int SortOrder { get; set; } public SortOrderAttribute(int sortOrder) { this.SortOrder = sortOrder; } } By convention, attribute classes have the suffix "Attribute", which you can omit when applying them. After this is done, create a class that uses the attribute. [SortOrder(23)] public class MyClass { public MyClass() { } } Now you can check a specific class' SortOrderAttribute (if it has one) by doing the following: public class MyInvestigatorClass { public void InvestigateTheAttribute() { // Get the type object for the class that is using // the attribute. Type type = typeof(MyClass); // Get all custom attributes for the type. object[] attributes = type.GetCustomAttributes( typeof(SortOrderAttribute), true); // Now let's make sure that we got at least one attribute. if (attributes != null && attributes.Length > 0) { // Get the first attribute in the list of custom attributes // that is of the type "SortOrderAttribute". This should only // be one since we said "AllowMultiple=false". SortOrderAttribute attribute = attributes[0] as SortOrderAttribute; // Now we can get the sort order for the class "MyClass".
int sortOrder = attribute.SortOrder; } } } If you want to read more about this, you can always check out MSDN, which has a pretty good description. I hope this helped you out! A: Attributes are also commonly used for Aspect Oriented Programming. For an example of this, check out the PostSharp project.
{ "language": "en", "url": "https://stackoverflow.com/questions/20346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "209" }
Q: SQL Server 2005 - Export table programmatically (run a .sql file to rebuild it) I have a database with a table Customers that has some data. I have another database in the office where everything is the same, but my table Customers is empty. How can I create a .sql file in SQL Server 2005 (T-SQL) that takes everything in the table Customers from the first database and creates, let's say, buildcustomers.sql, so that I can zip that file, copy it across the network, execute it on my SQL Server and voila! my table Customers is full. How can I do the same for a whole database? A: This functionality is already built in to SQL Server Management Studio 2008. Just download the trial and only install the client tools (which shouldn't expire). Use Management Studio 2008 to connect to your 2005 database (it's backwards compatible). * *Right click your database *Choose Tasks > Generate Scripts *Press Next, select your database again *On the 'Choose Script Options' screen, there is an option called Script Data which will generate SQL insert statements for all your data. (Note: for SQL Server Management Studio 2008 R2, the option is called "Types of data to script" and is the last one in the General section. The choices are "data only", "schema and data", and "schema only") A: Use bcp (from the command line) to a networked file and then restore it. e.g. bcp "SELECT * FROM CustomerTable" queryout "c:\temp\CustomerTable.bcp" -N -S SOURCESERVERNAME -T bcp TargetDatabaseTable in "c:\temp\CustomerTable.bcp" -N -S TARGETSERVERNAME -T * *-N use native types *-T use the trusted connection *-S ServerName Very quick and easy to embed within code. (I've built a database backup/restore system around this very command.) A: You can check the following article to see how you can do this by using both SQL Server native tools and third-party tools: SQL Server bulk copy and bulk import and export techniques. Disclaimer: I work for ApexSQL as a Support Engineer. Hope this helps. A: You could always export the data from the Customers table to an Excel file and import that data into your Customers table. To import/export data: * *Right click on database *Go to Tasks *Go to Import Data or Export Data *Change the data source to Microsoft Excel *Follow the wizard A: If both databases reside in the same instance of SQL Server, i.e. use the same connection, this SQL might be helpful: INSERT INTO [DestinationDB].[schema].[table] ([column]) SELECT [column] FROM [OriginDB].[schema].[table] GO A: For data export as a SQL script in SQL Server 2005, see: http://blog.sqlauthority.com/2007/11/16/sql-server-2005-generate-script-with-data-from-database-database-publishing-wizard/ A: I'd just like to add some screenshots for SQL Server Management Studio 2008. It is correct to use the steps described previously. When you have the 'Generate and Publish Script' -> 'Set Script Options' screen, press Advanced to see the script options. [image missing: where to find the Advanced script options] For SQL Server Management Studio 2008 the option to include data is 'Types of data to script'. [image missing: Types of data to script]
{ "language": "en", "url": "https://stackoverflow.com/questions/20363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: JavaScript Profiler in IE Does anyone know a tool for profiling JavaScript in IE? List available: * *IE8 (Internet Explorer 8 only) *JavaScript Profiler *YUI! A: JavaScript Profiler is a decent tool. A: Check out http://ejohn.org/blog/deep-tracing-of-internet-explorer/ ; the dynaTrace tool shown there is fantastic and works with IE7. A: The Internet Explorer 8 beta (2) has a built-in JavaScript profiler (in the developer toolbar). It's worth playing with at least... A: YUI also provides a profiler. A: js-profiler also provides a profiler that works in any browser and does not depend on any framework. A: I don't think debugbar has a profiler... but it does have a debugger and a console... so you can fake it... A: We use Firebug's console.log, console.time and console.timeEnd (I think) a lot. Firebug also has a built-in profiler.
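For the poor-man's profiling in the last answer, console.time/console.timeEnd bracket a block of code and print the elapsed milliseconds (works in Firebug and in most browser consoles; the label is arbitrary):

// Measure how long a suspect loop takes.
console.time('buildList');
var items = [];
for (var i = 0; i < 100000; i++) {
    items.push('item ' + i);
}
console.timeEnd('buildList');   // logs something like "buildList: 12ms"

This won't tell you where the time goes inside the block - that's what the real profilers listed above are for - but it is often enough to confirm a hotspot.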
{ "language": "en", "url": "https://stackoverflow.com/questions/20376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: Memory leaks in .NET What are all the possible ways in which we can get memory leaks in .NET? I know of two: * *Not properly un-registering Event Handlers/Delegates. *Not disposing dynamic child controls in Windows Forms: Example: // Causes Leaks Label label = new Label(); this.Controls.Add(label); this.Controls.Remove(label); // Correct Code Label label = new Label(); this.Controls.Add(label); this.Controls.Remove(label); label.Dispose(); Update: The idea is to list common pitfalls which are not too obvious (such as the above). Usually the notion is that memory leaks are not a big problem because of the garbage collector. Not like it used to be in C++. Great discussion guys, but let me clarify... by definition, if there is no reference left to an object in .NET, it will be Garbage Collected at some time. So that is not a way to induce memory leaks. In the managed environment, I would consider it a memory leak if you had an unintended reference to any object that you aren't aware of (hence the two examples in my question). So, what are the various possible ways in which such a memory leak can happen? A: Block the finalizer thread. No other objects will be garbage collected until the finalizer thread is unblocked. Thus the amount of memory used will grow and grow. Further reading: http://dotnetdebug.net/2005/06/22/blocked-finalizer-thread/ A: Exceptions in Finalise (or Dispose calls from a Finaliser) methods that prevent unmanaged resources from being correctly disposed. A common one is due to the programmer assuming what order objects will be disposed in and trying to release peer objects that have already been disposed, resulting in an exception and the rest of the Finalise/Dispose-from-Finalise method not being called. A: I have 4 additional items to add to this discussion: * *Terminating threads (Thread.Abort()) that have created UI controls without properly preparing for such an event may lead to memory being used unexpectedly. *Accessing unmanaged resources through P/Invoke and not cleaning them up may lead to memory leaks. *Modifying large string objects. Not necessarily a memory leak - once out of scope, the GC will take care of it - however, performance-wise, your system may take a hit if large strings are modified often, because you cannot really depend on the GC to ensure your program's footprint is minimal. *Creating GDI objects often to perform custom drawing. If performing GDI work often, reuse a single GDI object. A: That doesn't really cause leaks, it just makes more work for the GC: // slows GC Label label = new Label(); this.Controls.Add(label); this.Controls.Remove(label); // better Label label = new Label(); this.Controls.Add(label); this.Controls.Remove(label); label.Dispose(); // best using( Label label = new Label() ) { this.Controls.Add(label); this.Controls.Remove(label); } Leaving disposable components lying around like this is never much of a problem in a managed environment like .NET - that's a big part of what managed means. You'll slow your app down, certainly. But you won't leave a mess for anything else. A: Are you talking about unexpected memory usage or actual leaks? The two cases you listed aren't exactly leaks; they are cases where objects stick around longer than intended. In other words, they are references that the person who calls them memory leaks didn't know about or forgot about. Edit: Or they are actual bugs in the garbage collector or non-managed code.
Edit 2: Another way to think about this is to always make sure external references to your objects get released appropriately. External means code outside of your control. Any case where that happens is a case where you can "leak" memory. A: Calling Dispose() on every IDisposable is the easiest place to start, and definitely an effective way to grab all the low-hanging memory leak fruit in the codebase. However, it is not always enough. For example, it's also important to understand how and when managed code is generated at runtime, and that once assemblies are loaded into the application domain, they are never unloaded, which can increase the application footprint. A: To prevent .NET memory leaks: 1) Employ the 'using' construct (or 'try-finally' construct) whenever an object with the 'IDisposable' interface is created. 2) Make classes 'IDisposable' if they create a thread or they add an object to a static or long-lived collection. Remember a C# 'event' is a collection. Here is a short article on Tips to Prevent Memory Leaks. A: There's no way to provide a comprehensive list... this is very much like asking "How can you get wet?" That said, make sure you're calling Dispose() on everything that implements IDisposable, and make sure you implement IDisposable on any types that consume unmanaged resources of any kind. Every now and then, run something like FxCop on your codebase to help you enforce that rule - you'd be surprised how deep some disposable objects get buried within an application framework. A: Setting the GridControl.DataSource property directly without using an instance of the BindingSource class (http://msdn.microsoft.com/en-us/library/system.windows.forms.bindingsource.aspx). This caused leaks in my application that took me quite a while to track down with a profiler; eventually I found this bug report that Microsoft responded to: http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=92260 It's funny that in the documentation for the BindingSource class Microsoft tries to pass it off as a legitimate, well-thought-out class, but I think they just created it to solve a fundamental leak regarding currency managers and binding data to grid controls. Watch out for this one, I bet there are absolutely loads of leaky applications out there because of this! A: *Keeping around references to objects that you no longer need. Re other comments - one way to ensure Dispose gets called is to use using... when code structure allows it. A: One thing that was really unexpected for me is this: Region oldClip = graphics.Clip; using (Region newClip = new Region(...)) { graphics.Clip = newClip; // draw something graphics.Clip = oldClip; } Where's the memory leak? Right, you should have disposed oldClip, too! Because Graphics.Clip is one of the rare properties that returns a new disposable object every time the getter is invoked. A: Tess Ferrandez has great blog posts about finding and debugging memory leaks. Lab 6 Lab 7 A: A lot of the things that can cause memory leaks in unmanaged languages can still cause memory leaks in managed languages. For example, bad caching policies can result in memory leaks. But as Greg and Danny have said, there is no comprehensive list. Anything that can result in holding memory after its useful lifetime can cause a leak. A: Deadlocked threads will never release roots. Obviously you could argue that the deadlock presents a bigger problem.
A deadlocked finalizer thread will prevent all remaining finalizers from running and thus prevent all finalizable objects from being reclaimed (as they are still being rooted by the freachable list). On a multi-CPU machine you could create finalizable objects faster than the finalizer thread can run finalizers. As long as that is sustained, you will "leak" memory. It is probably not very likely that this will happen in the wild, but it is easy to reproduce. The large object heap is not compacted, so you could leak memory through fragmentation. There are a number of objects which must be freed manually, e.g. remoting objects with no lease, and assemblies (you must unload the AppDomain).
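Since the question leads with event handlers, here is a minimal C# sketch of that classic leak and its fix (the class names are illustrative):

using System;

class Publisher
{
    public event EventHandler SomethingHappened;  // holds a reference to every subscriber
}

class Subscriber
{
    public void Attach(Publisher p)
    {
        p.SomethingHappened += OnSomething;       // the publisher now roots this subscriber
    }

    public void Detach(Publisher p)
    {
        p.SomethingHappened -= OnSomething;       // without this, a long-lived publisher
    }                                             // keeps the subscriber alive indefinitely

    private void OnSomething(object sender, EventArgs e) { }
}

As long as the publisher outlives the subscriber and Detach is never called, the subscriber (and everything it references) cannot be collected - an "unintended reference" exactly as the question defines it.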
{ "language": "en", "url": "https://stackoverflow.com/questions/20386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Deploying InfoPath forms to different SharePoint servers How do you manage deploying InfoPath forms to different SharePoint servers? Is there a better way to deal with all the data connections being site-specific without opening the forms, editing the data connections and republishing for each environment? A: This is a common problem if you are working on a dev system and need deployments to a production system from time to time. I use a script that performs (plain text) replacements based on regular expressions. On each deploy: * *make a backup of your form ;-) *Save your form as source code. (I suggest you work on source code files rather than the .xsn, because the .xsn is only a renamed .cab with the source files in it. And you are able to use source control in a more satisfying way.) *open the manifest.xsf file *search for the XML node "DataConnections" *search and replace the site-url part *(Do not forget the save-path, file- & site attributes and publishUrl) *deploy from the InfoPath Designer I use a script that does all the replacements (a sketch follows at the end of this question). That works fine and has already saved me a lot of work. A: If you go into the submit options, there is an option to perform a custom action using rules. If you have all of the data connections set up, you can configure rules to select which connection to submit to. A: If I understand your scenario correctly: You have an InfoPath form, with data connections that submit your data. You wish to deploy this form on multiple SharePoint servers and have those data connections submit data to the currently deployed server. You can't really get around needing to do work on every SharePoint server that you would want to deploy the form to. However, you can get around needing to modify the InfoPath form template. If you use the SharePoint Data Connection Library (DCL), and create a UDC file from your data connection, on every SharePoint server that you would want to use...then your InfoPath template can just talk to the UDC file. Here's a link to an article about integrating InfoPath with SharePoint's DCL: http://msdn.microsoft.com/en-us/library/bb267335.aspx A: re: speedfox's answer, try to stay away from editing the manifest whenever possible. It'll just lead to headaches. If I understand your problem, you're deploying to multiple servers (DEV, UAT, Production) and need to edit the data connection manually every time you go from one environment to another? Forgive me if I've oversimplified the problem. I've found the best way to make data connections site-relative is to: * *Use data connection files in your form. Open the data connection wizard in InfoPath and for each of your data connections click "Convert..."; this changes your data connection from being embedded in the form to being an independent XML file. You'll need a Data Connection Library on your SharePoint site to store these in. Create that in the browser. *After you've converted the connection, go back into it and there will be a Connection Options... button; use it to change from "Local data connection library" to "Centrally managed connection library" *Upload the data connection that is in your site's Data Connection Library to central admin *When you publish your form make sure you're publishing to a centrally managed location (Central Admin) *Use your form as a content type in any forms library on that site collection. *To use the form on another site, upload the data connection file to the new server's central admin and publish the (unchanged) form to the centrally managed forms.
A: See my blog post where I take you step-by-step with relevant snapshots covering the following: a. Converting InfoPath Data Connections to DCL library in SharePoint. b. Publishing InfoPath form to a SharePoint List/Library c. Creating a .wsp solution package for the InfoPath form and its code-behind d. Creating a batch script that will deploy the InfoPath form on your Production site. e. Ensuring the InfoPath form has been deployed as a feature f. Modify the DCL's in the production environment. g. Associate the InfoPath Content Type with the Document/Forms Library See the full blog post at: http://www.sharepointfix.com/2009/12/infopath-2007-form-and-nintex-workflows.html A: By site-specific, do you mean that the data connections in your forms refer to the server the form is deployed to? If that's the case perhaps you could tweak your connections to use localhost instead of the server name for the hostname part of the data connection URLs. A: In my scenario, I am not using the built-in "save" button. I have a data connection that I use to "post" the data to another list. Yes, that's what I mean by site-specific. I don't think you can use localhost 'cos then when a user saves the form, it'll try to post to the user's computer (i.e. localhost). I have tried to use relative paths but that doesn't seem to work.
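A minimal sketch of the regex-based deploy script the first answer describes, written here in PowerShell (the file name and URLs are placeholders; back up the form first, and remember to hit every attribute the answer lists, not just the data connections):

# Rewrite site URLs in an extracted InfoPath manifest before republishing.
$manifest = 'manifest.xsf'
$old = 'http://dev-server/sites/forms'
$new = 'http://prod-server/sites/forms'

(Get-Content $manifest) -replace [regex]::Escape($old), $new |
    Set-Content $manifest

Running the same replacement over the save path and publishUrl as well is what makes scripted deploys beat editing each connection by hand for every environment.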
{ "language": "en", "url": "https://stackoverflow.com/questions/20389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Script to backup svn repository to network share I have an SVN repo on my machine (Windows). Anyone have a script to back it up to a network share? I'm using the repo locally since I'm disconnected a lot. The network share is on a server with a backup strategy. I'm a perfect candidate for git/hg but I don't want to give up my VS integration just yet. A: svnadmin dump C:\SVNRepositorio\Repositorio > \Backups\BkTmpSubversion\subversiontemp.dump ditto Spooky's reply ^^ On Linux you might try adding "| gzip" in the middle. Also take a look at the --incremental & --deltas flags. sparkes: For some values of "My machine" that won't be local. Also, if you are using SVN for non-commercial reasons (I have all my homework from college checked into SVN) you might not have a backup system. A: svnadmin dump C:\SVNRepositorio\Repositorio > \\Backups\BkTmpSubversion\subversiontemp.dump Try this. A: I wrote a batch file to do this for a bunch of repos; you could just hook that batch file up to the Windows Task Scheduler and run it on a schedule. svnadmin hotcopy m:\Source\Q4Press\Repo m:\SvnOut\Q4Press I use the hotcopy but the svn dump would work just as well.
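Pulling the answers together, here is a minimal Windows batch sketch of such a backup script (the paths are placeholders; svnadmin dump and hotcopy are standard Subversion commands, and the date-stamped file name is just one common convention):

@echo off
rem Dump the local repository to a temp file, then copy it to the share.
set REPO=C:\SVN\MyRepo
set SHARE=\\server\backups\svn

svnadmin dump "%REPO%" > "%TEMP%\myrepo.dump"
if errorlevel 1 goto fail

rem %DATE% formatting is locale-dependent; adjust the file name as needed.
copy /Y "%TEMP%\myrepo.dump" "%SHARE%\myrepo-%DATE:/=-%.dump"
if errorlevel 1 goto fail

del "%TEMP%\myrepo.dump"
echo Backup OK
goto :eof

:fail
echo Backup FAILED
exit /b 1

Hooked up to the Windows Task Scheduler, this gives the share's existing backup strategy a fresh dump to pick up; swap in svnadmin hotcopy for a byte-for-byte copy of the repository instead.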
{ "language": "en", "url": "https://stackoverflow.com/questions/20391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is it OK to drop sql statistics? We've been trying to alter a lot of columns from nullable to not nullable, which involves dropping all the associated objects, making the change, and recreating the associated objects. We've been using SQL Compare to generate the scripts, but I noticed that SQL Compare doesn't script statistics objects. Does this mean it's OK to drop them and the database will work as well as it did before without them, or have Red Gate missed a trick? A: If you have auto update stats and auto create stats on, then it should work as before. You can also run sp_updatestats or UPDATE STATISTICS WITH FULLSCAN after you make the changes. A: It is considered best practice to auto create and auto update statistics. SQL Server will create them if it needs them. You will often see the tuning wizard generate lots of these, and you will also see people advise that you update statistics as a part of your maintenance plan, but this is not necessary and might actually make things worse - just so long as auto create and auto update are enabled. A: Why are you dropping objects? Seems to me the sequence should be a lot simpler, and less destructive: assign all of these objects a default value, then make the change to not nullable. A: Statistics are too data-specific to be tooled. It would be potentially very inefficient to blindly re-create them on a data set.
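For reference, a short T-SQL sketch of the settings and commands the answers mention (the database and table names are placeholders):

-- Check whether the database creates and updates statistics automatically.
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases
WHERE name = 'MyDatabase';

-- Turn them on if needed.
ALTER DATABASE MyDatabase SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;

-- After the schema change, refresh whatever statistics remain.
EXEC sp_updatestats;
-- or, per table:
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

With both options on, any statistics dropped along with the altered columns are simply recreated by the optimizer the next time a query needs them.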
{ "language": "en", "url": "https://stackoverflow.com/questions/20392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Any ReSharper equivalent for Xcode? I'm a complete Xcode/Objective-C/Cocoa newbie but I'm learning fast and really starting to enjoy getting to grips with a new language, platform and paradigm. One thing though: having been using Visual Studio with R# for so long, I've kind of been spoiled with the coding tools such as refactorings and completion, etc., and as far as I can tell Xcode has some fairly limited built-in support for this stuff. On that note, does anyone know if any add-ins or whatever are available for the Xcode environment which add coding helpers such as automatically generating implementation skeletons from a class interface definition, etc.? I suspect there aren't, but I suppose it can't hurt to ask. A: Xcode has refactoring for C and Objective-C built in. Just select what you'd like to refactor, choose "Refactor..." from either the menu bar or the contextual menu, and you'll get a window including the available refactorings and a preview area. Xcode doesn't currently have a public plug-in API; if there are specific types of plug-ins you'd like Apple to enable, file enhancement requests in the Bug Reporter. That way Apple can count and track such requests. However, there are third-party tools like Accessorizer and mogenerator (the latest release is mogenerator 1.10) that you can use to make various development tasks faster. Accessorizer helps you create accessor methods for your classes, while mogenerator does more advanced code generation for Core Data managed object classes that are modeled using Xcode's modeling tools. A: You sound as if you're looking for three major things: code templates, refactoring tools, and auto-completion. The good news is that Xcode 3 and later come with superb auto-completion and template support. By default, you have to explicitly request completion by hitting the escape key. (This actually works in all NSTextViews; try it!) If you want to have the completions appear automatically, you can go to Preferences -> Code Sense and set the pop-up to appear automatically after a few seconds. You should find good completions for C and Objective-C code, and pretty good completions for C++. Xcode also has a solid template/skeleton system that you can use. You can see what templates are available by default by going to Edit -> Insert Text Macro. Of course, you don't want to insert text macros with the mouse; that defeats the point. Instead, you have two options: * *Back in Preferences, go to Key Bindings, and then, under Menu Key Bindings, assign a specific shortcut to macros you use often. I personally don't bother doing this, but I know plenty of great Mac devs who do *Use the CompletionPrefix. By default, nearly all of the templates have a special prefix that, if you type and then hit the escape key, will result in the template being inserted. You can use Control-/ to move between the completion fields. You can see a full list of Xcode's default macros and their associated CompletionPrefixes at Crooked Spin. You can also add your own macros, or modify the defaults. To do so, edit the file /Developer/Library/Xcode/Specifications/{C,HTML}.xctxtmacro. The syntax should be self-explanatory, if not terribly friendly. Unfortunately, if you're addicted to R#, you will be disappointed by your refactoring options. Basic refactoring is provided within Xcode through the context menu or by hitting Shift-Apple-J. From there, you can extract and rename methods, promote and demote them through the class hierarchy, and a few other common operations.
Unfortunately, neither Xcode nor any third-party utilities offer anything approaching ReSharper, so on that front, you're currently out of luck. Thankfully, Apple has already demonstrated versions of Xcode in the works that have vastly improved refactoring capabilities, so hopefully you won't have to wait too long before the situation starts to improve. A: I'm excited to say that JetBrains have decided to make a decent IDE for Objective-C coders. It's called AppCode and it's based on their other tools like RubyMine and ReSharper. It's not native Cocoa, but has loads of raw refactoring power. http://www.jetbrains.com/objc/index.html I've started using it for my main Objective-C project and I'm already in love. It's still in its infancy, but for code editing and refactoring it already blows Xcode away. Update: It's now at a totally usable speed. I've switched over to it full-time and it still blows my mind how amazing refactoring and coding is compared with Xcode. It just handles so much for you - auto importing, almost infinite customisation. It makes Xcode look like a toy. A: I found some xctxtmacro files in the Xcode.app package: /Developer/Applications/Xcode.app/Contents/PlugIns/TextMacros.xctxtmacro/Contents/Resources Installed Xcode ver. 3.2.5.
{ "language": "en", "url": "https://stackoverflow.com/questions/20420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: How to maintain a recursive invariant in a MySQL database? I have a tree encoded in a MySQL database as edges: CREATE TABLE items ( num INT, tot INT, PRIMARY KEY (num) ); CREATE TABLE tree ( orig INT, term INT, FOREIGN KEY (orig) REFERENCES items (num), FOREIGN KEY (term) REFERENCES items (num) ) For each leaf in the tree, items.tot is set by someone. For interior nodes, items.tot needs to be the sum of its children. Running the following query repeatedly would generate the desired result. UPDATE items SET tot = ( SELECT SUM(b.tot) FROM tree JOIN items AS b ON tree.term = b.num WHERE tree.orig=items.num) WHERE EXISTS (SELECT * FROM tree WHERE orig=items.num) (note this actually doesn't work but that's beside the point) Assume that the database exists and the invariant is already satisfied. The question is: What is the most practical way to update the DB while maintaining this requirement? Updates may move nodes around or alter the value of tot on leaf nodes. It can be assumed that leaf nodes will stay as leaf nodes, interior nodes will stay as interior nodes and the whole thing will remain a proper tree. Some thoughts I have had: * *Full invalidation: after any update, recompute everything (Um... No) *Set a trigger on the items table to update the parent of any row that is updated * *This would be recursive (updates trigger updates, trigger updates, ...) *Doesn't work, MySQL can't update the table that kicked off the trigger *Set a trigger to schedule an update of the parent of any row that is updated * *This would be iterative (get an item from the schedule, processing it schedules more items) *What kicks this off? Trust client code to get it right? *An advantage is that if the updates are ordered correctly fewer sums need to be computed. But that ordering is a complication in and of itself. An ideal solution would generalize to other "aggregating invariants". FWIW I know this is "a bit overboard", but I'm doing this for fun (Fun: verb, Finding the impossible by doing it. :-) A: I am not sure I understand your question correctly, but this could work: My take on trees in SQL. The linked post describes a method of storing a tree in a database -- PostgreSQL in that case -- but the method is clear enough, so it can be adapted easily to any database. With this method you can easily update all the nodes that depend on a modified node K with about N simple SELECT queries, where N is the distance of K from the root node. I hope your tree is not really deep :). Good luck! A: The problem you are having is clear: recursion in SQL. You need to get the parent of the parent... of the leaf and update its total (either subtracting the old and adding the new, or recomputing). You need some form of identifier to see the structure of the tree, and grab all of a node's children and a list of the parents/path to a leaf to update. This method adds constant space (2 columns to your table --but you only need one table, else you can do a join later). I played around with a structure awhile ago that used a hierarchical format using 'left' and 'right' columns (obviously not those names), calculated by a pre-order traversal and a post-order traversal, respectively --don't worry, these don't need to be recalculated every time. I'll let you take a look at a page using this method in MySQL instead of continuing this discussion, in case you don't like this method as an answer. But if you like it, post/edit and I'll take some time and clarify.
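For the 'left'/'right' (nested set) idea in the last answer, here is a minimal MySQL sketch; the lft/rgt numbers come from a pre-order walk of the tree, and the column names are illustrative:

-- Each node stores the interval [lft, rgt] that encloses its whole subtree.
CREATE TABLE items (
    num INT PRIMARY KEY,
    tot INT,
    lft INT NOT NULL,
    rgt INT NOT NULL
);

-- Sum all the leaves under interior node 42 in one query, no recursion:
SELECT SUM(leaf.tot)
FROM items AS node
JOIN items AS leaf
  ON leaf.lft BETWEEN node.lft AND node.rgt
 AND leaf.rgt = leaf.lft + 1          -- leaves have adjacent lft/rgt values
WHERE node.num = 42;

-- All ancestors of node 42 (exactly the rows whose tot must be refreshed):
SELECT anc.num
FROM items AS node
JOIN items AS anc
  ON node.lft BETWEEN anc.lft AND anc.rgt
WHERE node.num = 42 AND anc.num <> 42;

After a leaf's tot changes, restoring the invariant means recomputing tot for just those ancestor rows, which matches the roughly-N-queries-for-distance-N bound mentioned above.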
{ "language": "en", "url": "https://stackoverflow.com/questions/20426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Cleaning up RTF text I'd like to take some RTF input and clean it to remove all RTF formatting except \ul \b \i to paste it into Word with minor format information. The command used to paste into Word will be something like: oWord.ActiveDocument.ActiveWindow.Selection.PasteAndFormat(0) (with some RTF text already in the Clipboard) {\rtf1\ansi\deff0{\fonttbl{\f0\fnil\fcharset0 Courier New;}} {\colortbl ;\red255\green255\blue140;} \viewkind4\uc1\pard\highlight1\lang3084\f0\fs18 The company is a global leader in responsible tourism and was \ul the first major hotel chain in North America\ulnone to embrace environmental stewardship within its daily operations\highlight0\par Do you have any idea on how I can clean up the RTF safely with some regular expressions or something? I am using VB.NET to do the processing but any .NET language sample will do. A: I would use a hidden RichTextBox, set the Rtf member, then retrieve the Text member to sanitize the RTF in a well-supported way. Then I would manually inject the desired formatting afterwards. A: I'd do something like the following: Dim unformatedtext As String someRTFtext = Replace(someRTFtext, "\ul", "[ul]") someRTFtext = Replace(someRTFtext, "\b", "[b]") someRTFtext = Replace(someRTFtext, "\i", "[i]") Dim RTFConvert As RichTextBox = New RichTextBox RTFConvert.Rtf = someRTFtext unformatedtext = RTFConvert.Text unformatedtext = Replace(unformatedtext, "[ul]", "\ul") unformatedtext = Replace(unformatedtext, "[b]", "\b") unformatedtext = Replace(unformatedtext, "[i]", "\i") Clipboard.SetText(unformatedtext) oWord.ActiveDocument.ActiveWindow.Selection.PasteAndFormat(0) A: You can strip out the tags with regular expressions. Just make sure that your expressions will not filter tags that were actually text. If the text had "\b" in the body of the text, it would appear as "\\b" in the RTF stream, since RTF escapes a literal backslash as "\\". In other words, you would match on "\b" but not "\\b". You could probably take a shortcut and filter out the header RTF tags. Look for the first occurrence of "\viewkind4" in the input. Then read ahead to the first space character. You would remove all of the characters from the start of the text up to and including that space character. That would strip out the RTF header information (fonts, colors, etc). A: Regex it; it won't parse absolutely everything correctly (tables, for example) but it does the job in most cases. string unformatted = Regex.Replace(rtfString, @"\{\*?\\[^{}]+}|[{}]|\\\n?[A-Za-z]+\n?(?:-?\d+)?[ ]?", ""); Magic =)
{ "language": "en", "url": "https://stackoverflow.com/questions/20450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is the point of interfaces in PHP? Interfaces allow you to create code which defines the methods of classes that implement it. You cannot however add any code to those methods. Abstract classes allow you to do the same thing, along with adding code to the method. Now if you can achieve the same goal with abstract classes, why do we even need the concept of interfaces? I've been told that it has to do with OO theory from C++ to Java, which is what PHP's OO stuff is based on. Is the concept useful in Java but not in PHP? Is it just a way to keep from having placeholders littered in the abstract class? Am I missing something? A: Interfaces aren't just for making sure developers implement certain methods. The idea is that because these classes are guaranteed to have certain methods, you can use these methods even if you don't know the class's actual type. Example: interface Readable { String read(); } List<Readable> readables; // dunno what these actually are, but we know they have read(); for(Readable reader : readables) System.out.println(reader.read()); In many cases, it doesn't make sense to provide a base class, abstract or not, because the implementations vary wildly and don't share anything in common besides a few methods. Dynamically typed languages have the notion of "duck typing" where you don't need interfaces; you are free to assume that the object has the method that you're calling on it. This works around the problem in statically typed languages where your object has some method (in my example, read()), but doesn't implement the interface. A: Why would you need an interface if there are already abstract classes? To prevent multiple inheritance (which can cause multiple known problems). One such problem: The "diamond problem" (sometimes referred to as the "deadly diamond of death") is an ambiguity that arises when two classes B and C inherit from A and class D inherits from both B and C. If there is a method in A that B and C have overridden, and D does not override it, then which version of the method does D inherit: that of B, or that of C? Source: https://en.wikipedia.org/wiki/Multiple_inheritance#The_diamond_problem Why/when to use an interface? An example... All cars in the world have the same interface (methods)... AccelerationPedalIsOnTheRight(), BrakePedalIsOnTheLeft(). Imagine that each car brand had these "methods" different from another brand: BMW would have the brakes on the right side, and Honda would have the brakes on the left side of the wheel. People would have to learn how these "methods" work every time they bought a different brand of car. That's why it's a good idea to have the same interface in multiple "places." What does an interface do for you (why would someone even use one)? An interface prevents you from making "mistakes" (it assures you that all classes which implement a specific interface will all have the methods which are in the interface). // Methods inside this interface must be implemented in all classes which implement this interface. interface IPersonService { public function Create($personObject); } class MySqlPerson implements IPersonService { public function Create($personObject) { // Create a new person in the MySQL database. } } class MongoPerson implements IPersonService { public function Create($personObject) { // Mongo database creates a new person differently than MySQL does.
But the code outside of this method doesn't care how a person will be added to the database; all it has to know is that the method Create() has 1 parameter (the person object). } } This way, the Create() method will always be used the same way. It doesn't matter if we are using the MySqlPerson class or the MongoPerson class. The way we use a method stays the same (the interface stays the same). For example, it will be used like this (everywhere in our code): (new MySqlPerson())->Create($personObject); (new MongoPerson())->Create($personObject); This way, something like this can't happen: (new MySqlPerson())->Create($personObject) (new MongoPerson())->Create($personsName, $personsAge); It's much easier to remember one interface and use the same one everywhere than multiple different ones. This way, the inside of the Create() method can be different for different classes, without affecting the "outside" code, which calls this method. All the outside code has to know is that the method Create() has 1 parameter ($personObject), because that's how the outside code will use/call the method. The outside code doesn't care what's happening inside the method; it only has to know how to use/call it. You can do this without an interface as well, but if you use an interface, it's "safer" (because it prevents you from making mistakes). The interface assures you that the method Create() will have the same signature (the same types and the same number of parameters) in all classes that implement the interface. This way you can be sure that ANY class which implements the IPersonService interface will have the method Create() (in this example) and will need only 1 parameter ($personObject) to get called/used. A class that implements an interface must implement all the methods that the interface defines. I hope that I didn't repeat myself too much. A: In my opinion, interfaces should be preferred over non-functional abstract classes. I wouldn't be surprised if there were even a performance hit there, as there is only one object instantiated, instead of parsing two and combining them (although I can't be sure; I'm not familiar with the inner workings of OOP in PHP). It is true that interfaces are less useful/meaningful than in, say, Java. On the other hand, PHP6 will introduce even more type hinting, including type hinting for return values. This should add some value to PHP interfaces. tl;dr: an interface defines a list of methods that need to be followed (think API), while an abstract class gives some basic/common functionality, which the subclasses refine to specific needs. A: I can't remember if PHP is different in this respect, but in Java, you can implement multiple interfaces, but you can't inherit multiple abstract classes. I'd assume PHP works the same way. In PHP you can apply multiple interfaces by separating them with a comma (I think; I don't find that a clean solution). As for multiple abstract classes, you could have multiple abstracts extending each other (again, I'm not totally sure about that but I think I've seen that somewhere before). The only thing you can't extend is a final class. A: Interfaces will not give your code any performance boosts or anything like that, but they can go a long way toward making it maintainable. It is true that an abstract class (or even a non-abstract class) can be used to establish an interface to your code, but proper interfaces (the ones you define with the keyword and that only contain method signatures) are just plain easier to sort through and read.
That being said, I tend to use discretion when deciding whether or not to use an interface over a class. Sometimes I want default method implementations, or variables that will be common to all subclasses. Of course, the point about multiple-interface implementation is a sound one, too. If you have a class that implements multiple interfaces, you can use an object of that class as different types in the same application. The fact that your question is about PHP, though, makes things a bit more interesting. Typing to interfaces is still not incredibly necessary in PHP, where you can pretty much feed anything to any method, regardless of its type. You can statically type method parameters, but some of that is broken (String, I believe, causes some hiccups). Couple this with the fact that you can't type most other references, and there isn't much value in trying to force static typing in PHP (at this point). And because of that, the value of interfaces in PHP, at this point, is far less than it is in more strongly typed languages. They have the benefit of readability, but little else. Multiple implementation isn't even beneficial, because you still have to declare the methods and give them bodies within the implementor.
A: Interfaces are like your genes. Abstract classes are like your actual parents. Their purposes are hereditary, but in the case of abstract classes vs. interfaces, what is inherited is more specific.
A: I don't know how other languages treat the concept of an interface, but for PHP I will try my best to explain it. Just be patient, and please comment if this helped. An interface works as a "contract", specifying what a set of subclasses does, but not how they do it. The rules * An interface can't be instantiated. * You can't implement any method in an interface, i.e. it only contains the signature of the method but not the details (body). * Interfaces can contain methods and/or constants, but no attributes. Interface constants have the same restrictions as class constants. Interface methods are implicitly abstract. * Interfaces must not declare constructors or destructors, since these are implementation details on the class level. * All the methods in an interface must have public visibility. Now let's take an example. Suppose we have two toys: one is a Dog, and the other one is a Cat. As we know, a dog barks and a cat meows. These two have the same speak method, but with different functionality or implementation. Suppose we are giving the user a remote control that has a speak button. When the user presses the speak button, the toy has to speak; it doesn't matter if it's a Dog or a Cat. This is a good case to use an interface, not an abstract class, because the implementations are different. Why? Remember: if you need to support the child classes by adding some non-abstract method, you should use abstract classes. Otherwise, interfaces would be your choice.
A: The difference between using an interface and an abstract class has more to do with code organization for me than enforcement by the language itself. I use them a lot when preparing code for other developers to work with so that they stay within the intended design patterns. Interfaces are a kind of "design by contract" whereby your code is agreeing to respond to a prescribed set of API calls that may be coming from code you do not have access to. While inheritance from an abstract class is an "is a" relation, that isn't always what you want, and implementing an interface is more of an "acts like a" relation.
This difference can be quite significant in certain contexts. For example, let us say you have an abstract class Account from which many other classes extend (types of accounts and so forth). It has a particular set of methods that are only applicable to that type group. However, some of these account subclasses implement Versionable, or Listable, or Editable so that they can be thrown into controllers that expect to use those APIs. The controller does not care what type of object it is. By contrast, I can also create an object that does not extend from Account, say a User abstract class, and still implement Listable and Editable, but not Versionable, which doesn't make sense here. In this way, I am saying that the FooUser subclass is NOT an account, but DOES act like an Editable object. Likewise BarAccount extends from Account and is not a User subclass, but implements Editable, Listable and also Versionable. Adding all of these APIs for Editable, Listable and Versionable into the abstract classes themselves would not only be cluttered and ugly, but would either duplicate the common interfaces in Account and User, or force my User object to implement Versionable, probably just to throw an exception.
A: Interfaces are essentially a blueprint for what you can create. They define what methods a class must have, but you can create extra methods outside of those limitations. I'm not sure what you mean by not being able to add code to methods - because you can. Are you applying the interface to an abstract class or the class that extends it? A method in the interface applied to the abstract class will need to be implemented in that abstract class. However, apply that interface to the extending class and the method only needs implementing in the extending class. I could be wrong here - I don't use interfaces as often as I could/should. I've always thought of interfaces as a pattern for external developers or an extra ruleset to ensure things are correct.
A: Below are the key points about PHP interfaces: * An interface is used to define the required set of methods in a class [e.g. if you want to load HTML, then an id and a name are required, so in this case the interface includes setID and setName]. * An interface strictly forces the class to include all the methods defined in it. * You can only declare methods in an interface with public visibility. * You can also extend an interface, like a class, using the extends keyword. * You can extend multiple interfaces. * You cannot implement 2 interfaces if both share a function with the same name; it will throw an error. Example code: interface test{ public function A($i); public function B($j = 20); } class xyz implements test{ public function A($a){ echo "CLASS A Value is ".$a; } public function B($b){ echo "CLASS B Value is ".$b; } } $x = new xyz(); echo $x->A(11); echo "<br/>"; echo $x->B(10);
A: We saw that abstract classes and interfaces are similar in that they provide abstract methods that must be implemented in the child classes. However, they still have the following differences: 1. Interfaces can include abstract methods and constants, but cannot contain concrete methods and variables. 2. All the methods in the interface must be in the public visibility scope. 3. A class can implement more than one interface, while it can inherit from only one abstract class.
                      interface                        abstract class
the code              - abstract methods               - abstract methods
                      - constants                      - constants
                                                       - concrete methods
                                                       - concrete variables
access modifiers      - public                         - public
                                                       - protected
                                                       - private, etc.
number of parents     The same class can implement     The child class can inherit
                      more than 1 interface            only from 1 abstract class
Hope this helps anyone to understand!
A: You will use interfaces in PHP: * To hide implementation - establish an access protocol to a class of objects and change the underlying implementation without refactoring all the places you've used those objects * To check type - as in making sure that a parameter has a specific type $object instanceof MyInterface * To enforce parameter checking at runtime * To implement multiple behaviours in a single class (build complex types) class Car implements EngineInterface, BodyInterface, SteeringInterface { so that a Car object can now start(), stop() (EngineInterface) or goRight(), goLeft() (SteeringInterface) and other things I cannot think of right now Number 4 is probably the most obvious use case that you cannot address with abstract classes. From Thinking in Java: An interface says, "This is what all classes that implement this particular interface will look like." Thus, any code that uses a particular interface knows what methods can be called for that interface, and that's all. So the interface is used to establish a "protocol" between classes.
A: The entire point of interfaces is to give you the flexibility to have your class be forced to implement multiple interfaces, but still not allow multiple inheritance. The issues with inheriting from multiple classes are many and varied, and the Wikipedia page on it sums them up pretty well. Interfaces are a compromise. Most of the problems with multiple inheritance don't apply to abstract base classes, so most modern languages these days disable multiple inheritance yet call abstract base classes interfaces and allow a class to "implement" as many of those as they want.
A: The concept is useful all around in object-oriented programming. To me, an interface is a contract. As long as my class and your class agree on this method signature contract, we can "interface". As for abstract classes, those I see more as base classes that stub out some methods, where I need to fill in the details.
A: Interfaces exist not as a base on which classes can extend but as a map of required functions. The following is an example of using an interface where an abstract class does not fit: Let's say I have a calendar application that allows users to import calendar data from external sources. I would write classes to handle importing each type of data source (ical, rss, atom, json). Each of those classes would implement a common interface that would ensure they all have the common public methods that my application needs to get the data. <?php interface ImportableFeed { public function getEvents(); } Then when a user adds a new feed, I can identify the type of feed it is and use the class developed for that type to import the data. Each class written to import data for a specific feed would have completely different code; there may otherwise be very few similarities between the classes outside of the fact that they are required to implement the interface that allows my application to consume them. If I were to use an abstract class, I could very easily ignore the fact that I have not overridden the getEvents() method, which would then break my application in this instance, whereas using an interface would not let my app run if ANY of the methods defined in the interface do not exist in the class that implemented it.
My app doesn't have to care what class it uses to get data from a feed, only that the methods it needs to get that data are present. To take this a step further, the interface proves to be extremely useful when I come back to my calendar app with the intent of adding another feed type. Using the ImportableFeed interface means I can continue adding more classes that import different feed types by simply adding new classes that implement this interface. This allows me to add tons of functionality without adding unnecessary bulk to my core application, since my core application relies only on the public methods the interface requires being available; as long as my new feed import classes implement the ImportableFeed interface, I know I can just drop them in place and keep moving. This is just a very simple start. I can then create another interface that all my calendar classes can be required to implement that offers more functionality specific to the feed type the class handles. Another good example would be a method to verify the feed type, etc. This goes beyond the question, but since I used the example above: Interfaces come with their own set of issues if used in this manner. I find myself needing to ensure the output returned from the implemented methods matches the interface, and to achieve this I use an IDE that reads PHPDoc blocks and I add the return type as a type hint in a PHPDoc block of the interface, which will then translate to the concrete class that implements it. My classes that consume the data output from the classes that implement this interface will then at the very least know to expect an array returned, as in this example: <?php interface ImportableFeed { /** * @return array */ public function getEvents(); } There isn't much room in which to compare abstract classes and interfaces. Interfaces are simply maps that, when implemented, require the class to have a set of public methods.
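To make the drop-in idea concrete, here is a minimal sketch of what adding another feed type might look like; the class name and method body are hypothetical, and only the ImportableFeed contract comes from the example above:
<?php
// Hypothetical importer for one more feed type; only ImportableFeed is from the example above.
class IcalFeedImporter implements ImportableFeed
{
    /**
     * @return array
     */
    public function getEvents()
    {
        $events = array();
        // ... parse the .ics source here and fill $events ...
        return $events;
    }
}

// The core app depends only on the interface, so any importer drops straight in.
function collectEvents(ImportableFeed $feed)
{
    return $feed->getEvents();
}

$events = collectEvents(new IcalFeedImporter());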
{ "language": "en", "url": "https://stackoverflow.com/questions/20463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "244" }
Q: .NET - Excel ListObject autosizing on databind I'm developing an Excel 2007 add-in using Visual Studio Tools for Office (2008). I have one sheet with several ListObjects on it, which are being bound to datatables on startup. When they are bound, they autosize correctly. The problem comes when they are re-bound. I have a custom button on the ribbon bar which goes back out to the database and retrieves different information based on some criteria that the user inputs. This new data comes back and is re-bound to the ListObjects - however, this time they are not resized and I get an exception: ListObject cannot be bound because it cannot be resized to fit the data. The ListObject failed to add new rows. This can be caused because of inability to move objects below of the list object. Inner exception: "Insert method of Range class failed" Reason: Microsoft.Office.Tools.Excel.FailureReason.CouldNotResizeListObject I was not able to find anything very meaningful on this error on Google or MSDN. I have been trying to figure this out for a while, but to no avail. Basic code structure: //at startup DataTable tbl = //get from database listObj1.SetDataBinding(tbl); DataTable tbl2 = //get from database listObj2.SetDataBinding(tbl2); //in buttonClick event handler DataTable tbl = //get different info from database //have tried with and without unbinding old source listObj1.SetDataBinding(tbl); <-- exception here DataTable tbl2 = //get different info from database listObj2.SetDataBinding(tbl2); Note that this exception occurs even when the ListObject is shrinking, and not only when it grows.
A: If anyone else is having this problem, I have found the cause of this exception. ListObjects will automatically re-size on binding, as long as they do not affect any other objects on the sheet. Keep in mind that ListObjects can only affect the Ranges which they wrap around. In my case, the list object which was above the other one had fewer columns than the one below it. Let's say the top ListObject had 2 columns, and the bottom ListObject had 3 columns. When the top ListObject changed its number of rows, it had no ability to make any changes to the third column since it wasn't in its underlying Range. This means that it couldn't shift any cells in the third column, and so the second ListObject couldn't be properly moved, resulting in my exception above. Changing the positions of the ListObjects to place the wider one above the smaller one works fine. Following the logic above, this now means that the wider ListObject can shift all of the columns of the second ListObject, and since there is nothing below the smaller one it can also shift any cells necessary. The reason I wasn't having any trouble on the initial binding is that both ListObjects were a single cell. Since this is not optimal in my case, I will probably use empty columns or try to play around with invisible columns if that's possible, but at least the cause is now clear.
A: I've got a similar issue with refreshing multiple ListObjects. We are setting each listObject.DataSource = null, then rebinding starting at the bottom ListObject and working our way up instead of from the top down.
A: Just an idea of something to try to see if it gives you more info: try resizing the list object before the exception line and see if that also throws an exception. If not, try to resize the range object to the new size of the DataTable. You say that this happens when the ListObject shrinks and grows. Does it also happen if the ListObject remains the same size?
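Building on the bottom-up rebinding idea above, a sketch of the button handler might look like the following; it assumes the VSTO ListObject's Disconnect() method and is an untested illustration of the ordering rather than a verified fix:
// Detach both lists from their old sources first, so neither constrains the other.
listObj1.Disconnect();
listObj2.Disconnect();
// Rebind from the bottom ListObject up, so each resize only shifts cells
// that no other ListObject still occupies.
listObj2.SetDataBinding(tbl2);
listObj1.SetDataBinding(tbl);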
{ "language": "en", "url": "https://stackoverflow.com/questions/20465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Path Display in Label Are there any automatic methods for trimming a path string in .NET? For example: C:\Documents and Settings\nick\My Documents\Tests\demo data\demo data.emx becomes C:\Documents...\demo data.emx It would be particularly cool if this were built into the Label class, and I seem to recall it is--can't find it though!
A: Use TextRenderer.DrawText with the TextFormatFlags.PathEllipsis flag void label_Paint(object sender, PaintEventArgs e) { Label label = (Label)sender; TextRenderer.DrawText(e.Graphics, label.Text, label.Font, label.ClientRectangle, label.ForeColor, TextFormatFlags.PathEllipsis); } Your code is 95% there. The only problem is that the trimmed text is drawn on top of the text which is already on the label. Yes thanks, I was aware of that. My intention was only to demonstrate the use of the DrawText method. I didn't know whether you wanted to manually create an event for each label or just override the OnPaint() method in an inherited label. Thanks for sharing your final solution though.
A: @ lubos hasko Your code is 95% there. The only problem is that the trimmed text is drawn on top of the text which is already on the label. This is easily solved: Label label = (Label)sender; using (SolidBrush b = new SolidBrush(label.BackColor)) e.Graphics.FillRectangle(b, label.ClientRectangle); TextRenderer.DrawText( e.Graphics, label.Text, label.Font, label.ClientRectangle, label.ForeColor, TextFormatFlags.PathEllipsis);
A: Not hard to write yourself though: public static string TrimPath(string path) { int someArbitraryNumber = 10; string directory = Path.GetDirectoryName(path); string fileName = Path.GetFileName(path); if (directory.Length > someArbitraryNumber) { return String.Format(@"{0}...\{1}", directory.Substring(0, someArbitraryNumber), fileName); } else { return path; } } I guess you could even add it as an extension method.
A: What you are thinking of on the Label is that it will put ... at the end if the text is longer than the width (when not set to auto size), but that would give c:\Documents and Settings\nick\My Doc... If there is support, it would probably be on the Path class in System.IO
A: You could use the System.IO.Path.GetFileName method and append that string to a shortened System.IO.Path.GetDirectoryName string.
A: The following code works for folders. I'm using it to display a download path! public static string TrimPath(string path) { string shortenedPath = ""; string[] pathParts = path.Split('\\'); for (int i = 0; i < pathParts.Length-1; i++) { string part = pathParts[i]; if (pathParts.Length-2 != i) { if (part.Length > 5) { // If the folder name is longer than 5 chars, collapse it shortenedPath += "..\\"; } else { shortenedPath += part+"\\"; } } else { shortenedPath += part+"\\"; } } return shortenedPath; } Example: Input: C:\Users\Sandra\Desktop\Proyectos de programación\Prototype\ServerClient\test Output: C:\Users\..\..\..\..\..\test\
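For completeness: if you want the shortened string itself rather than owner-drawn text, Windows ships a path-compacting helper, PathCompactPathEx in shlwapi.dll, which produces exactly the C:\Documents...\demo data.emx style asked about. A P/Invoke sketch (the 40-character budget in the usage line is arbitrary):
using System.Text;
using System.Runtime.InteropServices;

public static class PathShortener
{
    [DllImport("shlwapi.dll", CharSet = CharSet.Unicode)]
    private static extern bool PathCompactPathEx(
        [Out] StringBuilder pszOut, string szPath, int cchMax, int dwFlags);

    // cchMax counts the terminating null, hence the +1.
    public static string Compact(string path, int maxLength)
    {
        StringBuilder sb = new StringBuilder(maxLength + 1);
        PathCompactPathEx(sb, path, maxLength + 1, 0);
        return sb.ToString();
    }
}

// Usage: label1.Text = PathShortener.Compact(fullPath, 40);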
{ "language": "en", "url": "https://stackoverflow.com/questions/20467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Use a LIKE clause in part of an INNER JOIN Can/Should I use a LIKE criteria as part of an INNER JOIN when building a stored procedure/query? I'm not sure I'm asking the right thing, so let me explain. I'm creating a procedure that is going to take a list of keywords to be searched for in a column that contains text. If I was sitting at the console, I'd execute it as such: SELECT Id, Name, Description FROM dbo.Card WHERE Description LIKE '%warrior%' OR Description LIKE '%fiend%' OR Description LIKE '%damage%' But a trick I picked up a little while ago to do "strongly typed" list parsing in a stored procedure is to parse the list into a table variable/temporary table, converting it to the proper type and then doing an INNER JOIN against that table in my final result set. This works great when sending, say, a list of integer IDs to the procedure. I wind up having a final query that looks like this: SELECT Id, Name, Description FROM dbo.Card INNER JOIN @tblExclusiveCard ON dbo.Card.Id = @tblExclusiveCard.CardId I want to use this trick with a list of strings. But since I'm looking for a particular keyword, I am going to use the LIKE clause. So ideally I'm thinking I'd have my final query look like this: SELECT Id, Name, Description FROM dbo.Card INNER JOIN @tblKeyword ON dbo.Card.Description LIKE '%' + @tblKeyword.Value + '%' Is this possible/recommended? Is there a better way to do something like this? The reason I'm putting wildcards on both ends of the clause is because there are "archfiend", "beast-warrior", "direct-damage" and "battle-damage" terms that are used in the card texts. I'm getting the impression that depending on the performance, I can either use the query I specified or use a full-text keyword search to accomplish the same task? Other than having the server do a text index on the fields I want to text search, is there anything else I need to do?
A: Your first query will work but will require a full table scan because any index on that column will be ignored. You will also have to do some dynamic SQL to generate all your LIKE clauses. Try a full-text search if you're using SQL Server, or check out one of the Lucene implementations. Joel talked about his success with it recently.
A: Try it... select * from table11 a inner join table2 b on b.id like '%' + a.id + '%' where a.city = 'abc' It works for me. :-)
A: Try this select * from Table_1 a left join Table_2 b on b.type LIKE '%' + a.type + '%' This practice is not ideal. Use with caution.
A: It seems like you are looking for full-text search, because you want to query a set of keywords against the card description and find any hits. Correct?
A: Personally, I have done it before, and it has worked out well for me. The only issue I could see is possibly with an unindexed column, but I think you would have the same issue with a WHERE clause. My advice to you is just to look at the execution plans between the two. I'm sure which one is better will differ depending on the situation, just like all good programming problems.
A: @Dillie-O How big is this table? What is the data type of the Description field? If either is small, a full-text search will be overkill. @Dillie-O Maybe not the answer you were looking for, but I would advocate a schema change...
proposed schema: create table name( nameID int identity ,name varchar(50)) create table description( descID int identity ,[desc] varchar(50)) -- something reasonable; and to make the most of it, always lowercase your values create table nameDescJunc( nameID int ,descID int) This will let you use indexes without having to implement a bolt-on solution, and keeps your data atomic. related: Recommended SQL database design for tags or tagging
A: a trick I picked up a little while ago to do "strongly typed" list parsing in a stored procedure I think what you might be alluding to here is to put the keywords to include into a table, then use relational division to find matches (you could also use another table for words to exclude). For a worked example in SQL, see Keyword Searches by Joe Celko.
A: Performance will depend on the actual server that you use, on the schema of the data, and on the amount of data. With current versions of MS SQL Server, that query should run just fine (MS SQL Server 7.0 had issues with that syntax, but it was addressed in SP2). Have you run that code through a profiler? If the performance is fast enough and the data has the appropriate indexes in place, you should be all set.
A: LIKE '%fiend%' will never use an index seek; LIKE 'fiend%' will. Simply put, a leading-wildcard search is not sargable.
A: Try this: SELECT Id, Name, Description FROM dbo.Card INNER JOIN @tblKeyword ON dbo.Card.Description LIKE '%' + @tblKeyword.Value + '%'
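One detail none of the answers mention: with the join-on-LIKE approach, a card whose description matches several keywords comes back once per matching keyword, so you usually want a DISTINCT (or an EXISTS test). A sketch of the full pattern, with the list-parsing step elided:
DECLARE @tblKeyword TABLE (Value varchar(50))
-- ... parse the incoming keyword list into @tblKeyword here ...

SELECT DISTINCT c.Id, c.Name, c.Description
FROM dbo.Card c
INNER JOIN @tblKeyword k
    ON c.Description LIKE '%' + k.Value + '%'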
{ "language": "en", "url": "https://stackoverflow.com/questions/20484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Executing JavaScript from Flex: Is this JavaScript function dangerous? I have a Flex application that needs the ability to generate and execute JavaScript. When I say this, I mean I need to execute raw JavaScript that I create in my Flex application (not just an existing JavaScript method). I am currently doing this by exposing the following JavaScript method: function doScript(js){ eval(js);} I can then do something like this in Flex (note: I am doing something more substantial than an alert box in the real Flex app): ExternalInterface.call("doScript", "alert('foo')"); My question is: does this impose any security risk? I am assuming it doesn't, since the Flex and JavaScript all run client side... Is there a better way to do this?
A: There's no need for the JavaScript function; the first argument to ExternalInterface can be any JavaScript code, it doesn't have to be a function name (the documentation says so, but it is wrong). Try this: ExternalInterface.call("alert('hello')");
A: This isn't inherently dangerous, but the moment you pass any user-provided data into the function, it's ripe for a code injection exploit. That's worrisome, and something I'd avoid. I think a better approach would be to only expose the functionality you need, and nothing more.
A: As far as I know, and I'm definitely not a hacker, you are completely fine. Really, if someone wanted to, they could exploit your code client-side anyway, but I don't see how they could exploit your server-side code using JavaScript (unless you use server-side JavaScript).
A: I don't see where this lets them do anything that they couldn't do already by calling eval. If there's a security hole being introduced here, I don't see it.
A: Remember also that the script actions are controlled by the "AllowScriptAccess" attribute in the object/embed statement. If the web page doesn't want these actions, it should not permit scripts to call out. http://kb.adobe.com/selfservice/viewContent.do?externalId=tn_16494
A: ExternalInterface.call("eval", "alert('hello');");
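To flesh out the "only expose the functionality you need" advice above, one possible shape for a narrower bridge - the function and operation names here are invented for the example - is a dispatcher over whitelisted operations instead of a raw eval hook:
// JavaScript side: named operations only, no eval.
var allowedOps = {
    showMessage: function (text) { alert(text); },
    setTitle: function (text) { document.title = text; }
};
function doOp(name, arg) {
    if (allowedOps.hasOwnProperty(name)) {
        allowedOps[name](arg);
    }
}
// Flex side:
ExternalInterface.call("doOp", "showMessage", "foo");
Attacker-controlled strings then never reach eval; they can only select from operations you wrote yourself.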
{ "language": "en", "url": "https://stackoverflow.com/questions/20510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you unit test web apps hosted remotely? I'm familiar with TDD and use it in both my workplace and my home-brewed web applications. However, every time I have used TDD in a web application, I have had the luxury of having full access to the web server. That means that I can update the server and then run my unit tests directly from the server. My question is, if you are using a third-party web host, how do you run your unit tests on them? You could argue that if your app is designed well and your build process is sound and automated, that running unit tests on your production server isn't necessary, but personally I like the peace of mind in knowing that everything is still "green" after a major update. For everyone who has responded with "just test before you deploy" and "don't you have a staging server?", I understand where you're coming from. I do have a staging server and a CI process set up. My unit tests do run and I make sure they all pass before an update to production. I realize that in a perfect world I wouldn't be concerned with this. But I've seen it happen before. If a file is left out of the update or a SQL script isn't run, the effects are immediately apparent when running your unit tests but can go unnoticed for quite some time without them. What I'm asking here is if there is any way, if only to satisfy my own compulsive desires, to run a unit test on a server that I cannot install applications on or remote into (e.g. one which I will only have FTP access to in order to update files)?
A: I think I probably would have to argue that running unit tests on your production server isn't really part of TDD, because by the time you deploy to your production environment, technically speaking, you're past "development". I'm quite a stickler for TDD, and when I'm preaching the benefits to clients I often find myself saying "you can't half adopt TDD, it's all or nothing". What you probably should have is some form of automated testing that you perform "after" deployment, but these are not part of TDD. Maybe you should look at your process again.
A: You could write functional tests in something like WATIR, WATIN or Selenium that test what is returned in the response page after posting certain form data or requesting specific URLs.
A: For clarification: what sort of access do you have to your web server? FTP or WebDAV only? From your question, I'm guessing ssh access isn't available - you're dropping files in a directory to deploy. Is that correct? If so, the answer for unit testing is likely 'do it before you deploy'. You can set up functional testing driven by an automated tool like Selenium to test your app remotely via the web interface, but that's not really unit testing in the sense that you're restricted to testing the system as a whole. Have you considered setting up a staging server, perhaps as a VMware instance, that mirrors or at least mimics your deployment environment?
A: What's preventing you from running unit tests on the server? If you can upload your production code and let it run there, why can't you upload this other code and run it as well?
A: I've written test tools for sites using Python and httplib/urllib2; generally it would have been overkill, but it was suitable in these cases. Not sure it's going to be of general use, though.
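As a concrete (and period-appropriate) sketch of that Python/urllib2 approach, a post-deploy smoke test can be as small as the following; the URLs and expected strings are placeholders for your own app:
# Python 2-era sketch; run it right after the FTP upload finishes.
import urllib2

CHECKS = [
    ("http://www.example.com/", "Welcome"),
    ("http://www.example.com/login", "<form"),
]

for url, expected in CHECKS:
    response = urllib2.urlopen(url)
    body = response.read()
    assert response.getcode() == 200, "%s returned %s" % (url, response.getcode())
    assert expected in body, "%s is missing %r" % (url, expected)
    print "OK:", url
It isn't a unit test, but it catches exactly the missing-file and skipped-SQL-script failures described in the question.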
{ "language": "en", "url": "https://stackoverflow.com/questions/20511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Error installing iKernel.exe Has anybody come across this problem during an application install? * *OS is Windows Server 2k3 I do have local Admin access and I have installed this application on other machines. Any help would be much appreciated, as Google isn't helping much A: This issue may occur if one or more of the following files are missing from the Windows\System32 folder: * *Stdole32.tlb *Stdole2.tlb *Stdole.tlb To resolve this issue, expand the appropriate file from the Windows Server 2003 Installation disk: (using the command prompt) Expand cd_drive_letter:\i386\filename.tl_ drive:\Windows\system32\filename.tlb and then press ENTER. * *filename is the name of the file that you want to expand *drive is the letter of the drive where Windows is installed
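If all three .tlb files are missing, a small loop at the command prompt saves repeating the command (this assumes the CD is drive D: and Windows is installed on C:):
for %f in (stdole32 stdole2 stdole) do expand D:\i386\%f.tl_ C:\Windows\system32\%f.tlb
Double the percent signs (%%f) if you put the line in a .bat file instead of typing it at the prompt.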
{ "language": "en", "url": "https://stackoverflow.com/questions/20522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Useful Eclipse features? I have been using Eclipse as an IDE for a short amount of time (about 3 months of full use) and almost every day I learn about some shortcut or feature that I had absolutely no idea about. For instance, just today I learned that Ctrl+3 was the shortcut for a Quick Access window. I was wondering what your most useful/favorite Eclipse features are. With the IDE being so big, it would be helpful to learn about the more commonly used parts of the program.
A: Local History It's a great feature. Eclipse has its own mini-CVS for all files in a project. If you want to revert some change you made, or even restore a deleted file - you can right-click on the project and select "Restore from Local History". Just saved my ass *tears of joy*
A: My most commonly used features are ctrl+1 quick-fix / spell-checker opening files * ctrl+shift+t load class file by classname * ctrl+shift+r load any file by filename Matches are made on the start of the class/filename. Start your search pattern with a * to search anywhere within the filename/classname. Formatting * ctrl+shift+f Format source file (set up your formatting style in Window | Preferences | Java | Code Style | Formatter) * ctrl+shift+o Organise imports Generated code * alt+s,r to generate getters and setters * alt+s,v to insert method signatures for overridden methods from superclass or interface Refactorings * alt+shift+l Extract text-selection as local variable (really handy in that it determines and inserts the type for you) * alt+shift+m Extract text-selection as a method * alt+shift+i inline selected method Running and debugging: alt+shift+x is a really handy prefix to run stuff in your current file. * alt+shift+x, t run unit tests in current file * alt+shift+x, j run main in current file * alt+shift+x, r run on server There are more. The options are shown to you in the lower-right popup after hitting alt+shift+x. alt+shift+x can be switched for alt+shift+d in all the above examples to run in the debugger. Validation As of the recent Ganymede release, you can now switch off validation in specified files and folders. * Go to Project | Properties | Validation * click on the ... button in the Settings column of the validator you want to shut up * Add a rule to the exclude group Code navigation * hold down ctrl to make all variables, methods and classnames hyperlinks to their definitions * alt+left to navigate back to where you clicked ctrl * alt+right to go "forwards" again
A: CTRL+Shift+P to jump to the matching bracket/parenthesis.
A: This is cool: if someone has emailed you a stack trace, you can copy and paste the stack trace into Eclipse's Console window. You can then click on class names in the stack trace as if your own code had generated it.
A: One key feature: Shift+Alt+T for the refactoring menu.
A: * Ctrl-shift-T, but only type the initial characters (and even a few more) of the class you're looking for. For example, you can type "NetLi" to find NetworkListener * In the Search window, Ctrl-. takes you to the first leaf of a tree branch * Alt-/ is Word Completion. Slightly different from Ctrl-space
A: A lot of the key bindings depend on the perspective and view currently active.
My most used ones for the Java perspective: * ctrl-shift-r open resource * ctrl-shift-t open type * ctrl-1 quick fix/refactor * ctrl-j incremental search * ctrl-h search in files (select a base directory and set scope to selected resources) * ctrl-o list methods * ctrl-alt-h open call hierarchy * ctrl-shift-l list shortcut keys * hit ctrl-shift-l again to go directly to preferences to change key mappings
A: I'd like to add two additional shortcuts: * CTRL+F11 start program in normal (run) mode * F11 start program in debug mode * CTRL+F6 Switch between open editors (CTRL+SHIFT+F6 to scroll through the list in the opposite direction)
A: * CTRL+SHIFT+X selected text becomes UPPERCASE * CTRL+SHIFT+Y selected text becomes lowercase
A: I am also a fan of Eclipse, however since I spend a majority of my time in Visual Studio, I will suggest that you read Eric Sink's series of articles "C# to Java" (parts 1-4). Not only is Eric always an entertaining read, but this brief series covers some awesome Eclipse insight for those who have not been into Eclipse or have been away from it for a while: From C# to Java: Part 1 From C# to Java: Part 2 From C# to Java: Part 3 From C# to Java: Part 4
A: Ctrl-Shift-M, while the cursor is on a class name in your Java file, will specifically add that and only that class to your imports. This is different from Ctrl-Shift-O, which will not only add those imports not already defined, but will also remove imports not currently needed, something you might not necessarily want to do. I forgot about [Ctrl+2 -> r] scope variable rename. Place the cursor in the variable you wish to rename, press Ctrl+2, then r, then type the new name, watching all instances of that variable get renamed at the same time. It's awesome at refactoring Hungarian Notation.
A: alt-shift-a is extremely useful in a few situations.
A: Ctrl-Alt (up/down) Copy selected line(s) above or below current line. Alt (up/down) Move current (or multiple selected) lines up or down Ctrl-Shift-R Bring up the resource window, start typing to find class/resource Ctrl-O Bring up all methods and fields for the current class. Hitting it again will bring up all methods and fields for the current class and superclasses. Ctrl-/ or Ctrl-Alt-C Comment single or multiple lines with // Ctrl-Shift-/ Comment selected lines with /* */ Ctrl-. Take you to the next error or warning line
A: In terms of actual features, rather than shortcuts, I strongly recommend taking a look at Mylyn. It essentially skins Eclipse with a task-focussed view. It tracks the files you touch when working on a task, and focusses many aspects of the UI onto the resources that it decides are relevant to the job in hand. Eclipse can be somewhat busy to look at, especially with a big multi-module project, and Mylyn helps cut through the cruft. The connectivity to issue tracking software and source control repositories is also excellent. In my experience, it polarises opinion amongst those who try working with it, which is probably a sign that it is offering something interesting... Don't mean to sound like a fanboy - it is definitely worth a look though.
A: A shortcut that I use every day is Ctrl+K. In your editor (not only Java files), simply select some text (like a variable, a function, etc.), and then use this shortcut to go to the next occurrence of this text in the current editor. It's faster than using the Ctrl+F shortcut... Note also that you can use Ctrl+Shift+K to search backwards.
A: CTRL+PAGE DOWN / CTRL+PAGE UP to switch between opened editors CTRL+E to also switch between opened editors (allows you to type the name) CTRL+O is extremely important for me. You no longer need the Outline View then (you can close it, which will give you more space). Then you can type a method name, or just the beginning of it, and you can quickly get to it. I also use it to inspect what stuff is available. For example: CTRL+O and then type get ... now I see all getters. F3 while an element is selected in the code: brings you to its definition or its source, e.g. used on a method call it brings you into the source code of that method. CTRL+M to maximize the current window As already said, CTRL+3 is extremely good. It basically allows you to use Eclipse completely without a mouse. Just type CTRL+3 and then package explorer, for example. CTRL+F8 cycle through perspectives CTRL+L allows you to type a line number and brings you directly to that line. CTRL+SHIFT+G searches for all references to the selected element in the workspace. And not a shortcut: In the project settings under Java Editor you can find Save Actions. This allows you to set up the project so that the code is automatically cleaned up and formatted when you save a file. That's very good; it saves you from constantly pressing CTRL+SHIFT+O and CTRL+SHIFT+F.
A: Eclipse auto refresh isn't on by default, so if you make changes to a file outside of Eclipse, the change won't be reflected in your build. This is very annoying if you just did an svn/git update/rebase and things aren't working the way they're supposed to. Turn it on in Window -> Preferences -> Workspace and tick Refresh Automatically.
A: I use a lot of the above and also, for quick search: CTRL+J, then type what I am looking for, then CTRL+K for the next occurrence.
A: Shift+Alt+B for the simple navigation row over the editor.
A: Lately I've been using the MouseFeeds plugin to automatically tell me what the keystroke combinations are. That way, by repetition, I remember them better. This link has a better picture and description of what it looks like and does.
A: I've just released this blog post about Top 5 Useful Hidden Eclipse Features. It contains: * Favorites: Types and members that will always show up in auto-completion * The awesome block selection mode: For multi-line editing * The EGit staging view: Much better than git itself * Type filters: To remove awt and java.lang.Object stuff from auto-completion * Formatter tags: To delimit code sections that shouldn't be auto-formatted
A: Alt+left and Alt+right will navigate you forward and back.
A: I find the project-specific settings useful in Eclipse 3.3. For example, if you have several developers working on a project who have different code styles for curly braces, line spacing, number of imports etc., then you can specify the style settings for the project. Then you can configure the save actions so that the code is automatically formatted when a file is saved. The result is everyone's code is formatted the same before it's checked in.
{ "language": "en", "url": "https://stackoverflow.com/questions/20529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68" }
Q: List of macOS text editors and code editors I searched for this and found Maudite's question about text editors, but they were all for Windows. As you have no doubt guessed, I am trying to find out if there are any text/code editors for the Mac besides what I know of. I'll edit my post to include editors listed. Free * Textwrangler * Xcode * Mac Vim * Aquamacs and, closer to the original, Emacs * JEdit * Editra * Eclipse * NetBeans * Kod * TextMate2 - GPL * Brackets * Atom.io Commercial * Textmate * BBEdit * SubEthaEdit * Coda * Sublime Text 2 * Smultron * WebStorm * Peppermint Articles related to the subject * Faceoff, which is the best text editor ever? * Maceditors.com, mac editors features compared Thank you to everybody who has added suggestions.
A: MacVim and SubEthaEdit are two nice options
A: I've tried Komodo out a bit, and I really like it so far. Aptana, an Eclipse variant, is also rather useful for a wide variety of things. There's always good ole' VI, too!
A: If you ever plan on making a serious effort at learning Emacs, immediately forget about Aquamacs. It tries to twist and bend Emacs into something it's not (a super-native OS X app). That might sound well and all, but once you realize that it completely breaks nearly every standard keybinding and behavior of Emacs, you begin to wonder why you aren't just using TextEdit or TextMate. Carbon Emacs is a good Emacs application for OS X. It is as close as you'll get to GNU Emacs without compiling for yourself. It fits in well enough with the operating system, but at the same time, is the wonderful Emacs we all know and love. Currently it requires Leopard with the latest release, but most people have upgraded by now anyway. You can fetch it here. Alternatively, if you want to use Vim on OS X, I've heard good things about MacVim. Beyond those, there are the obvious TextEdit, TextMate, etc. line of editors. They work for some people, but most "advanced" users I know (myself included) hate touching them with anything shorter than a 15ft pole.
A: CotEditor is a Cocoa-based open source text editor. It is popular in Japan.
A: The best open source one is Smultron in my opinion, but it doesn't hold a torch to TextMate.
A: There's a new kid on the block - PHPStorm. I used it for a whole year. It's not free, but offers an individual license of $49 for a year, free for open source developers. * Speedy for an IDE - It's based on Java, so looks somewhat like Eclipse/Netbeans but smokes them to dust in terms of speed (not as fast as Coda/TextMate, as this is an IDE) * Keyboard shortcuts galore - I seldom touched the mouse while developing using PHPStorm (that's what I didn't like about Coda) * Subversion support built-in - Didn't need to touch Versions or any other SVN client on Mac * Supports snippets, templates - zen-coding is supported as well * Supports projects, though in separate windows * File search, code search * Code completion, supports PHPDoc code completion too
A: * BBEdit makes all other editors look like Notepad. It handles gigantic files with ease; most text editors (TextMate especially) slow down to a dead crawl or just crash when presented with a large file. The regexp and multiple-file Find dialogs beat anything else for usability. The clippings system works like magic, and has selection, indentation, placeholder, and insertion point tags; it's not just dumb text. BBEdit is heavily AppleScriptable. Everything can be scripted. In 9.0, BBEdit has code completion, projects, and a ton of other improvements.
I primarily use it for HTML, CSS, JS, and Python, where it's extremely strong. Some more obscure languages are not as well supported in it, but for most purposes it's fantastic. The only devs I know who like TextMate are Ruby fans. I really do not get the appeal; it's marginally better than TextWrangler (BBEdit's free little brother), but if you're spending money, you may as well buy the better tool for a few dollars more. * jEdit does have the virtue of being cross-platform. It's not nearly as good as BBEdit, but it's a competent programmer's editor. If you're ever faced with a Windows or Linux system, it's handy to have one tool you know that works. * Vim is fine if you have to work over ssh and the remote system or your computer can't do X11. I used to love Vim for the ease of editing large files and doing repeated commands. But these days, it's a no-vote for me, with the annoyance of the non-standard search & replace (using \(foo\) groups instead of (foo), etc.), painfully bad multi-document handling, lack of a project/disk browser view, lack of AppleScript, and bizarre mouse handling in the GVim version.
A: I thought TextMate was everyone's favourite. I haven't met a programmer using a Mac who is not using TextMate.
A: jEdit runs on OS X, being Java-based. It's somewhat similar to TextMate, I think. Editra looks interesting, but I've not tried it myself.
A: TextMate not for "advanced programmers"? That does not make sense; TextMate contains everything an "advanced programmer" would want. It allows them to define a bundle that allows them to quickly set up the way they want their source code formatted, or one that follows the project guidelines, and quick, easy access to create entire structures and classes based on typing part of a construct and hitting tab. TextMate is my tool of choice; it is fast, lightweight and yet contains all of the features I would want in a tool to program with. While it is not tightly integrated with Xcode, that is not a problem for me, as I don't write software for Mac OS X. I write software for FreeBSD.
A: Definitely BBEdit. I code, and BBEdit is what I use to code.
A: I haven't used it myself, but another free one that I've heard good things about is Smultron. In my own research on this, I found this interesting article: Faceoff: Which Is The Best Mac Text Editor Ever?
A: * Emacs * Vim But I use TextMate, and can say that it is, without a doubt, worth every penny I paid for it.
A: Sublime Text is awesome (http://www.sublimetext.com/2). Excellent search features, very fast and lightweight. Very decent code completion. I also use RubyMine and WebStorm a lot (http://www.jetbrains.com/). They are excellent but not all-purpose like TextMate.
A: * Eclipse and its variants. * Netbeans
A: * SubEthaEdit * Coda * DashCode with OS X 10.8 or older
A: Smultron is another good (and free) one.
A: I use Eclipse as my primary editor (for Python) but I always keep SubEthaEdit handy as my supplemental text editor (free trial, 30 euros to license). It's not super-complicated but it does what I need.
A: You might consider one of the classics - they're both free, extensible and have large user bases that extend beyond the Mac: * Aquamacs - emacs for OS X (emacs in a shell window is also an option) * Mac Vim - VI with a Mac-specific GUI (vim in a shell window is also an option)
A: I prefer an old-school editing setup. I use command-line vim embedded in a GNU Screen "window" inside of iTerm.
This may not integrate well with Xcode, but I think it works great for developing and using command-line programs.
A: Coda's great for PHP/ASP/HTML style development. Great interface, multiple-file search and replace with regexp support, slick FTP/SFTP/etc. integration for browsing and editing remote files, SVN integration, etc. It now supports plugins and the plugin editor can import TextMate bundles, so there's a bright future there. There aren't a lot of must-have plugins yet because the plugin support was newly introduced with version 1.6 a few months back. It's a popular app, though, so I expect more in the future. The "killer features" for me are: * Seamless editing of remote files * Code navigator (symbol browser; a pane that lists functions etc.) Most people aren't really into using symbol browsers, but as I have to maintain a lot of unfamiliar code I find them invaluable. I'm not sure that Coda has the "raw power" of TextMate though. I plan on getting familiar with TextMate next.
A: I make use of Komodo IDE. It supports a huge number of languages, and is customisable but is a bit expensive (my company bought me a copy). A really good alternative is the free version called Komodo Edit. It loads really quickly and has a decent feature list, and I find myself turning to it rather than the full IDE for a lot of jobs.
A: Fraise is a nice free option. It has some rough edges, but you can't beat the price. I believe it's a fork or successor of Smultron.
A: I actually prefer EditRocket over TextMate. I use it on both my Mac and Ubuntu machines. It is nice to use the same editor on multiple operating systems.
A: Textmate is a state-of-the-art editor, but if someone is thinking about developing on several platforms without awkward memory-eating monsters like jEdit, Eclipse, NetBeans etc., take a look at Geany (geany.org). It is free. The only problem is that the editor does not have an aesthetic look and feel on Mac OS X :)
A: Another vote for Smultron. I used it when doing some XQuery programming, and being able to define a keyword file for syntax color highlighting was great.
A: I have installed both Smultron and Textwrangler, but find myself using Smultron most of the time.
A: I would love to use a different editor than Xcode for coding, but I feel that no other editor integrates tightly enough with it to be really worthwhile. However, given some time, TextMate might eventually get to that point. At the moment though, it primarily lacks debugging features and refactoring. For everything that does not need Xcode, I love TextMate. If I had another Mac user in my workgroup I would probably consider SubEthaEdit for its collaboration features. If it is Emacs you want, I would recommend Aquamacs (more Mac-like) or Carbon Emacs (more GNU-Emacs-like)
A: Eclipse and Netbeans have text editors among a whole lot of other stuff.
I don't think you would want to wait 10 seconds for your text editor to become ready :/ ... If you are going to spend some serious time coding, then spend some time and learn to use Vim (Emacs too, but I recommend Vim).
A: I've been using TextWrangler; it's pretty decent. But I REALLY miss the Search and Replace and other capabilities of UltraEdit... enough that it's usually worth firing up Parallels to use it instead (UltraEdit runs poorly under Wine at the moment).
A: I have to say that I love Coda. It can do almost anything you need in "plain" text web development; I use it daily to develop simple and complex projects using XHTML, PHP, Javascript, CSS... OK, it's not free, but compare it with many other development suites and you'll find that those $100 are really affordable (I bought it many months ago when it was at about $60). In the last version they included a lot of new nice features and whoa... just look at the Panic website. Before using Coda I was a hardcore ZendStudio user; I used it on Windows, Linux and Mac (I have been a user of all those platforms for a long time). As it was developed in Java, it was really slow even on a modern MacBook Pro, so I also tested lots of different IDEs for developing, but at this moment none of these are as powerful and simple as Coda is.
A: I used to use PageSpinner from Optima Software (http://www.optima-system.com/pagespinner/) but converted to Coda when Panic first released it and haven't looked back. Now that the latest version has multi-file find and replace, it has just about everything I need and I use it on a daily basis. Another vote for Coda from me.
A: I used BBEdit for years, but recently converted to Panic's Coda. I love Coda. It does everything that I need, and now that I've begun programming plug-ins for it, it's become a far richer tool. The support team are responsive and the community that is growing around it is fantastic. There is still a lot of room for improvement, but that's the cool thing about being part of the kind of community that surrounds it; you have a say in what that improvement is. Panic - Coda
A: My vote would be for BBEdit's free little brother TextWrangler.
A: I purchased TextMate because I liked it so much, one of the few apps I paid for. Other editors are just not worth it. If you are going for an IDE, Eclipse or NetBeans are great and free.
A: I use Xcode and TextMate.
A: Mac UltraEdit is out as of 2011 - http://www.ultraedit.com/products/mac-text-editor.html Very much a 1.0-ish feel and sluggish (though labeled 2.0.2), but it has all the great features of the Windows version (column edits, true hex mode, full-out macro recording, and plugins for every language under the sun).
A: I like Aptana Studio and Redcar for Rails programming.
{ "language": "en", "url": "https://stackoverflow.com/questions/20533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "176" }
Q: Link from ASP.NET yellow error page directly to VS source code When an ASP.NET application errors out and generates the yellow-screen display, I'd like to create some kind of link from the error page which would jump directly to the correct line of code in Visual Studio. I'm not sure how to approach this, or if there are any tools already in existence which accomplish it - but I'd love some feedback on where to start. In the event that generating a new error page is necessary, is it possible to replace the standard yellow screen across an entire webserver, rather than having to configure the customized error output for each application?
A: You would probably need to embed an ActiveX control in the page for something like that to be possible.
A: The yellow screen of death is served by the default ASP.NET HTTPHandler. In order to intercept it, you would need to add another HTTPHandler in front of it that intercepts all uncaught exceptions. At that point, you could do whatever you want for your error layout. Creating a way to directly jump to Visual Studio would be tricky. I could see it done in IE via a COM/ActiveX object.
A: The yellow screen of death is just a 500 error as far as the server is concerned; you can redirect to a custom screen using the error section of the web.config. To make a whole-server change in the same manner you could probably override it at the IIS level? Or perhaps even set the default behaviour in the machine.config file (not 100% sure about that one though)
A: "The yellow screen of death is just a 500 error as far as the server is concerned; you can redirect to a custom screen using the error section of the web.config. To make a whole-server change in the same manner you could probably override it at the IIS level? Or perhaps even set the default behaviour in the machine.config file (not 100% sure about that one though)" If you let it bubble up all the way to IIS, you will not have any way to access the Exception information. It's better to catch the Exception before the YSOD and serve your own. This can be done at the application level.
A: Don't forget that you need the Program Debug Database (pdb) file to find the source code line number. An application in release mode won't have the same level of information as a debug release.
A: The easiest, laziest thing I could think of would be to have the process happen thusly: * The yellow screen is modified so the line of source code is clickable. When clicked, it delivers a small text file with the source file name and line number. * A small program on the PC is tied to the extension of the small file the yellow screen lets you download. The program uses Visual Studio's extensibility model to open the source file and go to that line. The program may need to know where your source code is. A simple Google search gives helpful pointers on how to manipulate VS with an external program, such as this post on MSDN. If you want to go snazzier, then there are certainly other methods, but I'd rather write out a quick and dirty program, and get it out of my way so I can be about my business. Don't let the tools become projects... -Adam
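A sketch of the application-level interception suggested above; the DevError.aspx page name is made up for the example, and turning a stack frame's "in c:\src\Foo.cs:line 42" text into an actual Visual Studio jump is still left to one of the schemes described in the answers:
// In Global.asax - fires for any unhandled exception in the application.
void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    // Stash the stack trace so the custom page can render each frame,
    // e.g. as a link that a local helper program opens in Visual Studio.
    Application["LastError"] = ex.ToString();
    Server.ClearError();
    Response.Redirect("~/DevError.aspx");
}
The per-application redirect the earlier answers refer to is the customErrors element in web.config, e.g. <customErrors mode="On" defaultRedirect="~/DevError.aspx" />; as those answers note, machine.config is the closest thing to a server-wide version of the same setting.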
{ "language": "en", "url": "https://stackoverflow.com/questions/20575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Image UriSource and Data Binding I'm trying to bind a list of custom objects to a WPF Image like this:
<Image>
    <Image.Source>
        <BitmapImage UriSource="{Binding Path=ImagePath}" />
    </Image.Source>
</Image>
But it doesn't work. This is the error I'm getting: "Property 'UriSource' or property 'StreamSource' must be set." What am I missing?
A: WPF has built-in converters for certain types. If you bind the Image's Source property to a string or Uri value, under the hood WPF will use an ImageSourceConverter to convert the value to an ImageSource. So
<Image Source="{Binding ImageSource}"/>
would work if the ImageSource property was a string representation of a valid URI to an image. You can of course roll your own Binding converter:
public class ImageConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return new BitmapImage(new Uri(value.ToString()));
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}
and use it like this:
<Image Source="{Binding ImageSource, Converter={StaticResource ImageConverter}}"/>
A: You need to have an implementation of the IValueConverter interface that converts the URI into an image. Your Convert implementation of IValueConverter will look something like this:
BitmapImage image = new BitmapImage();
image.BeginInit();
image.UriSource = new Uri(value as string);
image.EndInit();
return image;
Then you will need to use the converter in your binding:
<Image>
    <Image.Source>
        <BitmapImage UriSource="{Binding Path=ImagePath, Converter=...}" />
    </Image.Source>
</Image>
A: The problem with the answer that was chosen here is that when navigating back and forth, the converter will get triggered every time the page is shown. This causes new file handles to be created continuously and will block any attempt to delete the file because it is still in use. This can be verified by using Process Explorer. If the image file might be deleted at some point, a converter such as this might be used: using XAML to bind a System.Drawing.Image into a System.Windows.Image control. The disadvantage of this memory stream method is that the image(s) get loaded and decoded every time and no caching can take place: "To prevent images from being decoded more than once, assign the Image.Source property from an Uri rather than using memory streams" Source: "Performance tips for Windows Store apps using XAML". To solve the performance issue, the repository pattern can be used to provide a caching layer. The caching could take place in memory, which may cause memory issues, or as thumbnail files that reside in a temp folder that can be cleared when the app exits.
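As a rough sketch of the caching layer just described (the class and member names here are illustrative, not from any particular library):
using System;
using System.Collections.Generic;
using System.Windows.Media.Imaging;

// Hypothetical in-memory cache so each image URI is decoded only once
public class ImageRepository
{
    private readonly Dictionary<string, BitmapImage> _cache =
        new Dictionary<string, BitmapImage>();

    public BitmapImage Get(string path)
    {
        BitmapImage image;
        if (!_cache.TryGetValue(path, out image))
        {
            image = new BitmapImage();
            image.BeginInit();
            image.CacheOption = BitmapCacheOption.OnLoad; // read the file up front, then release the handle
            image.UriSource = new Uri(path, UriKind.RelativeOrAbsolute);
            image.EndInit();
            image.Freeze(); // safe to share across the UI
            _cache[path] = image;
        }
        return image;
    }
}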
A: you may use the ImageSourceConverter class to get what you want:
img1.Source = (ImageSource)new ImageSourceConverter().ConvertFromString("/Assets/check.png");
A: This article by Atul Gupta has sample code that covers several scenarios:
* Regular resource image binding to the Source property in XAML
* Binding a resource image, but from code behind
* Binding a resource image in code behind by using Application.GetResourceStream
* Loading an image from a file path via a memory stream (the same is applicable when loading blob image data from a database)
* Loading an image from a file path, but by using binding to a file path property
* Binding image data to a user control which internally has an image control, via a dependency property
* Same as point 5, but also ensuring that the file doesn't get locked on the hard disk
A: You can also simply set the Source attribute rather than using the child elements. To do this your class needs to return the image as a BitmapImage. Here is an example of one way I've done it:
<Image Width="90" Height="90" Source="{Binding Path=ImageSource}" Margin="0,0,0,5" />
And the class property is simply this:
public object ImageSource
{
    get
    {
        BitmapImage image = new BitmapImage();
        try
        {
            image.BeginInit();
            image.CacheOption = BitmapCacheOption.OnLoad;
            image.CreateOptions = BitmapCreateOptions.IgnoreImageCache;
            image.UriSource = new Uri(FullPath, UriKind.Absolute);
            image.EndInit();
        }
        catch
        {
            return DependencyProperty.UnsetValue;
        }
        return image;
    }
}
I suppose it may be a little more work than the value converter, but it is another option.
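One detail the converter answers above leave out: {StaticResource ImageConverter} only resolves if the converter has been declared as a resource somewhere in scope. A minimal sketch (the local prefix is assumed to map to whatever namespace contains your ImageConverter):
<Window.Resources>
    <!-- assumes xmlns:local="clr-namespace:YourAppNamespace" on the root element -->
    <local:ImageConverter x:Key="ImageConverter" />
</Window.Resources>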
{ "language": "en", "url": "https://stackoverflow.com/questions/20586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: How to write stored procedure output directly to a file on an FTP without using local or temp files? I want to get the results of a stored procedure and place them into a CSV file onto an FTP location. The catch though is that I cannot create a local/temporary file that I can then FTP over. The approach I was taking was to use an SSIS package to create a temporary file and then have an FTP Task within the package to FTP the file over, but our DBAs do not allow temporary files to be created on any servers.
in reply to Yaakov Ellis I think we will need to convince the DBAs to let me use at least a share on a server that they do not operate, or ask them how they would do it.
in reply to Kev I like the idea of the CLR integration, but I don't think our DBAs even know what that is lol and they would probably not allow it either. But I will probably be able to do this within a Script Task in an SSIS package that can be scheduled.
A: If you were allowed to implement CLR integration assemblies you could actually use FTP without having to write a temporary file:
public static void DoQueryAndUploadFile(string uri, string username, string password, string filename)
{
    FtpWebRequest ftp = (FtpWebRequest)FtpWebRequest.Create(uri + "/" + filename);
    ftp.Method = WebRequestMethods.Ftp.UploadFile;
    ftp.Credentials = new System.Net.NetworkCredential(username, password);

    using (StreamWriter sw = new StreamWriter(ftp.GetRequestStream()))
    {
        // Do the query here then write to the ftp stream by iterating a DataReader
        // or other resultset; the following code is just to demo the concept:
        for (int i = 0; i < 100; i++)
        {
            sw.WriteLine("{0},row-{1},data-{2}", i, i, i);
        }
        sw.Flush();
    }
}
A: This step-by-step example is for others who might stumble upon this question. This example uses a Windows Server 2008 R2 server and SSIS 2008 R2. Even though the example uses SSIS 2008 R2, the logic used is applicable to SSIS 2005 as well. Thanks to @Kev for the FtpWebRequest code. Create an SSIS package (Steps to create an SSIS package). I have named the package in the format YYYYMMDD_hhmm in the beginning, followed by SO (which stands for Stack Overflow), followed by the SO question id, and finally a description. I am not saying that you should name your package like this. This is for me to easily refer back to it later. Note that I also have two Data Sources, namely Adventure Works and Practice DB. I will be using the Adventure Works data source, which points to the AdventureWorks database downloaded from this link. Refer screenshot #1 at the bottom of the answer. In the AdventureWorks database, create a stored procedure named dbo.GetCurrency using the script given below.
CREATE PROCEDURE [dbo].[GetCurrency]
AS
BEGIN
    SET NOCOUNT ON;

    SELECT TOP 10 CurrencyCode
                , Name
                , ModifiedDate
    FROM       Sales.Currency
    ORDER BY   CurrencyCode
END
GO
On the package's Connection Managers section, right-click and select New Connection From Data Source. On the Select Data Source dialog, select Adventure Works and click OK. You should now see the Adventure Works data source under the Connection Managers section. Refer screenshots #2, #3 and #4. On the package, create the following variables. Refer screenshot #5.
* ColumnDelimiter: This variable is of type String. This will be used to separate the column data when it is written to the file. In this example, we will be using comma (,) and the code is written to handle only displayable characters. For non-displayable characters like tab (\t), you might need to change the code used in this example accordingly.
* FileName: This variable is of type String. It will contain the name of the file. In this example, I have named the file Currencies.csv because I am going to export a list of currency names.
* FTPPassword: This variable is of type String. This will contain the password to the FTP website. Ideally, the package should be encrypted to hide sensitive information.
* FTPRemotePath: This variable is of type String. This will contain the FTP folder path to which the file should be uploaded. For example, if the complete FTP URI is ftp://myFTPSite.com/ssis/samples/uploads, then the RemotePath would be /ssis/samples/uploads.
* FTPServerName: This variable is of type String. This will contain the FTP site root URI. For example, if the complete FTP URI is ftp://myFTPSite.com/ssis/samples/uploads, then the FTPServerName would contain ftp://myFTPSite.com. You can combine FTPRemotePath with this variable and have a single variable. It is up to your preference.
* FTPUserName: This variable is of type String. This will contain the user name that will be used to connect to the FTP website.
* ListOfCurrencies: This variable is of type Object. This will contain the result set from the stored procedure and it will be looped through in the Script Task.
* ShowHeader: This variable is of type Boolean. This will contain the values true/false. True indicates that the first row in the file will contain column names, and False indicates that the first row will not contain column names.
* SQLGetData: This variable is of type String. This will contain the stored procedure execution statement. This example uses the value EXEC dbo.GetCurrency.
On the package's Control Flow tab, place an Execute SQL Task and name it Get Data. Double-click on the Execute SQL Task to bring up the Execute SQL Task Editor. On the General section of the Execute SQL Task Editor, set the ResultSet to Full result set, the Connection to Adventure Works, the SQLSourceType to Variable and the SourceVariable to User::SQLGetData. On the Result Set section, click the Add button. Set the Result Name to 0 (this indicates the index) and the Variable to User::ListOfCurrencies. The output of the stored procedure will be saved to this object variable. Click OK. Refer screenshots #6 and #7.
On the package's Control Flow tab, place a Script Task below the Execute SQL Task and name it Save to FTP. Double-click on the Script Task to bring up the Script Task Editor. On the Script section, click the Edit Script… button. Refer screenshot #8. This will bring up the Visual Studio Tools for Applications (VSTA) editor. Replace the code within the class ScriptMain in the editor with the code given below. Also, make sure that you add using statements for the namespaces System.Data.OleDb, System.IO, System.Net and System.Text. Refer screenshot #9, which highlights the code changes. Close the VSTA editor and click OK to close the Script Task Editor.
The script code takes the object variable ListOfCurrencies and stores it into a DataTable with the help of OleDbDataAdapter, because we are using an OleDb connection. The code then loops through each row, and if the variable ShowHeader is set to true, the code will include the column names in the first row written to the file. The result is stored in a StringBuilder variable. After the StringBuilder variable is populated with all the data, the code creates an FtpWebRequest object and connects to the FTP URI by combining the variables FTPServerName, FTPRemotePath and FileName, using the credentials provided in the variables FTPUserName and FTPPassword.
Then the full StringBuilder contents are written to the file. The method WriteRowData is created to loop through the columns and provide either the column names or the data, based on the parameters passed.
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using System.Data.OleDb;
using System.IO;
using System.Net;
using System.Text;

namespace ST_7033c2fc30234dae8086558a88a897dd.csproj
{
    [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        #region VSTA generated code
        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
        #endregion

        public void Main()
        {
            Variables varCollection = null;
            Dts.VariableDispenser.LockForRead("User::ColumnDelimiter");
            Dts.VariableDispenser.LockForRead("User::FileName");
            Dts.VariableDispenser.LockForRead("User::FTPPassword");
            Dts.VariableDispenser.LockForRead("User::FTPRemotePath");
            Dts.VariableDispenser.LockForRead("User::FTPServerName");
            Dts.VariableDispenser.LockForRead("User::FTPUserName");
            Dts.VariableDispenser.LockForRead("User::ListOfCurrencies");
            Dts.VariableDispenser.LockForRead("User::ShowHeader");
            Dts.VariableDispenser.GetVariables(ref varCollection);

            OleDbDataAdapter dataAdapter = new OleDbDataAdapter();
            DataTable currencies = new DataTable();
            dataAdapter.Fill(currencies, varCollection["User::ListOfCurrencies"].Value);

            bool showHeader = Convert.ToBoolean(varCollection["User::ShowHeader"].Value);
            int rowCounter = 0;
            string columnDelimiter = varCollection["User::ColumnDelimiter"].Value.ToString();

            StringBuilder sb = new StringBuilder();
            foreach (DataRow row in currencies.Rows)
            {
                rowCounter++;
                if (rowCounter == 1 && showHeader)
                {
                    WriteRowData(currencies, row, columnDelimiter, true, ref sb);
                }
                WriteRowData(currencies, row, columnDelimiter, false, ref sb);
            }

            string ftpUri = string.Concat(varCollection["User::FTPServerName"].Value, varCollection["User::FTPRemotePath"].Value, varCollection["User::FileName"].Value);
            FtpWebRequest ftp = (FtpWebRequest)FtpWebRequest.Create(ftpUri);
            ftp.Method = WebRequestMethods.Ftp.UploadFile;

            string ftpUserName = varCollection["User::FTPUserName"].Value.ToString();
            string ftpPassword = varCollection["User::FTPPassword"].Value.ToString();
            ftp.Credentials = new System.Net.NetworkCredential(ftpUserName, ftpPassword);

            using (StreamWriter sw = new StreamWriter(ftp.GetRequestStream()))
            {
                sw.WriteLine(sb.ToString());
                sw.Flush();
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }

        public void WriteRowData(DataTable currencies, DataRow row, string columnDelimiter, bool isHeader, ref StringBuilder sb)
        {
            int counter = 0;
            foreach (DataColumn column in currencies.Columns)
            {
                counter++;
                if (isHeader)
                {
                    sb.Append(column.ColumnName);
                }
                else
                {
                    sb.Append(row[column].ToString());
                }
                if (counter != currencies.Columns.Count)
                {
                    sb.Append(columnDelimiter);
                }
            }
            sb.Append(System.Environment.NewLine);
        }
    }
}
Once the tasks have been configured, the package's Control Flow should look as shown in screenshot #10. Screenshot #11 shows the output of the stored procedure execution statement EXEC dbo.GetCurrency. Execute the package. Screenshot #12 shows successful execution of the package. Using the FireFTP add-on available for the Firefox browser, I logged into the FTP website and verified that the file had been successfully uploaded to the FTP website. Refer screenshot #13.
Examining the contents by opening the file in Notepad++ shows that it matches the stored procedure output. Refer screenshot #14. Thus, the example demonstrated how to write results from a database to an FTP website without having to use temporary/local files. Hope that helps someone.
Screenshots:
#1: Solution_Explorer
#2: New_Connection_From_Data_Source
#3: Select_Data_Source
#4: Connection_Managers
#5: Variables
#6: Execute_SQL_Task_Editor_General
#7: Execute_SQL_Task_Editor_Result_Set
#8: Script_Task_Editor
#9: Script_Task_VSTA_Code
#10: Control_Flow_Tab
#11: Query_Results
#12: Package_Execution_Successful
#13: File_In_FTP
#14: File_Contents
A: Is there a server anywhere that you can use where you can create a temporary file? If so, make a web service that returns an array containing the contents of the file. Call the web service from the computer where you can create a temporary file, use the contents of the array to build the temp file and FTP it over. If there is nowhere at all where you can create a temporary file, I don't see how you will be able to send anything by FTP.
A: Try using a CLR stored procedure. You might be able to come up with something, but without first creating a temporary file, it might still be difficult. Could you set up a share on another machine and write to that, and then FTP from there?
A: Script from the FTP server, and just call the stored proc.
A: The catch though is that I cannot create a local/temporary file that I can then FTP over. This restriction does not make any sense; try to talk to the DBA nicely and explain it to him/her. It is totally reasonable for any Windows process or job to create temporary file(s) in an appropriate location, i.e. the %TEMP% folder. Actually, the SSIS runtime itself often creates temporary files there - so if the DBA allows you to run SSIS, he is allowing you to create temporary files :). As long as the DBA understands that these temporary files do not create problems or additional workload for him (explain that he does not have to set special permissions, or back them up, etc.), he should agree to let you create them. The only maintenance task for the DBA is to periodically clean the %TEMP% directory in case your SSIS job fails and leaves the file behind. But he should do this anyway, as many other processes may do the same. A simple SQL Agent job will do this.
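If the DBAs do relent on %TEMP% files, a minimal sketch of that approach looks like this (connection string, credentials and FTP paths are placeholders; assumes System.Data.SqlClient, System.IO and System.Net):
// Hypothetical helper: stage the CSV in %TEMP%, upload it, then clean up
string tempFile = Path.GetTempFileName();
try
{
    using (var connection = new SqlConnection("...connection string..."))
    using (var command = new SqlCommand("EXEC dbo.GetCurrency", connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        using (var writer = new StreamWriter(tempFile))
        {
            while (reader.Read())
            {
                writer.WriteLine("{0},{1},{2}", reader[0], reader[1], reader[2]);
            }
        }
    }

    using (var client = new WebClient())
    {
        client.Credentials = new NetworkCredential("user", "password");
        client.UploadFile("ftp://myFTPSite.com/ssis/samples/uploads/Currencies.csv", tempFile);
    }
}
finally
{
    File.Delete(tempFile); // nothing left behind for the DBAs to worry about
}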
{ "language": "en", "url": "https://stackoverflow.com/questions/20587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: FTP in NetBeans 6.1 Is there an FTP browser hiding away in NetBeans 6.1? The help manual doesn't even suggest FTP exists. All I've been able to find so far is a tree viewer in the Services panel (no edit controls) and the ability to upload projects, folders and specific files from the Projects/Files views. Is there anywhere to delete or rename, or will I have to keep switching back to my browser? I can see from the previews that there's a nice FTP controller in 6.5, but I'm not desperate enough to completely convert to a beta (yet).
A: It looks like something was recently added to NetBeans for PHP... http://blogs.oracle.com/netbeansphp/entry/ftp_support_added don't know if you can make use of that...
A: The remotefs addin works for 6.5: remotefs (source: netbeans.org)
A: You can try the plugin FTP Site Deployer. This is a free and open-source plugin for NetBeans; it adds a contextual menu with the entry "Upload to FTP". It is still in development but it is working. You can find some other information here: http://www.askweb.it/wordpress/?p=136 And you can download the source and the nbm from SourceForge: https://sourceforge.net/projects/ftpsitedeployer/
{ "language": "en", "url": "https://stackoverflow.com/questions/20597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: If I have a PHP string in the format YYYY-DD-MM and a timestamp in MySQL, is there a good way to convert between them? I'm interested in doing comparisons between the date string and the MySQL timestamp. However, I'm not seeing an easy conversion. Am I overlooking something obvious?
A: You can avoid having to use strtotime() or getdate() in PHP by using MySQL's UNIX_TIMESTAMP() function.
SELECT UNIX_TIMESTAMP(timestamp) FROM sometable
The resulting data will be a standard integer Unix timestamp, so you can do a direct comparison to time().
A: I wrote this little function to simplify the process:
/**
 * Convert MySQL datetime to PHP time
 */
function convert_datetime($datetime) {
    //example: 2008-02-07 12:19:32
    $values = explode(" ", $datetime);
    $dates = explode("-", $values[0]);
    $times = explode(":", $values[1]);
    $newdate = mktime($times[0], $times[1], $times[2], $dates[1], $dates[2], $dates[0]);
    return $newdate;
}
I hope this helps.
A: Converting from timestamp to format:
date('Y-m-d', $timestamp);
Converting from formatted to timestamp:
mktime(0, 0, 0, $month, $day, $year, $is_dst);
See date and mktime for further documentation. When it comes to storing, it's up to you whether to use the MySQL DATE format for storing as a formatted date, an integer for storing as a UNIX timestamp, or MySQL's TIMESTAMP format, which converts a numeric timestamp into a readable format. Check the MySQL doc for TIMESTAMP info.
A: strtotime() and getdate() are two functions that can be used to get dates from strings and timestamps. There isn't a standard library function that converts between MySQL and PHP timestamps though.
A: Use the PHP date function. You may have to convert the MySQL timestamp to a Unix timestamp in your query using the UNIX_TIMESTAMP function in MySQL.
A: A date string of the form YYYY-MM-DD has no time associated with it. A MySQL timestamp is of the form YYYY-MM-DD HH:mm:ss. To compare the two, you'll either have to add a time to the date string, midnight for example:
$datetime = '2008-08-21'.' 00:00:00';
and then use a function to compare the epoch time between them:
if (strtotime($datetime) > strtotime($timestamp)) {
    echo 'Datetime later';
} else {
    echo 'Timestamp equal or greater';
}
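Putting the pieces above together, a minimal sketch of a round-trip comparison (the timestamp value and the created_at column name are illustrative):
// A MySQL DATETIME/TIMESTAMP column typically comes back as a string like this:
$mysqlTimestamp = '2008-08-21 14:30:00';
$dateString = '2008-08-21';

// Normalize both sides to Unix timestamps before comparing
if (strtotime($dateString . ' 00:00:00') <= strtotime($mysqlTimestamp)) {
    echo 'The timestamp falls on or after the start of that day';
}

// Or push the conversion into the query and compare against time() directly:
// SELECT UNIX_TIMESTAMP(created_at) AS created_ts FROM sometable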
{ "language": "en", "url": "https://stackoverflow.com/questions/20598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Removing nodes from an XmlDocument The following code should find the appropriate project tag and remove it from the XmlDocument, however when I test it, it says: The node to be removed is not a child of this node. Does anyone know the proper way to do this?
public void DeleteProject(string projectName)
{
    string ccConfigPath = ConfigurationManager.AppSettings["ConfigPath"];

    XmlDocument configDoc = new XmlDocument();
    configDoc.Load(ccConfigPath);

    XmlNodeList projectNodes = configDoc.GetElementsByTagName("project");

    for (int i = 0; i < projectNodes.Count; i++)
    {
        if (projectNodes[i].Attributes["name"] != null)
        {
            if (projectName == projectNodes[i].Attributes["name"].InnerText)
            {
                configDoc.RemoveChild(projectNodes[i]);
                configDoc.Save(ccConfigPath);
            }
        }
    }
}
UPDATE Fixed. I did two things:
XmlNode project = configDoc.SelectSingleNode("//project[@name='" + projectName + "']");
Replaced the For loop with an XPath query, which wasn't for fixing it, just because it was a better approach. The actual fix was:
project.ParentNode.RemoveChild(project);
Thanks Pat and Chuck for this suggestion.
A: Instead of
configDoc.RemoveChild(projectNodes[i]);
try
projectNodes[i].ParentNode.RemoveChild(projectNodes[i]);
A: try
configDoc.DocumentElement.RemoveChild(projectNodes[i]);
A: Looks like you need to select the parent node of projectNodes[i] before calling RemoveChild.
A: When you get sufficiently annoyed by writing it the long way (for me that was fairly soon) you can use a helper extension method provided below. Yay new technology!
public static class Extensions
{
    ...
    public static XmlNode RemoveFromParent(this XmlNode node)
    {
        return (node == null) ? null : node.ParentNode.RemoveChild(node);
    }
}
...
//some_long_node_expression.ParentNode.RemoveChild(some_long_node_expression);
some_long_node_expression.RemoveFromParent();
A: Is it possible that the project nodes aren't child nodes, but grandchildren or lower? GetElementsByTagName will give you elements from anywhere in the child element tree, IIRC.
A: It would be handy to see a sample of the XML file you're processing, but my guess would be that you have something like this:
<Root>
    <Blah>
        <project>...</project>
    </Blah>
</Root>
The error message seems to be because you're trying to remove <project> from the grandparent rather than the direct parent of the project node.
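Pulling the update's two fixes together, the whole corrected method would look something like this sketch (assumes using System.Configuration and System.Xml, as in the question):
public void DeleteProject(string projectName)
{
    string ccConfigPath = ConfigurationManager.AppSettings["ConfigPath"];

    XmlDocument configDoc = new XmlDocument();
    configDoc.Load(ccConfigPath);

    // XPath finds the node wherever it sits in the tree
    XmlNode project = configDoc.SelectSingleNode("//project[@name='" + projectName + "']");
    if (project != null)
    {
        // Ask the node's actual parent to remove it
        project.ParentNode.RemoveChild(project);
        configDoc.Save(ccConfigPath);
    }
}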
{ "language": "en", "url": "https://stackoverflow.com/questions/20611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: why are downloads sometimes tagged md5, sha1 and other hash indicators? I've seen this all over the place: Download here! SHA1 = 8e1ed2ce9e7e473d38a9dc7824a384a9ac34d7d0 What does it mean? How does a hash come into play as far as downloads and... What use can I make of it? Is this a legacy item where you used to have to verify some checksum after you downloaded the whole file?
A: When downloading larger files, it's often useful to perform a checksum to ensure your download was successful and not mangled along transport. There are tons of freeware apps that can be used to generate the checksum for you to validate your download. This to me is an interesting mainstreaming of procedures that popular mp3 and warez sites used to use back in the day when distributing files.
A: SHA1 and MD5 hashes are used to verify the integrity of files you've downloaded. They aren't necessarily a legacy technology, and can be used by tools like those in openssl to verify whether or not your file has been corrupted/changed from its original.
A: It's a security measure. It allows you to verify that the file you just downloaded is the one that the author posted to the site. Note that using hashes from the same website you're getting the files from is not especially secure. Often a good place to get them from is a mailing list announcement, where a PGP-signed email contains the link to the file and the hash. Since this answer has been ranked so highly compared to the others for some reason, I'm editing it to add the other major reason mentioned first by the other authors below, which is to verify the integrity of the file after transferring it over the network. So:
* Security - verify that the file that you downloaded was the one the author originally published
* Integrity - verify that the file wasn't damaged during transmission over the network
A: It's to ensure that you downloaded the file correctly. If you hash the downloaded file and it matches the hash on the page, all is well.
A: A cryptographic hash (such as SHA1 or MD5) allows you to verify that the file you have has been downloaded correctly and has not been tampered with.
A: To go along with what everyone here is saying, I use HashTab when I need to generate/compare MD5 and SHA1 hashes on Windows. It adds a new tab to the file properties window and will calculate the hashes.
A: With a hash (MD5, SHA-1), one input matches only one output, so if you download the file and calculate the hash again you should obtain the same output. If the output is different, the file is corrupt.
if (hash(file) == "hash in page")
    validFile = true;
else
    validFile = false;
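For a concrete example, on most Unix-like systems the bundled command-line tools (including the openssl ones mentioned above) can produce the hash to compare against the published value; the file name here is just a placeholder:
# Either of these prints the SHA1 of the downloaded file
sha1sum download.zip
openssl sha1 download.zip

# MD5 equivalents
md5sum download.zip
openssl md5 download.zip
If the printed digest matches the one on the download page, the file arrived intact.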
{ "language": "en", "url": "https://stackoverflow.com/questions/20627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Determine if my PC supports HW Virtualization How, in general, does one determine if a PC supports hardware virtualization? I use VirtualPC to set up parallel test environments and I'd enjoy a bit of a speed boost.
A: Download this: http://www.cpuid.com/cpuz.php Also check http://en.wikipedia.org/wiki/X86_virtualization Edit: Additionally, I know it's for Xen, but the instructions are the same for all VMs that want hardware support: http://wiki.xensource.com/xenwiki/HVM_Compatible_Processors I can't try it from work, but I'm sure it can identify whether you've got the Intel VT or AMD-V instructions. Intel will have a "vmx" flag and AMD will have "svm". On Linux you can check /proc/cpuinfo: "egrep '(vmx|svm)' /proc/cpuinfo"
A: The first thing is to run VPC, open Options, and see if the HW virtualization option is available. If it isn't, you may still have it. Many machines have HW virtualization disabled in the BIOS. If you believe this is the case, you'll need to confirm with your processor manufacturer that HW virtualization is supported, then find out from your BIOS manufacturer how to enable that feature. @Nick what processor do you have?
A: Try cpu-z or SecurAble on Windows, or on Linux, cat /proc/cpuinfo and look for the flags: vmx (Intel) or svm (AMD). All of those will tell you if the hardware supports it, but as others said it must be enabled in the BIOS. (But checking first will avoid an unnecessary reboot...)
A: Try just turning the option on in VirtualPC. If it doesn't do anything (or the option isn't available), then your PC doesn't.
A: Try just turning the option on in VirtualPC. If it doesn't do anything (or the option isn't available), then your PC doesn't. Some PCs require a BIOS setting to be turned on in order for this option to be enabled. I couldn't find that BIOS setting on my machine, but then again there are a lot of options to comb through. Presumably this is a CPU or motherboard chipset feature, so there must be a list of CPUs that support it.
A: You can take a look in the BIOS of the machine. It indicates if the machine supports hardware virtualization. You can run programs like Virtual PC even if your machine does not support HW virtualization, but if the machine supports it, the programs take advantage of these extensions.
A: Your processor does NOT support hardware-assisted virtualization, but as others have said you can still run virtualization tools. http://www.intel.com/products/processor_number/chart/pentium_d.htm
A: http://en.wikipedia.org/wiki/X86_virtualization first place I'd check
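To put the Linux check above in a directly usable form (a count of 0 means the flags are absent, or that virtualization is hidden/disabled by the firmware):
# Counts how many CPU entries advertise Intel VT-x (vmx) or AMD-V (svm)
grep -E -c '(vmx|svm)' /proc/cpuinfo
Remember that a non-zero count only proves the CPU supports it; the BIOS switch mentioned in the answers can still have it disabled.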
{ "language": "en", "url": "https://stackoverflow.com/questions/20658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Do you use AOP (Aspect Oriented Programming) in production software? AOP is an interesting programming paradigm in my opinion. However, there haven't been discussions about it yet here on stackoverflow (at least I couldn't find them). What do you think about it in general? Do you use AOP in your projects? Or do you think it's rather a niche technology that won't be around for a long time or won't make it into the mainstream (like OOP did, at least in theory ;))? If you do use AOP then please let us know which tools you use as well. Thanks!
A: I don't understand how one can handle cross-cutting concerns like logging, security, transaction management, exception-handling in a clean fashion without using AOP. Anyone using the Spring framework (probably about 50% of Java enterprise developers) is using AOP whether they know it or not.
A: At Terracotta we use AOP and bytecode instrumentation pretty extensively to integrate with and instrument third-party software. For example, our Spring integration is accomplished in large part by using aspectwerkz. In a nutshell, we need to intercept calls to Spring beans and bean factories at various points in order to cluster them. So AOP can be useful for integrating with third-party code that can't otherwise be modified. However, we've found there is a huge pitfall - if possible, only use the third-party public API in your join points, otherwise you risk having your code broken by a change to some private method in the next minor release, and it becomes a maintenance nightmare.
A: AOP and transaction demarcation is a match made in heaven. We use Spring AOP @Transactional annotations; it makes for easier and more intuitive tx demarcation than I've ever seen anywhere else.
A: We used AspectJ in one of my big projects for quite some time. The project was made up of several web services, each with several functions, which was the front end for a complicated document processing/querying system. Somewhere around 75k lines of code. We used aspects for two relatively minor pieces of functionality. First was tracing application flow. We created an aspect that ran before and after each function call to print out "entered 'function'" and "exited 'function'". With the function selector (a pointcut) we were able to use this as a debugging tool, selecting only functions that we wanted to trace at a given time. This was a really nice use for aspects in our project. The second thing we did was application-specific metrics. We put aspects around our web service methods to capture timing, object information, etc. and dump the results in a database. This was nice because we could capture this information, but still keep all of that capture code separate from the "real" code that did the work. I've read about some nice solutions that aspects can bring to the table, but I'm still not convinced that they can really do anything that you couldn't do (maybe better) with "normal" technology. For example, I couldn't think of any major feature or functionality that any of our projects needed that couldn't be done just as easily without aspects - where I've found aspects useful are the kind of minor things that I've mentioned.
A: Python supports AOP by letting you dynamically modify its classes at runtime (which in Python is typically called monkeypatching rather than AOP). Here are some of my AOP use cases:
* I have a website in which every page is generated by a Python function. I'd like to take a class and make all of the webpages generated by that class password-protected. AOP comes to the rescue; before each function is called, I do the appropriate session checking and redirect if necessary.
* I'd like to do some logging and profiling on a bunch of functions in my program during its actual usage. AOP lets me calculate timing and print data to log files without actually modifying any of these functions.
* I have a module or class full of non-thread-safe functions and I find myself using it in some multi-threaded code. Some AOP adds locking around these function calls without having to go into the library and change anything. This kind of thing doesn't come up very often, but whenever it does, monkeypatching is VERY useful.
Python also has decorators which implement the Decorator design pattern (http://en.wikipedia.org/wiki/Decorator_pattern) to accomplish similar things. Note that dynamically modifying classes can also let you work around bugs or add features to a third-party library without actually having to modify that library. I almost never need to do this, but the few times it's come up it's been incredibly useful.
A: Yes. Orthogonal concerns, like security, are best done with AOP-style interception. Whether that is done automatically (through something like a dependency injection container) or manually is unimportant to the end goal. One example: the "before/after" attributes in xUnit.net (an open source project I run) are a form of AOP-style method interception. You decorate your test methods with these attributes, and just before and after that test method runs, your code is called. It can be used for things like setting up a database and rolling back the results, changing the security context in which the test runs, etc. Another example: the filter attributes in ASP.NET MVC also act like specialized AOP-style method interceptors. One, for instance, allows you to say how unhandled errors should be treated, if they happen in your action method. Many dependency injection containers, including Castle Windsor and Unity, support this behavior either "in the box" or through the use of extensions.
A: I use AOP heavily in my C# applications. I'm not a huge fan of having to use Attributes, so I used Castle DynamicProxy and Boo to apply aspects at runtime without polluting my code.
A: We use AOP in our session facade to provide a consistent framework for our customers to customize our application. This allows us to expose a single point of customization without having to add manual hook support for each method. Additionally, AOP provides a single point of configuration for additional transaction setup and teardown, and the usual logging things. All told, much more maintainable than doing all of this by hand.
A: The main application I work on includes a script host. AOP allows the host to examine the properties of a script before deciding whether or not to load the script into the Application Domain. Since some of the scripts are quite cumbersome, this makes for much faster loading at run-time. We also use and plan to use a significant number of attributes for things like compiler control, flow control and in-IDE debugging, which do not need to be part of the final distributed application.
A: We use PostSharp for our AOP solution. We have caching, error handling, and database retry aspects that we currently use, and we are in the process of making our security checks an Aspect. Works great for us. Developers really do like the separation of concerns.
The Architects really like having the platform-level logic consolidated in one location. The PostSharp library is a post compiler that does the injection of the code. It has a library of pre-defined intercepts that are brain-dead easy to implement. It feels like wiring in event handlers.
A: Yes, we do use AOP in application programming. I prefer AspectJ for integrating AOP into my Spring applications. Have a look at this article for a broader perspective on the same: http://codemodeweb.blogspot.in/2018/03/spring-aop-and-aspectj-framework.html
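To make the Python answer above concrete, here is a minimal decorator sketch of the logging/profiling use case it describes (the function and output format are purely illustrative):
import functools
import time

def timed(func):
    """Wrap func so every call is timed and logged - a tiny 'aspect'."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            print("%s took %.3fs" % (func.__name__, time.time() - start))
    return wrapper

@timed
def render_page(name):
    time.sleep(0.1)  # stand-in for real work
    return "<html>%s</html>" % name

render_page("home")  # prints something like: render_page took 0.100s
The cross-cutting concern (timing) lives entirely in the decorator; render_page itself never mentions it, which is the separation-of-concerns point several answers make.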
{ "language": "en", "url": "https://stackoverflow.com/questions/20663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Is there a way to call a private Class method from an instance in Ruby? Other than self.class.send :method, args..., of course. I'd like to make a rather complex method available at both the class and instance level without duplicating the code. UPDATE: @Jonathan Branam: that was my assumption, but I wanted to make sure nobody else had found a way around. Visibility in Ruby is very different from that in Java. You're also quite right that private doesn't work on class methods, though this will declare a private class method:
class Foo
  class << self
    private

    def bar
      puts 'bar'
    end
  end
end

Foo.bar # => NoMethodError: private method 'bar' called for Foo:Class
A: Let me contribute to this list of more or less strange solutions and non-solutions:
puts RUBY_VERSION # => 2.1.2

class C
  class << self
    private

    def foo
      'Je suis foo'
    end
  end

  private

  define_method :foo, &method(:foo)

  def bar
    foo
  end
end

puts C.new.bar # => Je suis foo
puts C.new.foo # => NoMethodError
A: Nowadays you don't need the helper methods anymore. You can simply inline them with your method definition. This should feel very familiar to the Java folks:
class MyClass
  private_class_method def self.my_private_method
    puts "private class method"
  end

  private def my_private_method
    puts "private instance method"
  end
end
And no, you cannot call a private class method from an instance method. However, you could instead implement the private class method as a public class method in a private nested class, using the private_constant helper method. See this blogpost for more detail.
A: Here is a code snippet to go along with the question. Using "private" in a class definition does not apply to class methods. You need to use "private_class_method" as in the following example.
class Foo
  def self.private_bar
    # Complex logic goes here
    puts "hi"
  end
  private_class_method :private_bar

  class << self
    private

    def another_private_bar
      puts "bar"
    end
  end

  public

  def instance_bar
    self.class.private_bar
  end

  def instance_bar2
    self.class.another_private_bar
  end
end

f = Foo.new
f.instance_bar  # NoMethodError: private method `private_bar' called for Foo:Class
f.instance_bar2 # NoMethodError: private method `another_private_bar' called for Foo:Class
I don't see a way to get around this. The documentation says that you cannot specify the receiver of a private method. Also you can only access a private method from the same instance. The class Foo is a different object than a given instance of Foo. Don't take my answer as final. I'm certainly not an expert, but I wanted to provide a code snippet so that others who attempt to answer will have properly private class methods.
A: If your method is merely a utility function (that is, it doesn't rely on any instance variables), you could put the method into a module and include and extend the class so that it's available as both a private class method and a private instance method.
A: This is the way to play with "real" private class methods.
class Foo
  def self.private_bar
    # Complex logic goes here
    puts "hi"
  end
  private_class_method :private_bar

  class << self
    private

    def another_private_bar
      puts "bar"
    end
  end

  public

  def instance_bar
    self.class.private_bar
  end

  def instance_bar2
    self.class.another_private_bar
  end

  def calling_private_method
    Foo.send :another_private_bar
    self.class.send :private_bar
  end
end

f = Foo.new
f.send :calling_private_method
# "bar"
# "hi"

Foo.send :another_private_bar
# "bar"
cheers
A: This is probably the most "native vanilla Ruby" way:
class Foo
  module PrivateStatic # like Java
    private

    def foo
      'foo'
    end
  end
  extend PrivateStatic
  include PrivateStatic

  def self.static_public_call
    "static public #{foo}"
  end

  def public_call
    "instance public #{foo}"
  end
end

Foo.static_public_call # 'static public foo'
Foo.new.public_call    # 'instance public foo'

Foo.foo     # NoMethodError: private method `foo' called for Foo:Class
Foo.new.foo # NoMethodError: private method `foo' called for #<Foo:0x00007fa154d13f10>
With some Ruby metaprogramming, you could even make it look like:
class Foo
  def self.foo
    'foo'
  end

  extend PrivateStatic
  private_static :foo
end
Ruby's metaprogramming is quite powerful, so you could technically implement any scoping rules you might want. That being said, I'd still prefer the clarity and minimal surprise of the first variant.
A: Unless I'm misunderstanding, don't you just need something like this:
class Foo
  private
  def Foo.bar
    # Complex logic goes here
    puts "hi"
  end

  public
  def bar
    Foo.bar
  end
end
Of course you could change the second definition to use your self.class.send approach if you wanted to avoid hardcoding the class name...
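A minimal sketch of the module-based suggestion above, with illustrative module and method names (extend gives the class-level copy, include gives the instance-level one, and the single definition stays private on both sides):
module SharedHelpers
  private

  def complex_helper
    "shared logic"
  end
end

class Widget
  extend SharedHelpers  # complex_helper becomes a private class method
  include SharedHelpers # ...and a private instance method

  def self.from_class
    complex_helper
  end

  def from_instance
    complex_helper
  end
end

puts Widget.from_class        # => shared logic
puts Widget.new.from_instance # => shared logic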
{ "language": "en", "url": "https://stackoverflow.com/questions/20674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: ASP/VBScript - Int() vs CInt() What is the difference in ASP/VBScript between Int() and CInt()?
A: The usual answer for this issue is to manually force a re-rounding. This problem is as old as FORTRAN. Instead of
a = Int(40.91 * 100)
use
b = 40.91 * 100
a = Int(b + 0.5)
A very old trick, still useful in Excel spreadsheets from time to time.
A: * Int(): The Int function returns the integer part of a specified number.
* CInt(): The CInt function converts an expression to type Integer.
And the best answer comes from MSDN: CInt differs from the Fix and Int functions, which truncate, rather than round, the fractional part of a number. When the fractional part is exactly 0.5, the CInt function always rounds it to the nearest even number. For example, 0.5 rounds to 0, and 1.5 rounds to 2.
A: Here is another difference:
Script:
wscript.echo 40.91 * 100
wscript.echo Int(40.91 * 100)
wscript.echo CInt(40.91 * 100)
Result:
4091
4090 (????)
4091
Any thoughts?
A: And the most important difference (IME, at least) is that CInt overflows at 32,767.
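A small script to see both behaviours side by side (run it with cscript; the expected outputs are in the comments):
' CInt uses banker's rounding: exactly .5 goes to the nearest even number
WScript.Echo CInt(0.5)  ' 0
WScript.Echo CInt(1.5)  ' 2
WScript.Echo CInt(2.5)  ' 2

' Int and Fix truncate instead - they differ only for negative numbers
WScript.Echo Int(2.9)   ' 2
WScript.Echo Int(-2.5)  ' -3 (Int returns the first integer <= the value)
WScript.Echo Fix(-2.5)  ' -2 (Fix truncates toward zero)
The mysterious 4090 in the answer above is the floating-point issue the first answer describes: 40.91 * 100 is stored as something fractionally under 4091, which Int truncates while CInt rounds.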
{ "language": "en", "url": "https://stackoverflow.com/questions/20675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Linq to SQL - Underlying Column Length I've been using Linq to SQL for some time now and I find it to be really helpful and easy to use. With other ORM tools I've used in the past, the entity object filled from the database normally has a property indicating the length of the underlying data column in the database. This is helpful in databinding situations where you can set the MaxLength property on a textbox, for example, to limit the length of input entered by the user. I cannot find a way using Linq to SQL to obtain the length of an underlying data column. Does anyone know of a way to do this? Help please.
A: Using the LINQ ColumnAttribute to Get Field Lengths from your Database: http://www.codeproject.com/KB/cs/LinqColumnAttributeTricks.aspx
A: Thanks. Actually both of these answers seem to work. Unfortunately, they seem to look at the Linq attributes generated when the code generation was done. Although that would seem to be the right thing to do, in my situation we sell software products and occasionally the customer will expand some column lengths to accommodate their data. Thus, the length of the field as reported using this technique may not always reflect the true length of the underlying data column. Ah well, not Linq to SQL's fault, is it? :) Thanks for the quick answers!
A: If you need to know the exact column length you can resort to the System.Data classes themselves, by reading the schema table of a query over the live database. Something a bit like this:
var context = new DataContextFromSomewhere();
var connection = context.Connection;
connection.Open();

var command = connection.CreateCommand();
command.CommandText = "SELECT TOP 1 * FROM TableImInterestedIn";

using (var reader = command.ExecuteReader(CommandBehavior.SchemaOnly))
{
    var table = reader.GetSchemaTable();
    // Each row of the schema table describes one column of the result set
    foreach (DataRow row in table.Rows)
    {
        Console.WriteLine("Column: {0}, Length: {1}", row["ColumnName"], row["ColumnSize"]);
    }
}
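As a sketch of the attribute-based approach from the first answer (the Customer/CompanyName names are illustrative; note the caveat above that this reflects the mapping captured at code-generation time, not the live database):
using System;
using System.Data.Linq.Mapping;
using System.Text.RegularExpressions;

// Reads e.g. "NVarChar(50) NOT NULL" off the generated mapping attribute
var property = typeof(Customer).GetProperty("CompanyName");
var column = (ColumnAttribute)Attribute.GetCustomAttribute(property, typeof(ColumnAttribute));

var match = Regex.Match(column.DbType, @"\((\d+)\)");
if (match.Success)
{
    int maxLength = int.Parse(match.Groups[1].Value);
    // e.g. textBox.MaxLength = maxLength;
}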
{ "language": "en", "url": "https://stackoverflow.com/questions/20684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I declare a list of fixed length in specman? In E (specman) I want to declare variables that are lists, and I want to fix their lengths. It's easy to do for a member of a struct:
thread[2] : list of thread_t;
while for a "regular" variable in a function the above doesn't work, and I have to do something like:
var warned : list of bool;
gen warned keeping {
    it.size() == 5;
};
Is there a better way to declare a list of fixed size?
A: A hard keep like you have is only going to fix the size at initialization; elements could still be added or dropped later. Are you trying to guard against this condition? The only way I can think of to guarantee that elements aren't added or dropped later is emitting an event synced on the size differing from the predetermined amount:
event list_size_changed is true (warned.size() != 5) @clk;
The only other thing that I can offer is a bit of syntactic sugar for the hard keep:
var warned : list of bool;
keep warned.size() == 5;
A: I know nothing of specman, but a fixed-size list is an array, so that might point you somewhere.
{ "language": "en", "url": "https://stackoverflow.com/questions/20696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Testing .NET code in partial trust environments I want to test the behavior of a certain piece of .NET code in partial trust environments. What's the fastest way to set this up? Feel free to assume that I (and other readers) are total CAS noobs. @Nick: Thanks for the reply. Alas, the tool in question is explicitly for unmanaged code. I didn't say "managed" in my question, and should not have assumed that people would infer it from the ".NET" tag.
A: This is an excellent question, especially from a TDD point of view and validating code under different trust scenarios. I think the way I'd approach this would be something along the lines of:
* Create an AppDomain in my TDD code using the AppDomain.CreateDomain() overload that allows you to pass in a PermissionSet. The PermissionSet would be constructed to match the different trust scenarios you'd want to test against.
* Load the assembly containing the logic under test into the app domain.
* Create instances of types/call methods etc. in the app domain, trapping security exceptions.
Something kinda like that. I've not had time to knock up a proof of concept yet.
A: The functionality you're looking for is built into Visual Studio: on the Security tab of your project, there's an "Advanced..." button which lets you configure whether you want to debug in full trust, or at a specified trust level.
A: Use the Microsoft Application Verifier. AppVerifier helps to determine:
* Whether the application is using APIs correctly: unsafe TerminateThread APIs, correct use of Thread Local Storage (TLS) APIs, correct use of virtual space manipulations (for example, VirtualAlloc, MapViewOfFile).
* Whether the application is hiding access violations using structured exception handling.
* Whether the application is attempting to use invalid handles.
* Whether there are memory corruptions or issues in the heap.
* Whether the application runs out of memory under low resources.
* Whether the correct usage of critical sections is occurring.
* Whether an application running in an administrative environment will run well in an environment with less privilege.
* Whether there are potential problems when the application is running as a limited user.
* Whether there are uninitialized variables in future function calls in a thread's context.
A: You should look at the .NET Framework Configuration Tool. It's in the .NET SDK, and you can find instructions on running it here: http://msdn.microsoft.com/en-us/library/2bc0cxhc.aspx
In the Runtime Security Policy section you'll find 3 policy levels: Enterprise, Machine and User. If you drill into Machine or User you'll find definitions of Code Groups and Permission Sets. When you say that you want to test some .NET code in partial trust environments, I guess you'll want to test against one of the standard permission sets already defined, such as Internet. You need to define a Code Group that matches your app (or specific assemblies) and assign your chosen permission set to that Code Group. You can define your own custom Permission Sets too, but let's keep it simple for now.
Choose whether you want your new code group to exist at machine-wide scope, or just for your user account, and drill into the Machine or User policy level accordingly. You'll see a code group called All_Code. Create a child code group inside that one, by right-clicking and selecting New... Give it a name, say PartialTrustGroup, then click Next. You have to specify a membership condition for this group, and there are various types. I like to create a specific folder called PartialTrust on my machine, and then create a URL membership condition that matches. So, my URL looks like this:
file://c:/users/martin/documents/partialtrust/*
The * is a wildcard to catch any assembly beneath that path. Click Next. Now you can pick a permission set for your new code group. For now, pick Internet. It's quite a restrictive set, similar to a Java applet sandbox. Click Next and Finish. Now right-click on your new code group and select Properties. On the General tab, ensure the topmost checkbox is selected, then click OK.
Now, any .NET assemblies that are loaded from a location beneath the URL you specified will have the Internet permission set applied to them. Expect to get some SecurityExceptions if you haven't written your code to carefully observe the reduced permission set. Sorry this is a long description. It really is a lot simpler than it sounds.
A: I just posted an article titled Partial Trust Testing with xUnit.net on my blog. It details the xUnit.net-based framework that we use on the Entity Framework team to exercise code under medium trust. Here is an example of its usage.
public class SomeTests : MarshalByRefObject
{
    [PartialTrustFact]
    public void Partial_trust_test1()
    {
        // Runs in medium trust
    }
}

// Or...
[PartialTrustFixture]
public class MoreTests : MarshalByRefObject
{
    [Fact]
    public void Another_partial_trust_test()
    {
        // Runs in medium trust
    }
}
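Following up on the AppDomain suggestion in the first answer, a minimal sandbox sketch looks roughly like this (the proxy type name is a placeholder, and the exact permissions you add depend on the trust level you want to simulate):
using System;
using System.Security;
using System.Security.Permissions;

// Grant only the right to execute - roughly the floor of "partial trust"
var permissions = new PermissionSet(PermissionState.None);
permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

var setup = new AppDomainSetup { ApplicationBase = AppDomain.CurrentDomain.BaseDirectory };
AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, setup, permissions);

// MyTestProxy must derive from MarshalByRefObject so calls cross the domain boundary
var proxy = (MyTestProxy)sandbox.CreateInstanceAndUnwrap(
    typeof(MyTestProxy).Assembly.FullName,
    typeof(MyTestProxy).FullName);

proxy.RunScenario(); // expect SecurityException wherever a permission is missing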
{ "language": "en", "url": "https://stackoverflow.com/questions/20718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Version detection with Silverlight How can I efficiently and effectively detect the version and, for that matter, any available information about the instance of Silverlight currently running on the browser?
A: The Silverlight control only has an IsVersionSupported function, which returns true / false when you give it a version number, e.g.:
if (slPlugin.isVersionSupported("2.0")) {
    alert("I haz some flavour of Silverlight 2");
}
You can be as specific as you want when checking the build, since the version string can include all of the following:
* major - the major number
* minor - the minor number
* build - the build number
* revision - the revision number
So we can check for a specific build number as follows:
if (slPlugin.isVersionSupported("2.0.30523")) {
    alert("I haz Silverlight 2.0.30523, but could be any revision.");
}
Silverlight 1.0 Beta included a control.settings.version property, which was replaced with the isVersionSupported() method. The idea is that you shouldn't be programming against specific versions of Silverlight. Rather, you should be checking if the client has at least version 1.0, or 2.0, etc. That being said, you can get the Silverlight version number in Firefox by checking the Silverlight plugin description:
alert(navigator.plugins["Silverlight Plug-In"].description);
Shows '2.0.30523.8' on my computer. Note that it is possible to brute force it by iterating through all released version numbers. Presumably that's what BrowserHawk does - they'll report which version of Silverlight the client has installed.
A: I got this from http://forums.asp.net/p/1135746/1997617.aspx#1997617 which is the same link Stu gave you. I just included the code snippet.
Silverlight.isInstalled = function(d) {
    var c = false, a = null;
    try {
        var b = null;
        if (Silverlight.ua.Browser == "MSIE")
            b = new ActiveXObject("AgControl.AgControl");
        else if (navigator.plugins["Silverlight Plug-In"]) {
            a = document.createElement("div");
            document.body.appendChild(a);
            a.innerHTML = '<embed type="application/x-silverlight" />';
            b = a.childNodes[0];
        }
        if (b.IsVersionSupported(d))
            c = true;
        b = null;
        Silverlight.available = true;
    } catch (e) {
        c = false;
    }
    if (a)
        document.body.removeChild(a);
    return c;
};
A: found this site that detects the full version of Silverlight: silverlightversion.com
A: As mentioned in the above comments, there is currently no efficient direct way to get the installed Silverlight version number (that works cross browser/platform). I wrote a post on how to work around this problem and detect the Silverlight major version number (including version 3) programmatically and more efficiently using JavaScript. You can find the code and the post at: http://www.apijunkie.com/APIJunkie/blog/post/2009/04/How-to-programmatically-detect-Silverlight-version.aspx Good luck!
A: Environment.Version will do what you want! Supported since Silverlight 2.0.
A: Look in silverlight.js: http://forums.asp.net/p/1135746/1997617.aspx#1997617
{ "language": "en", "url": "https://stackoverflow.com/questions/20722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What's the best way to create ClickOnce deployments Our team develops distributed winform apps. We use ClickOnce for deployment and are very pleased with it. However, we've found the pain point with ClickOnce is in creating the deployments. We have the standard dev/test/production environments and need to be able to create deployments for each of these that install and update separately from one another. Also, we want control over what assemblies get deployed. Just because an assembly was compiled doesn't mean we want it deployed. The obvious first choice for creating deployments is Visual Studio. However, VS really doesn't address the issues stated. The next in line is the SDK tool, Mage. Mage works OK but creating deployments is rather tedious and we don't want every developer having our code signing certificate and password. What we ended up doing was rolling our own deployment app that uses the command line version of Mage to create the ClickOnce manifest files. I'm satisfied with our current solution but it seems like there would be an industry-wide, accepted approach to this problem. Is there?
A: I've used NAnt to run the overall build strategy, but pass parameters into MSBuild to compile and create the deployment package. Basically, NAnt calls into MSBuild for each environment you need to deploy to, and generates a separate deployment output for each. You end up with a folder and all the ClickOnce files you need for every environment, which you can just copy out to the server. This is how we handled multiple production environments as well -- we had separate instances of our application for the US, Canada, and Europe, so each build would end up creating nine deployments, three each for dev, qa, and prod.
A: I would look at using MSBuild. It has built-in tasks for handling ClickOnce deployments. I included some references which will help you get started, if you want to go down this path. It is what I use and I have found it to fit my needs. With a good build process using MSBuild, you should be able to accomplish squashing the pains you have felt. Here is a detailed post on how ClickOnce manifest generation works with MSBuild.
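A hedged sketch of what a per-environment MSBuild invocation tends to look like (every value here is a placeholder; the exact set of ClickOnce properties that take effect depends on how the .csproj is configured):
msbuild MyApp.csproj /target:Publish ^
    /property:Configuration=Release ^
    /property:ApplicationVersion=1.0.0.42 ^
    /property:PublishUrl=http://deploy.example.com/myapp/test/ ^
    /property:InstallUrl=http://deploy.example.com/myapp/test/
An outer script (NAnt, a batch file, or another MSBuild project) then simply repeats this with dev/test/prod values, which is the pattern the first answer describes.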
{ "language": "en", "url": "https://stackoverflow.com/questions/20728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How do you clear a stringstream variable? I've tried several things already,
std::stringstream m;
m.empty();
m.clear();
both of which don't work.
A: For all the standard library types the member function empty() is a query, not a command, i.e. it means "are you empty?" not "please throw away your contents". The clear() member function is inherited from ios and is used to clear the error state of the stream, e.g. if a file stream has the error state set to eofbit (end-of-file), then calling clear() will set the error state back to goodbit (no error). For clearing the contents of a stringstream, using:
m.str("");
is correct, although using:
m.str(std::string());
is technically more efficient, because you avoid invoking the std::string constructor that takes const char*. But any compiler these days should be able to generate the same code in both cases - so I would just go with whatever is more readable.
A: You can clear the error state and empty the stringstream all in one line:
std::stringstream().swap(m); // swap m with a default constructed stringstream
This effectively resets m to a default constructed state, meaning that it actually deletes the buffers allocated by the string stream and resets the error state. Here's an experimental proof:
int main () {
    std::string payload(16, 'x');
    std::stringstream *ss = new std::stringstream; // Create a memory leak
    (*ss) << payload; // Leak more memory
    // Now choose a way to "clear" a string stream
    //std::stringstream().swap(*ss); // Method 1
    //ss->str(std::string()); // Method 2
    std::cout << "end" << std::endl;
}
Demo
When the demo is compiled with address sanitizer, memory usage is revealed:
=================================================================
==10415==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 392 byte(s) in 1 object(s) allocated from:
    #0 0x510ae8 in operator new(unsigned long) (/tmp/1637178326.0089633/a.out+0x510ae8)
    #1 0x514e80 in main (/tmp/1637178326.0089633/a.out+0x514e80)
    #2 0x7f3079ffb82f in __libc_start_main /build/glibc-Cl5G7W/glibc-2.23/csu/../csu/libc-start.c:291
Indirect leak of 513 byte(s) in 1 object(s) allocated from:
    #0 0x510ae8 in operator new(unsigned long) (/tmp/1637178326.0089633/a.out+0x510ae8)
    #1 0x7f307b03a25c in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::reserve(unsigned long) (/usr/local/lib64/libstdc++.so.6+0x13725c)
    #2 0x603000000010 (<unknown module>)
SUMMARY: AddressSanitizer: 905 byte(s) leaked in 2 allocation(s).
Pretty steep if you ask me. To hold just 16 bytes of payload, we spent 905 bytes ... string streams are no toy. Memory is allocated in two parts:
* *The constructed string stream (392 bytes) *The extra buffer needed for the payload (513 bytes).
The extraneous size has to do with the allocation strategy chosen by the stream and for payloads <= 8 bytes, blocks inside the initial object can be used. If you enable method 1 (the one shown in this answer) the extra 513 (payload) bytes are reclaimed, because the stream is actually cleared. If you enable method 2 as suggested in the comments or other answers, you can see that all 905 bytes are in use by the time we exit. In terms of program semantics, one may only care that the stream "appears" and "behaves" as empty, similar to how a vector::clear may leave the capacity untouched but render the vector empty to the user (of course vector would spend just 16 bytes here). Given the memory allocation that string stream requires, I can imagine this approach being often faster. 
This answer's primary goal is to actually clear the string stream, given that the memory consumption that comes with it is no joke. Depending on your use case (number of streams, data they hold, frequency of clearing) you may choose the best approach. Finally note that it's rarely useful to clear the stream without clearing the error state and all inherited state. The one-liner in this answer does both.
A: This should be the most reliable way regardless of the compiler:
m = std::stringstream();
A: There are many other answers that "work", but they often do unnecessary copies or reallocate memory.
* *Swapping streams means that you need to discard one of them, wasting the memory allocation. Same goes for assigning a default-constructed stream. *Assigning to the string in the string buffer (via stringstream::str or stringbuf::str) may lose the buffer already allocated by the string.
The canonical way to clear the string stream would be:
void clear(std::stringstream &stream)
{
    if (stream.rdbuf())
        stream.rdbuf()->pubseekpos(0);
}
The canonical way to get the size of the data in the stream's buffer is:
std::size_t availSize(const std::stringstream& stream)
{
    if (stream.rdbuf())
        return std::size_t(
            stream.rdbuf()->pubseekoff(0, std::ios_base::cur, std::ios_base::out));
    else
        return 0;
}
The canonical way to copy the data from the stream to some other preallocated buffer and then clear it would then be:
std::size_t readAndClear(std::stringstream &stream, void* outBuf, std::size_t outSize)
{
    auto const copySize = std::min(availSize(stream), outSize);
    if (!copySize)
        return 0; // takes care of null stream.rdbuf()
    stream.rdbuf()->sgetn(static_cast<char*>(outBuf),
                          static_cast<std::streamsize>(copySize));
    stream.rdbuf()->pubseekpos(0); // clear the buffer
    return copySize;
}
I intend this to be a canonical answer. Language lawyers, feel free to pitch in.
A: m.str(""); seems to work.
A: I am always scoping it:
{
    std::stringstream ss;
    ss << "what";
}
{
    std::stringstream ss;
    ss << "the";
}
{
    std::stringstream ss;
    ss << "heck";
}
A: my 2 cents: this seemed to work for me in xcode and dev-c++. I had a program in the form of a menu that, when executed iteratively at the request of a user, would fill up a stringstream variable; it would work OK the first time the code ran but would not clear the stringstream the next time the user ran the same code. The two lines of code below finally cleared up the stringstream variable every time before filling up the string variable (2 hours of trial and error and google searches). By the way, using each line on its own would not do the trick.
//clear the stringstream variable
sstm.str("");
sstm.clear();
//fill up the stringstream variable
sstm << "crap" << "morecrap";
A: These do not discard the data in the stringstream in gnu c++:
m.str("");
m.str() = "";
m.str(std::string());
The following does empty the stringstream for me:
m.str().clear();
A: It's a conceptual problem. Stringstream is a stream, so its iterators are forward-only; they cannot go back. In an output stringstream, you need a flush() to reinitialize it, as in any other output stream.
{ "language": "en", "url": "https://stackoverflow.com/questions/20731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "578" }
Q: Red-Black Trees I've seen binary trees and binary searching mentioned in several books I've read lately, but as I'm still at the beginning of my studies in Computer Science, I've yet to take a class that's really dealt with algorithms and data structures in a serious way. I've checked around the typical sources (Wikipedia, Google) and most descriptions of the usefulness and implementation of (in particular) Red-Black trees have come off as dense and difficult to understand. I'm sure for someone with the necessary background, it makes perfect sense, but at the moment it reads like a foreign language almost. So what makes binary trees useful in some of the common tasks you find yourself doing while programming? Beyond that, which trees do you prefer to use (please include a sample implementation) and why?
A: Red Black trees are good for creating well-balanced trees. The major problem with binary search trees is that you can make them unbalanced very easily. Imagine your first number is a 15. Then all the numbers after that are increasingly smaller than 15. You'll have a tree that is very heavy on the left side and has nothing on the right side. Red Black trees solve that by forcing your tree to be balanced whenever you insert or delete. They accomplish this through a series of rotations between ancestor nodes and child nodes. The algorithm is actually pretty straightforward, although it is a bit long. I'd suggest picking up the CLRS (Cormen, Leiserson, Rivest and Stein) textbook, "Introduction to Algorithms", and reading up on RB Trees. The implementation is also not really so short, so it's probably best not to include it here. Nevertheless, trees are used extensively for high performance apps that need access to lots of data. They provide a very efficient way of finding nodes, with a relatively small overhead of insertion/deletion. Again, I'd suggest looking at CLRS to read up on how they're used. While BSTs may not be used explicitly - one example of the use of trees in general is in almost every single modern RDBMS. Similarly, your file system is almost certainly represented as some sort of tree structure, and files are likewise indexed that way. Trees power Google. Trees power just about every website on the internet.
A: Red Black Trees and B-trees are used in all sorts of persistent storage; because the trees are balanced, the cost of breadth and depth traversals is kept in check. Nearly all modern database systems use trees for data storage.
A: I'd like to address only the question "So what makes binary trees useful in some of the common tasks you find yourself doing while programming?" This is a big topic that many people disagree on. Some say that the algorithms taught in a CS degree such as binary search trees and directed graphs are not used in day-to-day programming and are therefore irrelevant. Others disagree, saying that these algorithms and data structures are the foundation for all of our programming and it is essential to understand them, even if you never have to write one for yourself. This filters into conversations about good interviewing and hiring practices. For example, Steve Yegge has an article on interviewing at Google that addresses this question. Remember this debate; experienced people disagree. In typical business programming you may not need to create binary trees or even trees very often at all. However, you will use many classes which internally operate using trees. 
Many of the core organization classes in every language use trees and hashes to store and access data. If you are involved in more high-performance endeavors or situations that are somewhat outside the norm of business programming, you will find trees to be an immediate friend. As another poster said, trees are core data structures for databases and indexes of all kinds. They are useful in data mining and visualization, advanced graphics (2d and 3d), and a host of other computational problems. I have used binary trees in the form of BSP (binary space partitioning) trees in 3d graphics. I am currently looking at trees again to sort large amounts of geocoded data and other data for information visualization in Flash/Flex applications. Whenever you are pushing the boundary of the hardware or you want to run on lower hardware specifications, understanding and selecting the best algorithm can make the difference between failure and success.
A: BSTs make the world go round, as said by Micheal. If you're looking for a good tree to implement, take a look at AVL trees (Wikipedia). They have a balancing condition, so they are guaranteed to be O(log n). This kind of searching efficiency makes it logical to put into any kind of indexing process. The only thing that would be more efficient would be a hashing function, but those get ugly quick, fast, and in a hurry. Also, you run into the Birthday Paradox (also known as the pigeon-hole problem). What textbook are you using? We used Data Structures and Algorithm Analysis in Java by Mark Allen Weiss. I actually have it open in my lap as I'm typing this. It has a great section about Red-Black trees, and even includes the code necessary to implement all the trees it talks about.
A: The best description of red-black trees I have seen is the one in Cormen, Leiserson and Rivest's 'Introduction to Algorithms'. I could even understand it enough to partially implement one (insertion only). There are also quite a few applets such as This One on various web pages that animate the process and allow you to watch and step through a graphical representation of the algorithm building a tree structure.
A: Red-black trees stay balanced, so you don't have to traverse deep to get items out. The time saved makes RB trees O(log(n)) in the WORST case, whereas unlucky binary trees can get into a lopsided configuration and cause retrievals in O(n), a bad case. This does happen in practice or on random data. So if you need time critical code (database retrievals, network server etc.) you use RB trees to support ordered or unordered lists/sets. But RBTrees are for noobs! If you are doing AI and you need to perform a search, you find you fork the state information a lot. You can use a persistent red-black tree to fork new states in O(log(n)). A persistent red-black tree keeps a copy of the tree before and after a morphological operation (insert/delete), but without copying the entire tree (normally an O(log(n)) operation). I have open sourced a persistent red-black tree for Java http://edinburghhacklab.com/2011/07/a-java-implementation-of-persistent-red-black-trees-open-sourced/
A: None of the answers mention what exactly BSTs are good for. If all you want to do is look up by value then a hashtable is much faster, O(1) insert and lookup (amortized best case). A BST will be O(log N) lookup where N is the number of nodes in the tree, and inserts are also O(log N). 
RB and AVL trees are important, like another answer mentioned, because of this property: if a plain BST is created from in-order values then the tree will be as high as the number of values inserted, and this is bad for lookup performance. The difference between RB and AVL trees is in the rotations required to rebalance after an insert or delete: AVL trees are O(log N) for rebalances while RB trees are O(1). An example of the benefit of this constant complexity is a case where you might be keeping a persistent data source: if you need to track changes to roll back, you would have to track O(log N) possible changes with an AVL tree. Why would you be willing to pay the cost of a tree over a hash table? ORDER! Hash tables have no order; BSTs, on the other hand, are always naturally ordered by virtue of their structure. So if you find yourself throwing a bunch of data in an array or other container and then sorting it later, a BST may be a better solution. The tree's order property gives you a number of ordered iteration capabilities: in-order, depth-first, breadth-first, pre-order, post-order. These iteration algorithms are useful in different circumstances if you want to look them up. Red black trees are used internally in almost every ordered container of language libraries: C++ Set and Map, .NET SortedDictionary, Java TreeSet, etc... So trees are very useful, and you may use them quite often without even knowing it. You most likely will never need to write one yourself, though I would highly recommend it as an interesting programming exercise.
A: Since you ask which tree people use, you need to know that a Red Black tree is fundamentally a 2-3-4 B-tree (i.e. a B-tree of order 4). A B-tree is not equivalent to a binary tree (as asked in your question). Here's an excellent resource describing the initial abstraction known as the symmetric binary B-tree that later evolved into the RBTree. You would need a good grasp on B-trees before it makes sense. To summarize: a 'red' link on a Red Black tree is a way to represent nodes that are part of a B-tree node (values within a key range), whereas 'black' links are nodes that are connected vertically in a B-tree. So, here's what you get when you translate the rules of a Red Black tree in terms of a B-tree (I'm using the format Red Black tree rule => B Tree equivalent):
1) A node is either red or black. => A node in a b-tree can either be part of a node, or be a node in a new level.
2) The root is black. (This rule is sometimes omitted, since it doesn't affect analysis) => The root node can be thought of either as part of an internal root node or as a child of an imaginary parent node.
3) All leaves (NIL) are black. (All leaves are same color as the root.) => Since one way of representing a RB tree is by omitting the leaves, we can rule this out.
4) Both children of every red node are black. => The children of an internal node in a B-tree always lie on another level.
5) Every simple path from a given node to any of its descendant leaves contains the same number of black nodes. => A B-tree is kept balanced as it requires that all leaf nodes are at the same depth (Hence the height of a B-tree node is represented by the number of black links from the root to the leaf of a Red Black tree)
Also, there's a simpler 'non-standard' implementation by Robert Sedgewick here: (He's the author of the book Algorithms along with Wayne)
A: Lots and lots of heat here, but not much light, so let's see if we can provide some. 
First, a RB tree is an associative data structure, unlike, say, an array, which cannot take a key and return an associated value, well, unless that's an integer "key" in a 0% sparse index of contiguous integers. An array cannot grow in size either (yes, I know about realloc() too, but under the covers that requires a new array and then a memcpy()), so if you have either of these requirements, an array won't do. An array's memory efficiency is perfect. Zero waste, but not very smart, or flexible - realloc() notwithstanding. Second, in contrast to a bsearch() on an array of elements, which IS an associative data structure, a RB tree can grow (AND shrink) itself in size dynamically. The bsearch() works fine for indexing a data structure of a known size, which will remain that size. So if you don't know the size of your data in advance, or new elements need to be added, or deleted, a bsearch() is out. Bsearch() and qsort() are both well supported in classic C, and have good memory efficiency, but are not dynamic enough for many applications. They are my personal favorite though because they're quick, easy, and if you're not dealing with real-time apps, quite often are flexible enough. In addition, in C/C++ you can sort an array of pointers to data records, pointing to the struct member, for example, you wish to compare, and then rearranging the pointers in the pointer array such that reading the pointers in order at the end of the pointer sort yields your data in sorted order. Using this with memory-mapped data files is extremely memory efficient, fast, and fairly easy. All you need to do is add a few "*"s to your compare function(s). Third, in contrast to a hashtable, which also must be a fixed size, and cannot be grown once filled, a RB tree will automagically grow itself and balance itself to maintain its O(log(n)) performance guarantee. Especially if the RB tree's key is an int, it can be faster than a hash, because even though a hashtable's complexity is O(1), that 1 can be a very expensive hash calculation. A tree's multiple 1-clock integer compares often outperform 100-clock+ hash calculations, to say nothing of rehashing, and malloc()ing space for hash collisions and rehashes. Finally, if you want ISAM access, as well as key access to your data, a hash is ruled out, as there is no ordering of the data inherent in the hashtable, in contrast to the natural ordering of data in any tree implementation. The classic use for a hash table is to provide keyed access to a table of reserved words for a compiler. Its memory efficiency is excellent. Fourth, and very low on any list, is the linked, or doubly-linked list, which, in contrast to an array, naturally supports element insertions and deletions, and as that implies, resizing. It's the slowest of all the data structures, as each element only knows how to get to the next element, so you have to search, on average, (element_knt/2) links to find your datum. It is mostly used where insertions and deletions somewhere in the middle of the list are common, and especially, where the list is circular and feeds an expensive process which makes the time to read the links relatively small. My general RX is to use an arbitrarily large array instead of a linked list if your only requirement is that it be able to increase in size. If you run out of size with an array, you can realloc() a larger array. The STL does this for you "under the covers" when you use a vector. 
Crude, but potentially 1,000s of times faster if you don't need insertions, deletions or keyed lookups. Its memory efficiency is poor, especially for doubly-linked lists. In fact, a doubly-linked list, requiring two pointers, is exactly as memory inefficient as a red-black tree while having NONE of its appealing fast, ordered retrieval characteristics. Fifth, trees support many additional operations on their sorted data than any other data structure. For example, many database queries make use of the fact that a range of leaf values can be easily specified by specifying their common parent, and then focusing subsequent processing on the part of the tree that parent "owns". The potential for multi-threading offered by this approach should be obvious, as only a small region of the tree needs to be locked - namely, only the nodes the parent owns, and the parent itself. In short, trees are the Cadillac of data structures. You pay a high price in terms of memory used, but you get a completely self-maintaining data structure. This is why, as pointed out in other replies here, transaction databases use trees almost exclusively.
A: If you would like to see how a Red-Black tree is supposed to look graphically, I have coded an implementation of a Red-Black tree that you can download here
A: IME, almost no one understands the RB tree algorithm. People can repeat the rules back to you, but they don't understand why those rules exist or where they come from. I am no exception :-) For this reason, I prefer the AVL algorithm, because it's easy to comprehend. Once you understand it, you can then code it up from scratch, because it makes sense to you.
A: Trees can be fast. If you have a million nodes in a balanced binary tree, it takes twenty comparisons on average to find any one item. If you have a million nodes in a linked list, it takes five hundred thousand comparisons on average to find the same item. If the tree is unbalanced, though, it can be just as slow as a list, and also take more memory to store. Imagine a tree where most nodes have a right child, but no left child; it is a list, but you still have to hold memory space to put in the left node if one shows up. Anyways, the AVL tree was the first balanced binary tree algorithm, and the Wikipedia article on it is pretty clear. The Wikipedia article on red-black trees is clear as mud, honestly. Beyond binary trees, B-Trees are trees where each node can have many values. A B-Tree is not a binary tree; that just happens to be its name. They're really useful for utilizing memory efficiently; each node of the tree can be sized to fit in one block of memory, so that you're not (slowly) going and finding tons of different things in memory that was paged to disk. Here's a phenomenal example of the B-Tree.
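Since the question asks for a sample implementation, here is a minimal sketch of the rotation primitive that red-black rebalancing builds on, in the spirit of Sedgewick's left-leaning red-black trees mentioned above (C++; the node layout is an illustrative assumption, and the full insert/fix-up logic is omitted):
struct Node {
    int key;
    bool red;            // color bit used by red-black balancing
    Node *left, *right;
};

// Rotate the subtree rooted at h to the left and return the new root.
// Insert and delete use this (plus its mirror image) to restore balance.
Node* rotateLeft(Node* h) {
    Node* x = h->right;  // x moves up, h becomes its left child
    h->right = x->left;
    x->left = h;
    x->red = h->red;     // x takes over h's old color
    h->red = true;       // the node rotated downward becomes red
    return x;
}
A right rotation is the mirror image, and insertion applies these rotations plus color flips on the way back up the tree.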
{ "language": "en", "url": "https://stackoverflow.com/questions/20734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57" }
Q: SQL Reporting Services viewer for webpage - can you move the View Report button? Using the viewer control for display of SQL Reporting Services reports on a web page (Microsoft.ReportViewer.WebForms), can you move the View Report button? It defaults to the very right side of the report, which means you have to scroll all the way across before the button is visible. Not a problem for reports that fit the window width, but on very wide reports that is quickly an issue.
A: No, you cannot reposition the view report button in the ReportViewer control. However, you could create your own custom report viewing control. The control would be comprised of fields for report parameters and a button to generate the report. When a user clicks the button you could generate the report in the background. You could display the report as a PDF, HTML, etc.
A: It's kind of a hack, but you can move it in JavaScript. Just see what HTML the ReportViewer generates, and write the appropriate JavaScript code to move the button. I used JavaScript to hide the button (because we wanted our own View Report button). Any JavaScript code that manipulates the generated ReportViewer's HTML must come after the ReportViewer control in the .aspx page. Here's my code for hiding the button, to give you an idea of what you'd do:
function getRepViewBtn() {
    return document.getElementsByName("ReportViewer1$ctl00$ctl00")[0];
}
function hideViewReportButton() { // call this where needed
    var btn = getRepViewBtn();
    btn.style.display = 'none';
}
A: The reason the button is pushed over to the right is that the td for the parameters has width="100%". I'm solving this problem with the following jQuery. It simply changes the width of the parameters td to 1. Browsers will expand the width on their own to the width of the contents of the element. Hope this helps.
<script type="text/javascript">
$(document).ready(function() {
    $("#<%= ReportViewer1.ClientID %> td:first").attr("width", "1");
});
</script>
A: Since I was searching for this answer just yesterday, I thought I'd post what I came up with to solve our problem. Our reports were coming back wide, and we wanted the "view reports" button to exist on the left side of the control so there was no need to scroll to get to the button. I did need to go into the source of the rendered file to find the ID names of the button and the target table. I wrote a simple cut and paste javascript function to pull the button from its original position and essentially drop it into the next row in the containing table below the date pickers.
function moveButton() {
    document.getElementById('ParameterTable_ctl00_MainContent_MyReports_ctl04').appendChild(document.getElementById('ctl00_MainContent_MyReports_ctl04_ctl00'));
}
This function gets called on the report viewer load event.
ScriptManager.RegisterStartupScript(Me, Me.GetType(), "moveButton", "moveButton();", True)
To adjust the position, I used the CSS ID.
#ctl00_MainContent_MyReports_ctl04_ctl00 {
    margin: 0px 0px 0px 50px;
}
A: I had the same problem and ended up using an extension of Travis Collins' answer; as well as changing the table column width I also align the "View Report" button left so that it appears nearer to the rest of the controls. 
<script type="text/javascript">
$(document).ready(function() {
    $("#_ctl0_MainContent_reportViewer_fixedTable tr:first td:first-child").attr("width", "1");
    $("#_ctl0_MainContent_reportViewer_fixedTable tr:first td:last-child").attr("align", "left");
});
</script>
You may need to tweak the jQuery selector depending on the element naming assigned to your existing control.
{ "language": "en", "url": "https://stackoverflow.com/questions/20744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you remove invalid hexadecimal characters from an XML-based data source prior to constructing an XmlReader or XPathDocument that uses the data? Is there any easy/general way to clean an XML based data source prior to using it in an XmlReader so that I can gracefully consume XML data that is non-conformant to the hexadecimal character restrictions placed on XML? Note:
* *The solution needs to handle XML data sources that use character encodings other than UTF-8, e.g. by specifying the character encoding at the XML document declaration. Not mangling the character encoding of the source while stripping invalid hexadecimal characters has been a major sticking point. *The removal of invalid hexadecimal characters should only remove hexadecimal encoded values, as you can often find href values in data that happen to contain a string that would be a string match for a hexadecimal character.
Background: I need to consume an XML-based data source that conforms to a specific format (think Atom or RSS feeds), but want to be able to consume data sources that have been published which contain invalid hexadecimal characters per the XML specification. In .NET if you have a Stream that represents the XML data source, and then attempt to parse it using an XmlReader and/or XPathDocument, an exception is raised due to the inclusion of invalid hexadecimal characters in the XML data. My current attempt to resolve this issue is to parse the Stream as a string and use a regular expression to remove and/or replace the invalid hexadecimal characters, but I am looking for a more performant solution.
A: Modernising dnewcombe's answer, you could take a slightly simpler approach
public static string RemoveInvalidXmlChars(string input)
{
    var isValid = new Predicate<char>(value =>
        (value >= 0x0020 && value <= 0xD7FF) ||
        (value >= 0xE000 && value <= 0xFFFD) ||
        value == 0x0009 ||
        value == 0x000A ||
        value == 0x000D);
    return new string(Array.FindAll(input.ToCharArray(), isValid));
}
or, with Linq
public static string RemoveInvalidXmlChars(string input)
{
    return new string(input.Where(value =>
        (value >= 0x0020 && value <= 0xD7FF) ||
        (value >= 0xE000 && value <= 0xFFFD) ||
        value == 0x0009 ||
        value == 0x000A ||
        value == 0x000D).ToArray());
}
I'd be interested to know how the performance of these methods compares and how they all compare to a blacklist approach using Buffer.BlockCopy.
A: It may not be perfect (emphasis added since people keep missing this disclaimer), but what I've done in that case is below. You can adjust it for use with a stream.
/// <summary>
/// Removes control characters and other non-UTF-8 characters
/// </summary>
/// <param name="inString">The string to process</param>
/// <returns>A string with no control characters or entities above 0x00FD</returns>
public static string RemoveTroublesomeCharacters(string inString)
{
    if (inString == null) return null;
    StringBuilder newString = new StringBuilder();
    char ch;
    for (int i = 0; i < inString.Length; i++)
    {
        ch = inString[i];
        // remove any characters outside the valid UTF-8 range as well as all control characters
        // except tabs and new lines
        //if ((ch < 0x00FD && ch > 0x001F) || ch == '\t' || ch == '\n' || ch == '\r')
        //if using .NET version prior to 4, use above logic
        if (XmlConvert.IsXmlChar(ch)) //this method is new in .NET 4
        {
            newString.Append(ch);
        }
    }
    return newString.ToString();
}
A: I like Eugene's whitelist concept. I needed to do a similar thing to the original poster, but I needed to support all Unicode characters, not just up to 0x00FD. 
The XML spec is: Char = #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] In .NET, the internal representation of Unicode characters is only 16 bits, so we can't `allow' 0x10000-0x10FFFF explicitly. The XML spec explicitly disallows the surrogate code points starting at 0xD800 from appearing. However it is possible that if we allowed these surrogate code points in our whitelist, utf-8 encoding our string might produce valid XML in the end as long as proper utf-8 encoding was produced from the surrogate pairs of utf-16 characters in the .NET string. I haven't explored this though, so I went with the safer bet and didn't allow the surrogates in my whitelist. The comments in Eugene's solution are misleading though, the problem is that the characters we are excluding are not valid in XML ... they are perfectly valid Unicode code points. We are not removing `non-utf-8 characters'. We are removing utf-8 characters that may not appear in well-formed XML documents. public static string XmlCharacterWhitelist( string in_string ) { if( in_string == null ) return null; StringBuilder sbOutput = new StringBuilder(); char ch; for( int i = 0; i < in_string.Length; i++ ) { ch = in_string[i]; if( ( ch >= 0x0020 && ch <= 0xD7FF ) || ( ch >= 0xE000 && ch <= 0xFFFD ) || ch == 0x0009 || ch == 0x000A || ch == 0x000D ) { sbOutput.Append( ch ); } } return sbOutput.ToString(); } A: Here is dnewcome's answer in a custom StreamReader. It simply wraps a real stream reader and replaces the characters as they are read. I only implemented a few methods to save myself time. I used this in conjunction with XDocument.Load and a file stream and only the Read(char[] buffer, int index, int count) method was called, so it worked like this. You may need to implement additional methods to get this to work for your application. I used this approach because it seems more efficient than the other answers. I also only implemented one of the constructors, you could obviously implement any of the StreamReader constructors that you need, since it is just a pass through. I chose to replace the characters rather than removing them because it greatly simplifies the solution. In this way the length of the text stays the same, so there is no need to keep track of a separate index. 
public class InvalidXmlCharacterReplacingStreamReader : TextReader { private StreamReader implementingStreamReader; private char replacementCharacter; public InvalidXmlCharacterReplacingStreamReader(Stream stream, char replacementCharacter) { implementingStreamReader = new StreamReader(stream); this.replacementCharacter = replacementCharacter; } public override void Close() { implementingStreamReader.Close(); } public override ObjRef CreateObjRef(Type requestedType) { return implementingStreamReader.CreateObjRef(requestedType); } public void Dispose() { implementingStreamReader.Dispose(); } public override bool Equals(object obj) { return implementingStreamReader.Equals(obj); } public override int GetHashCode() { return implementingStreamReader.GetHashCode(); } public override object InitializeLifetimeService() { return implementingStreamReader.InitializeLifetimeService(); } public override int Peek() { int ch = implementingStreamReader.Peek(); if (ch != -1) { if ( (ch < 0x0020 || ch > 0xD7FF) && (ch < 0xE000 || ch > 0xFFFD) && ch != 0x0009 && ch != 0x000A && ch != 0x000D ) { return replacementCharacter; } } return ch; } public override int Read() { int ch = implementingStreamReader.Read(); if (ch != -1) { if ( (ch < 0x0020 || ch > 0xD7FF) && (ch < 0xE000 || ch > 0xFFFD) && ch != 0x0009 && ch != 0x000A && ch != 0x000D ) { return replacementCharacter; } } return ch; } public override int Read(char[] buffer, int index, int count) { int readCount = implementingStreamReader.Read(buffer, index, count); for (int i = index; i < readCount+index; i++) { char ch = buffer[i]; if ( (ch < 0x0020 || ch > 0xD7FF) && (ch < 0xE000 || ch > 0xFFFD) && ch != 0x0009 && ch != 0x000A && ch != 0x000D ) { buffer[i] = replacementCharacter; } } return readCount; } public override Task<int> ReadAsync(char[] buffer, int index, int count) { throw new NotImplementedException(); } public override int ReadBlock(char[] buffer, int index, int count) { throw new NotImplementedException(); } public override Task<int> ReadBlockAsync(char[] buffer, int index, int count) { throw new NotImplementedException(); } public override string ReadLine() { throw new NotImplementedException(); } public override Task<string> ReadLineAsync() { throw new NotImplementedException(); } public override string ReadToEnd() { throw new NotImplementedException(); } public override Task<string> ReadToEndAsync() { throw new NotImplementedException(); } public override string ToString() { return implementingStreamReader.ToString(); } } A: Regex based approach public static string StripInvalidXmlCharacters(string str) { var invalidXmlCharactersRegex = new Regex("[^\u0009\u000a\u000d\u0020-\ud7ff\ue000-\ufffd]|([\ud800-\udbff](?![\udc00-\udfff]))|((?<![\ud800-\udbff])[\udc00-\udfff])"); return invalidXmlCharactersRegex.Replace(str, ""); } See my blogpost for more details A: As the way to remove invalid XML characters I suggest you to use XmlConvert.IsXmlChar method. It was added since .NET Framework 4 and is presented in Silverlight too. 
Here is the small sample: void Main() { string content = "\v\f\0"; Console.WriteLine(IsValidXmlString(content)); // False content = RemoveInvalidXmlChars(content); Console.WriteLine(IsValidXmlString(content)); // True } static string RemoveInvalidXmlChars(string text) { char[] validXmlChars = text.Where(ch => XmlConvert.IsXmlChar(ch)).ToArray(); return new string(validXmlChars); } static bool IsValidXmlString(string text) { try { XmlConvert.VerifyXmlChars(text); return true; } catch { return false; } } A: I created a slightly updated version of @Neolisk's answer, which supports the *Async functions and uses the .Net 4.0 XmlConvert.IsXmlChar function. public class InvalidXmlCharacterReplacingStreamReader : StreamReader { private readonly char _replacementCharacter; public InvalidXmlCharacterReplacingStreamReader(string fileName, char replacementCharacter) : base(fileName) { _replacementCharacter = replacementCharacter; } public InvalidXmlCharacterReplacingStreamReader(Stream stream, char replacementCharacter) : base(stream) { _replacementCharacter = replacementCharacter; } public override int Peek() { var ch = base.Peek(); if (ch != -1 && IsInvalidChar(ch)) { return _replacementCharacter; } return ch; } public override int Read() { var ch = base.Read(); if (ch != -1 && IsInvalidChar(ch)) { return _replacementCharacter; } return ch; } public override int Read(char[] buffer, int index, int count) { var readCount = base.Read(buffer, index, count); ReplaceInBuffer(buffer, index, readCount); return readCount; } public override async Task<int> ReadAsync(char[] buffer, int index, int count) { var readCount = await base.ReadAsync(buffer, index, count).ConfigureAwait(false); ReplaceInBuffer(buffer, index, readCount); return readCount; } private void ReplaceInBuffer(char[] buffer, int index, int readCount) { for (var i = index; i < readCount + index; i++) { var ch = buffer[i]; if (IsInvalidChar(ch)) { buffer[i] = _replacementCharacter; } } } private static bool IsInvalidChar(int ch) { return IsInvalidChar((char)ch); } private static bool IsInvalidChar(char ch) { return !XmlConvert.IsXmlChar(ch); } } A: The above solutions seem to be for removing invalid characters prior to converting to XML. Use this code to remove invalid XML characters from an XML string. eg. &x1A; public static string CleanInvalidXmlChars( string Xml, string XMLVersion ) { string pattern = String.Empty; switch( XMLVersion ) { case "1.0": pattern = @"&#x((10?|[2-F])FFF[EF]|FDD[0-9A-F]|7F|8[0-46-9A-F]9[0-9A-F]);"; break; case "1.1": pattern = @"&#x((10?|[2-F])FFF[EF]|FDD[0-9A-F]|[19][0-9A-F]|7F|8[0-46-9A-F]|0?[1-8BCEF]);"; break; default: throw new Exception( "Error: Invalid XML Version!" 
); } Regex regex = new Regex( pattern, RegexOptions.IgnoreCase ); if( regex.IsMatch( Xml ) ) Xml = regex.Replace( Xml, String.Empty ); return Xml; } http://balajiramesh.wordpress.com/2008/05/30/strip-illegal-xml-characters-based-on-w3c-standard/ A: DRY implementation of this answer's solution (using a different constructor - feel free to use the one you need in your application): public class InvalidXmlCharacterReplacingStreamReader : StreamReader { private readonly char _replacementCharacter; public InvalidXmlCharacterReplacingStreamReader(string fileName, char replacementCharacter) : base(fileName) { this._replacementCharacter = replacementCharacter; } public override int Peek() { int ch = base.Peek(); if (ch != -1 && IsInvalidChar(ch)) { return this._replacementCharacter; } return ch; } public override int Read() { int ch = base.Read(); if (ch != -1 && IsInvalidChar(ch)) { return this._replacementCharacter; } return ch; } public override int Read(char[] buffer, int index, int count) { int readCount = base.Read(buffer, index, count); for (int i = index; i < readCount + index; i++) { char ch = buffer[i]; if (IsInvalidChar(ch)) { buffer[i] = this._replacementCharacter; } } return readCount; } private static bool IsInvalidChar(int ch) { return (ch < 0x0020 || ch > 0xD7FF) && (ch < 0xE000 || ch > 0xFFFD) && ch != 0x0009 && ch != 0x000A && ch != 0x000D; } } A: Modified answer or original answer by Neolisk above. Changes: of \0 character is passed, removal is done, rather than a replacement. also, made use of XmlConvert.IsXmlChar(char) method /// <summary> /// Replaces invalid Xml characters from input file, NOTE: if replacement character is \0, then invalid Xml character is removed, instead of 1-for-1 replacement /// </summary> public class InvalidXmlCharacterReplacingStreamReader : StreamReader { private readonly char _replacementCharacter; public InvalidXmlCharacterReplacingStreamReader(string fileName, char replacementCharacter) : base(fileName) { _replacementCharacter = replacementCharacter; } public override int Peek() { int ch = base.Peek(); if (ch != -1 && IsInvalidChar(ch)) { if ('\0' == _replacementCharacter) return Peek(); // peek at the next one return _replacementCharacter; } return ch; } public override int Read() { int ch = base.Read(); if (ch != -1 && IsInvalidChar(ch)) { if ('\0' == _replacementCharacter) return Read(); // read next one return _replacementCharacter; } return ch; } public override int Read(char[] buffer, int index, int count) { int readCount= 0, ch; for (int i = 0; i < count && (ch = Read()) != -1; i++) { readCount++; buffer[index + i] = (char)ch; } return readCount; } private static bool IsInvalidChar(int ch) { return !XmlConvert.IsXmlChar((char)ch); } } A: Use this function to remove invalid xml characters. public static string CleanInvalidXmlChars(string text) { string re = @"[^\x09\x0A\x0D\x20-\xD7FF\xE000-\xFFFD\x10000-x10FFFF]"; return Regex.Replace(text, re, ""); } A: Try this for PHP! 
$goodUTF8 = iconv("utf-8", "utf-8//IGNORE", $badUTF8); A: private static String removeNonUtf8CompliantCharacters( final String inString ) { if (null == inString ) return null; byte[] byteArr = inString.getBytes(); for ( int i=0; i < byteArr.length; i++ ) { byte ch= byteArr[i]; // remove any characters outside the valid UTF-8 range as well as all control characters // except tabs and new lines if ( !( (ch > 31 && ch < 253 ) || ch == '\t' || ch == '\n' || ch == '\r') ) { byteArr[i]=' '; } } return new String( byteArr ); } A: You can pass non-UTF characters with the following: string sFinalString = ""; string hex = ""; foreach (char ch in UTFCHAR) { int tmp = ch; if ((ch < 0x00FD && ch > 0x001F) || ch == '\t' || ch == '\n' || ch == '\r') { sFinalString += ch; } else { sFinalString += "&#" + tmp+";"; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/20762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: XRef Relationships in dbml So I have a database schema like this:
Users
    UserId
RoleUserXRef
    RoleUserId
    RoleId
    UserId
Roles
    RoleId
    Name
With foreign keys defined between User & RoleUserXRef and RoleUserXRef & Role. Basically, I have a many-to-many relationship between users and roles. How would I model this in dbml, such that the generated User class has a list of Roles that the user has assigned to them?
A: Creating a many-to-many relationship via simple DBML manipulation is not supported currently. You can extend the partial class to manually create properties, if you really want that sort of functionality "built in".
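To make the partial-class suggestion concrete, a minimal sketch (RoleUserXRefs and Role are assumptions about the association names the LINQ to SQL designer generates from your foreign keys; check the .designer.cs file for the actual names):
using System.Collections.Generic;

public partial class User
{
    // Convenience view over the generated xref association.
    public IEnumerable<Role> Roles
    {
        get
        {
            foreach (RoleUserXRef xref in this.RoleUserXRefs)
                yield return xref.Role;
        }
    }
}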
{ "language": "en", "url": "https://stackoverflow.com/questions/20765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you convert binary data to Strings and back in Java? I have binary data in a file that I can read into a byte array and process with no problem. Now I need to send parts of the data over a network connection as elements in an XML document. My problem is that when I convert the data from an array of bytes to a String and back to an array of bytes, the data is getting corrupted. I've tested this on one machine to isolate the problem to the String conversion, so I now know that it isn't getting corrupted by the XML parser or the network transport. What I've got right now is
byte[] buffer = ...; // read from file
// a few lines that prove I can process the data successfully
String element = new String(buffer);
byte[] newBuffer = element.getBytes();
// a few lines that try to process newBuffer and fail because it is not the same data anymore
Does anyone know how to convert binary to String and back without data loss? Answered: Thanks Sam. I feel like an idiot. I had this answered yesterday because my SAX parser was complaining. For some reason when I ran into this seemingly separate issue, it didn't occur to me that it was a new symptom of the same problem. EDIT: Just for the sake of completeness, I used the Base64 class from the Apache Commons Codec package to solve this problem.
A: String(byte[]) treats the data as the default character encoding. So, how bytes get converted from 8-bit values to 16-bit Java Unicode chars will vary not only between operating systems, but can even vary between different users using different codepages on the same machine! This constructor is only good for decoding one of your own text files. Do not try to convert arbitrary bytes to chars in Java! Encoding as base64 is a good solution. This is how files are sent over SMTP (e-mail). The (free) Apache Commons Codec project will do the job.
byte[] bytes = loadFile(file);
//all chars in encoded are guaranteed to be 7-bit ASCII
byte[] encoded = Base64.encodeBase64(bytes);
String printMe = new String(encoded, "US-ASCII");
System.out.println(printMe);
byte[] decoded = Base64.decodeBase64(encoded);
Alternatively, you can use the Java 6 DatatypeConverter:
import java.io.*;
import java.nio.channels.*;
import javax.xml.bind.DatatypeConverter;

public class EncodeDecode {
    public static void main(String[] args) throws Exception {
        File file = new File("/bin/ls");
        byte[] bytes = loadFile(file, new ByteArrayOutputStream()).toByteArray();
        String encoded = DatatypeConverter.printBase64Binary(bytes);
        System.out.println(encoded);
        byte[] decoded = DatatypeConverter.parseBase64Binary(encoded);
        // check
        for (int i = 0; i < bytes.length; i++) {
            assert bytes[i] == decoded[i];
        }
    }

    private static <T extends OutputStream> T loadFile(File file, T out) throws IOException {
        FileChannel in = new FileInputStream(file).getChannel();
        try {
            assert in.size() == in.transferTo(0, in.size(), Channels.newChannel(out));
            return out;
        } finally {
            in.close();
        }
    }
}
A: If you encode it in base64, this will turn any data into ASCII-safe text, but base64 encoded data is larger than the original data
A: See this question, How do you embed binary data in XML? Instead of converting the byte[] into String then pushing into XML somewhere, convert the byte[] to a String via BASE64 encoding (some XML libraries have a type to do this for you). Then BASE64 decode once you get the String back from XML. 
Use http://commons.apache.org/codec/ Your data may be getting messed up due to all sorts of weird character set restrictions and the presence of non-printing characters. Stick w/ BASE64.
A: How are you building your XML document? If you use java's built in XML classes then the string encoding should be handled for you. Take a look at the javax.xml and org.xml packages. That's what we use for generating XML docs, and it handles all the string encoding and decoding quite nicely.
---EDIT: Hmm, I think I misunderstood the problem. You're not trying to encode a regular string, but some set of arbitrary binary data? In that case the Base64 encoding suggested in an earlier comment is probably the way to go. I believe that's a fairly standard way of encoding binary data in XML.
{ "language": "en", "url": "https://stackoverflow.com/questions/20778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Call Project Server Interface web method from an msi installer I'm using a Visual Studio web setup project to install an application that extends the functionality of Project Server. I want to call a method from the PSI (Project Server Interface) from one of the custom actions of my setup project, but every time I get a "401 Unauthorized access" error. What should I do to be able to access the PSI? The same code, when used from a Console Application, works without any issues.
A: It sounds like in the console situation you are running with your current user credentials, which have access to the PSI. When running from the web, it's running with the creds of the IIS application instance. I think you'd either need to set up delegation to pass the session creds to the IIS application, or use some static creds for your IIS app that have access to the PSI.
A: I finally found the answer. You can call the LoginWindows PSI service and set the credentials to NetworkCredentials using the appropriate user, password and domain tokens. Then you can call any PSI method, as long as the credentials are explicit. Otherwise, using DefaultCredentials, you'll get an Unauthorized Access error, because an msi is run under the Local System account.
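A minimal sketch of that explicit-credentials approach, assuming proxy classes generated from the PSI web services (the proxy class names, URL, and account values below are placeholder assumptions - match them to your own web references):
// LoginWindows and Project are assumed names for your generated proxy classes.
LoginWindows loginSvc = new LoginWindows();
loginSvc.Url = "http://yourserver/ProjectServer/_vti_bin/psi/LoginWindows.asmx";
loginSvc.Credentials = new System.Net.NetworkCredential("user", "password", "DOMAIN");
loginSvc.Login();

Project projectSvc = new Project();
projectSvc.Url = "http://yourserver/ProjectServer/_vti_bin/psi/Project.asmx";
projectSvc.Credentials = loginSvc.Credentials; // explicit, never DefaultCredentials
// ... any PSI call on projectSvc now authenticates as that user ...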
{ "language": "en", "url": "https://stackoverflow.com/questions/20782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: When to use STL bitsets instead of separate variables? In what situation would it be more appropriate for me to use a bitset (STL container) to manage a set of flags rather than having them declared as a number of separate (bool) variables? Will I get a significant performance gain if I used a bitset for 50 flags rather than using 50 separate bool variables?
A: std::bitset will give you extra points when you need to serialize / deserialize it. You can just write it to a stream or read from a stream with it. But certainly, the separate bools are going to be faster. They are optimized for this kind of use after all, while a bitset is optimized for space, and still has function calls involved. It will never be faster than separate bools.
Bitset
* *Very space efficient *Less efficient due to bit fiddling *Provides serialize / de-serialize with op<< and op>> *All bits packed together: You will have the flags at one place.
Separate bools
* *Very fast *Bools are not packed together. They will be members somewhere.
Decide on the facts. I, personally, would use std::bitset for something not performance-critical, and would use bools if I either have only a few bools (and thus it's easy to keep an overview) or if I need the extra performance.
A: It depends what you mean by 'performance gain'. If you only need 50 of them, and you're not low on memory, then separate bools are pretty much always a better choice than a bitset. They will take more memory, but the bools will be much faster. A bitset is usually implemented as an array of ints (the bools are packed into those ints). So the first 32 bools (bits) in your bitset will only take up a single 32bit int, but to read each value you have to do some bitwise operations first to mask out all the values you don't want. E.g. to read the 2nd bit of a bitset, you need to:
* *Find the int that contains the bit you want (in this case, it's the first int) *Bitwise And that int with '2' (i.e. value & 0x02) to find out if that bit is set
However, if memory is a bottleneck and you have a lot of bools, using a bitset could make sense (e.g. if your target platform is a mobile phone, or it's some state in a very busy web service)
NOTE: A std::vector of bool usually has a specialisation to use the equivalent of a bitset, thus making it much smaller and also slower for the same reasons. So if speed is an issue, you'll be better off using a vector of char (or even int), or even just use an old school bool array.
A: RE @Wilka: Actually, bit-fields are supported by C/C++ in a way that doesn't require you to do your own masking. I don't remember the exact syntax, but it's something like this:
struct MyBitset {
    bool firstOption:1;
    bool secondOption:1;
    bool thirdOption:1;
    int fourBitNumber:4;
};
You can reference any value in that struct by just using dot notation, and the right things will happen:
MyBitset bits;
bits.firstOption = true;
bits.fourBitNumber = 2;
if(bits.thirdOption) {
    // Whatever!
}
You can use arbitrary bit sizes for things. The resulting struct can be up to 7 bits larger than the data you define (its size is always the minimum number of bytes needed to store the data you defined).
A: Well, 50 bools as a bitset will take 7 bytes, while 50 bools as bools will take 50 bytes. These days that's not really a big deal, so using bools is probably fine. However, one place a bitset might be useful is if you need to pass those bools around a lot, especially if you need to return the set from a function. 
Using a bitset you have less data that has to be moved around on the stack for returns. Then again, you could just use refs instead and have even less data to pass around. :)
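For a quick feel of the trade-off being discussed, a small self-contained sketch (standard C++ only, nothing assumed beyond the discussion above):
#include <bitset>
#include <iostream>

int main() {
    std::bitset<50> flags;             // 50 flags packed into a word or two
    flags.set(3);                      // turn flag 3 on
    flags[10] = true;                  // operator[] works as well
    if (flags.test(3))
        std::cout << flags.count() << " flags set\n"; // prints "2 flags set"
    std::cout << flags << '\n';        // streams as a 0/1 string - trivial serialization
    return 0;
}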
{ "language": "en", "url": "https://stackoverflow.com/questions/20787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: What tools do you use for static code analysis? This question on Cyclomatic Complexity made me think more about static code analysis. Analyzing code complexity and consistency is occasionally useful, and I'd like to start doing it more. What tools do you recommend (per language) for such analysis? Wikipedia has a large list of tools, but which ones have people tried before? Edit: As David points out, this is not a completely unasked question when it comes to C/UNIX based tools.
A: For C and Objective-C, you can also use the LLVM/Clang Static Analyzer. It's Open Source and under active development.
A: For .Net we use NDepend. It is a great tool and can be integrated into the build (we use CCNet). http://www.ndepend.com/ HTH.
A: For C++, I use CppCheck. It seems to work fine.
A: I have been setting up a Hudson continuous integration (CI) build system for my Objective-C iPhone projects (iOS apps), and have compiled a varied list of tools that can be used to analyze my projects during a build:
* *Clang static analyzer: free, up-to-date stand-alone tool that catches more issues than the version of Clang included with Xcode 4. Active project. -- visit http://clang-analyzer.llvm.org *Doxygen: free documentation generation tool that also generates class dependency diagrams. Active project -- visit http://www.doxygen.nl *HFCCA (header-free cyclomatic complexity analyzer): free Python script to calculate code complexity, but without header files and pre-processors. Supports output in XML format for Hudson/Jenkins builds. Active project. -- visit http://code.google.com/p/headerfile-free-cyclomatic-complexity-analyzer *CLOC (count lines of code): free tool to count files, lines of code, comments, and blank lines. Supports diffing, so you can see the differences between builds. Active project. -- visit http://cloc.sourceforge.net *SLOCcount (source lines of code count): a free tool to count lines of code and estimate the costs and time associated with a project. Does not appear to be active. -- visit http://sourceforge.net/projects/sloccount and http://www.dwheeler.com/sloccount *AnalysisTool: free code analysis tool that measures code complexity and also generates dependency diagrams. Not active. Does not seem to work with Xcode 4, but I would love to get it working. -- visit http://www.karppinen.fi/analysistool
A: I use the PMD plugin for Eclipse a lot. It's pretty nice, and very configurable. CheckStyle is also good, if you're looking for more of a style enforcer.
A: Checkstyle, Findbugs, and PMD all work pretty well in Java. I'm currently pretty happy with PMD running in NetBeans. It has a fairly simple GUI for managing what rules you want to run. It's also very easy to run the checker on one file, an entire package, or an entire project.
A: Obviously, the answer depends on the programming languages. UNO is good for C programs. @Thomas Owens: I think you meant Splint.
A: Lint is the only one I have used at a previous position. It wasn't bad; most of the things it suggested were good catches, though some didn't make much sense. As long as you don't have a process in place to ensure that there are no lint errors or warnings, it is still useful for catching some otherwise hidden bugs.
A: We use Programming Research's QAC for our C code. Works OK. Recently we have been talking about checking out some of the more advanced static/dynamic code analyzers like Coverity's Prevent or the analysis tool by GrammaTech. They claim to not only do static analysis but also find runtime errors etc. 
One major selling point is supposed to be fewer false positives.
A: We use Coverity Prevent at Palm for C and C++ code analysis, and it's done a great job of uncovering some hidden bugs in our code. It also finds a lot of problems that are not likely to be hit, but it's easy to mark those as "will not fix" or "not a problem" in the code database that the tool generates. It is expensive, but the company occasionally does runs on open source projects and provides reports to the maintainers. They have a whitepaper about our use of the product on their site if you want to read more about our experience.
A: My admins are really cheap, so I can only use really cheap tools: 1) CCCC (C / C++ Code Counter): various results related to number of lines (vs. lines of comments, cyclomatic complexity, information flow, ...) 2) Simian: fastest code duplication finder I ever tried. 3) LOC Metrix: not very useful but can help to make a point.
A: The only time I've ever used one of those tools is Split (C programming language). I thought it was helpful, but I was by no means a power user and I think I barely scratched the surface of what it could do.
A: I've used quite a few open-source and commercial static analysis tools across many languages and I find that the overall discussion fits under the umbrella of a concept known as technical debt (which this article goes into detail about). However, for a quick answer, here are my favorites per language:
* *Java: Google CodePro Analytix & Checkstyle *.NET: FxCop & StyleCop *C/C++: McCabe IQ
{ "language": "en", "url": "https://stackoverflow.com/questions/20788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: How do I set up a test cert for an SSL connection in .NET? I would like to create an SSL connection for generic TCP communication. I think I figured out how to do it in the code, using the info here: http://msdn.microsoft.com/en-us/library/system.net.security.sslstream.aspx What I'm having trouble with is creating a cert so I can test this out. I tried makecert.exe testCert, and that created a cert, but apparently it doesn't have a private key associated with it. So what I'm looking for is the simplest procedure to create a cert and get the connection to work.
A: I haven't found a simple way to do this yet, but I found this site helpful a few months back. O'Reilly also published a book called Network Security Hacks (available on Safari) that has a section starting at Hack #45 on creating your own certificate authority.
A: I asked a similar question referencing apache, but someone linked to a microsoft howto in the response. It may be helpful. Here
A: For anybody else attempting this: the pkcs12 certificate created as detailed in the openssl question linked here, used with the X509Certificate2.CreateFromCertFile method, gets past the private key problem. Note I used openssl with cygwin on Windows.
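For what it's worth, a minimal sketch of the server side once you do have a cert with a private key (the makecert switches in the comment, the file name, and the password are illustrative assumptions to adapt, not part of the answers above):
// A self-signed test cert with an exportable private key can be generated with
// something like: makecert -r -pe -n "CN=localhost" -sky exchange -ss my
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography.X509Certificates;

class SslServerSketch
{
    static void Serve(TcpClient client, X509Certificate2 serverCert)
    {
        using (SslStream ssl = new SslStream(client.GetStream(), false))
        {
            ssl.AuthenticateAsServer(serverCert); // performs the SSL handshake
            // ... then read/write on ssl like any other Stream ...
        }
    }

    static X509Certificate2 LoadCert()
    {
        // Load a PKCS#12 (.pfx) file exported together with its private key.
        return new X509Certificate2("server.pfx", "password");
    }
}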
{ "language": "en", "url": "https://stackoverflow.com/questions/20791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }