source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
128,923 | Many times I've seen links like these in HTML pages: <a href='#' onclick='someFunc(3.1415926); return false;'>Click here !</a> What's the effect of the return false in there? Also, I don't usually see that in buttons. Is this specified anywhere? In some spec in w3.org? | The return value of an event handler determines whether or not the default browser behaviour should take place as well. In the case of clicking on links, this would be following the link, but the difference is most noticeable in form submit handlers, where you can cancel a form submission if the user has made a mistake entering the information. I don't believe there is a W3C specification for this. All the ancient JavaScript interfaces like this have been given the nickname "DOM 0", and are mostly unspecified. You may have some luck reading old Netscape 2 documentation. The modern way of achieving this effect is to call event.preventDefault() , and this is specified in the DOM 2 Events specification . | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/128923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15649/"
]
} |
128,938 | I want to install a gem on all my application servers, but gem install requires sudo access - how can I enable sudo only for running this capistrano command? In other words, I don't wish to use sudo for all my deployment recipes, just when I invoke this command on the command line. | Found it - cap invoke COMMAND=" command that requires sudo " SUDO=1 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128938",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21788/"
]
} |
128,965 | When I started writing database queries I didn't know the JOIN keyword yet and naturally I just extended what I already knew and wrote queries like this: SELECT a.someRow, b.someRow FROM tableA AS a, tableB AS b WHERE a.ID=b.ID AND b.ID= $someVar Now that I know that this is the same as an INNER JOIN I find all these queries in my code and ask myself if I should rewrite them. Is there something smelly about them or are they just fine? My answer summary : There is nothing wrong with this query BUT using the keywords will most probably make the code more readable/maintainable. My conclusion : I will not change my old queries but I will correct my writing style and use the keywords in the future. | Filtering joins solely using WHERE can be extremely inefficient in some common scenarios. For example: SELECT * FROM people p, companies c WHERE p.companyID = c.id AND p.firstName = 'Daniel' Most databases will execute this query quite literally, first taking the Cartesian product of the people and companies tables and then filtering by those which have matching companyID and id fields. While the fully-unconstrained product does not exist anywhere but in memory and then only for a moment, its calculation does take some time. A better approach is to group the constraints with the JOIN s where relevant. This is not only subjectively easier to read but also far more efficient. Thusly: SELECT * FROM people p JOIN companies c ON p.companyID = c.id WHERE p.firstName = 'Daniel' It's a little longer, but the database is able to look at the ON clause and use it to compute the fully-constrained JOIN directly, rather than starting with everything and then limiting down. This is faster to compute (especially with large data sets and/or many-table joins) and requires less memory. I change every query I see which uses the "comma JOIN " syntax. In my opinion, the only purpose for its existence is conciseness. Considering the performance impact, I don't think this is a compelling reason. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11995/"
]
} |
128,990 | I have a base URL: http://my.server.com/folder/directory/sample And a relative one: ../../other/path How do I get the absolute URL from this? It's pretty straightforward using string manipulation, but I would like to do this in a secure way, using the Uri class or something similar. It's for a standard C# app, not an ASP.NET one. | var baseUri = new Uri("http://my.server.com/folder/directory/sample");
var absoluteUri = new Uri(baseUri, "../../other/path"); OR, since Uri.TryCreate has no (string, string, out Uri) overload, pass the base as a Uri : Uri uri;
if (Uri.TryCreate(new Uri("http://base/"), "../relative", out uri)) doSomething(uri); | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/128990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4687/"
]
} |
129,023 | I have a questionable coding practice. When I need to iterate through a small list of items whose count limit is under 32000 , I use Int16 for my i variable type instead of Integer . I do this because I assume using the Int16 is more efficient than a full blown Integer . Am I wrong? Is there no effective performance difference between using an Int16 vs an Integer ? Should I stop using Int16 and just stick with Integer for all my counting/iteration needs? | You should almost always use Int32 or Int64 (and, no, you do not get credit by using UInt32 or UInt64 ) when looping over an array or collection by index. The most obvious reason that it's less efficient is that all array and collection indexes found in the BCL take Int32 s, so an implicit cast is always going to happen in code that tries to use Int16 s as an index. The less-obvious reason (and the reason that arrays take Int32 as an index) is that the CIL specification says that all operation-stack values are either Int32 or Int64 . Every time you either load or store a value to any other integer type ( Byte , SByte , UInt16 , Int16 , UInt32 , or UInt64 ), there is an implicit conversion operation involved. Unsigned types have no penalty for loading, but for storing the value, this amounts to a truncation and a possible overflow check. For the signed types every load sign-extends, and every store sign-collapses (and has a possible overflow check). The place that this is going to hurt you most is the loop itself, not the array accesses. For example take this innocent-looking loop: for (short i = 0; i < 32000; i++) { ...} Looks good, right? Nope! You can basically ignore the initialization ( short i = 0 ) since it only happens once, but the comparison ( i<32000 ) and incrementing ( i++ ) parts happen 32000 times. Here's some pesudo-code for what this thing looks like at the machine level: Int16 i = 0;LOOP: Int32 temp0 = Convert_I16_To_I32(i); // !!! if (temp0 >= 32000) goto END; ... Int32 temp1 = Convert_I16_To_I32(i); // !!! Int32 temp2 = temp1 + 1; i = Convert_I32_To_I16(temp2); // !!! goto LOOP;END: There are 3 conversions in there that are run 32000 times. And they could have been completely avoided by just using an Int32 or Int64 . Update: As I said in the comment, I have now, in fact written a blog post on this topic, .NET Integral Data Types And You | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/129023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17235/"
]
} |
129,036 | I am writing a component that, given a ZIP file, needs to: Unzip the file. Find a specific dll among the unzipped files. Load that dll through reflection and invoke a method on it. I'd like to unit test this component. I'm tempted to write code that deals directly with the file system: void DoIt()
{
    Zip.Unzip(theZipFile, "C:\\foo\\Unzipped");
    System.IO.File myDll = File.Open("C:\\foo\\Unzipped\\SuperSecret.bar");
    myDll.InvokeSomeSpecialMethod();
} But folks often say, "Don't write unit tests that rely on the file system, database, network, etc." If I were to write this in a unit-test friendly way, I suppose it would look like this: void DoIt(IZipper zipper, IFileSystem fileSystem, IDllRunner runner)
{
    string path = zipper.Unzip(theZipFile);
    IFakeFile file = fileSystem.Open(path);
    runner.Run(file);
} Yay! Now it's testable; I can feed in test doubles (mocks) to the DoIt method. But at what cost? I've now had to define 3 new interfaces just to make this testable. And what, exactly, am I testing? I'm testing that my DoIt function properly interacts with its dependencies. It doesn't test that the zip file was unzipped properly, etc. It doesn't feel like I'm testing functionality anymore. It feels like I'm just testing class interactions. My question is this: what's the proper way to unit test something that is dependent on the file system? Edit: I'm using .NET, but the concept could apply to Java or native code too. | There's really nothing wrong with this, it's just a question of whether you call it a unit test or an integration test. You just have to make sure that if you do interact with the file system, there are no unintended side effects. Specifically, make sure that you clean up after yourself -- delete any temporary files you created -- and that you don't accidentally overwrite an existing file that happened to have the same filename as a temporary file you were using. Always use relative paths and not absolute paths. It would also be a good idea to chdir() into a temporary directory before running your test, and chdir() back afterwards. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/129036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/536/"
]
} |
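To illustrate the cleanup advice in the answer above, here is a minimal C# sketch of a file-system-dependent test that works in its own temporary directory and removes it afterwards. The commented-out component call is the hypothetical API from the question, not a real library.

```csharp
using System;
using System.IO;

public class ZipComponentTests
{
    public void DoIt_UnzipsAndInvokesDll()
    {
        // Work in a unique temp directory so the test has no side effects
        // and cannot collide with any existing files.
        string workDir = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N"));
        Directory.CreateDirectory(workDir);
        string originalDir = Directory.GetCurrentDirectory();

        try
        {
            Directory.SetCurrentDirectory(workDir);

            // Exercise the real component against a known test fixture
            // (hypothetical API from the question):
            // new ZipComponent().DoIt("fixture.zip");
        }
        finally
        {
            // Always restore the working directory and delete what we created.
            Directory.SetCurrentDirectory(originalDir);
            Directory.Delete(workDir, true);
        }
    }
}
```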
129,046 | How would I disable and later enable all indexes in a given schema/database in Oracle? Note: This is to make sqlldr run faster. | Here's how to make the indexes unusable without generating a script file: DECLARE
  CURSOR usr_idxs IS select * from user_indexes;
  cur_idx usr_idxs%ROWTYPE;
  v_sql VARCHAR2(1024);
BEGIN
  OPEN usr_idxs;
  LOOP
    FETCH usr_idxs INTO cur_idx;
    EXIT WHEN NOT usr_idxs%FOUND;
    v_sql := 'ALTER INDEX ' || cur_idx.index_name || ' UNUSABLE';
    EXECUTE IMMEDIATE v_sql;
  END LOOP;
  CLOSE usr_idxs;
END; The rebuild would be similar. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/129046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9435/"
]
} |
129,077 | This issue came up when I got different records counts for what I thought were identical queries one using a not in where constraint and the other a left join . The table in the not in constraint had one null value (bad data) which caused that query to return a count of 0 records. I sort of understand why but I could use some help fully grasping the concept. To state it simply, why does query A return a result but B doesn't? A: select 'true' where 3 in (1, 2, 3, null)B: select 'true' where 3 not in (1, 2, null) This was on SQL Server 2005. I also found that calling set ansi_nulls off causes B to return a result. | Query A is the same as: select 'true' where 3 = 1 or 3 = 2 or 3 = 3 or 3 = null Since 3 = 3 is true, you get a result. Query B is the same as: select 'true' where 3 <> 1 and 3 <> 2 and 3 <> null When ansi_nulls is on, 3 <> null is UNKNOWN, so the predicate evaluates to UNKNOWN, and you don't get any rows. When ansi_nulls is off, 3 <> null is true, so the predicate evaluates to true, and you get a row. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/129077",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12752/"
]
} |
129,088 | I am writing a script for MS PowerShell. This script uses the Copy-Item command. One of the optional arguments to this command is " -container ". The documentation for the argument states that specifying this argument "Preserves container objects during the copy operation." This is all well and good, for I would be the last person to want unpreserved container objects during a copy operation. But in all seriousness, what does this argument do? Particularly in the case where I am copying a disk directory tree from one place to another, what difference does this make to the behavior of the Copy-Item command? | The container the documentation is talking about is the folder structure. If you are doing a recursive copy and want to preserve the folder structure, you would use the -container switch. (Note: by default the -container switch is set to true, so you really would not need to specify it. If you wanted to turn it off you could use -container: $false .) There is a catch to this... if you do a directory listing and pipe it to Copy-Item, it will not preserve the folder structure. If you want to preserve the folder structure, you have to specify the -path property and the -recurse switch. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129088",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9117/"
]
} |
129,120 | I've been a professional software engineer for about a year now, having graduated with a CS degree. I've known about assertions for a while in C++ and C, but had no idea they existed in C# and .NET at all until recently. Our production code contains no asserts whatsoever and my question is this... Should I begin using Asserts in our production code? And if so, When is its use most appropriate? Would it make more sense to do Debug.Assert(val != null); or if ( val == null ) throw new exception(); | In Debugging Microsoft .NET 2.0 Applications John Robbins has a big section on assertions. His main points are: Assert liberally. You can never have too many assertions. Assertions don't replace exceptions. Exceptions cover the things your code demands; assertions cover the things it assumes. A well-written assertion can tell you not just what happened and where (like an exception), but why. An exception message can often be cryptic, requiring you to work backwards through the code to recreate the context that caused the error. An assertion can preserve the program's state at the time the error occurred. Assertions double as documentation, telling other developers what implied assumptions your code depends on. The dialog that appears when an assertion fails lets you attach a debugger to the process, so you can poke around the stack as if you had put a breakpoint there. PS: If you liked Code Complete, I recommend following it up with this book. I bought it to learn about using WinDBG and dump files, but the first half is packed with tips to help avoid bugs in the first place. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/129120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8945/"
]
} |
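As a rough illustration of the "exceptions cover demands, assertions cover assumptions" distinction in the answer above, here is a small C# sketch; the Order type, its Total property, and the processing rule are invented for the example.

```csharp
using System;
using System.Diagnostics;

public class Order
{
    public decimal Total { get; set; }
}

public static class OrderProcessor
{
    // Hypothetical example method.
    public static void Process(Order order)
    {
        // Demand: callers must never pass null. This is part of the contract,
        // so it throws in both debug and release builds.
        if (order == null)
            throw new ArgumentNullException("order");

        // Assumption: by the time we get here, our own code should already have
        // priced the order. Debug.Assert documents that assumption, explains why,
        // and is compiled out of builds where the DEBUG symbol is not defined.
        Debug.Assert(order.Total >= 0,
            "Order total is negative",
            "Process expects pricing to have run before it is called.");
    }
}
```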
129,133 | How do I view the SQL that is generated by nHibernate? version 1.2 | You can put something like this in your app.config/web.config file : in the configSections node : <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/> in the configuration node : <log4net> <appender name="NHibernateFileLog" type="log4net.Appender.FileAppender"> <file value="logs/nhibernate.txt" /> <appendToFile value="false" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d{HH:mm:ss.fff} [%t] %-5p %c - %m%n" /> </layout> </appender> <logger name="NHibernate.SQL" additivity="false"> <level value="DEBUG"/> <appender-ref ref="NHibernateFileLog"/> </logger></log4net> And don't forget to call log4net.Config.XmlConfigurator.Configure(); at the startup of your application, or to put [assembly: log4net.Config.XmlConfigurator(Watch=true)] in the assemblyinfo.cs In the configuration settings, set the "show_sql" property to true. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1642688/"
]
} |
129,144 | Sometimes I find myself in the situation where I want to execute several sequential commands like such: try:
    foo(a, b)
except Exception, e:
    baz(e)
try:
    bar(c, d)
except Exception, e:
    baz(e)
... This same pattern occurs when exceptions simply need to be ignored. This feels redundant, and the excessive syntax makes it surprisingly difficult to follow when reading code. In C, I would have solved this type of problem easily with a macro, but unfortunately, this cannot be done in straight Python. Question: How can I best reduce the code footprint and increase code readability when coming across this pattern? | You could use the with statement if you have Python 2.5 or above: from __future__ import with_statement
import contextlib

@contextlib.contextmanager
def handler():
    try:
        yield
    except Exception, e:
        baz(e) Your example now becomes: with handler():
    foo(a, b)
with handler():
    bar(c, d) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/129144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9241/"
]
} |
129,160 | The resource definition in tomcat's server.xml looks something like this... <Resource name="jdbc/tox" scope="Shareable" type="javax.sql.DataSource" url="jdbc:oracle:thin:@yourDBserver.yourCompany.com:1521:yourDBsid" driverClassName="oracle.jdbc.pool.OracleDataSource" username="tox" password="toxbaby" maxIdle="3" maxActive="10" removeAbandoned="true" removeAbandonedTimeout="60" testOnBorrow="true" validationQuery="select * from dual" logAbandoned="true" debug="99"/> The password is in the clear. How to avoid this? | As said before encrypting passwords is just moving the problem somewhere else. Anyway, it's quite simple.Just write a class with static fields for your secret key and so on, and static methods to encrypt, decrypt your passwords.Encrypt your password in Tomcat's configuration file ( server.xml or yourapp.xml ...) using this class. And to decrypt the password "on the fly" in Tomcat, extend the DBCP's BasicDataSourceFactory and use this factory in your resource. It will look like: <Resource name="jdbc/myDataSource" auth="Container" type="javax.sql.DataSource" username="user" password="encryptedpassword" driverClassName="driverClass" factory="mypackage.MyCustomBasicDataSourceFactory" url="jdbc:blabla://..."/> And for the custom factory: package mypackage;....public class MyCustomBasicDataSourceFactory extends org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory {@Overridepublic Object getObjectInstance(Object obj, Name name, Context nameCtx, Hashtable environment) throws Exception { Object o = super.getObjectInstance(obj, name, nameCtx, environment); if (o != null) { BasicDataSource ds = (BasicDataSource) o; if (ds.getPassword() != null && ds.getPassword().length() > 0) { String pwd = MyPasswordUtilClass.unscramblePassword(ds.getPassword()); ds.setPassword(pwd); } return ds; } else { return null; }} Hope this helps. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/129160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13930/"
]
} |
129,178 | I have recently been thinking about the difference between the two ways of defining an array: int[] array int array[] Is there a difference? | They are semantically identical. The int array[] syntax was only added to help C programmers get used to java. int[] array is much preferable, and less confusing. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/129178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21199/"
]
} |
129,181 | Is there a way to format a UTC time into any arbitrary string format I want in java? Basically I was thinking of having some class take the timestamp and I pass it is string telling it how I want it formated, and it returns the formatted string for me. Is there a way to do this? | The java.text.SimpleDateFormat class provides formatting and parsing for dates in a locale-sensitive manner. The javadoc header for SimpleDateFormat is a good source of detailed information. There is also a Java Tutorial with example usages. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2328/"
]
} |
129,207 | Is there a way to statically/globally request a copy of the ApplicationContext in a Spring application? Assuming the main class starts up and initializes the application context, does it need to pass that down through the call stack to any classes that need it, or is there a way for a class to ask for the previously created context? (Which I assume has to be a singleton?) | If the object that needs access to the container is a bean in the container, just implement the BeanFactoryAware or ApplicationContextAware interfaces. If an object outside the container needs access to the container, I've used a standard GoF singleton pattern for the spring container. That way, you only have one singleton in your application, the rest are all singleton beans in the container. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/129207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14057/"
]
} |
129,265 | I have a Postgresql database on which I want to do a few cascading deletes. However, the tables aren't set up with the ON DELETE CASCADE rule. Is there any way I can perform a delete and tell Postgresql to cascade it just this once? Something equivalent to DELETE FROM some_table CASCADE; The answers to this older question make it seem like no such solution exists, but I figured I'd ask this question explicitly just to be sure. | No. To do it just once you would simply write the delete statement for the table you want to cascade. DELETE FROM some_child_table WHERE some_fk_field IN (SELECT some_id FROM some_Table);DELETE FROM some_table; | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/129265",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1694/"
]
} |
129,267 | There have been a few questions asked here about why you can't define static methods within interfaces, but none of them address a basic inconsistency: why can you define static fields and static inner types within an interface, but not static methods? Static inner types perhaps aren't a fair comparison, since that's just syntactic sugar that generates a new class, but why fields but not methods? An argument against static methods within interfaces is that it breaks the virtual table resolution strategy used by the JVM, but shouldn't that apply equally to static fields, i.e. the compiler can just inline it? Consistency is what I desire, and Java should have either supported no statics of any form within an interface, or it should be consistent and allow them. | An official proposal has been made to allow static methods in interfaces in Java 7. This proposal is being made under Project Coin . My personal opinion is that it's a great idea. There is no technical difficulty in implementation, and it's a very logical, reasonable thing to do. There are several proposals in Project Coin that I hope will never become part of the Java language, but this is one that could clean up a lot of APIs. For example, the Collections class has static methods for manipulating any List implementation; those could be included in the List interface. Update: In the Java Posse Podcast #234, Joe D'arcy mentioned the proposal briefly, saying that it was "complex" and probably would not make it in under Project Coin. Update: While they didn't make it into Project Coin for Java 7, Java 8 does support static functions in interfaces. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/129267",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21234/"
]
} |
129,277 | This is likely going to be an easy answer and I'm just missing something, but here goes...If I have a Type, (that is, an actual System.Type...not an instance) how do I tell if it inherits from another specific base type? | Use the IsSubclassOf method of the System.Type class. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5469/"
]
} |
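A quick sketch of how that looks in practice, including two common gotchas: IsSubclassOf returns false for the type itself, and it does not cover interfaces (use IsAssignableFrom for those).

```csharp
using System;

class Program
{
    static void Main()
    {
        Type derived = typeof(ArgumentNullException);
        Type baseType = typeof(Exception);

        Console.WriteLine(derived.IsSubclassOf(baseType));   // True: walks the inheritance chain
        Console.WriteLine(baseType.IsSubclassOf(baseType));  // False: a type is not a subclass of itself

        // For interfaces, IsSubclassOf is always false; use IsAssignableFrom instead.
        Console.WriteLine(typeof(IDisposable).IsAssignableFrom(typeof(System.IO.MemoryStream))); // True
    }
}
```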
129,285 | Is it possible to add attributes at runtime or to change the value of an attribute at runtime? | Attributes are static metadata. Assemblies, modules, types, members, parameters, and return values aren't first-class objects in C# (e.g., the System.Type class is merely a reflected representation of a type). You can get an instance of an attribute for a type and change the properties if they're writable but that won't affect the attribute as it is applied to the type. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/129285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16979/"
]
} |
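A small sketch of the point in the answer above; the attribute class, its property, and the decorated type are invented for illustration. You can mutate the instance reflection hands you, but the metadata baked into the assembly is unchanged, so the next lookup returns the original value.

```csharp
using System;

[AttributeUsage(AttributeTargets.Class)]
public class TagAttribute : Attribute   // hypothetical example attribute
{
    public string Value { get; set; }
}

[Tag(Value = "original")]
public class Widget { }

class Program
{
    static void Main()
    {
        var first = (TagAttribute)Attribute.GetCustomAttribute(typeof(Widget), typeof(TagAttribute));
        first.Value = "changed";          // only changes this in-memory instance

        var second = (TagAttribute)Attribute.GetCustomAttribute(typeof(Widget), typeof(TagAttribute));
        Console.WriteLine(second.Value);  // prints "original": the applied attribute was not modified
    }
}
```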
129,305 | I often run into the problem that I have one stream full of data and want to write all of it into another stream. All the code examples out there use a buffer in the form of a byte array. Is there a more elegant way to do this? If not, what's the ideal size of the buffer, and which factors determine that value? | In .NET 4.0 we finally got a Stream.CopyTo method! Yay! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129305",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9632/"
]
} |
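For context, a brief sketch of both options: the one-liner on .NET 4.0 and later, and the manual buffered loop you would write on earlier versions. The buffer size is a judgment call, trading memory against the number of read/write calls; values in the 4 KB to 80 KB range are typical.

```csharp
using System.IO;

public static class StreamUtil
{
    // .NET 4.0 and later:
    public static void CopyNew(Stream source, Stream destination)
    {
        source.CopyTo(destination);
    }

    // Earlier versions: the classic buffered loop the question describes.
    public static void CopyOld(Stream source, Stream destination)
    {
        byte[] buffer = new byte[32 * 1024]; // arbitrary but reasonable buffer size
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            destination.Write(buffer, 0, read);
        }
    }
}
```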
129,329 | I understand the differences between optimistic and pessimistic locking. Now, could someone explain to me when I would use either one in general? And does the answer to this question change depending on whether or not I'm using a stored procedure to perform the query? But just to check, optimistic means "don't lock the table while reading" and pessimistic means "lock the table while reading." | Optimistic Locking is a strategy where you read a record, take note of a version number (other methods to do this involve dates, timestamps or checksums/hashes) and check that the version hasn't changed before you write the record back. When you write the record back you filter the update on the version to make sure it's atomic. (i.e. hasn't been updated between when you check the version and write the record to the disk) and update the version in one hit. If the record is dirty (i.e. different version to yours) you abort the transaction and the user can re-start it. This strategy is most applicable to high-volume systems and three-tier architectures where you do not necessarily maintain a connection to the database for your session. In this situation the client cannot actually maintain database locks as the connections are taken from a pool and you may not be using the same connection from one access to the next. Pessimistic Locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking but requires you to be careful with your application design to avoid Deadlocks . To use pessimistic locking you need either a direct connection to the database (as would typically be the case in a two tier client server application) or an externally available transaction ID that can be used independently of the connection. In the latter case you open the transaction with the TxID and then reconnect using that ID. The DBMS maintains the locks and allows you to pick the session back up through the TxID. This is how distributed transactions using two-phase commit protocols (such as XA or COM+ Transactions ) work. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/129329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
]
} |
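A rough C# sketch of the optimistic strategy described above, using a version column and checking the affected-row count; the table, columns, and connection details are invented for the example.

```csharp
using System.Data.SqlClient;

public class AccountRepository
{
    // Returns false if someone else updated the row since we read it.
    public bool TryUpdateBalance(string connectionString, int id, decimal newBalance, int expectedVersion)
    {
        const string sql =
            @"UPDATE Accounts
                 SET Balance = @balance, Version = Version + 1
               WHERE Id = @id AND Version = @version";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@balance", newBalance);
            command.Parameters.AddWithValue("@id", id);
            command.Parameters.AddWithValue("@version", expectedVersion);

            connection.Open();
            // Zero rows affected means the version check failed: the record was
            // dirty, so the caller should re-read and retry, or abort.
            return command.ExecuteNonQuery() == 1;
        }
    }
}
```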
129,335 | When you call RedirectToAction within a controller, it automatically redirects using an HTTP GET. How do I explicitly tell it to use an HTTP POST? I have an action that accepts both GET and POST requests, and I want to be able to RedirectToAction using POST and send it some values. Like this: this.RedirectToAction( "actionname", new RouteValueDictionary(new { someValue = 2, anotherValue = "text" })); I want the someValue and anotherValue values to be sent using an HTTP POST instead of a GET. Does anyone know how to do this? | HTTP doesn't support redirection to a page using POST. When you redirect somewhere, the HTTP "Location" header tells the browser where to go, and the browser makes a GET request for that page. You'll probably have to just write the code for your page to accept GET requests as well as POST requests. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/129335",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7831/"
]
} |
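In ASP.NET MVC terms, that usually means letting the target action answer GET as well, or carrying the values across the redirect in TempData rather than trying to re-POST them. A hedged sketch; the controller, action, and value names are made up for the example.

```csharp
using System.Web.Mvc;

public class OrdersController : Controller
{
    public ActionResult Start()
    {
        // Stash the values for the next request instead of trying to POST them.
        TempData["someValue"] = 2;
        TempData["anotherValue"] = "text";
        return RedirectToAction("Target");   // the browser will issue a GET
    }

    [AcceptVerbs(HttpVerbs.Get | HttpVerbs.Post)]
    public ActionResult Target()
    {
        var someValue = TempData["someValue"];
        var anotherValue = TempData["anotherValue"];
        return View();
    }
}
```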
129,389 | I want a true deep copy. In Java, this was easy, but how do you do it in C#? | Important Note BinaryFormatter has been deprecated, and will no longer be available in .NET after November 2023. See BinaryFormatter Obsoletion Strategy I've seen a few different approaches to this, but I use a generic utility method as such: public static T DeepClone<T>(this T obj){ using (var ms = new MemoryStream()) { var formatter = new BinaryFormatter(); formatter.Serialize(ms, obj); ms.Position = 0; return (T) formatter.Deserialize(ms); }} Notes: Your class MUST be marked as [Serializable] for this to work. Your source file must include the following code: using System.Runtime.Serialization.Formatters.Binary; using System.IO; | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/129389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18931/"
]
} |
129,405 | How do I configure Doxygen to document ActionScript files? I've included the *.as and *.asi files in Doxygen's search pattern, but the classes, functions and variables don't show up there. | Instead of Doxygen you should use a documentation generator that specifically supports the language. For ActionScript 2, you have a couple of choices: NaturalDocs ( example ) (free), ZenDoc (free), AS2Doc Pro ( example ) (commercial). If you are using ActionScript 3, Adobe includes a free documentation generator along with their open source compiler (the Flex SDK ), called " ASDoc ". If you are using FlashDevelop , the latest beta has a built-in GUI for running ASDoc, so you don't have to dirty your hands with the command line. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/129405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18790/"
]
} |
129,406 | When my browser renders the following test case, there's a gap below the image. From my understanding of CSS, the bottom of the blue box should touch the bottom of the red box. But that's not the case. Why? <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"><head> <title>foo</title></head><body> <div style="border: solid blue 2px; padding: 0px;"> <img alt='' style="border: solid red 2px; margin: 0px;" src="http://stackoverflow.com/Content/Img/stackoverflow-logo-250.png" /> </div></body></html> | Inline elements are vertically aligned to the baseline, not the very bottom of the containing box. This is because text needs a small amount of space underneath for descenders - the tails on letters like lowercase 'p'. So there is an imaginary line a short distance above the bottom, called the baseline, and inline elements are vertically aligned with it by default. There's two ways of fixing this problem. You can either specify that the image should be vertically aligned to the bottom, or you can set it to be a block element, in which case it is no longer treated as a part of the text. In addition to this, Internet Explorer has an HTML parsing bug that does not ignore trailing whitespace after a closing element, so removing this whitespace may be necessary if you are having problems with Internet Explorer compatibility. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/129406",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7598/"
]
} |
129,438 | I'm trying to teach myself how to use Modern Persistence Patterns (OR/M, Repository, etc) and development practices (TDD, etc). Because the best way (for me) to learn is by doing, I'd like to build some sort of demo application for myself. The problem is, I've got no idea what sort of application to build. I'd like to blog about my experience, so I'd like to build something of some worth to the community, but at the same time I want to avoid things that others are actively doing ( web commerce , forums ) or have been done to death (blog engines). Does anybody have any suggestions for a good pet project I could work on and maybe blog about my experiences with? | There are innumerable community-service organizations with little or no web presence. Pick a service organization -- any one -- Literacy Volunteers, Food Pantries, Home Furnishings Donations, Alcoholics Anonymous -- anything. The grass-roots community organizations benefit the most from involvement; they often need a more dynamic web presence but can't afford it. Look at their current web site. Build them something better. Donate it to them. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/129438",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16942/"
]
} |
129,445 | I'm new to postgreSQL and I have a simple question: I'm trying to create a simple script that creates a DB so I can later call it like this: psql -f createDB.sql I want the script to call other scripts (separate ones for creating tables, adding constraints, functions etc), like this: \i script1.sql\i script2.sql It works fine provided that createDB.sql is in the same dir . But if I move script2 to a directory under the one with createDB, and modify the createDB so it looks like this: \i script1.sql\i somedir\script2.sql I get an error: psql:createDB.sql:2: somedir: Permission denied I'm using Postgres Plus 8.3 for windows, default postgres user. EDIT: Silly me, unix slashes solved the problem. | Postgres started on Linux/Unix. I suspect that reversing the slash with fix it. \i somedir/script2.sql If you need to fully qualify something \i c:/somedir/script2.sql If that doesn't fix it, my next guess would be you need to escape the backslash. \i somedir\\script2.sql | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/129445",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21853/"
]
} |
129,453 | Every time I start in deep in a C# project, I end up with lots of events that really just need to pass a single item. I stick with the EventHandler / EventArgs practice, but what I like to do is have something like: public delegate void EventHandler<T>(object src, EventArgs<T> args);public class EventArgs<T>: EventArgs { private T item; public EventArgs(T item) { this.item = item; } public T Item { get { return item; } }} Later, I can have my public event EventHandler<Foo> FooChanged;public event EventHandler<Bar> BarChanged; However, it seems that the standard for .NET is to create a new delegate and EventArgs subclass for each type of event. Is there something wrong with my generic approach? EDIT: The reason for this post is that I just re-created this in a new project, and wanted to make sure it was ok. Actually, I was re-creating it as I posted. I found that there is a generic EventHandler<TEventArgs> , so you don't need to create the generic delegate, but you still need the generic EventArgs<T> class, because TEventArgs: EventArgs . Another EDIT: One downside (to me) of the built-in solution is the extra verbosity: public event EventHandler<EventArgs<Foo>> FooChanged; vs. public event EventHandler<Foo> FooChanged; It can be a pain for clients to register for your events though, because the System namespace is imported by default, so they have to manually seek out your namespace, even with a fancy tool like Resharper... Anyone have any ideas pertaining to that? | Delegate of the following form has been added since .NET Framework 2.0 public delegate void EventHandler<TArgs>(object sender, TArgs args) where TArgs : EventArgs You approach goes a bit further, since you provide out-of-the-box implementation for EventArgs with single data item, but it lacks several properties of the original idea: You cannot add more properties to the event data without changing dependent code. You will have to change the delegate signature to provide more data to the event subscriber. Your data object is generic, but it is also "anonymous", and while reading the code you will have to decipher the "Item" property from usages. It should be named according to the data it provides. Using generics this way you can't make parallel hierarchy of EventArgs, when you have hierarchy of underlying (item) types. E.g. EventArgs<BaseType> is not base type for EventArgs<DerivedType>, even if BaseType is base for DerivedType. So, I think it is better to use generic EventHandler<T>, but still have custom EventArgs classes, organized according to the requirements of the data model. With Visual Studio and extensions like ReSharper, it is only a matter of few commands to create new class like that. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/96/"
]
} |
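A short sketch of the shape the answer recommends: the built-in generic EventHandler<TEventArgs> delegate combined with a small, named EventArgs subclass (the Foo/FooChanged names follow the question).

```csharp
using System;

public class Foo { }

// A named args class keeps the property meaningful and lets you add
// more data later without changing the event signature.
public class FooChangedEventArgs : EventArgs
{
    public FooChangedEventArgs(Foo newFoo) { NewFoo = newFoo; }
    public Foo NewFoo { get; private set; }
}

public class FooContainer
{
    public event EventHandler<FooChangedEventArgs> FooChanged;

    protected virtual void OnFooChanged(Foo newFoo)
    {
        var handler = FooChanged;
        if (handler != null)
            handler(this, new FooChangedEventArgs(newFoo));
    }
}
```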
129,502 | This is on iPhone 0S 2.0. Answers for 2.1 are fine too, though I am unaware of any differences regarding tables. It feels like it should be possible to get text to wrap without creating a custom cell, since a UITableViewCell contains a UILabel by default. I know I can make it work if I create a custom cell, but that's not what I'm trying to achieve - I want to understand why my current approach doesn't work. I've figured out that the label is created on demand (since the cell supports text and image access, so it doesn't create the data view until necessary), so if I do something like this: cell.text = @""; // create the labelUILabel* label = (UILabel*)[[cell.contentView subviews] objectAtIndex:0]; then I get a valid label, but setting numberOfLines on that (and lineBreakMode) doesn't work - I still get single line text. There is plenty of height in the UILabel for the text to display - I'm just returning a large value for the height in heightForRowAtIndexPath . | Here is a simpler way, and it works for me: Inside your cellForRowAtIndexPath: function. The first time you create your cell: UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];if (cell == nil){ cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; cell.textLabel.lineBreakMode = UILineBreakModeWordWrap; cell.textLabel.numberOfLines = 0; cell.textLabel.font = [UIFont fontWithName:@"Helvetica" size:17.0];} You'll notice that I set the number of lines for the label to 0. This lets it use as many lines as it needs. The next part is to specify how large your UITableViewCell will be, so do that in your heightForRowAtIndexPath function: - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath{ NSString *cellText = @"Go get some text for your cell."; UIFont *cellFont = [UIFont fontWithName:@"Helvetica" size:17.0]; CGSize constraintSize = CGSizeMake(280.0f, MAXFLOAT); CGSize labelSize = [cellText sizeWithFont:cellFont constrainedToSize:constraintSize lineBreakMode:UILineBreakModeWordWrap]; return labelSize.height + 20;} I added 20 to my returned cell height because I like a little buffer around my text. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/129502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18017/"
]
} |
129,507 | How does one write a unittest that fails only if a function doesn't throw an expected exception? | Use TestCase.assertRaises (or TestCase.failUnlessRaises ) from the unittest module, for example: import mymodclass MyTestCase(unittest.TestCase): def test1(self): self.assertRaises(SomeCoolException, mymod.myfunc) | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/129507",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4766/"
]
} |
129,560 | What is the C# equivalent of the SQL Server 2005 real type? | It is a Single . See here for more info on mapping SQL Server data types to .NET data types. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
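In other words, SQL Server's real maps to System.Single (the C# float keyword), while SQL Server's float maps to System.Double. A tiny sketch of reading it back, with a hypothetical column name:

```csharp
using System.Data.SqlClient;

static class RealColumnExample
{
    // Assumes a reader positioned on a row with a real column called "Weight".
    static float ReadWeight(SqlDataReader reader)
    {
        int ordinal = reader.GetOrdinal("Weight");
        return reader.GetFloat(ordinal);   // real -> System.Single, i.e. C# float
    }
}
```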
129,607 | I am seeing both of them used in this script I am trying to debug and the literature is just not clear. Can someone demystify this for me? | Dynamic scoping. It is a neat concept. Many people don't use it, or understand it. Basically think of my as creating and anchoring a variable to one block of {}, a.k.a. a scope. my $foo if (true); # $foo lives and dies within the if statement. So a my variable is what you are used to, whereas with dynamic scoping $var can be declared anywhere and used anywhere. So with local you basically suspend the use of that global variable, and use a "local value" to work with it. So local creates a temporary scope for a temporary variable. $var = 4;
print $var, "\n";
&hello;
print $var, "\n";

# subroutines
sub hello {
    local $var = 10;
    print $var, "\n";
    &gogo; # calling subroutine gogo
    print $var, "\n";
}

sub gogo {
    $var++;
} This should print: 4
10
11
4 | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/129607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3208/"
]
} |
129,628 | I keep hearing this term tossed around in several different contexts. What is it? | Declarative programming is when you write your code in such a way that it describes what you want to do, and not how you want to do it. It is left up to the compiler to figure out the how. Examples of declarative programming languages are SQL and Prolog. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/129628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3208/"
]
} |
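SQL and Prolog are the canonical examples, but the same what-versus-how contrast shows up inside general-purpose languages too. Here is an illustrative C# comparison, using LINQ for the declarative half:

```csharp
using System.Collections.Generic;
using System.Linq;

class Example
{
    static List<int> ImperativeEvens(IEnumerable<int> numbers)
    {
        // How: spell out the loop, the test and the accumulation step by step.
        var result = new List<int>();
        foreach (int n in numbers)
        {
            if (n % 2 == 0)
                result.Add(n);
        }
        return result;
    }

    static List<int> DeclarativeEvens(IEnumerable<int> numbers)
    {
        // What: describe the result ("the even numbers, in order") and let the
        // library decide how to produce it.
        return numbers.Where(n => n % 2 == 0).ToList();
    }
}
```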
129,651 | In the following HTML, I'd like the frame around the image to be snug -- not to stretch out and take up all the available width in the parent container. I know there are a couple of ways to do this (including horrible things like manually setting its width to a particular number of pixels), but what is the right way? Edit: One answer suggests I turn off "display:block" -- but this causes the rendering to look malformed in every browser I've tested it in. Is there a way to get a nice-looking rendering with "display:block" off? Edit: If I add "float: left" to the pictureframe and "clear:both" to the P tag, it looks great. But I don't always want these frames floated to the left. Is there a more direct way to accomplish whatever "float" is doing? .pictureframe { display: block; margin: 5px; padding: 5px; border: solid brown 2px; background-color: #ffeecc;}#foo { border: solid blue 2px; float: left;}img { display: block;} <div id="foo"> <span class="pictureframe"> <img alt='' src="http://stackoverflow.com/favicon.ico" /> </span> <p> Why is the beige rectangle so wide? </p></div> | The right way is to use: .pictureframe { display: inline-block;} Edit: Floating the element also produces the same effect, this is because floating elements use the same shrink-to-fit algorithm for determining the width. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7598/"
]
} |
129,677 | Is there a catchall function somewhere that works well for sanitizing user input for SQL injection and XSS attacks, while still allowing certain types of HTML tags? | It's a common misconception that user input can be filtered. PHP even has a (now deprecated) "feature", called magic-quotes , that builds on this idea. It's nonsense. Forget about filtering (or cleaning, or whatever people call it). What you should do, to avoid problems, is quite simple: whenever you embed a a piece of data within a foreign code, you must treat it according to the formatting rules of that code. But you must understand that such rules could be too complicated to try to follow them all manually. For example, in SQL, rules for strings, numbers and identifiers are all different. For your convenience, in most cases there is a dedicated tool for such an embedding. For example, when you need to use a PHP variable in the SQL query, you have to use a prepared statement, that will take care of all the proper formatting/treatment. Another example is HTML: If you embed strings within HTML markup, you must escape it with htmlspecialchars . This means that every single echo or print statement should use htmlspecialchars . A third example could be shell commands: If you are going to embed strings (such as arguments) to external commands, and call them with exec , then you must use escapeshellcmd and escapeshellarg . Also, a very compelling example is JSON. The rules are so numerous and complicated that you would never be able to follow them all manually. That's why you should never ever create a JSON string manually, but always use a dedicated function, json_encode() that will correctly format every bit of data. And so on and so forth ... The only case where you need to actively filter data, is if you're accepting preformatted input. For example, if you let your users post HTML markup, that you plan to display on the site. However, you should be wise to avoid this at all cost, since no matter how well you filter it, it will always be a potential security hole. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/129677",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10680/"
]
} |
129,693 | I ruined several unit tests some time ago when I went through and refactored them to make them more DRY --the intent of each test was no longer clear. It seems there is a trade-off between tests' readability and maintainability. If I leave duplicated code in unit tests, they're more readable, but then if I change the SUT , I'll have to track down and change each copy of the duplicated code. Do you agree that this trade-off exists? If so, do you prefer your tests to be readable, or maintainable? | Readability is more important for tests. If a test fails, you want the problem to be obvious. The developer shouldn't have to wade through a lot of heavily factored test code to determine exactly what failed. You don't want your test code to become so complex that you need to write unit-test-tests. However, eliminating duplication is usually a good thing, as long as it doesn't obscure anything, and eliminating the duplication in your tests may lead to a better API. Just make sure you don't go past the point of diminishing returns. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/129693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4766/"
]
} |
129,772 | I've been using Winforms since .NET 1.1 and I want to start learning WPF. I'm looking for some good resources for a beginner in WPF. What should I read, what tools do I need, and what are the best practices I should follow? | Please have a look at this StackOverflow post , which has a list of book recommendations. In terms of best practices, get familiar with the M-V-VM pattern . It seems to have gained the most traction in WPF-land. Check out this post for what tools you can use for WPF development. The MSDN Forum is a great place for resources, as is the MSDN help files on WPF. My personal recommendation is for you to forget everything you have learnt about WinForms. WPF is a totally different model, and once I finally dropped my "I did it this way in WinForms, but that way doesn't work in WPF" I had one of those "lightbulb" moments. Hope this helps! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11361/"
]
} |
129,861 | It is a bit of a "chicken or egg" kind of query, but can someone dreamup a query that can return the name of the current database instance in which the query executes? Believe me when I say I understand the paradox: why do you need to know the name of the database instance if you're already connected to execute the query? Auditing in a multi-database environment. I've looked at all the @@ globals in Books Online. " SELECT @@servername " comes close, but I want the name of the database instance rather than the server. | SELECT DB_NAME() Returns the database name. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129861",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/470/"
]
} |
129,921 | I've heard the term MVC (Model View Controller) tossed about with a ton of buzz lately, but what is it, really? | You might want to take a look at what Martin Fowler has to say about MVC, MVP and UI architectures in general at Martin Fowler's site . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/129921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7831/"
]
} |
129,945 | Can someone explain what exactly the string "0 but true" means in Perl? As far as I understand, it equals zero in an integer comparison, but evaluates to true when used as a boolean. Is this correct? Is this a normal behavior of the language or is this a special string treated as a special case in the interpreter? | It's normal behaviour of the language. Quoting the perlsyn manpage: The number 0 , the strings '0' and "" , the empty list () , and undef are all false in a boolean context. All other values are true. Negation of a true value by ! or not returns a special false value. When evaluated as a string it is treated as "" , but as a number, it is treated as 0 . Because of this, there needs to be a way to return 0 from a system call that expects to return 0 as a (successful) return value, and leave a way to signal a failure case by actually returning a false value. "0 but true" serves that purpose. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/129945",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12523/"
]
} |
129,972 | Does anyone know of any way to convert a simple GIF to XAML? E.g., a tool that would look at an image and create ellipses, rectangles and paths based upon a GIF/JPG/bitmap? | Illustrator has a trace tool which will do this. A cheaper option might be http://vectormagic.com ; it will export an SVG that you should be able to convert to XAML. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/129972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12497/"
]
} |
129,989 | We've developed a Java application and would like to use it from a C# client. The application has dependencies on Spring, Log4j, ... What would be the most efficient mechanism - making DLL(s) from the Java code, ... - to achieve this? | IKVM! It is really awesome. The only problem is that it DOES add ~30MB to the project. log4net and Spring.NET are available as well, but if you're living with existing code, go the IKVM route. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/129989",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
130,020 | Can anyone recommend a dropdownlist control for asp.net (3.5) that can render option groups? Thanks | I've used the standard control in the past, and just added a simple ControlAdapter for it that would override the default behavior so it could render <optgroup>s in certain places. This works great even if you have controls that don't need the special behavior, because the additional feature doesn't get in the way. Note that this was for a specific purpose and written in .Net 2.0, so it may not suit you as well, but it should at least give you a starting point. Also, you have to hook it up using a .browserfile in your project (see the end of the post for an example). 'This codes makes the dropdownlist control recognize items with "--"'for the label or items with an OptionGroup attribute and render them'as <optgroup> instead of <option>.Public Class DropDownListAdapter Inherits System.Web.UI.WebControls.Adapters.WebControlAdapter Protected Overrides Sub RenderContents(ByVal writer As HtmlTextWriter) Dim list As DropDownList = Me.Control Dim currentOptionGroup As String Dim renderedOptionGroups As New Generic.List(Of String) For Each item As ListItem In list.Items Page.ClientScript.RegisterForEventValidation(list.UniqueID, item.Value) If item.Attributes("OptionGroup") IsNot Nothing Then 'The item is part of an option group currentOptionGroup = item.Attributes("OptionGroup") If Not renderedOptionGroups.Contains(currentOptionGroup) Then 'the header was not written- do that first 'TODO: make this stack-based, so the same option group can be used more than once in longer select element (check the most-recent stack item instead of anything in the list) If (renderedOptionGroups.Count > 0) Then RenderOptionGroupEndTag(writer) 'need to close previous group End If RenderOptionGroupBeginTag(currentOptionGroup, writer) renderedOptionGroups.Add(currentOptionGroup) End If RenderListItem(item, writer) ElseIf item.Text = "--" Then 'simple separator RenderOptionGroupBeginTag("--", writer) RenderOptionGroupEndTag(writer) Else 'default behavior: render the list item as normal RenderListItem(item, writer) End If Next item If renderedOptionGroups.Count > 0 Then RenderOptionGroupEndTag(writer) End If End Sub Private Sub RenderOptionGroupBeginTag(ByVal name As String, ByVal writer As HtmlTextWriter) writer.WriteBeginTag("optgroup") writer.WriteAttribute("label", name) writer.Write(HtmlTextWriter.TagRightChar) writer.WriteLine() End Sub Private Sub RenderOptionGroupEndTag(ByVal writer As HtmlTextWriter) writer.WriteEndTag("optgroup") writer.WriteLine() End Sub Private Sub RenderListItem(ByVal item As ListItem, ByVal writer As HtmlTextWriter) writer.WriteBeginTag("option") writer.WriteAttribute("value", item.Value, True) If item.Selected Then writer.WriteAttribute("selected", "selected", False) End If For Each key As String In item.Attributes.Keys writer.WriteAttribute(key, item.Attributes(key)) Next key writer.Write(HtmlTextWriter.TagRightChar) HttpUtility.HtmlEncode(item.Text, writer) writer.WriteEndTag("option") writer.WriteLine() End SubEnd Class Here's a C# implementation of the same Class: /* This codes makes the dropdownlist control recognize items with "--" * for the label or items with an OptionGroup attribute and render them * as <optgroup> instead of <option>. 
*/public class DropDownListAdapter : WebControlAdapter{ protected override void RenderContents(HtmlTextWriter writer) { //System.Web.HttpContext.Current.Response.Write("here"); var list = (DropDownList)this.Control; string currentOptionGroup; var renderedOptionGroups = new List<string>(); foreach (ListItem item in list.Items) { Page.ClientScript.RegisterForEventValidation(list.UniqueID, item.Value); //Is the item part of an option group? if (item.Attributes["OptionGroup"] != null) { currentOptionGroup = item.Attributes["OptionGroup"]; //Was the option header already written, then just render the list item if (renderedOptionGroups.Contains(currentOptionGroup)) RenderListItem(item, writer); //The header was not written,do that first else { //Close previous group if (renderedOptionGroups.Count > 0) RenderOptionGroupEndTag(writer); RenderOptionGroupBeginTag(currentOptionGroup, writer); renderedOptionGroups.Add(currentOptionGroup); RenderListItem(item, writer); } } //Simple separator else if (item.Text == "--") { RenderOptionGroupBeginTag("--", writer); RenderOptionGroupEndTag(writer); } //Default behavior, render the list item as normal else RenderListItem(item, writer); } if (renderedOptionGroups.Count > 0) RenderOptionGroupEndTag(writer); } private void RenderOptionGroupBeginTag(string name, HtmlTextWriter writer) { writer.WriteBeginTag("optgroup"); writer.WriteAttribute("label", name); writer.Write(HtmlTextWriter.TagRightChar); writer.WriteLine(); } private void RenderOptionGroupEndTag(HtmlTextWriter writer) { writer.WriteEndTag("optgroup"); writer.WriteLine(); } private void RenderListItem(ListItem item, HtmlTextWriter writer) { writer.WriteBeginTag("option"); writer.WriteAttribute("value", item.Value, true); if (item.Selected) writer.WriteAttribute("selected", "selected", false); foreach (string key in item.Attributes.Keys) writer.WriteAttribute(key, item.Attributes[key]); writer.Write(HtmlTextWriter.TagRightChar); HttpUtility.HtmlEncode(item.Text, writer); writer.WriteEndTag("option"); writer.WriteLine(); }} My browser file was named "App_Browsers\BrowserFile.browser" and looked like this: <!-- You can find existing browser definitions at <windir>\Microsoft.NET\Framework\<ver>\CONFIG\Browsers--><browsers> <browser refID="Default"> <controlAdapters> <adapter controlType="System.Web.UI.WebControls.DropDownList" adapterType="DropDownListAdapter" /> </controlAdapters> </browser></browsers> | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/130020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14072/"
]
} |
130,032 | Is there a built-in editor for a multi-line string in a PropertyGrid . | I found that System.Design.dll has System.ComponentModel.Design.MultilineStringEditor which can be used as follows: public class Stuff{ [Editor(typeof(MultilineStringEditor), typeof(UITypeEditor))] public string MultiLineProperty { get; set; }} | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/130032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4592/"
]
} |
130,058 | I mean, I have always wondered how the hell somebody can develop algorithms to break/cheat the constraints of legal use in many shareware programs out there. Just out of curiosity. | Apart from being illegal, it's a very complex task. Speaking just at a theoretical level, the common way is to disassemble the program to crack and try to find where the key or the serial code is checked. Easier said than done, since any serious protection scheme will check values in multiple places and also derive critical information from the serial key for later use, so that when you think you have guessed it, the program will crash. To create a crack you have to identify all the points where a check is done and modify the assembly code appropriately (often inverting a conditional jump or storing constants into memory locations). To create a keygen you have to understand the algorithm and write a program to re-do the exact same calculation (I remember an old version of MS Office whose serial had a very simple rule: the sum of the digits had to be a multiple of 7, so writing the keygen was rather trivial). Both activities require you to follow the execution of the application in a debugger and try to figure out what's happening. And you need to know the low-level API of your operating system. Some heavily protected applications have the code encrypted so that the file can't be disassembled. It is decrypted when loaded into memory, but then they refuse to start if they detect that an in-memory debugger has started. In essence it's something that requires very deep knowledge, ingenuity and a lot of time! Oh, did I mention that it is illegal in most countries? If you want to know more, Google for the +ORC Cracking Tutorials; they are very old and probably useless nowadays, but they will give you a good idea of what it means. Anyway, a very good reason to know all this is if you want to write your own protection scheme. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130058",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18300/"
]
} |
130,065 | I am planning to develop an operating system for the x86 architecture. What programming language options do I have? What compilers are available, preferably in a Windows environment? Are there any good sources that will help me learn more about operating system development? Is it better to test my operating system on a virtual machine or on physical hardware? Any suggestions? | For my final year project in college I developed a small x86 OS with a virtual memory manager, a virtual file system and fully preemptive multitasking. I made it open source and the code is heavily commented; check out its project page at: https://github.com/stephenfewer/NoNameOS From my experience I can recommend the following: You will need x86 assembly language for various parts; this is unavoidable, but it can be kept to a minimum. Fairly quickly you will get running C code, which is a proven choice for OS development. Once you have some sort of memory manager available you can go into C++ if you like (you need some kind of memory manager for things like new and delete). No matter what language you choose, you will still need assembly & C to bring a system from boot, where the BIOS leaves you, into any usable form. Ultimately, the primary language you choose will depend on the type of OS you want to develop. My development environment was the Windows port of the GNU development tools, DJGPP, along with the NASM assembler. For my IDE I used IBM's Eclipse with the CDT plugin, which provides a C/C++ development environment within Eclipse. For testing I recommend BOCHS, an open source x86 PC emulator. It lets you boot up your OS quickly, which is great for testing, and it can be integrated into Eclipse so you can build and run your OS at the push of a button. I would also recommend using both VMware and a physical PC occasionally, as you can pick up on some subtle bugs that way. P.S. OS development is really fun but very intensive; mine took the best part of 12 months. My advice is to plan well; your design is key! Enjoy :) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21900/"
]
} |
130,074 | python's time module seems a little haphazard. For example, here is a list of methods in there, from the docstring: time() -- return current time in seconds since the Epoch as a floatclock() -- return CPU time since process start as a floatsleep() -- delay for a number of seconds given as a floatgmtime() -- convert seconds since Epoch to UTC tuplelocaltime() -- convert seconds since Epoch to local time tupleasctime() -- convert time tuple to stringctime() -- convert time in seconds to stringmktime() -- convert local time tuple to seconds since Epochstrftime() -- convert time tuple to string according to format specificationstrptime() -- parse string to time tuple according to format specificationtzset() -- change the local timezone Looking at localtime() and its inverse mktime(), why is there no inverse for gmtime() ? Bonus questions: what would you name the method ? How would you implement it ? | There is actually an inverse function, but for some bizarre reason, it's in the calendar module: calendar.timegm(). I listed the functions in this answer . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2900/"
]
} |
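A quick round-trip sketch of the calendar.timegm() inverse described in the answer above (standard library only; the variable names are just for illustration):

    import calendar
    import time

    now = time.time()                       # seconds since the Epoch, as a float
    utc_tuple = time.gmtime(now)            # seconds -> UTC struct_time
    roundtrip = calendar.timegm(utc_tuple)  # UTC struct_time -> seconds (the "missing" inverse)

    # struct_time only carries whole seconds, so compare at second resolution
    assert int(now) == roundtrip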
130,112 | How can you get the directory of the script that was run and use it within the .cmd file? | Raymond Chen has a few ideas: https://devblogs.microsoft.com/oldnewthing/20050128-00/?p=36573 Quoted here in full because MSDN archives tend to be somewhat unreliable: The easy way is to use the %CD% pseudo-variable. It expands to the current working directory. set OLDDIR=%CD% .. do stuff .. chdir /d %OLDDIR% &rem restore current directory (Of course, directory save/restore could more easily have been done with pushd / popd , but that's not the point here.) The %CD% trick is handy even from the command line. For example, I often find myself in a directory where there's a file that I want to operate on but... oh, I need to chdir to some other directory in order to perform that operation. set _=%CD%\curfile.txt cd ... some other directory ... somecommand args %_% args (I like to use %_% as my scratch environment variable.) Type SET /? to see the other pseudo-variables provided by the command processor. Also the comments in the article are well worth scanning for example this one (via the WayBack Machine, since comments are gone from older articles): http://blogs.msdn.com/oldnewthing/archive/2005/01/28/362565.aspx#362741 This covers the use of %~dp0: If you want to know where the batch file lives: %~dp0 %0 is the name of the batch file. ~dp gives you the drive and path of the specified argument. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/130112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3153/"
]
} |
130,116 | How can I read the first line from a text file using a Windows batch file? Since the file is large I only want to deal with the first line. | IMO this is much simpler: set /p texte=< file.txt echo %texte% | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/130116",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9822/"
]
} |
130,117 | Most people say never throw an exception out of a destructor - doing so results in undefined behavior. Stroustrup makes the point that "the vector destructor explicitly invokes the destructor for every element. This implies that if an element destructor throws, the vector destruction fails... There is really no good way to protect against exceptions thrown from destructors, so the library makes no guarantees if an element destructor throws" (from Appendix E3.2) . This article seems to say otherwise - that throwing destructors are more or less okay. So my question is this - if throwing from a destructor results in undefined behavior, how do you handle errors that occur during a destructor? If an error occurs during a cleanup operation, do you just ignore it? If it is an error that can potentially be handled up the stack but not right in the destructor, doesn't it make sense to throw an exception out of the destructor? Obviously these kinds of errors are rare, but possible. | Throwing an exception out of a destructor is dangerous. If another exception is already propagating the application will terminate. #include <iostream>class Bad{ public: // Added the noexcept(false) so the code keeps its original meaning. // Post C++11 destructors are by default `noexcept(true)` and // this will (by default) call terminate if an exception is // escapes the destructor. // // But this example is designed to show that terminate is called // if two exceptions are propagating at the same time. ~Bad() noexcept(false) { throw 1; }};class Bad2{ public: ~Bad2() { throw 1; }};int main(int argc, char* argv[]){ try { Bad bad; } catch(...) { std::cout << "Print This\n"; } try { if (argc > 3) { Bad bad; // This destructor will throw an exception that escapes (see above) throw 2; // But having two exceptions propagating at the // same time causes terminate to be called. } else { Bad2 bad; // The exception in this destructor will // cause terminate to be called. } } catch(...) { std::cout << "Never print this\n"; }} This basically boils down to: Anything dangerous (i.e. that could throw an exception) should be done via public methods (not necessarily directly). The user of your class can then potentially handle these situations by using the public methods and catching any potential exceptions. The destructor will then finish off the object by calling these methods (if the user did not do so explicitly), but any exceptions throw are caught and dropped (after attempting to fix the problem). So in effect you pass the responsibility onto the user. If the user is in a position to correct exceptions they will manually call the appropriate functions and processes any errors. If the user of the object is not worried (as the object will be destroyed) then the destructor is left to take care of business. An example: std::fstream The close() method can potentially throw an exception.The destructor calls close() if the file has been opened but makes sure that any exceptions do not propagate out of the destructor. So if the user of a file object wants to do special handling for problems associated to closing the file they will manually call close() and handle any exceptions. If on the other hand they do not care then the destructor will be left to handle the situation. Scott Myers has an excellent article about the subject in his book "Effective C++" Edit: Apparently also in "More Effective C++" Item 11: Prevent exceptions from leaving destructors | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/130117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5963/"
]
} |
130,161 | I've gotten used to the idea that if I want/need to use alpha-trans PNGs in a cross-browser manner, that I use a background image on a div and then, in IE6-only CSS, mark the background as "none" and include the proper "filter" argument. Is there another way? A better way? Is there a way to do this with the img tag and not with background images? | The bottom line is, if you want alpha transparency in a PNG, and you want it to work in IE6, then you need to have the AlphaImageLoader filter applied. Now, there are numerous ways to do it: Browser specific hacks, Conditional Comments, Javascript/JQuery/JLibraryOfChoice element iteration, Server-Side CSS-serving via UserAgent-sniffing... But all of 'em come down to having the filter applied and the background removed. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9414/"
]
} |
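For reference, the filter that the answer above refers to is usually wired up with a conditional comment along these lines (file names and sizes are placeholders; note that the src path is resolved relative to the HTML page, not the stylesheet):

    <!-- Normal browsers: use the alpha-transparent PNG directly -->
    <style>
      .logo { width: 200px; height: 100px; background: url(logo.png) no-repeat; }
    </style>

    <!-- IE6 only: drop the background and apply AlphaImageLoader instead -->
    <!--[if lte IE 6]>
    <style>
      .logo {
        background: none;
        filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='logo.png', sizingMethod='crop');
      }
    </style>
    <![endif]-->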
130,186 | I'm having an unusual problem with an IE document with contentEditable set to true. Calling select() on a range that is positioned at the end of a text node that immediately precedes a block element causes the selection to be shifted to the right one character and appear where it shouldn't. I've submitted a bug to Microsoft against IE8. If you can, please vote for this issue so that it can be fixed. https://connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=390995 I've written a test case to demonstrate the effect: <html> <body> <iframe id="editable"> <html> <body> <div id="test"> Click to the right of this line -> <p id="par">Block Element</p> </div> </body> </html> </iframe> <input id="mytrigger" type="button" value="Then Click here to Save and Restore" /> <script type="text/javascript"> window.onload = function() { var iframe = document.getElementById('editable'); var doc = iframe.contentDocument || iframe.contentWindow.document; // An IFRAME without a source points to a blank document. Here we'll // copy the content we stored in between the IFRAME tags into that // document. It's a hack to allow us to use only one HTML file for this // test. doc.body.innerHTML = iframe.textContent || iframe.innerHTML; // Marke the IFRAME as an editable document if (doc.body.contentEditable) { doc.body.contentEditable = true; } else { var mydoc = doc; doc.designMode = 'On'; } // A function to demonstrate the bug. var myhandler = function() { // Step 1 Get the current selection var selection = doc.selection || iframe.contentWindow.getSelection(); var range = selection.createRange ? selection.createRange() : selection.getRangeAt(0); // Step 2 Restore the selection if (range.select) { range.select(); } else { selection.removeAllRanges(); selection.addRange(range); doc.body.focus(); } } // Set up the button to perform the test code. var button = document.getElementById('mytrigger'); if (button.addEventListener) { button.addEventListener('click', myhandler, false); } else { button.attachEvent('onclick', myhandler); } } </script> </body></html> The problem is exposed in the myhandler function. This is all that I'm doing, there is no Step 3 in between the saving and restoring the selection, and yet the cursor moves. It doesn't seem to happen unless the selection is empty (ie. I have a blinking cursor, but no text), and it only seems to happen whenever the cursor is at the end of a text node that immediately precedes a block node. It seems that the range is still in the correct position (if I call parentElement on the range it returns the div), but if I get a new range from the current selection, the new range is inside the paragraph tag, and that is its parentElement. How do I work around this and consistently save and restore the selection in internet explorer? | I've figured out a few methods for dealing with IE ranges like this. 
If all you want to do is save where the cursor is, and then restore it, you can use the pasteHTML method to insert an empty span at the current position of the cursor, and then use the moveToElementText method to put it back at that position again: // Save position of cursorrange.pasteHTML('<span id="caret"></span>')...// Create new cursor and put it in the old positionvar caretSpan = iframe.contentWindow.document.getElementById("caret");var selection = iframe.contentWindow.document.selection;newRange = selection.createRange();newRange.moveToElementText(caretSpan); Alternatively, you can count how many characters precede the current cursor position and save that number: var selection = iframe.contentWindow.document.selection;var range = selection.createRange().duplicate();range.moveStart('sentence', -1000000);var cursorPosition = range.text.length; To restore the cursor, you set it to the beginning and then move it that number of characters: var newRange = selection.createRange();newRange.move('sentence', -1000000);newRange.move('character', cursorPosition); Hope this helps. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8458/"
]
} |
130,193 | Is it possible to modify a registry value (whether string or DWORD) via a .bat/.cmd script? | @Franci Penov - modify is possible in the sense of overwrite with /f , eg reg add "HKCU\Software\etc\etc" /f /v "value" /t REG_SZ /d "Yes" | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/130193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3153/"
]
} |
130,262 | The Python list comprehension syntax makes it easy to filter values within a comprehension. For example: result = [x**2 for x in mylist if type(x) is int] Will return a list of the squares of integers in mylist. However, what if the test involves some (costly) computation and you want to filter on the result? One option is: result = [expensive(x) for x in mylist if expensive(x)] This will result in a list of non-"false" expensive(x) values, however expensive() is called twice for each x. Is there a comprehension syntax that allows you to do this test while only calling expensive once per x? | If the calculations are already nicely bundled into functions, how about using filter and map ? result = filter (None, map (expensive, mylist)) You can use itertools.imap if the list is very large. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5222/"
]
} |
130,273 | I'm trying to automate a program I made with a test suite via a .cmd file. I can get the program that I ran's return code via %errorlevel%. My program has certain return codes for each type of error. For example: 1 - means failed for such and such a reason 2 - means failed for some other reason ... echo FAILED: Test case failed, error level: %errorlevel% >> TestSuite1Log.txt Instead I'd like to somehow say: echo FAILED: Test case failed, error reason: lookupError(%errorlevel%) >> TestSuite1Log.txt Is this possible with a .bat file? Or do I have to move to a scripting language like python/perl? | You can do this quite neatly with the ENABLEDELAYEDEXPANSION option. This allows you to use ! as variable marker that is evaluated after % . REM Turn on Delayed ExpansionSETLOCAL ENABLEDELAYEDEXPANSIONREM Define messages as variables with the ERRORLEVEL on the end of the nameSET MESSAGE0=Everything is fineSET MESSAGE1=Failed for such and such a reasonSET MESSAGE2=Failed for some other reasonREM Set ERRORLEVEL - or run command hereSET ERRORLEVEL=2REM Print the message corresponding to the ERRORLEVELECHO !MESSAGE%ERRORLEVEL%! Type HELP SETLOCAL and HELP SET at a command prompt for more information on delayed expansion. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3153/"
]
} |
130,322 | I am trying to pass a member function within a class to a function that takes a member function class pointer. The problem I am having is that I am not sure how to properly do this within the class using the this pointer. Does anyone have suggestions? Here is a copy of the class that is passing the member function: class testMenu : public MenuScreen{public:bool draw;MenuButton<testMenu> x;testMenu():MenuScreen("testMenu"){ x.SetButton(100,100,TEXT("buttonNormal.png"),TEXT("buttonHover.png"),TEXT("buttonPressed.png"),100,40,&this->test2); draw = false;}void test2(){ draw = true;}}; The function x.SetButton(...) is contained in another class, where "object" is a template. void SetButton(int xPos, int yPos, LPCWSTR normalFilePath, LPCWSTR hoverFilePath, LPCWSTR pressedFilePath, int Width, int Height, void (object::*ButtonFunc)()) { BUTTON::SetButton(xPos, yPos, normalFilePath, hoverFilePath, pressedFilePath, Width, Height); this->ButtonFunc = &ButtonFunc;} If anyone has any advice on how I can properly send this function so that I can use it later. | To call a member function by pointer, you need two things: A pointer to the object and a pointer to the function. You need both in MenuButton::SetButton() template <class object>void MenuButton::SetButton(int xPos, int yPos, LPCWSTR normalFilePath, LPCWSTR hoverFilePath, LPCWSTR pressedFilePath, int Width, int Height, object *ButtonObj, void (object::*ButtonFunc)()){ BUTTON::SetButton(xPos, yPos, normalFilePath, hoverFilePath, pressedFilePath, Width, Height); this->ButtonObj = ButtonObj; this->ButtonFunc = ButtonFunc;} Then you can invoke the function using both pointers: ((ButtonObj)->*(ButtonFunc))(); Don't forget to pass the pointer to your object to MenuButton::SetButton() : testMenu::testMenu() :MenuScreen("testMenu"){ x.SetButton(100,100,TEXT("buttonNormal.png"), TEXT("buttonHover.png"), TEXT("buttonPressed.png"), 100, 40, this, test2); draw = false;} | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20229/"
]
} |
130,328 | How do I get the caller's IP address in a WebMethod? [WebMethod]public void Foo(){ // HttpRequest... ? - Not giving me any options through intellisense...} using C# and ASP.NET | HttpContext.Current.Request.UserHostAddress is what you want. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/130328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
]
} |
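Dropped into the original method, that looks roughly like this (ASMX web service; keep in mind the value is the immediate client address, so a proxy or load balancer will show up instead of the end user):

    [WebMethod]
    public void Foo()
    {
        // IP address of the caller that invoked this web method
        string callerIp = HttpContext.Current.Request.UserHostAddress;

        // ... use callerIp, e.g. for logging or simple access checks ...
    }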
130,337 | I am working on a web application (J2EE) and I would like to know the options that are available for handling a double post from the browser. The solutions that I have seen and used in the past are all client-side: Disable the submit button as soon as the user clicks it. Follow a POST-Redirect-GET pattern to prevent POSTs when the user clicks the back button. Handle the onSubmit event of the form and keep track of the submission status with JavaScript. I would prefer to implement a server-side solution if possible. Are there any better approaches than the ones I have mentioned above, or are client-side solutions best? | It's hard to implement an idiot-proof solution (as they are always improving the idiots). No matter what you do, the client side can be manipulated or perform incorrectly. Your solution has got to be server-side to be reliable and secure. That said, one approach is to review the request and check system/database state or logs to determine if it was already processed. Ideally, the process on the server side should be idempotent if possible, and it will have to protect against duplicate submits if it can't be. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13925/"
]
} |
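One conventional server-side technique (a synchronizer-token sketch, not taken from the answer above; the class, page and parameter names are invented) is to issue a one-time token with the form and refuse any POST whose token has already been consumed:

    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;

    public class OrderServlet extends HttpServlet {

        private static final String TOKEN_ATTR = "form.token";

        // GET: render the form and stash a fresh one-time token in the session
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String token = UUID.randomUUID().toString();
            req.getSession(true).setAttribute(TOKEN_ATTR, token);
            req.setAttribute("token", token);   // echoed into a hidden form field by the JSP
            req.getRequestDispatcher("/order.jsp").forward(req, resp);
        }

        // POST: only do the work if the request carries the still-unused token
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession(false);
            String expected = (session == null) ? null : (String) session.getAttribute(TOKEN_ATTR);
            String actual = req.getParameter("token");

            if (expected == null || !expected.equals(actual)) {
                resp.sendError(HttpServletResponse.SC_CONFLICT, "Duplicate or stale submission");
                return;
            }
            session.removeAttribute(TOKEN_ATTR);  // consume the token before doing the work

            // ... perform the real, ideally idempotent, work exactly once ...

            resp.sendRedirect(req.getContextPath() + "/confirmation");  // then POST-Redirect-GET
        }
    }

In a real application the check-and-consume step should be made atomic (synchronize on the session, or push the token into the database) so two simultaneous posts cannot both pass the test.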
130,354 | I need to simulate a low bandwidth, high latency connection to a server in order to emulate the conditions of a VPN at a remote site. The bandwidth and latency should be tweakable so I can discover the best combination in order to run our software package. | For macOS , there is the Network Link Conditioner that simulates configurable bandwidth, latency, and packet loss. It is contained in the Additional Tools for Xcode package. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/190298/"
]
} |
130,396 | Is there a way to use constants in JavaScript ? If not, what's the common practice for specifying variables that are used as constants? | Since ES2015 , JavaScript has a notion of const : const MY_CONSTANT = "some-value"; This will work in pretty much all browsers except IE 8, 9 and 10 . Some may also need strict mode enabled. You can use var with conventions like ALL_CAPS to show that certain values should not be modified if you need to support older browsers or are working with legacy code: var MY_CONSTANT = "some-value"; | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/130396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10708/"
]
} |
130,404 | I'm trying to find a way to "pretty print" a JavaScript data structure in a human-readable form for debugging. I have a rather big and complicated data structure being stored in JS and I need to write some code to manipulate it. In order to work out what I'm doing and where I'm going wrong, what I really need is to be able to see the data structure in its entirety, and update it whenever I make changes through the UI. All of this stuff I can handle myself, apart from finding a nice way to dump a JavaScript data structure to a human-readable string. JSON would do, but it really needs to be nicely formatted and indented. I'd usually use Firebug's excellent DOM dumping stuff for this, but I really need to be able to see the entire structure at once, which doesn't seem to be possible in Firebug. | Use Crockford's JSON.stringify like this: var myArray = ['e', {pluribus: 'unum'}];var text = JSON.stringify(myArray, null, '\t'); //you can specify a number instead of '\t' and that many spaces will be used for indentation... Variable text would look like this: [ "e", { "pluribus": "unum" }] By the way, this requires nothing more than that JS file - it will work with any library, etc. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/130404",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17121/"
]
} |
130,438 | Okay. I know this looks like the typical "Why didn't he just Google it or go to www.unicode.org and look it up?" question, but for such a simple question the answer still eludes me after checking both sources. I am pretty sure that all three of these encoding systems support all of the Unicode characters, but I need to confirm it before I make that claim in a presentation. Bonus question: Do these encodings differ in the number of characters they can be extended to support? | No, they're simply different encoding methods. They all support encoding the same set of characters. UTF-8 uses anywhere from one to four bytes per character depending on what character you're encoding. Characters within the ASCII range take only one byte while very unusual characters take four. UTF-32 uses four bytes per character regardless of what character it is, so it will always use more space than UTF-8 to encode the same string. The only advantage is that you can calculate the number of characters in a UTF-32 string by only counting bytes. UTF-16 uses two bytes for most characters, four bytes for unusual ones. http://en.wikipedia.org/wiki/Comparison_of_Unicode_encodings | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/130438",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/30018/"
]
} |
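The size claims are easy to verify with a small sketch (Python 3 is assumed, and the explicit -le codecs avoid counting byte-order marks):

    # one ASCII character, one BMP character, one character outside the BMP
    for ch in ("a", "\u20ac", "\U0001d11e"):        # 'a', euro sign, musical G clef
        print(hex(ord(ch)),
              len(ch.encode("utf-8")),              # 1, 3 and 4 bytes
              len(ch.encode("utf-16-le")),          # 2, 2 and 4 bytes (surrogate pair)
              len(ch.encode("utf-32-le")))          # always 4 bytes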
130,447 | I am currently using TortoiseSVN to manage a couple of the projects that I have on the go at the moment. When I first moved everything into source control I wasn't really sure how everything should be laid out so I ended up putting each project into its own repository. I was wondering would it be a good idea for me just to move them all into one big repository and have them split into project folders? What does everyone else do? At the moment none of them share common code but they may in the future. Would it make it easier to manage if they where all together. Thanks. | Depends to an extent what you mean by "project". I have a general local repository containing random bits of stuff that I write (including my website, since it's small). A single-user local SVN repository is not going to suffer noticeable performance issues until you've spent a lot of years typing. By which time SVN will be faster anyway. So I've yet to regret having thrown everything in one repository, even though some of the stuff in there is completely unrelated other than that I wrote it all. If a "project" means "an assignment from class", or "the scripts I use to drive my TiVo", or "my progress in learning a new language", then creating a repos per project seems a bit unnecessary to me. Then again, it doesn't cost anything either. So I guess I'd say don't change what you're doing. Unless you really want the experience of re-organising repositories, in which case do change what you're doing :-) However, if by "project" you mean a 'real' software project, with public access to the repository, then I think separate repos per project is what makes sense: partly because it divides things cleanly and each project scales independently, but also because it's what people will expect to see. Sharing code between separate repositories is less of an issue than you might think, since svn has the rather lovely "svn:externals" feature. This lets you point a directory of your repository at a directory in another repository, and check that stuff out automatically along with your stuff. See, as always, the SVN book for details. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6335/"
]
} |
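A minimal svn:externals setup to go with the last paragraph (URLs and directory names are placeholders; this uses the pre-1.5 "directory URL" property format, and Subversion 1.5+ also accepts a reversed "URL directory" form):

    # From the root of a working copy, pull shared code from another
    # repository into a local "common" directory:
    svn propset svn:externals "common http://svn.example.com/shared/trunk/common" .

    # Commit the property change, then update to fetch the external
    svn commit -m "Add svn:externals definition for shared code"
    svn update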
130,475 | I've been learning Lisp to expand my horizons because I have heard that it is used in AI programming. After doing some exploring, I have yet to find AI examples or anything in the language that would make it more inclined towards it. Was Lisp used in the past because it was available, or is there something that I'm just missing? | Lisp WAS used in AI until the end of the 1980s. In the 80s, though, Common Lisp was oversold to the business world as the "AI language"; the backlash forced most AI programmers to C++ for a few years. These days, prototypes usually are written in a younger dynamic language (Perl, Python, Ruby, etc) and implementations of successful research is usually in C or C++ (sometimes Java). If you're curious about the 70's...well, I wasn't there. But I think Lisp was successful in AI research for three reasons (in order of importance): Lisp is an excellent prototyping tool. It was the best for a very long time. Lisp is still great at tackling a problem you don't know how to solve yet. That description characterises AI perfectly. Lisp supports symbolic programming well. Old AI was also symbolic. It was also unique in this regard for a long time. Lisp is very powerful. The code/data distinction is weaker so it feels more extensible than other languages because your functions and macros look like the built-in stuff. I do not have Peter Norvig's old AI book , but it is supposed to be a good way to learn to program AI algorithms in Lisp. Disclaimer: I am a grad student in computational linguistics. I know the subfield of natural language processing a lot better than the other fields. Maybe Lisp is used more in other subfields. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/130475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1256/"
]
} |
130,506 | I recently inherited a small Java program that takes information from a large database, does some processing and produces a detailed image regarding the information. The original author wrote the code using a single thread, then later modified it to allow it to use multiple threads. In the code he defines a constant; // number of threadspublic static final int THREADS = Runtime.getRuntime().availableProcessors(); Which then sets the number of threads that are used to create the image. I understand his reasoning that the number of threads cannot be greater than the number of available processors, so set it the the amount to get the full potential out of the processor(s). Is this correct? or is there a better way to utilize the full potential of the processor(s)? EDIT: To give some more clarification, The specific algorithm that is being threaded scales to the resolution of the picture being created, (1 thread per pixel). That is obviously not the best solution though. The work that this algorithm does is what takes all the time, and is wholly mathematical operations, there are no locks or other factors that will cause any given thread to sleep. I just want to maximize the programs CPU utilization to decrease the time to completion. | Threads are fine, but as others have noted, you have to be highly aware of your bottlenecks. Your algorithm sounds like it would be susceptible to cache contention between multiple CPUs - this is particularly nasty because it has the potential to hit the performance of all of your threads (normally you think of using multiple threads to continue processing while waiting for slow or high latency IO operations). Cache contention is a very important aspect of using multi CPUs to process a highly parallelized algorithm: Make sure that you take your memory utilization into account. If you can construct your data objects so each thread has it's own memory that it is working on, you can greatly reduce cache contention between the CPUs. For example, it may be easier to have a big array of ints and have different threads working on different parts of that array - but in Java, the bounds checks on that array are going to be trying to access the same address in memory, which can cause a given CPU to have to reload data from L2 or L3 cache. Splitting the data into it's own data structures, and configure those data structures so they are thread local (might even be more optimal to use ThreadLocal - that actually uses constructs in the OS that provide guarantees that the CPU can use to optimize cache. The best piece of advice I can give you is test, test, test. Don't make assumptions about how CPUs will perform - there is a huge amount of magic going on in CPUs these days, often with counterintuitive results. Note also that the JIT runtime optimization will add an additional layer of complexity here (maybe good, maybe not). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130506",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7613/"
]
} |
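A rough sketch of the partitioning idea from the answer above, sized to availableProcessors(); the class and the renderRow() math are stand-ins for the real per-pixel work (Java 8+ assumed):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class RenderDriver {
        public static void main(String[] args) throws Exception {
            final int threads = Runtime.getRuntime().availableProcessors();
            final int height = 1024;                        // e.g. image rows
            final double[][] image = new double[height][1024];

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            List<Future<?>> pending = new ArrayList<>();

            // Give each task a contiguous band of rows so threads work on
            // separate memory and don't fight over the same cache lines.
            int band = (height + threads - 1) / threads;
            for (int start = 0; start < height; start += band) {
                final int from = start;
                final int to = Math.min(start + band, height);
                pending.add(pool.submit(() -> {
                    for (int row = from; row < to; row++) {
                        renderRow(image[row], row);         // purely CPU-bound work
                    }
                }));
            }
            for (Future<?> f : pending) {
                f.get();                                    // wait and surface any failure
            }
            pool.shutdown();
        }

        private static void renderRow(double[] row, int y) {
            for (int x = 0; x < row.length; x++) {
                row[x] = Math.sin(x * 0.01) * Math.cos(y * 0.01);
            }
        }
    }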
130,507 | I'd really like to get into some D3D coding, but I don't have the time lately to learn C++ for what will amount to a hobby project. | | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130507",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16942/"
]
} |
130,526 | Probably a very stupid question, but I can't figure out how to rename an object in PowerPoint. For example, all my graphs are called "Graph 1" etc. by default. Could someone help me with that? Thanks! | In PowerPoint 2007 you can do this from the Selection pane. To show the Selection pane, click on the Home tab in the ribbon, then click on Arrange and then 'Selection Pane...' at the bottom. The Selection pane will open on the right (or press CTRL+F10). To rename an object, first select the object, then double-click on the object name in the Selection pane and you will be able to type the new object name. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130526",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
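In versions without the Selection pane, or for renaming in bulk, the same thing can be done with a few lines of VBA; the slide index, shape index and new name below are just examples:

    ' Rename the first shape on slide 1 and list all shape names
    Sub RenameAndListShapes()
        ActivePresentation.Slides(1).Shapes(1).Name = "SalesChart"

        Dim shp As Shape
        For Each shp In ActivePresentation.Slides(1).Shapes
            Debug.Print shp.Name
        Next shp
    End Sub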
130,564 | I am writing a page where I need an HTML table to maintain a set size. I need the headers at the top of the table to stay there at all times, but I also need the body of the table to scroll no matter how many rows are added to the table. Think of a mini version of Excel. This seems like a simple task, but almost every solution I have found on the web has some drawback. How can I solve this? | I had to find the same answer. The best example I found is http://www.cssplay.co.uk/menu/tablescroll.html - I found example #2 worked well for me. You will have to set the height of the inner table with JavaScript; the rest is CSS. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4491/"
]
} |
130,573 | The FILETIME structure counts from January 1 1601 (presumably the start of that day) according to the Microsoft documentation, but does this include leap seconds? | The question shouldn't be if FILETIME includes leap seconds. It should be: Do the people, functions, and libraries, who interpret a FILETIME (i.e. FileTimeToSystemTime ) include leap seconds when counting the duration? The simple answer is "no" . FileTimeToSystemTime returns seconds as 0..59 . The simpler answer is: " of course not, how could it? ". My Windows 2000 machine doesn't know that there were 2 leap seconds added in the decade since it was released. Any interpretation it makes of a FILETIME is wrong. Finally, rather than relying on logic, we can determine by direct experimental observation, the answer to the posters question: var systemTime: TSystemTime; fileTime: TFileTime;begin //Construct a system-time for the 12/31/2008 11:59:59 pm ZeroMemory(@systemTime, SizeOf(systemTime)); systemtime.wYear := 2008; systemTime.wMonth := 12; systemTime.wDay := 31; systemTime.wHour := 23; systemtime.wMinute := 59; systemtime.wSecond := 59; //Convert it to a file time SystemTimeToFileTime(systemTime, {var}fileTime); //There was a leap second 12/31/2008 11:59:60 pm //Add one second to our filetime to reach the leap second filetime.dwLowDateTime := fileTime.dwLowDateTime+10000000; //10,000,000 * 100ns = 1s //Convert the filetime, sitting on a leap second, to a displayable system time FileTimeToSystemTime(fileTime, {var}systemTime); //And now print the system time ShowMessage(DateTimeToStr(SystemTimeToDateTime(systemTime))); Adding one second to 12/31/2008 11:59:59pm gives 1/1/2009 12:00:00am rather than 1/1/2009 11:59:60pm Q.E.D. Original poster might not like it, but god intentionally rigged it so that a year is not evenly divisible by a day. He did it just to screw up programmers. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10168/"
]
} |
130,592 | I've never used CI tools before, but from what I've read, I'm not sure it would provide any benefit to a solo developer that isn't writing code every day. First - what benefits does CI provide to any project? Second - who should use CI? Does it benefit all developers? | The basic concept of CI is that you have a system that builds the code and runs automated tests everytime someone makes a commit to the version control system. These tests would include unit and functional tests, or even behavior driven tests. The benefit is that you know - immediately - when someone has broken the build. This means either: A . They committed code that prevents compilation, which would screw any one up B . They committed code that broke some tests, which either means they introduced a bug that needs to be fixed, or the tests need to be updated to reflect the change in the code. If you are a solo developer, CI isn't quite as useful if you are in a good habit of running your tests before a commit, which is what you should be doing. That being said, you could develop a bad habit of letting the CI do your tests for you. As a solo programmer, it mainly comes down to discipline. Using CI is a useful skill to have, but you want to avoid developing any bad habits that wouldn't translate to a team environment. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/130592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
]
} |
130,617 | I have a program that writes some data to a file using a method like the one below. public void ExportToFile(string filename){ using(FileStream fstream = new FileStream(filename,FileMode.Create)) using (TextWriter writer = new StreamWriter(fstream)) { // try catch block for write permissions writer.WriteLine(text); }} When running the program I get an error: Unhandled Exception: System.UnauthorizedAccessException: Access to the path 'mypath' is denied. at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy) Question: What code do I need to catch this and how do I grant the access? | UPDATE: Modified the code based on this answer to get rid of obsolete methods. You can use the Security namespace to check this: public void ExportToFile(string filename){ var permissionSet = new PermissionSet(PermissionState.None); var writePermission = new FileIOPermission(FileIOPermissionAccess.Write, filename); permissionSet.AddPermission(writePermission); if (permissionSet.IsSubsetOf(AppDomain.CurrentDomain.PermissionSet)) { using (FileStream fstream = new FileStream(filename, FileMode.Create)) using (TextWriter writer = new StreamWriter(fstream)) { // try catch block for write permissions writer.WriteLine("sometext"); } } else { //perform some recovery action here }} As far as getting those permissions, you are going to have to ask the user to do that for you somehow. If you could programmatically do this, then we would all be in trouble ;) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/130617",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1361/"
]
} |
130,618 | I would like to find out if a particular python datetime object is older than X hours or minutes. I am trying to do something similar to: if (datetime.now() - self.timestamp) > 100# Where 100 is either seconds or minutes This generates a type error. What is the proper way to do date time comparison in python? I already looked at WorkingWithTime which is close but not exactly what I want. I assume I just want the datetime object represented in seconds so that I can do a normal int comparison. Please post lists of datetime best practices. | Use the datetime.timedelta class: >>> from datetime import datetime, timedelta>>> then = datetime.now() - timedelta(hours = 2)>>> now = datetime.now()>>> (now - then) > timedelta(days = 1)False>>> (now - then) > timedelta(hours = 1)True Your example could be written as: if (datetime.now() - self.timestamp) > timedelta(seconds = 100) or if (datetime.now() - self.timestamp) > timedelta(minutes = 100) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/130618",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20794/"
]
} |
130,698 | I want to wrap a Perl one-liner in a batch file. For a (trivial) example, in a Unix shell, I could quote up a command like this: perl -e 'print localtime() . "\n"' But DOS chokes on that with this helpful error message: Can't find string terminator "'" anywhere before EOF at -e line 1. What's the best way to do this within a .bat file ? | For Perl stuff on Windows, I try to use the generalized quoting as much as possible so I don't get leaning toothpick syndrome. I save the quotes for the stuff that DOS needs: perl -e "print scalar localtime() . qq(\n)" If you just need a newline at the end of the print, you can let the -l switch do that for you: perl -le "print scalar localtime()" For other cool things you can do with switches, see the perlrun documentation. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21938/"
]
} |
130,734 | It's been a while since I've had to do any HTML-like code in Vim , but recently I came across this again. Say I'm writing some simple HTML : <html><head><title>This is a title</title></head></html> How do I write those closing tags for title, head and html down quickly? I feel like I'm missing some really simple way here that does not involve me going through writing them all down one by one. Of course I can use Ctrl P to autocomplete the individual tag names but what gets me on my laptop keyboard is actually getting the brackets and slash right. | Check this out.. closetag.vim Functions and mappings to close open HTML/XML tags https://www.vim.org/scripts/script.php?script_id=13 I use something similar. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130734",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10098/"
]
} |
130,740 | I have the following program: ~/test> cat test.ccint main(){ int i = 3; int j = __sync_add_and_fetch(&i, 1); return 0;} I'm compiling this program using GCC 4.2.2 on Linux running on a multi-cpu 64-bit Intel machine: ~/test> uname --allLinux doom 2.6.9-67.ELsmp #1 SMP Wed Nov 7 13:56:44 EST 2007 x86_64 x86_64 x86_64 GNU/Linux When I compile the program in 64-bit mode, it compiles and links fine: ~/test> /share/tools/gcc-4.2.2/bin/g++ test.cc~/test> When I compile it in 32-bit mode, I get the following error: ~/test> /share/tools/gcc-4.2.2/bin/g++ -m32 test.cc/tmp/ccEVHGkB.o(.text+0x27): In function `main':: undefined reference to `__sync_add_and_fetch_4'collect2: ld returned 1 exit status~/test> Although I will never actually run on a 32-bit processor, I do need a 32-bit executable so I can link with some 32-bit libraries. My 2 questions are: Why do I get a link error when I compile in 32-bit mode? Is there some way to get the program to compile and link, while still being able to link with a 32-bit library? | From the GCC page on Atomic Builtins : Not all operations are supported by all target processors. If a particular operation cannot be implemented on the target processor, a warning will be generated and a call an external function will be generated. The external function will carry the same name as the builtin, with an additional suffix `_n' where n is the size of the data type. Judging from your compiler output, which refers to __sync_add_and_fetch_4 , this is what's happening. For some reason, GCC is not generating the external function properly. This is likely why you're only getting an error in 32-bit mode - when compiling for 64-bit mode, it compiles for your processor more closely. When compiling for 32-bit, it may well be using a generic arch (i386, for example) which does not natively support those features. Try specifying a specific architecture for your chip family (Xeon, Core 2, etc.) via -mcpu and see if that works. If not, you'll have to figure out why GCC isn't including the appropriate function that it should be generating. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130740",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21435/"
]
} |
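Concretely, the -march workaround from the last paragraph looks like this; i686 is only an example, and anything from i486 up gives GCC a real atomic instruction to emit for the 4-byte __sync builtins on x86:

    # Plain -m32 targets a generic i386, which has no suitable atomic instruction,
    # so GCC falls back to the (missing) library call __sync_add_and_fetch_4:
    g++ -m32 test.cc

    # Selecting a newer 32-bit architecture lets GCC inline the operation:
    g++ -m32 -march=i686 test.cc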
130,753 | On Sql Server 2000, is there a way to find out the date and time when a stored procedure was last executed? | If a stored procedure is still in the procedure cache, you can find the last time it was executed by querying the sys.dm_exec_query_stats DMV. In this example, I also cross apply to the sys.dm_exec_query_plan DMF in order to qualify the object id: declare @proc_nm sysname-- select the procedure name hereset @proc_nm = 'usp_test'select s.last_execution_timefrom sys.dm_exec_query_stats scross apply sys.dm_exec_query_plan (s.plan_handle) pwhere object_name(p.objectid, db_id('AdventureWorks')) = @proc_nm [Source] | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3294/"
]
} |
130,763 | I want my Python script to copy files on Vista. When I run it from a normal cmd.exe window, no errors are generated, yet the files are NOT copied. If I run cmd.exe "as administator" and then run my script, it works fine. This makes sense since User Account Control (UAC) normally prevents many file system actions. Is there a way I can, from within a Python script, invoke a UAC elevation request (those dialogs that say something like "such and such app needs admin access, is this OK?") If that's not possible, is there a way my script can at least detect that it is not elevated so it can fail gracefully? | As of 2017, an easy method to achieve this is the following: import ctypes, sysdef is_admin(): try: return ctypes.windll.shell32.IsUserAnAdmin() except: return Falseif is_admin(): # Code of your program hereelse: # Re-run the program with admin rights ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, " ".join(sys.argv), None, 1) If you are using Python 2.x, then you should replace the last line for: ctypes.windll.shell32.ShellExecuteW(None, u"runas", unicode(sys.executable), unicode(" ".join(sys.argv)), None, 1) Also note that if you converted you python script into an executable file (using tools like py2exe , cx_freeze , pyinstaller ) then you should use sys.argv[1:] instead of sys.argv in the fourth parameter. Some of the advantages here are: No external libraries required. It only uses ctypes and sys from standard library. Works on both Python 2 and Python 3. There is no need to modify the file resources nor creating a manifest file. If you don't add code below if/else statement, the code won't ever be executed twice. You can get the return value of the API call in the last line and take an action if it fails (code <= 32). Check possible return values here . You can change the display method of the spawned process modifying the sixth parameter. Documentation for the underlying ShellExecute call is here . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/130763",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10559/"
]
} |
130,775 | As far as variable naming conventions go, should iterators be named i or something more semantic like count ? If you don't use i , why not? If you feel that i is acceptable, are there cases of iteration where it shouldn't be used? | Depends on the context, I suppose. If you were looping through a set of objects in some collection, then it should be fairly obvious from the context what you are doing. for(int i = 0; i < 10; i++){ // i is well known here to be the index objectCollection[i].SomeProperty = someValue;} However, if it is not immediately clear from the context what it is you are doing, or if you are making modifications to the index, you should use a variable name that is more indicative of the usage. for(int currentRow = 0; currentRow < numRows; currentRow++){ for(int currentCol = 0; currentCol < numCols; currentCol++) { someTable[currentRow][currentCol] = someValue; }} | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130775",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13281/"
]
} |
130,794 | There have been several questions already posted with specific questions about dependency injection , such as when to use it and what frameworks are there for it. However, What is dependency injection and when/why should or shouldn't it be used? | Dependency Injection is passing dependency to other objects or framework ( dependency injector). Dependency injection makes testing easier. The injection can be done through constructor . SomeClass() has its constructor as following: public SomeClass() { myObject = Factory.getObject();} Problem :If myObject involves complex tasks such as disk access or network access, it is hard to do unit test on SomeClass() . Programmers have to mock myObject and might intercept the factory call. Alternative solution : Passing myObject in as an argument to the constructor public SomeClass (MyClass myObject) { this.myObject = myObject;} myObject can be passed directly which makes testing easier. One common alternative is defining a do-nothing constructor . Dependency injection can be done through setters. (h/t @MikeVella). Martin Fowler documents a third alternative (h/t @MarcDix), where classes explicitly implement an interface for the dependencies programmers wish injected. It is harder to isolate components in unit testing without dependency injection. In 2013, when I wrote this answer, this was a major theme on the Google Testing Blog . It remains the biggest advantage to me, as programmers not always need the extra flexibility in their run-time design (for instance, for service locator or similar patterns). Programmers often need to isolate the classes during testing. | {
"score": 12,
"source": [
"https://Stackoverflow.com/questions/130794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1354/"
]
} |
130,801 | I'm using GNU autotools for the build system on a particular project. I want to start writing automated tests for verifcation. I would like to just type "make check" to have it automatically run these. My project is in C++, although I am still curious about writing automated tests for other languages as well. Is this compatible with pretty much every unit testing framework out there (I was thinking of using cppunit)? How do I hook these unit testing frameworks into make check? Can I make sure that I don't require the unit test software to be installed to be able to configure and build the rest of the project? | To make test run when you issue make check , you need to add them to the TESTS variable Assuming you've already built the executable that runs the unit tests, you just add the name of the executable to the TESTS variable like this: TESTS=my-test-executable It should then be automatically run when you make check , and if the executable returns a non-zero value, it will report that as a test failure. If you have multiple unit test executables, just list them all in the TESTS variable: TESTS=my-first-test my-second-test my-third-test and they will all get run. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5963/"
]
} |
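A slightly fuller Makefile.am fragment showing how the test binary itself is usually wired in; the file names are placeholders, and the CPPUNIT_* variables assume a matching PKG_CHECK_MODULES([CPPUNIT], [cppunit]) in configure.ac:

    # Built only when "make check" runs, and never installed
    check_PROGRAMS = unit-tests
    unit_tests_SOURCES = test_main.cpp test_widget.cpp
    unit_tests_CPPFLAGS = $(CPPUNIT_CFLAGS)
    unit_tests_LDADD = $(CPPUNIT_LIBS)

    # Every program listed here is run by "make check";
    # a non-zero exit status is reported as a failure.
    TESTS = $(check_PROGRAMS)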
130,819 | I have to continue to support VB6 applications. I've got both VB6 (Visual Studio 6) installed and Visual Studio 2008 as well. Can I read and write to VB6 projects while in Visual Studio 2008? Will it damage or destroy my VB6 application? It would be very cool if I could free up a lot of space and get rid of Visual Studio 6. | Visual Studio 2008 can't compile VB6 applications. You could use it as a text editor only (though it will offer you the VB.NET IntelliSense, not VB6). However, you need Visual Studio 6 to be able to build your application. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14728/"
]
} |
130,829 | I have 3 points in a 3D space of which I know the exact locations. Suppose they are: (x0,y0,z0) , (x1,y1,z1) and (x2,y2,z2) . Also I have a camera that is looking at these 3 points and I know the 2D locations of those three points on camera view plane. So for example (x0,y0,z0) will be (x0',y0') , and (x1,y1,z1) will be (x1',y1') and (x2,y2,z2) will be (x2',y2') from the camera's point of view. What is the easiest way to find the projection matrix that will project those 3D points into 2D points on camera view plane. We don't know anything about the camera location. | This gives you two sets, each of three equations in 3 variables: a*x0+b*y0+c*z0 = x0'a*x1+b*y1+c*z1 = x1'a*x2+b*y2+c*z2 = x2'd*x0+e*y0+f*z0 = y0'd*x1+e*y1+f*z1 = y1'd*x2+e*y2+f*z2 = y2' Just use whatever method of solving simultaneous equations is easiest in your situation (it isn't even hard to solve these "by hand"). Then your transformation matrix is just ((a,b,c)(d,e,f)). ... Actually, that is over-simplified and assumes a camera pointed at the origin of your 3D coordinate system and no perspective. For perspective, the transformation matrix works more like: ( a, b, c, d ) ( xt )( x, y, z, 1 ) ( e, f, g, h ) = ( yt ) ( i, j, k, l ) ( zt )( xv, yv ) = ( xc+s*xt/zt, yc+s*yt/zt ) if md < zt; but the 4x3 matrix is more constrained than 12 degrees of freedom since we should have a*a+b*b+c*c = e*e+f*f+g*g = i*i+j*j+k*k = 1a*a+e*e+i*i = b*b+f*f+j*j = c*c+g*g+k*k = 1 So you should probably have 4 points to get 8 equations to cover the 6 variables for camera position and angle and 1 more for scaling of the 2-D view points since we'll be able to eliminate the "center" coordinates (xc,yc). So if you have 4 points and transform your 2-D view points to be relative to the center of your display, then you can get 14 simultaneous equations in 13 variables and solve. Unfortunately, six of the equations are not linear equations. Fortunately, all of the variables in those equations are restricted to the values between -1 and 1 so it is still probably feasible to solve the equations. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
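Editor's sketch, not part of the original answer: the affine case described first (two independent 3x3 linear systems sharing the same coefficient matrix) is easy to check numerically. A hypothetical Python/NumPy version, assuming the three 3D points are not collinear so each system has a unique solution (the point coordinates below are made up):

import numpy as np

# Known 3D points (rows) and their observed 2D image coordinates.
pts3d = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
pts2d = np.array([[0.10, 0.20],
                  [0.55, 0.21],
                  [0.11, 0.70]])

# Solve a*x+b*y+c*z = x' for (a,b,c) and d*x+e*y+f*z = y' for (d,e,f).
abc = np.linalg.solve(pts3d, pts2d[:, 0])
def_ = np.linalg.solve(pts3d, pts2d[:, 1])

M = np.vstack([abc, def_])    # the 2x3 matrix ((a,b,c),(d,e,f))
print(M)
print(pts3d @ M.T)            # should reproduce pts2d

For the perspective model sketched afterwards, the usual route with 4 or more point correspondences is a homogeneous least-squares formulation (direct linear transform) solved with an SVD rather than by hand.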
130,878 | What is the benefit of using singleton instead of global for database connections in PHP? I feel using singleton instead of global makes the code unnecessarily complex. Code with Global $conn = new PDO(...);function getSomething(){ global $conn; . . .} Code with Singleton class DB_Instance{ private static $db; public static function getDBO() { if (!self::$db) self::$db = new PDO(...); return self::$db; }}function getSomething(){ $conn = DB_Instance::getDBO(); . . .} If there's a better way of initializing a database connection other than global or singleton, please mention it and describe the advantages it has over global or singleton. | I know this is old, but Dr8k's answer was almost there. When you are considering writing a piece of code, assume it's going to change. That doesn't mean that you're assuming the kinds of changes it will have hoisted upon it at some point in the future, but rather that some form of change will be made. Make it a goal to mitigate the pain of making changes in the future: a global is dangerous because it's hard to manage in a single spot. What if I want to make that database connection context aware in the future? What if I want it to close and reopen itself every 5th time it was used? What if I decide that in the interest of scaling my app I want to use a pool of 10 connections? Or a configurable number of connections? A singleton factory gives you that flexibility. I set it up with very little extra complexity and gain more than just access to the same connection; I gain the ability to change how that connection is passed to me later on in a simple manner. Note that I say singleton factory as opposed to simply singleton . There's precious little difference between a singleton and a global, true. And because of that, there's no reason to have a singleton connection: why would you spend the time setting that up when you could create a regular global instead? What a factory gets you is a way to get connections, and a separate spot to decide what connections (or connection) you're going to get. Example class ConnectionFactory{ private static $factory; private $db; public static function getFactory() { if (!self::$factory) self::$factory = new ConnectionFactory(...); return self::$factory; } public function getConnection() { if (!$this->db) $this->db = new PDO(...); return $this->db; }}function getSomething(){ $conn = ConnectionFactory::getFactory()->getConnection(); . . .} Then, in 6 months when your app is super famous and getting dugg and slashdotted and you decide you need more than a single connection, all you have to do is implement some pooling in the getConnection() method. Or if you decide that you want a wrapper that implements SQL logging, you can pass a PDO subclass. Or if you decide you want a new connection on every invocation, you can do that. It's flexible, instead of rigid. 16 lines of code, including braces, which will save you hours and hours and hours of refactoring to something eerily similar down the line. Note that I don't consider this "Feature Creep" because I'm not doing any feature implementation in the first go round. It's borderline "Future Creep", but at some point, the idea that "coding for tomorrow today" is always a bad thing doesn't jive for me. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/130878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1897/"
]
} |
130,894 | I have the source of a program (taken from cvs/svn/git/...) and I'd like to build a Debian/Ubuntu package for it. The package is present in the repositories, but: It is an older version (lacking features I need) I need slightly different compile options than the default. What is the easiest way of doing it? I am concerned about a couple of things: How can I check if I have listed all the dependencies correctly? (I can get some hints by looking at what the older version depended on, but new dependencies may have been added.) How can I prevent the update system installing the older version in the repo on an update? How can I prevent the system installing a newer version (when it's out), overwriting my custom package? | You can use the special package "checkinstall" for all packages which are not even in debian/ubuntu yet. You can use "uupdate" ( apt-get install devscripts ) to build a package from source with existing debian sources: Example for libdrm2: apt-get build-dep libdrm2; apt-get source libdrm2; cd libdrm-2.3.1; uupdate ~/Downloads/libdrm-2.4.1.tar.gz; cd ../libdrm-2.4.1; dpkg-buildpackage -us -uc -nc | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19922/"
]
} |
130,898 | How can I perform list comprehensions in C#? | Found this when I was looking up how to do list comprehensions in C#... When someone says list comprehensions I immediately think about Python. The below code generates a list that looks like this: [0,2,4,6,8,10,12,14,16,18] The Python way is like this: list = [2*number for number in range(0,10)] In C#: var list2 = from number in Enumerable.Range(0, 10) select 2*number; Both methods are lazily evaluated. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130898",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21944/"
]
} |
130,913 | Is it at the state where it is actually useful and can do more than rename classes? | CDT (C/C++ Development Tools - eclipse project) 5.0 has a bunch of new refactorings * Declare Method* Extract Baseclass* Extract Constant* Extract Method* Extract Subclass* Hide Method* Implement Method* Move Field / Method* Replace Number* Separate Class* Generate Getters and Setters There is a CDT refactoring wiki | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/130913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13760/"
]
} |
130,933 | When coding, what in your experience is a better approach? Break the problem down into small enough pieces and then implement each piece. Break the problem down, but then implement using a top-down approach. Any other? | Here's what I do: Understand the domain first. Understand the problem to be solved. Make sure you and the customer (even if that customer is you!) are on the same page as to what problem is to be solved. Then a high level solution is proposed to the problem and from that, the design will turn into bubbles or bullets on a page or whatever, but the point is that it will shake out into components that can be designed. At that point, I write tests for the yet-to-be written classes and then flesh out the classes to pass those tests. I use a test-first approach and build working, tested components. That is what works for me. When the component interfaces are known and the 'rules' are known for how they talk to each other and provide services to each other, then it becomes generally a straightforward 'hook everything together' exercise. That's how I do it, and it has worked well for me. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/130933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13760/"
]
} |
130,948 | I need an easy way to take a tar file and convert it into a string (and vice versa). Is there a way to do this in Ruby? My best attempt was this: file = File.open("path-to-file.tar.gz")contents = ""file.each {|line| contents << line} I thought that would be enough to convert it to a string, but then when I try to write it back out like this... newFile = File.open("test.tar.gz", "w")newFile.write(contents) It isn't the same file. Doing ls -l shows the files are of different sizes, although they are pretty close (and opening the file reveals most of the contents intact). Is there a small mistake I'm making or an entirely different (but workable) way to accomplish this? | First, you should open the file as a binary file. Then you can read the entire file in, in one command. file = File.open("path-to-file.tar.gz", "rb")contents = file.read That will get you the entire file in a string. After that, you probably want to file.close . If you don’t do that, file won’t be closed until it is garbage-collected, so it would be a slight waste of system resources while it is open. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/130948",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/422/"
]
} |
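Editor's note, not from the original post: the same pitfall exists in most languages; text mode can apply newline translation or character-set conversion on some platforms, which corrupts binary data such as a .tar.gz. The equivalent binary-safe round-trip in Python, shown here only for comparison:

# Read an arbitrary file as raw bytes and write an identical copy.
# 'rb'/'wb' bypass newline translation and text decoding entirely.
with open("path-to-file.tar.gz", "rb") as src:
    contents = src.read()        # bytes, not str

with open("test.tar.gz", "wb") as dst:
    dst.write(contents)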
131,020 | I use both ruby on rails and Java. I really enjoy using migrations when I am working on a rails project. so I am wondering is there a migrations like tool for Java? If there is no such tool is it a good idea to use migrations as a tool to control a database used by a Java project? | For a feature comparison between Flyway Liquibase c5-db-migration dbdeploy mybatis MIGRATEdb migrate4j dbmaintain AutoPatch have a look at http://flywaydb.org This should be a good start for you and anyone else to select the right tool for the job | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/131020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5004/"
]
} |
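Editor's sketch, not part of the original answer: whichever tool you pick, the core mechanism is the same: an ordered set of migration scripts plus a table recording which ones have already been applied. A toy, hypothetical illustration of that bookkeeping in Python with SQLite (real tools add checksums, per-migration transactions, rollback scripts, multi-database support, and so on; all names and paths here are invented):

import sqlite3
from pathlib import Path

MIGRATIONS_DIR = Path("migrations")   # e.g. 001_create_users.sql, 002_add_email.sql

def migrate(db_path="app.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS schema_migrations (
                        version TEXT PRIMARY KEY,
                        applied_at TEXT DEFAULT CURRENT_TIMESTAMP)""")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}

    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if script.name in applied:
            continue                                  # already run on this database
        conn.executescript(script.read_text())        # apply the migration
        conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (script.name,))
        conn.commit()
        print("applied", script.name)
    conn.close()

if __name__ == "__main__":
    migrate()

The version-tracking table is essentially what Rails migrations maintain for you, and what the Java tools listed above provide in a production-ready form.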
131,049 | I installed mediawiki on my server as my personal knowledge base. Sometimes I copy some stuff from the Web and paste it into my wiki - such as tips & tricks from somebody's blog. How do I make the copied content appear in a box with a border? For example, the box at the end of this blog post looks pretty nice: http://blog.dreamhost.com/2008/03/21/good-reminiscing-friday/ I could use the pre tag, but paragraphs in a pre tag won't wrap automatically. Any ideas? | <blockquote style="background-color: lightgrey; border: solid thin grey;">Det er jeg som kjenner hemmeligheten din. Ikke et pip, gutten min.</blockquote> The blockquotes are better than divs because they "explain" that the text is actually a blockquote, and not "just-some-text". Also a blockquote will most likely be properly indented, and actually look like a blockquote. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/131049",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14068/"
]
} |
131,050 | Since AS3 does not allow private constructors, it seems the only way to construct a singleton and guarantee the constructor isn't explicitly called via "new" is to pass a single parameter and check it. I've heard two recommendations: one is to check the caller and ensure it's the static getInstance(), and the other is to have a private/internal class in the same package namespace. The private object passed to the constructor seems preferable but it does not look like you can have a private class in the same package. Is this true? And more importantly, is it the best way to implement a singleton? | A slight adaptation of enobrev's answer is to have instance as a getter. Some would say this is more elegant. Also, enobrev's answer won't enforce a Singleton if you call the constructor before calling getInstance. This may not be perfect, but I have tested this and it works. (There is definitely another good way to do this in the book "Advanced ActionScript 3 with Design Patterns" too). package { public class Singleton { private static var _instance:Singleton; public function Singleton(enforcer:SingletonEnforcer) { if( !enforcer) { throw new Error( "Singleton and can only be accessed through Singleton.getInstance()" ); } } public static function get instance():Singleton { if(!Singleton._instance) { Singleton._instance = new Singleton(new SingletonEnforcer()); } return Singleton._instance; }}}class SingletonEnforcer{} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/131050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14747/"
]
} |
131,062 | I've read numerous posts about people having problems with viewWillAppear when you do not create your view hierarchy just right. My problem is I can't figure out what that means. If I create a RootViewController and call addSubView on that controller, I would expect the added view(s) to be wired up for viewWillAppear events. Does anyone have an example of a complex programmatic view hierarchy that successfully receives viewWillAppear events at every level? Apple's Docs state: Warning: If the view belonging to a view controller is added to a view hierarchy directly, the view controller will not receive this message. If you insert or add a view to the view hierarchy, and it has a view controller, you should send the associated view controller this message directly. Failing to send the view controller this message will prevent any associated animation from being displayed. The problem is that they don't describe how to do this. What does "directly" mean? How do you "indirectly" add a view? I am fairly new to Cocoa and iPhone so it would be nice if there were useful examples from Apple besides the basic Hello World crap. | If you use a navigation controller and set its delegate, then the view{Will,Did}{Appear,Disappear} methods are not invoked. You need to use the navigation controller delegate methods instead: navigationController:willShowViewController:animated:navigationController:didShowViewController:animated: | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/131062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21964/"
]
} |
131,085 | I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5GB of data with mysqldump and MySQL 5.1. What are the best parameters (ie: --single-transaction) that will result in the quickest dump and load of the data? As well, when loading the data into the second DB, is it quicker to: 1) pipe the results directly to the second MySQL server instance and use the --compress option or 2) load it from a text file (ie: mysql < my_sql_dump.sql) | QUICKLY dumping a quiesced database: Using the "-T " option with mysqldump results in lots of .sql and .txt files in the specified directory. This is ~50% faster for dumping large tables than a single .sql file with INSERT statements (takes 1/3 less wall-clock time). Additionally, there is a huge benefit when restoring if you can load multiple tables in parallel, and saturate multiple cores. On an 8-core box, this could be as much as an 8X difference in wall-clock time to restore the dump, on top of the efficiency improvements provided by "-T". Because "-T" causes each table to be stored in a separate file, loading them in parallel is easier than splitting apart a massive .sql file. Taking the strategies above to their logical extreme, one could create a script to dump a database widely in parallel. Well, that's exactly what the Maatkit mk-parallel-dump (see http://www.maatkit.org/doc/mk-parallel-dump.html ) and mk-parallel-restore tools are; perl scripts that make multiple calls to the underlying mysqldump program. However, when I tried to use these, I had trouble getting the restore to complete without duplicate key errors that didn't occur with vanilla dumps, so keep in mind that your mileage may vary. Dumping data from a LIVE database (w/o service interruption): The --single-transaction switch is very useful for taking a dump of a live database without having to quiesce it or taking a dump of a slave database without having to stop slaving. Sadly, -T is not compatible with --single-transaction, so you only get one. Usually, taking the dump is much faster than restoring it. There is still room for a tool that takes the incoming monolithic dump file and breaks it into multiple pieces to be loaded in parallel. To my knowledge, such a tool does not yet exist. Transferring the dump over the Network is usually a win To listen for an incoming dump on one host run: nc -l 7878 > mysql-dump.sql Then on your DB host, run mysqldump $OPTS | nc myhost.mydomain.com 7878 This reduces contention for the disk spindles on the master from writing the dump to disk, slightly speeding up your dump (assuming the network is fast enough to keep up, a fairly safe assumption for two hosts in the same datacenter). Plus, if you are building out a new slave, this saves the step of having to transfer the dump file after it is finished. Caveats - obviously, you need to have enough network bandwidth not to slow things down unbearably, and if the TCP session breaks, you have to start all over, but for most dumps this is not a major concern. Lastly, I want to clear up one point of common confusion. Despite how often you see these flags in mysqldump examples and tutorials, they are superfluous because they are turned ON by default: --opt --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset . 
From http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html : Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. All of the options that --opt stands for also are on by default because --opt is on by default. Of those behaviors, "--quick" is one of the most important (skips caching the entire result set in mysqld before transmitting the first row), and can be used with "mysql" (which does NOT turn --quick on by default) to dramatically speed up queries that return a large result set (eg dumping all the rows of a big table). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/131085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16447/"
]
} |
131,121 | If I have a Range object--for example, let's say it refers to cell A1 on a worksheet called Book1 . So I know that calling Address() will get me a simple local reference: $A$1 . I know it can also be called as Address(External:=True) to get a reference including the workbook name and worksheet name: [Book1]Sheet1!$A$1 . What I want is to get an address including the sheet name, but not the book name. I really don't want to call Address(External:=True) and try to strip out the workbook name myself with string functions. Is there any call I can make on the range to get Sheet1!$A$1 ? | Only way I can think of is to concatenate the worksheet name with the cell reference, as follows: Dim cell As RangeDim cellAddress As StringSet cell = ThisWorkbook.Worksheets(1).Cells(1, 1)cellAddress = cell.Parent.Name & "!" & cell.Address(External:=False) EDIT: Modify last line to : cellAddress = "'" & cell.Parent.Name & "'!" & cell.Address(External:=False) if you want it to work even if there are spaces or other funny characters in the sheet name. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/131121",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6209/"
]
} |
131,164 | I have a number of code value tables that contain a code and a description with a Long id. I now want to create an entry for an Account Type that references a number of codes, so I have something like this: insert into account_type_standard (account_type_Standard_id, tax_status_id, recipient_id) ( select account_type_standard_seq.nextval, ts.tax_status_id, r.recipient_id from tax_status ts, recipient r where ts.tax_status_code = ? and r.recipient_code = ? ) This retrieves the appropriate values from the tax_status and recipient tables if a match is found for their respective codes. Unfortunately, recipient_code is nullable, and therefore the ? substitution value could be null. Of course, the implicit join doesn't return a row, so a row doesn't get inserted into my table. I've tried using NVL on the ? and on the r.recipient_id. I've tried to force an outer join on the r.recipient_code = ? by adding (+), but it's not an explicit join, so Oracle still didn't add another row. Anyone know of a way of doing this? I can obviously modify the statement so that I do the lookup of the recipient_id externally, and have a ? instead of r.recipient_id, and don't select from the recipient table at all, but I'd prefer to do all this in 1 SQL statement. | Outer joins don't work "as expected" in that case because you have explicitly told Oracle you only want data if that criteria on that table matches. In that scenario, the outer join is rendered useless. A work-around: INSERT INTO account_type_standard (account_type_Standard_id, tax_status_id, recipient_id) VALUES( (SELECT account_type_standard_seq.nextval FROM DUAL), (SELECT tax_status_id FROM tax_status WHERE tax_status_code = ?), (SELECT recipient_id FROM recipient WHERE recipient_code = ?)) [Edit] If you expect multiple rows from a sub-select, you can add ROWNUM=1 to each where clause OR use an aggregate such as MAX or MIN. This of course may not be the best solution for all cases. [Edit] Per comment, (SELECT account_type_standard_seq.nextval FROM DUAL), can be just account_type_standard_seq.nextval, | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/131164",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5382/"
]
} |
131,181 | What is a .snk file for? I know it stands for Strongly Named Key , but all explanations of what it is and how it works go over my head. Is there any simple explanation on how a strongly named key is used and how it works? | The .snk file is used to apply a strong name to a .NET assembly . Such a strong name consists of a simple text name, version number, and culture information (if provided), plus a public key and a digital signature. The SNK contains a unique key pair - a private and public key that can be used to ensure that you have a unique strong name for the assembly. When the assembly is strongly-named, a "hash" is constructed from the contents of the assembly, and the hash is encrypted with the private key. Then this signed hash is placed in the assembly along with the public key from the .snk. Later on, when someone needs to verify the integrity of the strongly-named assembly, they build a hash of the assembly's contents, and use the public key from the assembly to decrypt the hash that came with the assembly - if the two hashes match, the assembly verification passes. It's important to be able to verify assemblies in this way to ensure that nobody swaps out an assembly for a malicious one that will subvert the whole application. This is why non-strong-named assemblies aren't trusted in the same way that strongly-named assemblies are, so they can't be placed in the GAC. Also, there's a chain of trust - you can't generate a strongly-named assembly that references non-strongly-named assemblies. The article "The Secrets of Strong Naming" (archived at the Wayback Machine) does an excellent job of explaining these concepts in more detail. With pictures. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/131181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
131,241 | Take the following two lines of code: for (int i = 0; i < some_vector.size(); i++){ //do stuff} And this: for (some_iterator = some_vector.begin(); some_iterator != some_vector.end(); some_iterator++){ //do stuff} I'm told that the second way is preferred. Why exactly is this? | The first form is efficient only if vector.size() is a fast operation. This is true for vectors, but not for lists, for example. Also, what are you planning to do within the body of the loop? If you plan on accessing the elements as in T elem = some_vector[i]; then you're making the assumption that the container has operator[](std::size_t) defined. Again, this is true for vector but not for other containers. The use of iterators brings you closer to container independence . You're not making assumptions about random-access ability or a fast size() operation, only that the container has iterator capabilities. You could enhance your code further by using standard algorithms. Depending on what it is you're trying to achieve, you may elect to use std::for_each() , std::transform() and so on. By using a standard algorithm rather than an explicit loop you're avoiding re-inventing the wheel. Your code is likely to be more efficient (given the right algorithm is chosen), correct and reusable. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/131241",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
]
} |
131,263 | As much as I generally don't like the discussion/subjective posts on SO, I have really come to appreciate the "Hidden Secrets" set of posts that people have put together. They provide a great overview of some commonly missed tools that you might not otherwise discover. For this question I would like to explore the Visual Studio .NET debugger. What are some of the "hidden secrets" in the VS.NET debugger that you use often or recently discovered and wish you would have known long ago? | One of my favorite features is the "When Hit..." option available on a breakpoint. You can print a message with the value of a variable along with lots of other information, such as: $ADDRESS - Current Instruction $CALLER - Previous Function Name $CALLSTACK - Call Stack $FUNCTION - Current Function Name $PID - Process ID $PNAME - Process Name $TID - Thread ID $TNAME - Thread Name You can also have it run a macro, but I've never used that feature. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/131263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3957/"
]
} |
131,303 | How do you measure the memory usage of an application or process in Linux? From the blog article of Understanding memory usage on Linux , ps is not an accurate tool to use for this intent. Why ps is "wrong" Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running . Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely wrong . (Note: This question is covered here in great detail.) | With ps or similar tools you will only get the amount of memory pages allocated by that process. This number is correct, but: does not reflect the actual amount of memory used by the application, only the amount of memory reserved for it can be misleading if pages are shared, for example by several threads or by using dynamically linked libraries If you really want to know what amount of memory your application actually uses, you need to run it within a profiler. For example, Valgrind can give you insights about the amount of memory used, and, more importantly, about possible memory leaks in your program. The heap profiler tool of Valgrind is called 'massif': Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal. As explained in the Valgrind documentation , you need to run the program through Valgrind: valgrind --tool=massif <executable> <arguments> Massif writes a dump of memory usage snapshots (e.g. massif.out.12345 ). These provide, (1) a timeline of memory usage, (2) for each snapshot, a record of where in your program memory was allocated. A great graphical tool for analyzing these files is massif-visualizer . But I found ms_print , a simple text-based tool shipped with Valgrind, to be of great help already. To find memory leaks, use the (default) memcheck tool of valgrind. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/131303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16139/"
]
} |
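Editor's sketch, not part of the original answer: one way to see the gap described above is to compare a process's Rss with its Pss (proportional set size, which splits each shared page among the processes mapping it). A hypothetical, Linux-only sketch that sums both from /proc; it assumes a kernel whose smaps output includes Pss lines and that you have permission to read the target process:

import re
import sys

def smaps_totals(pid):
    rss = pss = 0
    # Every mapping in /proc/<pid>/smaps reports "Rss:" and "Pss:" lines in kB.
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            m = re.match(r"(Rss|Pss):\s+(\d+) kB", line)
            if m:
                if m.group(1) == "Rss":
                    rss += int(m.group(2))
                else:
                    pss += int(m.group(2))
    return rss, pss

if __name__ == "__main__":
    pid = sys.argv[1] if len(sys.argv) > 1 else "self"
    rss, pss = smaps_totals(pid)
    print(f"Rss: {rss} kB (roughly what ps-style tools report)")
    print(f"Pss: {pss} kB (shared pages divided among their users)")

A large Rss-to-Pss gap usually means most of the apparent usage is shared libraries or other shared mappings, which is exactly the double-counting the linked article warns about.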