Columns: source_id (int64, 1 to 74.7M), question (string, 0 to 40.2k chars), response (string, 0 to 111k chars), metadata (dict).
84,556
Personally I like this one: P.S. Do not hotlink the cartoon without the site's permission please.
Another one from xkcd
{ "score": 12, "source": [ "https://Stackoverflow.com/questions/84556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4230/" ] }
84,644
To make it short: does Hibernate not support projections and query by example together? I found this post: The code is this:

User usr = new User();
usr.setCity("TEST");
getCurrentSession().createCriteria(User.class)
    .setProjection(Projections.distinct(Projections.projectionList()
        .add(Projections.property("name"), "name")
        .add(Projections.property("city"), "city")))
    .add(Example.create(usr));

Like the other poster said, the generated SQL keeps having a where clause referring to just y0_ = ? instead of this_.city. I already tried several approaches and searched the issue tracker, but found nothing about this. I even tried to use a projection alias and Transformers, but it does not work:

User usr = new User();
usr.setCity("TEST");
getCurrentSession().createCriteria(User.class)
    .setProjection(Projections.distinct(Projections.projectionList()
        .add(Projections.property("name"), "name")
        .add(Projections.property("city"), "city")))
    .add(Example.create(usr))
    .setResultTransformer(Transformers.aliasToBean(User.class));

Has anyone used projections and query by example together?
The problem seems to happen when you have an alias with the same name as the object's property. Hibernate seems to pick up the alias and use it in the SQL. I found this documented here and here, and I believe it to be a bug in Hibernate, although I am not sure that the Hibernate team agrees. Either way, I have found a simple workaround that works in my case. Your mileage may vary. The details are below; I tried to simplify the code for this sample, so I apologize for any errors or typos:

Criteria criteria = session.createCriteria(MyClass.class)
    .setProjection(Projections.projectionList()
        .add(Projections.property("sectionHeader"), "sectionHeader")
        .add(Projections.property("subSectionHeader"), "subSectionHeader")
        .add(Projections.property("sectionNumber"), "sectionNumber"))
    .add(Restrictions.ilike("sectionHeader", sectionHeaderVar)) // <- Problem!
    .setResultTransformer(Transformers.aliasToBean(MyDTO.class));

Would produce this SQL:

select this_.SECTION_HEADER as y1_, this_.SUB_SECTION_HEADER as y2_, this_.SECTION_NUMBER as y3_ from MY_TABLE this_ where ( lower(y1_) like ? )

Which was causing an error: java.sql.SQLException: ORA-00904: "Y1_": invalid identifier. But when I changed my restriction to use "this", like so:

Criteria criteria = session.createCriteria(MyClass.class)
    .setProjection(Projections.projectionList()
        .add(Projections.property("sectionHeader"), "sectionHeader")
        .add(Projections.property("subSectionHeader"), "subSectionHeader")
        .add(Projections.property("sectionNumber"), "sectionNumber"))
    .add(Restrictions.ilike("this.sectionHeader", sectionHeaderVar)) // <- Problem Solved!
    .setResultTransformer(Transformers.aliasToBean(MyDTO.class));

It produced the following SQL and my problem was solved:

select this_.SECTION_HEADER as y1_, this_.SUB_SECTION_HEADER as y2_, this_.SECTION_NUMBER as y3_ from MY_TABLE this_ where ( lower(this_.SECTION_HEADER) like ? )

That's it! A pretty simple fix to a painful problem. I don't know how this fix would translate to the query by example problem, but it may get you closer.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/84644", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
84,680
I'm writing a Spring web application that requires users to login. My company has an Active Directory server that I'd like to make use of for this purpose. However, I'm having trouble using Spring Security to connect to the server. I'm using Spring 2.5.5 and Spring Security 2.0.3, along with Java 1.6. If I change the LDAP URL to the wrong IP address, it doesn't throw an exception or anything, so I'm wondering if it's even trying to connect to the server to begin with. Although the web application starts up just fine, any information I enter into the login page is rejected. I had previously used an InMemoryDaoImpl, which worked fine, so the rest of my application seems to be configured correctly. Here are my security-related beans: <beans:bean id="ldapAuthProvider" class="org.springframework.security.providers.ldap.LdapAuthenticationProvider"> <beans:constructor-arg> <beans:bean class="org.springframework.security.providers.ldap.authenticator.BindAuthenticator"> <beans:constructor-arg ref="initialDirContextFactory" /> <beans:property name="userDnPatterns"> <beans:list> <beans:value>CN={0},OU=SBSUsers,OU=Users,OU=MyBusiness,DC=Acme,DC=com</beans:value> </beans:list> </beans:property> </beans:bean> </beans:constructor-arg> </beans:bean> <beans:bean id="userDetailsService" class="org.springframework.security.userdetails.ldap.LdapUserDetailsManager"> <beans:constructor-arg ref="initialDirContextFactory" /> </beans:bean> <beans:bean id="initialDirContextFactory" class="org.springframework.security.ldap.DefaultInitialDirContextFactory"> <beans:constructor-arg value="ldap://192.168.123.456:389/DC=Acme,DC=com" /> </beans:bean>
I had the same banging-my-head-against-the-wall experience you did, and ended up writing a custom authentication provider that does an LDAP query against the Active Directory server. So my security-related beans are: <beans:bean id="contextSource" class="org.springframework.security.ldap.DefaultSpringSecurityContextSource"> <beans:constructor-arg value="ldap://hostname.queso.com:389/" /></beans:bean><beans:bean id="ldapAuthenticationProvider" class="org.queso.ad.service.authentication.LdapAuthenticationProvider"> <beans:property name="authenticator" ref="ldapAuthenticator" /> <custom-authentication-provider /></beans:bean><beans:bean id="ldapAuthenticator" class="org.queso.ad.service.authentication.LdapAuthenticatorImpl"> <beans:property name="contextFactory" ref="contextSource" /> <beans:property name="principalPrefix" value="QUESO\" /></beans:bean> Then the LdapAuthenticationProvider class: /** * Custom Spring Security authentication provider which tries to bind to an LDAP server with * the passed-in credentials; of note, when used with the custom {@link LdapAuthenticatorImpl}, * does <strong>not</strong> require an LDAP username and password for initial binding. * * @author Jason */public class LdapAuthenticationProvider implements AuthenticationProvider { private LdapAuthenticator authenticator; public Authentication authenticate(Authentication auth) throws AuthenticationException { // Authenticate, using the passed-in credentials. DirContextOperations authAdapter = authenticator.authenticate(auth); // Creating an LdapAuthenticationToken (rather than using the existing Authentication // object) allows us to add the already-created LDAP context for our app to use later. LdapAuthenticationToken ldapAuth = new LdapAuthenticationToken(auth, "ROLE_USER"); InitialLdapContext ldapContext = (InitialLdapContext) authAdapter .getObjectAttribute("ldapContext"); if (ldapContext != null) { ldapAuth.setContext(ldapContext); } return ldapAuth; } public boolean supports(Class clazz) { return (UsernamePasswordAuthenticationToken.class.isAssignableFrom(clazz)); } public LdapAuthenticator getAuthenticator() { return authenticator; } public void setAuthenticator(LdapAuthenticator authenticator) { this.authenticator = authenticator; }} Then the LdapAuthenticatorImpl class: /** * Custom Spring Security LDAP authenticator which tries to bind to an LDAP server using the * passed-in credentials; does <strong>not</strong> require "master" credentials for an * initial bind prior to searching for the passed-in username. * * @author Jason */public class LdapAuthenticatorImpl implements LdapAuthenticator { private DefaultSpringSecurityContextSource contextFactory; private String principalPrefix = ""; public DirContextOperations authenticate(Authentication authentication) { // Grab the username and password out of the authentication object. String principal = principalPrefix + authentication.getName(); String password = ""; if (authentication.getCredentials() != null) { password = authentication.getCredentials().toString(); } // If we have a valid username and password, try to authenticate. if (!("".equals(principal.trim())) && !("".equals(password.trim()))) { InitialLdapContext ldapContext = (InitialLdapContext) contextFactory .getReadWriteContext(principal, password); // We need to pass the context back out, so that the auth provider can add it to the // Authentication object. 
DirContextOperations authAdapter = new DirContextAdapter(); authAdapter.addAttributeValue("ldapContext", ldapContext); return authAdapter; } else { throw new BadCredentialsException("Blank username and/or password!"); } } /** * Since the InitialLdapContext that's stored as a property of an LdapAuthenticationToken is * transient (because it isn't Serializable), we need some way to recreate the * InitialLdapContext if it's null (e.g., if the LdapAuthenticationToken has been serialized * and deserialized). This is that mechanism. * * @param authenticator * the LdapAuthenticator instance from your application's context * @param auth * the LdapAuthenticationToken in which to recreate the InitialLdapContext * @return */ static public InitialLdapContext recreateLdapContext(LdapAuthenticator authenticator, LdapAuthenticationToken auth) { DirContextOperations authAdapter = authenticator.authenticate(auth); InitialLdapContext context = (InitialLdapContext) authAdapter .getObjectAttribute("ldapContext"); auth.setContext(context); return context; } public DefaultSpringSecurityContextSource getContextFactory() { return contextFactory; } /** * Set the context factory to use for generating a new LDAP context. * * @param contextFactory */ public void setContextFactory(DefaultSpringSecurityContextSource contextFactory) { this.contextFactory = contextFactory; } public String getPrincipalPrefix() { return principalPrefix; } /** * Set the string to be prepended to all principal names prior to attempting authentication * against the LDAP server. (For example, if the Active Directory wants the domain-name-plus * backslash prepended, use this.) * * @param principalPrefix */ public void setPrincipalPrefix(String principalPrefix) { if (principalPrefix != null) { this.principalPrefix = principalPrefix; } else { this.principalPrefix = ""; } }} And finally, the LdapAuthenticationToken class: /** * <p> * Authentication token to use when an app needs further access to the LDAP context used to * authenticate the user. * </p> * * <p> * When this is the Authentication object stored in the Spring Security context, an application * can retrieve the current LDAP context thusly: * </p> * * <pre> * LdapAuthenticationToken ldapAuth = (LdapAuthenticationToken) SecurityContextHolder * .getContext().getAuthentication(); * InitialLdapContext ldapContext = ldapAuth.getContext(); * </pre> * * @author Jason * */public class LdapAuthenticationToken extends AbstractAuthenticationToken { private static final long serialVersionUID = -5040340622950665401L; private Authentication auth; transient private InitialLdapContext context; private List<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>(); /** * Construct a new LdapAuthenticationToken, using an existing Authentication object and * granting all users a default authority. * * @param auth * @param defaultAuthority */ public LdapAuthenticationToken(Authentication auth, GrantedAuthority defaultAuthority) { this.auth = auth; if (auth.getAuthorities() != null) { this.authorities.addAll(Arrays.asList(auth.getAuthorities())); } if (defaultAuthority != null) { this.authorities.add(defaultAuthority); } super.setAuthenticated(true); } /** * Construct a new LdapAuthenticationToken, using an existing Authentication object and * granting all users a default authority. 
* * @param auth * @param defaultAuthority */ public LdapAuthenticationToken(Authentication auth, String defaultAuthority) { this(auth, new GrantedAuthorityImpl(defaultAuthority)); } public GrantedAuthority[] getAuthorities() { GrantedAuthority[] authoritiesArray = this.authorities.toArray(new GrantedAuthority[0]); return authoritiesArray; } public void addAuthority(GrantedAuthority authority) { this.authorities.add(authority); } public Object getCredentials() { return auth.getCredentials(); } public Object getPrincipal() { return auth.getPrincipal(); } /** * Retrieve the LDAP context attached to this user's authentication object. * * @return the LDAP context */ public InitialLdapContext getContext() { return context; } /** * Attach an LDAP context to this user's authentication object. * * @param context * the LDAP context */ public void setContext(InitialLdapContext context) { this.context = context; }} You'll notice that there are a few bits in there that you might not need. For example, my app needed to retain the successfully-logged-in LDAP context for further use by the user once logged in -- the app's purpose is to let users log in via their AD credentials and then perform further AD-related functions. So because of that, I have a custom authentication token, LdapAuthenticationToken, that I pass around (rather than Spring's default Authentication token) which allows me to attach the LDAP context. In LdapAuthenticationProvider.authenticate(), I create that token and pass it back out; in LdapAuthenticatorImpl.authenticate(), I attach the logged-in context to the return object so that it can be added to the user's Spring authentication object. Also, in LdapAuthenticationProvider.authenticate(), I assign all logged-in users the ROLE_USER role -- that's what lets me then test for that role in my intercept-url elements. You'll want to make this match whatever role you want to test for, or even assign roles based on Active Directory groups or whatever. Finally, and a corollary to that, the way I implemented LdapAuthenticationProvider.authenticate() gives all users with valid AD accounts the same ROLE_USER role. Obviously, in that method, you can perform further tests on the user (i.e., is the user in a specific AD group?) and assign roles that way, or even test for some condition before even granting the user access at all .
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/84680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13379/" ] }
84,716
I use the jQuery extend function to extend a class prototype. For example:

MyWidget = function(name_var) {
    this.init(name_var);
}

$.extend(MyWidget.prototype, {
    // object variables
    widget_name: '',

    init: function(widget_name) {
        // do initialization here
        this.widget_name = widget_name;
    },

    doSomething: function() {
        // an example object method
        alert('my name is ' + this.widget_name);
    }
});

// example of using the class built above
var widget1 = new MyWidget('widget one');
widget1.doSomething();

Is there a better way to do this? Is there a cleaner way to create the class above with only one statement instead of two?
I quite like John Resig's Simple JavaScript Inheritance . var MyWidget = Class.extend({ init: function(widget_name){ this.widget_name = widget_name; }, doSomething: function() { alert('my name is ' + this.widget_name); }}); NB: The "Class" object demonstrated above isn't included in jQuery itself - it's a 25 line snippet from Mr. jQuery himself, provided in the article above.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/84716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13850/" ] }
84,717
What are the best conventions for naming test assemblies in .NET (or any other language or platform)? What I'm mainly split between are these options (please provide others!):

Company.Website (the project) with Company.Website.Tests, or
Company.Website with Company.WebsiteTests

The problem with the first solution is that it looks like .Tests is a sub-namespace of the site, while the two really are more parallel in my mind. What happens when a new sub-namespace comes into play, like Company.Website.Controls? Where should I put the tests for that namespace, for instance? Maybe it should even be Tests.Company.Website and Tests.Company.Website.Controls, and so on.
I would go with:

* Company.Website (the project)
* Company.Website.Tests

The short answer is simple: the tests and the project are linked in code, therefore they should share a namespace. If you want to split code and tests within a solution, you still have that option. For example, you can set up a solution with:

- Code folder: Company.Website
- Tests folder: Company.Website.Tests
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/84717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2429/" ] }
84,799
Some Eclipse plugins are mandated by your environment. The appropriate source code management plugin, for example - and I'm not interested in those. Some provide useful enhancements, but in a specific niche. I'm not interested in those. Some are great, but cost money. I'm not interested in those. Some were really useful on older versions of Eclipse, but are now part of the core build of the latest Eclipse version (3.4 as I write this). I'm not interested in those. I want advice on which plugins every Java SE developer should be installing, one per answer please.
Findbugs saved me doing something silly twice today. http://findbugs.sourceforge.net/ Eclipse update site is: http://findbugs.cs.umd.edu/eclipse/
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/84799", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7938/" ] }
84,820
Are there any algorithms that can help with hierarchical clustering? Google's MapReduce has only an example of k-clustering. In the case of hierarchical clustering, I'm not sure how it's possible to divide the work between nodes. Another resource that I found is http://issues.apache.org/jira/browse/MAHOUT-19, but it's not apparent which algorithms are used.
First, you have to decide if you're going to build your hierarchy bottom-up or top-down. Bottom-up is called Hierarchical agglomerative clustering. Here's a simple, well-documented algorithm: http://nlp.stanford.edu/IR-book/html/htmledition/hierarchical-agglomerative-clustering-1.html . Distributing a bottom-up algorithm is tricky because each distributed process needs the entire dataset to make choices about appropriate clusters. It also needs a list of clusters at its current level so it doesn't add a data point to more than one cluster at the same level. Top-down hierarchy construction is called Divisive clustering . K-means is one option to decide how to split your hierarchy's nodes. This paper looks at K-means and Principal Direction Divisive Partitioning (PDDP) for node splitting: http://scgroup.hpclab.ceid.upatras.gr/faculty/stratis/Papers/tm07book.pdf . In the end, you just need to split each parent node into relatively well-balanced child nodes. A top-down approach is easier to distribute. After your first node split, each node created can be shipped to a distributed process to be split again and so on... Each distributed process needs only to be aware of the subset of the dataset it is splitting. Only the parent process is aware of the full dataset. In addition, each split could be performed in parallel. Two examples for k-means: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.1882&rep=rep1&type=pdf http://www.ece.northwestern.edu/~wkliao/Kmeans/index.html .
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/84820", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12695/" ] }
84,847
How do I create a self-signed certificate for code signing using tools from the Windows SDK?
Updated Answer If you are using the following Windows versions or later: Windows Server 2012, Windows Server 2012 R2, or Windows 8.1 then MakeCert is now deprecated , and Microsoft recommends using the PowerShell Cmdlet New-SelfSignedCertificate . If you're using an older version such as Windows 7, you'll need to stick with MakeCert or another solution. Some people suggest the Public Key Infrastructure Powershell (PSPKI) Module . Original Answer While you can create a self-signed code-signing certificate (SPC - Software Publisher Certificate ) in one go, I prefer to do the following: Creating a self-signed certificate authority (CA) makecert -r -pe -n "CN=My CA" -ss CA -sr CurrentUser ^ -a sha256 -cy authority -sky signature -sv MyCA.pvk MyCA.cer (^ = allow batch command-line to wrap line) This creates a self-signed (-r) certificate, with an exportable private key (-pe). It's named "My CA", and should be put in the CA store for the current user. We're using the SHA-256 algorithm. The key is meant for signing (-sky). The private key should be stored in the MyCA.pvk file, and the certificate in the MyCA.cer file. Importing the CA certificate Because there's no point in having a CA certificate if you don't trust it, you'll need to import it into the Windows certificate store. You can use the Certificates MMC snapin, but from the command line: certutil -user -addstore Root MyCA.cer Creating a code-signing certificate (SPC) makecert -pe -n "CN=My SPC" -a sha256 -cy end ^ -sky signature ^ -ic MyCA.cer -iv MyCA.pvk ^ -sv MySPC.pvk MySPC.cer It is pretty much the same as above, but we're providing an issuer key and certificate (the -ic and -iv switches). We'll also want to convert the certificate and key into a PFX file: pvk2pfx -pvk MySPC.pvk -spc MySPC.cer -pfx MySPC.pfx If you are using a password please use the below pvk2pfx -pvk MySPC.pvk -spc MySPC.cer -pfx MySPC.pfx -po fess If you want to protect the PFX file, add the -po switch, otherwise PVK2PFX creates a PFX file with no passphrase. Using the certificate for signing code signtool sign /v /f MySPC.pfx ^ /t http://timestamp.url MyExecutable.exe ( See why timestamps may matter ) If you import the PFX file into the certificate store (you can use PVKIMPRT or the MMC snapin), you can sign code as follows: signtool sign /v /n "Me" /s SPC ^ /t http://timestamp.url MyExecutable.exe Some possible timestamp URLs for signtool /t are: http://timestamp.verisign.com/scripts/timstamp.dll http://timestamp.globalsign.com/scripts/timstamp.dll http://timestamp.comodoca.com/authenticode http://timestamp.digicert.com Full Microsoft documentation signtool makecert pvk2pfx Downloads For those who are not .NET developers, you will need a copy of the Windows SDK and .NET framework. A current link is available here: [SDK & .NET][5] (which installs makecert in `C:\Program Files\Microsoft SDKs\Windows\v7.1`). Your mileage may vary. MakeCert is available from the Visual Studio Command Prompt. Visual Studio 2015 does have it, and it can be launched from the Start Menu in Windows 7 under "Developer Command Prompt for VS 2015" or "VS2015 x64 Native Tools Command Prompt" (probably all of them in the same folder).
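Since the updated answer mentions New-SelfSignedCertificate without showing it, here is a hedged sketch of the PowerShell route. The -Type CodeSigningCert parameter only exists in newer versions of the cmdlet (Windows 10 / Server 2016 and later), and the subject and file names are examples, so treat this as a starting point rather than a canonical recipe:

```powershell
# Assumes the newer New-SelfSignedCertificate cmdlet (Windows 10 / Server 2016+),
# where -Type CodeSigningCert is available. Subject and file names are examples.
$cert = New-SelfSignedCertificate -Type CodeSigningCert `
    -Subject "CN=My SPC" `
    -CertStoreLocation Cert:\CurrentUser\My

# Sign an executable with it, adding a timestamp.
Set-AuthenticodeSignature -FilePath .\MyExecutable.exe -Certificate $cert `
    -TimestampServer "http://timestamp.digicert.com"
```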
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/84847", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8446/" ] }
84,855
Should I use Named Pipes, or .NET Remoting to communicate with a running process on my machine?
WCF is the best choice. It supports a number of different transport mechanisms ( including Named Pipes ) and can be completely configuration driven. I would highly recommend that you take a look at WCF. Here is a blog that does a WCF vs Remoting performance comparison . A quote from the blog: The WCF and .NET Remoting are really comparable in performance. The differences are so small (measuring client latency) that it does not matter which one is a bit faster. WCF though has much better server throughput than .NET Remoting. If I would start completely new project I would chose the WCF. Anyway the WCF does much more than Remoting and for all those features I love it. MSDN Section for WCF
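As a concrete starting point, a minimal self-hosted named-pipe service in WCF looks roughly like this; the contract, class names, and pipe address are invented for the example:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

class Program
{
    static void Main()
    {
        // Host the service over a named pipe (address and names are illustrative).
        using (ServiceHost host = new ServiceHost(typeof(Greeter), new Uri("net.pipe://localhost")))
        {
            host.AddServiceEndpoint(typeof(IGreeter), new NetNamedPipeBinding(), "greeter");
            host.Open();

            // Call it through a client channel; in a real app this part lives in the other process.
            ChannelFactory<IGreeter> factory = new ChannelFactory<IGreeter>(
                new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/greeter"));
            IGreeter proxy = factory.CreateChannel();
            Console.WriteLine(proxy.Greet("world"));
            factory.Close();
        }
    }
}
```

Because the binding is configuration-driven, the same contract can later be exposed over TCP or HTTP without changing the service class.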
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/84855", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2744/" ] }
84,860
With .NET, what is the best way to interact with a service (i.e., how do most tray apps communicate with their servers)? It would be preferable if this method were cross-platform as well (working in Mono, so I guess Remoting is out?). Edit: Forgot to mention, we still have to support Windows 2000 machines in the field, so WCF and anything above .NET 2.0 won't fly.
Be aware that if you are planning to eventually deploy on Windows Vista or Windows Server 2008, many ways that this can be done today will not work. This is because of the introduction of a new security feature called "Session 0 Isolation". Most windows services have been moved to run in Session 0 now in order to properly isolate them from the rest of the system. An extension of this is that the first user to login to the system no longer is placed in Session #0, they are placed in Session 1. And hence, the isolation will break code that does certain types of communication between services and desktop applications. The best way to write code today that will work on Vista and Server 2008 going forward when doing communication between services and applications is to use a proper cross-process API like RPC, Named Pipes, etc. Do not use SendMessage/PostMessage as that will fail under Session 0 Isolation. http://www.microsoft.com/whdc/system/vista/services.mspx Now, given your requirements, you are going to be in a bit of a pickle. For the cross-platform concerns, I'm not sure if Remoting would be supported. You may have to drop down and go all the way back to sockets: http://msdn.microsoft.com/en-us/library/system.net.sockets.aspx
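Given the Windows 2000 and Mono constraints, plain loopback TCP sockets are about the most portable option. A bare-bones sketch of the service side follows; the port number and the line-based "protocol" are placeholders:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

// Minimal loopback socket sketch (service side) that works on .NET 2.0 and Mono.
class TrayAppListener
{
    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Loopback, 14500);
        listener.Start();
        while (true)
        {
            using (TcpClient client = listener.AcceptTcpClient())
            using (StreamReader reader = new StreamReader(client.GetStream()))
            using (StreamWriter writer = new StreamWriter(client.GetStream()))
            {
                string command = reader.ReadLine();   // e.g. "STATUS" sent by the tray app
                writer.WriteLine("OK " + command);
                writer.Flush();
            }
        }
    }
}
```

The tray app side is just a TcpClient connecting to 127.0.0.1:14500 and exchanging lines over the same stream classes.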
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/84860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3798/" ] }
84,882
This is a pretty simple question, or at least it seems like it should be, about sudo permissions in Linux. There are a lot of times when I just want to append something to /etc/hosts or a similar file, but end up not being able to because both > and >> are not allowed, even with root. Is there some way to make this work without having to su or sudo su into root?
Use tee --append or tee -a . echo 'deb blah ... blah' | sudo tee -a /etc/apt/sources.list Make sure to avoid quotes inside quotes. To avoid printing data back to the console, redirect the output to /dev/null. echo 'deb blah ... blah' | sudo tee -a /etc/apt/sources.list > /dev/null Remember about the ( -a / --append ) flag! Just tee works like > and will overwrite your file. tee -a works like >> and will write at the end of the file.
{ "score": 11, "source": [ "https://Stackoverflow.com/questions/84882", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9908/" ] }
84,912
Making a web page display correctly in all major browsers today is a very time-consuming task. Is there an easy way to make a CSS style that looks identical in every browser? Or at least, do you have some tips to make this work easier?
I agree with all the "reset" suggestions and the "grid" framework suggestions, but I did want to add a bit of advice: The goal of identical in every browser is, in practical terms, unachievable because you cannot control the client. Case in point: fonts. You declare your font styles in CSS but some Linux machines, some Macs, some mobile browsers -- will not have the font you specified. This variation leads to differing text lengths and wrapping. Then there's the variance of browser versions and operating systems running each; how different browsers implement zoom features; and the text size can be adjusted by the end user. Identical rendering is simply an unachievable goal. But take heart! This is the "art" part of CSS: Being able to be flexible in your design such that variances between browsers, operating systems, and end-user adjustments are handled elegantly. Don't strive for identical rendering -- you should strive for brand consistency + appropriate experience + flexibility.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/84912", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14635/" ] }
84,932
I have a Perl script and need to determine the full path and filename of the script during execution. I discovered that, depending on how you call the script, $0 varies and sometimes contains the full path plus filename and sometimes just the filename. Because the working directory can vary as well, I can't think of a way to reliably get the full path and filename of the script. Has anyone got a solution?
There are a few ways: $0 is the currently executing script as provided by POSIX, relative to the current working directory if the script is at or below the CWD Additionally, cwd() , getcwd() and abs_path() are provided by the Cwd module and tell you where the script is being run from The module FindBin provides the $Bin & $RealBin variables that usually are the path to the executing script; this module also provides $Script & $RealScript that are the name of the script __FILE__ is the actual file that the Perl interpreter deals with during compilation, including its full path. I've seen the first three ( $0 , the Cwd module and the FindBin module) fail under mod_perl spectacularly, producing worthless output such as '.' or an empty string. In such environments, I use __FILE__ and get the path from that using the File::Basename module: use File::Basename;my $dirname = dirname(__FILE__);
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/84932", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16331/" ] }
85,034
I want to make a table in SQL Server that will add, on insert, an auto-incremented primary key. This should be an auto-incremented id similar to MySQL's auto_increment functionality (below).

create table foo (
    user_id int not null auto_increment,
    name varchar(50)
)

Is there a way of doing this without creating an insert trigger?
Like this:

create table foo (
    user_id int not null identity,
    name varchar(50)
)
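If you also want the column to be the primary key, as in the MySQL example in the question, the usual form spells out the seed/increment and the key in one statement (identity(1,1) is just the explicit default):

```sql
create table foo (
    user_id int identity(1,1) not null primary key,
    name varchar(50)
)
```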
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85034", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12942/" ] }
85,036
I need to manually migrate modified stored procedures from a DEV SQL Server 2005 database instance to a TEST instance. Except for the changes I'm migrating, the databases have the same schemas. How can I quickly identify which stored procedures have been modified in the DEV database for migration to the TEST instance? I assume I can write a query against some of the system tables to view database objects of type stored procedure, sorting by some sort of last-modified or compiled date, but I'm not sure. Maybe there is some sort of free utility someone can point me to.
Instead of using sysobjects, which is not recommended anymore, use sys.procedures:

select name, create_date, modify_date
from sys.procedures
order by modify_date desc

You can add a where clause yourself, but this will list the procedures in order of modification date, descending.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/85036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16137/" ] }
85,046
Are there any good books or websites that go over creating a JTable? I want to make one column editable. I would like to actually put an inherited JCheckBox component (that we created here) into one of the table columns instead of just having the table put a JCheckBox in based on it being an editable boolean field. I have the JFC Swing Tutorial, Second Edition book, but I would just like to know if there are other examples I could look at to learn how to deal with tables better. The book seems to just take the Java 'trail' online and put it in the book. I am re-reading the stuff, though; just curious if anyone has found something that might help out more.
To make a column editable you have to override the isCellEditable method in the TableModel. Creating a TableModel is fairly easy if you inherit AbstractTableModel, and I'd recommend it for all but the most simple JTables. However, adapting the TableModel is only part of what you need to do. To actually get a custom component into the JTable, you need to set a custom cell renderer. To use an interactive custom component, you need to set a custom cell editor. In some cases, it's enough to use slightly modified versions of the default classes for this.

Editors

If you already have a custom component, this is easily done using delegation: create a new class implementing TableCellEditor, and return a new instance of the component in the getCellEditorComponent method. The parameters to this method include the current value as well as the cell coordinates, a link back to the table, and whether or not the cell is selected. The TableCellEditor also has a method that is called when the user commits a change to the cell contents (where you can validate user input and adjust the model) or cancels an edit. Be sure to call the stopEditing() method on your editor if you ever programmatically abort editing, otherwise the editor component will remain on screen -- this once took me like 2 hours to debug. Note that within a JTable, editors and only editors receive events! Displaying a button can be done using a renderer, but to get a functioning button, you need to implement an editor with the correct EventListeners registered. Registering a listener on a renderer does nothing.

Renderers

Implementing a renderer is not strictly necessary for what you describe in your question, but you typically end up doing it anyway, if only for minor modifications. Renderers, unlike editors, are speed critical. The getTableCellRendererComponent of a renderer is called once for every cell in the table! The component returned by a renderer is only used to paint the cell, not for interaction, and thus can be "reused" for the next cell. In other words, you should adjust the component (e.g. using setText(...) or setFont(...) if it is a TextComponent) in the renderer; you should not instantiate a new one -- that's an easy way to cripple the performance.

Caveats

Note that for renderers and editors to work, you need to tell the JTable when to use a certain renderer/editor. There are basically two ways to do this. You can set the default cell renderer/editor for a certain type using the respective JTable methods. For this way to work, your TableModel needs to return exactly this type in the getColumnClass(...) method! The default table model will not do this for you; it always returns Object.class. I'm sure that one has stumped a lot of people. The other way to set the editor/renderer is by explicitly setting it on the column itself, that is, by getting the TableColumn via the getColumn(...) method of the JTable. This is a lot more elaborate; however, it's also the only way to have two different renderers/editors for a single class. E.g. your model might have two columns of class String which are rendered in entirely different ways, maybe once using a JLabel/DefaultRenderer and the other using a JButton to access a more elaborate editor.

JTable with its custom renderers and editors is extremely versatile, but it is also a lot to take in, and there are a lot of things to do wrong. Good luck! How to Use Tables in The Swing Tutorial is mandatory reading for anyone customising JTables.
In particular, read and reread Concepts: Editors and Renderers because it typically takes a while for it to "click". The examples on custom renderers and editors are also very worthwhile.
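To tie those pieces together, here is a minimal, self-contained sketch; the class, column names, and data are invented for illustration. The model makes only the third column editable and reports Boolean.class for it (so the default renderer already paints a checkbox), and a custom JCheckBox subclass can be plugged in as that column's editor via DefaultCellEditor:

```java
import javax.swing.*;
import javax.swing.table.*;

// Hypothetical model: one editable Boolean column, two read-only String columns.
class DemoTableModel extends AbstractTableModel {
    private final Object[][] rows = { { "Alpha", "A-1", Boolean.FALSE } };
    private final String[] cols = { "Section", "Code", "Approved" };

    public int getRowCount() { return rows.length; }
    public int getColumnCount() { return cols.length; }
    public String getColumnName(int c) { return cols[c]; }
    public Object getValueAt(int r, int c) { return rows[r][c]; }

    // Only the checkbox column is editable.
    public boolean isCellEditable(int r, int c) { return c == 2; }

    // Without this, the default of Object.class means no checkbox renderer/editor is chosen.
    public Class<?> getColumnClass(int c) { return c == 2 ? Boolean.class : String.class; }

    public void setValueAt(Object value, int r, int c) {
        rows[r][c] = value;
        fireTableCellUpdated(r, c);
    }
}

public class TableSketch {
    public static void main(String[] args) {
        JTable table = new JTable(new DemoTableModel());

        // Substitute your inherited JCheckBox component here; DefaultCellEditor
        // wraps it so it becomes the editor for column 2.
        JCheckBox myCheckBox = new JCheckBox();
        table.getColumnModel().getColumn(2).setCellEditor(new DefaultCellEditor(myCheckBox));

        JFrame f = new JFrame("JTable sketch");
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.add(new JScrollPane(table));
        f.pack();
        f.setVisible(true);
    }
}
```

If the custom checkbox should also be used for painting (not just editing), add a TableCellRenderer for that column as described above.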
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/85046", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14009/" ] }
85,051
Assuming network access is sporadic with no central server, what would be the best way to use git to keep three or more branches in sync? Is there a way to extract just my deltas, email those, and merge them on the other end?
While "git format-patch" and "git am" are great ways to manage patches from non-git sources, for git repositories you should investigate "git bundle". "git bundle" and the subcommands "create" and "unbundle" can be used to create and use a binary blob of incremental commits that can be used to transfer branch history across a 'weak' link via an alternative file transfer mechanism (e.g. email, snail-mail, etc.). git bundles will preserve commit ids, whereas format-patch/am will not resulting in the destination commits not being identical (different SHA1s).
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/85051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16203/" ] }
85,058
I see here that there are a load of languages aside from Java that run on the JVM. I'm a bit confused about the whole concept of other languages running in the JVM. So: What is the advantage in having other languages for the JVM? What is required (in high level terms) to write a language/compiler for the JVM? How do you write/compile/run code in a language (other than Java) in the JVM? EDIT: There were 3 follow up questions (originally comments) that were answered in the accepted answer. They are reprinted here for legibility: How would an app written in, say, JPython, interact with a Java app? Also, Can that JPython application use any of the JDK functions/objects?? What if it was Jaskell code, would the fact that it is a functional language not make it incompatible with the JDK?
To address your three questions separately: What is the advantage in having other languages for the JVM? There are two factors here. (1) Why have a language other than Java for the JVM, and (2) why have another language run on the JVM, instead of a different runtime? Other languages can satisfy other needs. For example, Java has no built-in support for closures , a feature that is often very useful. A language that runs on the JVM is bytecode compatible with any other language that runs on the JVM, meaning that code written in one language can interact with a library written in another language. What is required (in high level terms) to write a language/compiler for the JVM? The JVM reads bytecode (.class) files to obtain the instructions it needs to perform. Thus any language that is to be run on the JVM needs to be compiled to bytecode adhering to the Sun specification . This process is similar to compiling to native code, except that instead of compiling to instructions understood by the CPU, the code is compiled to instructions that are interpreted by the JVM. How do you write/compile/run code in a language (other than Java) in the JVM? Very much in the same way you write/compile/run code in Java. To get your feet wet, I'd recommend looking at Scala , which runs flawlessly on the JVM. Answering your follow up questions: How would an app written in, say, JPython, interact with a Java app? This depends on the implementation's choice of bridging the language gap. In your example, Jython project has a straightforward means of doing this ( see here ): from java.net import URLu = URL('http://jython.org') Also, can that JPython application use any of the JDK functions/objects? Yes, see above. What if it was Jaskell code, would the fact that it is a functional language not make it incompatible with the JDK? No. Scala (link above) for example implements functional features while maintaining compatibility with Java. For example: object Timer { def oncePerSecond(callback: () => unit) { while (true) { callback(); Thread sleep 1000 } } def timeFlies() { println("time flies like an arrow...") } def main(args: Array[String]) { oncePerSecond(timeFlies) }}
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/85058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/142/" ] }
85,116
I want the server to always serve dates in UTC in the HTML, and have JavaScript on the client side convert them to the user's local timezone. Bonus points if I can output them in the user's locale date format.
Seems the most foolproof way to start with a UTC date is to create a new Date object and use the setUTC… methods to set it to the date/time you want. Then the various toLocale…String methods will provide localized output. Example: // This would come from the server.// Also, this whole block could probably be made into an mktime function.// All very bare here for quick grasping.d = new Date();d.setUTCFullYear(2004);d.setUTCMonth(1);d.setUTCDate(29);d.setUTCHours(2);d.setUTCMinutes(45);d.setUTCSeconds(26);console.log(d); // -> Sat Feb 28 2004 23:45:26 GMT-0300 (BRT)console.log(d.toLocaleString()); // -> Sat Feb 28 23:45:26 2004console.log(d.toLocaleDateString()); // -> 02/28/2004console.log(d.toLocaleTimeString()); // -> 23:45:26 Some references: toLocaleString toLocaleDateString toLocaleTimeString getTimezoneOffset
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/85116", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13989/" ] }
85,119
I know you can do redirection based on the domain or path to rewrite the URI to point at a site-specific location, and I've also seen some brutish if and elif statements for every site, as shown in the following code, which I would like to avoid.

if site == 'site1':
    ...
elif site == 'site2':
    ...

What are some good and clever ways of running multiple sites from a single, common Python web framework (i.e., Pylons, TurboGears, etc.)?
Django has this built in. See the sites framework . As a general technique, include a 'host' column in your database schema attached to the data you want to be host-specific, then include the Host HTTP header in the query when you are retrieving data.
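Outside the sites framework, the host-column idea looks roughly like this; the model, field names, and template path are made up, and request.get_host() assumes Django 1.0+ (older versions can read request.META['HTTP_HOST'] directly):

```python
# Hypothetical model and view: each row stores the host it belongs to,
# and queries filter on the Host header of the incoming request.
from django.db import models
from django.shortcuts import render_to_response


class Article(models.Model):
    host = models.CharField(max_length=255, db_index=True)
    title = models.CharField(max_length=200)
    body = models.TextField()


def article_list(request):
    # Only return rows that belong to the site being served.
    articles = Article.objects.filter(host=request.get_host())
    return render_to_response('articles/list.html', {'articles': articles})
```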
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4919/" ] }
85,122
On Windows I have a problem I never encountered on Unix: how to get a thread to sleep for less than one millisecond. On Unix you typically have a number of choices (sleep, usleep and nanosleep) to fit your needs. On Windows, however, there is only Sleep, with millisecond granularity. On Unix, I can use the select system call to create a microsecond sleep, which is pretty straightforward:

int usleep(long usec)
{
    struct timeval tv;
    tv.tv_sec = usec / 1000000L;
    tv.tv_usec = usec % 1000000L;
    return select(0, 0, 0, 0, &tv);
}

How can I achieve the same on Windows?
This indicates a misunderstanding of sleep functions. The parameter you pass is a minimum time for sleeping. There's no guarantee that the thread will wake up after exactly the time specified. In fact, threads don't "wake up" at all, but are rather chosen for execution by the OS scheduler. The scheduler might choose to wait much longer than the requested sleep duration to activate a thread, especially if another thread is still active at that moment.
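Because of that scheduling behaviour, sub-millisecond waits on Windows are usually approximated with a hybrid of a coarse Sleep plus a spin on the high-resolution counter. A rough sketch follows; the 2 ms hand-off threshold is arbitrary, the loop burns CPU while spinning, and calling timeBeginPeriod(1) beforehand can tighten Sleep's granularity:

```c
#include <windows.h>

/* Approximate "microsleep" for Windows: Sleep() is used while more than
   ~2 ms remain, then the loop spins on QueryPerformanceCounter until the
   deadline. Trades CPU for precision; tune the threshold for your needs. */
static void usleep_approx(long usec)
{
    LARGE_INTEGER freq, start, now;
    LONGLONG ticks, remaining;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    ticks = (LONGLONG)usec * freq.QuadPart / 1000000;

    for (;;) {
        QueryPerformanceCounter(&now);
        remaining = ticks - (now.QuadPart - start.QuadPart);
        if (remaining <= 0)
            break;
        if (remaining * 1000000 / freq.QuadPart > 2000)
            Sleep(1);            /* coarse wait while far from the deadline */
        else
            YieldProcessor();    /* spin for the last couple of milliseconds */
    }
}
```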
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/85122", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6508/" ] }
85,137
Say I have a class named Frog, it looks like: public class Frog{ public int Location { get; set; } public int JumpCount { get; set; } public void OnJump() { JumpCount++; }} I need help with 2 things: I want to create an event named Jump in the class definition. I want to create an instance of the Frog class, and then create another method that will be called when the Frog jumps.
public event EventHandler Jump;

public void OnJump()
{
    EventHandler handler = Jump;
    if (null != handler)
        handler(this, EventArgs.Empty);
}

then

Frog frog = new Frog();
frog.Jump += new EventHandler(yourMethod);

private void yourMethod(object s, EventArgs e)
{
    Console.WriteLine("Frog has Jumped!");
}
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/85137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1368/" ] }
85,183
I have an object that implements IDisposable and is registered with the Windsor container, and I would like to dispose of it so its Dispose method is called and the next time Resolve is called it fetches a new instance. Does container.Release(obj); automatically call Dispose() immediately? Or do I need to do:

obj.Dispose();
container.Release(obj);

I couldn't find anything in the documentation on what exactly Release does. EDIT: See my answer below for the results of tests I ran. Now the question becomes: how do I force the container to release an instance of a component with a singleton lifecycle? This only needs to be done in one place, and writing a custom lifecycle seems far too heavyweight; is there no built-in way of doing it?
This is something I think people aren't really aware of when working with the Windsor container - especially the often surprising behavior that disposable transient components are held onto by the container for the lifetime of the kernel, until it's disposed, unless you release them yourself. It is documented, though - take a look here - but to quickly quote:

the MicroKernel has a pluggable release policy that can hook up and implement some routine to dispose the components. The MicroKernel comes with three IReleasePolicy implementations:
AllComponentsReleasePolicy: track all components to enforce correct disposal upon the MicroKernel instance disposal
LifecycledComponentsReleasePolicy: only track components that have a decommission lifecycle associated
NoTrackingReleasePolicy: does not perform any tracking

You can also implement your own release policy by using the interface IReleasePolicy. What you might find easier is to change the policy to a NoTrackingReleasePolicy and then handle the disposing yourself - this is potentially risky as well, but if your lifestyles are largely transient (or if, when your container is disposed, your application is about to close anyway) it's probably not a big deal. Remember, however, that any components which have already been injected with the singleton will hold a reference, so you could end up causing problems trying to "refresh" your singletons - it seems like a bad practice, and I wonder if perhaps you can avoid having to do this in the first place by improving the way your application is put together. Another approach is to build a custom lifecycle with its own decommission implementation (so releasing the singleton would actually dispose of the component, much like the transient lifecycle does). Alternatively, you can have a decorator for your service registered in the container with a singleton lifestyle, but your actual underlying service registered in the container with a transient lifestyle - then, when you need to refresh the component, just dispose of the transient underlying component held by the decorator and replace it with a freshly resolved instance (resolve it using the component's key, rather than the service, to avoid getting the decorator). This avoids issues with other singleton services (which aren't being "refreshed") holding onto stale services which have been disposed of, making them unusable, but it does require a bit of casting etc. to make it work.
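For reference, swapping in the NoTrackingReleasePolicy mentioned above looks roughly like this; IMyService is a placeholder, and the property and namespace names follow Windsor 2.x-era APIs, so check them against the version you are using:

```csharp
using System;
using Castle.Windsor;
using Castle.MicroKernel.Releasers;

// Sketch only: stop the container from tracking components so that the
// calling code becomes responsible for disposal.
var container = new WindsorContainer();
container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();

// With this policy in place, dispose resolved components yourself.
var service = container.Resolve<IMyService>();
try
{
    service.DoWork();
}
finally
{
    var disposable = service as IDisposable;
    if (disposable != null) disposable.Dispose();
}
```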
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5056/" ] }
85,186
Is there an equivalent schema & data export/dumping tool for SQL Server as there is for MySQL with mysqldump. Trying to relocate a legacy ASP site and I am way out of happy place with working on a windows server. Note: The DTS export utility own seems to export data, without table defs.Using the Enterprise Manager and exporting the db gets closer with exporting the schema & data... but still misses stored procedures. Basically looking for a one does it all solution that grabs everything I need at once.
To do this really easily with SQL Server 2008 Management Studio:

1.) Right click on the database (not the table) and select Tasks -> Generate Scripts.

2.) Click Next on the first page.

3.) If you want to copy the whole database, just click Next. If you want to copy specific tables, click on "Select Specific Database Objects", select the tables you want, and then click Next.

4.) "Save to File" should be selected. IMPORTANT: Click the Advanced button next to "Save to File", find "Types of data to script", and change "Schema only" to "Schema and data" (if you want to create the table) or "Data only" (if you're copying data to an existing table).

5.) Click through the rest and you're done! It will save as a .sql file.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/85186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9908/" ] }
85,190
Consider:

List<String> someList = new ArrayList<String>();
// add "monkey", "donkey", "skeleton key" to someList

for (String item : someList) {
    System.out.println(item);
}

What would the equivalent for loop look like without using the for each syntax?
for (Iterator<String> i = someIterable.iterator(); i.hasNext();) {
    String item = i.next();
    System.out.println(item);
}

Note that if you need to use i.remove(); in your loop, or access the actual iterator in some way, you cannot use the for ( : ) idiom, since the actual iterator is merely inferred. As was noted by Denis Bueno, this code works for any object that implements the Iterable interface. Also, if the right-hand side of the for (:) idiom is an array rather than an Iterable object, the internal code uses an int index counter and checks against array.length instead. See the Java Language Specification.
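For the array case mentioned above, the compiler's expansion is effectively the familiar index-based loop:

```java
String[] someArray = { "monkey", "donkey", "skeleton key" };

// What "for (String item : someArray)" expands to when the right-hand side
// is an array rather than an Iterable: a plain index-based loop.
for (int i = 0; i < someArray.length; i++) {
    String item = someArray[i];
    System.out.println(item);
}
```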
{ "score": 11, "source": [ "https://Stackoverflow.com/questions/85190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5074/" ] }
85,223
Of course most languages have library functions for this, but suppose I want to do it myself. Suppose that the float is given like in a C or Java program (except for the 'f' or 'd' suffix), for example " 4.2e1 ", " .42e2 " or simply " 42 ". In general, we have the "integer part" before the decimal point, the "fractional part" after the decimal point, and the "exponent". All three are integers. It is easy to find and process the individual digits, but how do you compose them into a value of type float or double without losing precision? I'm thinking of multiplying the integer part with 10^ n , where n is the number of digits in the fractional part, and then adding the fractional part to the integer part and subtracting n from the exponent. This effectively turns 4.2e1 into 42e0 , for example. Then I could use the pow function to compute 10^ exponent and multiply the result with the new integer part. The question is, does this method guarantee maximum precision throughout? Any thoughts on this?
All of the other answers have missed how hard it is to do this properly. You can do a first cut approach at this which is accurate to a certain extent, but until you take into account IEEE rounding modes (et al), you will never have the right answer. I've written naive implementations before with a rather large amount of error. If you're not scared of math, I highly recommend reading the following article by David Goldberg, What Every Computer Scientist Should Know About Floating-Point Arithmetic . You'll get a better understanding for what is going on under the hood, and why the bits are laid out as such. My best advice is to start with a working atoi implementation, and move out from there. You'll rapidly find you're missing things, but a few looks at strtod 's source and you'll be on the right path (which is a long, long path). Eventually you'll praise insert diety here that there are standard libraries. /* use this to start your atof implementation *//* atoi - [email protected] *//* PUBLIC DOMAIN */long atoi(const char *value) { unsigned long ival = 0, c, n = 1, i = 0, oval; for( ; c = value[i]; ++i) /* chomp leading spaces */ if(!isspace(c)) break; if(c == '-' || c == '+') { /* chomp sign */ n = (c != '-' ? n : -1); i++; } while(c = value[i++]) { /* parse number */ if(!isdigit(c)) return 0; ival = (ival * 10) + (c - '0'); /* mult/accum */ if((n > 0 && ival > LONG_MAX) || (n < 0 && ival > (LONG_MAX + 1UL))) { /* report overflow/underflow */ errno = ERANGE; return (n > 0 ? LONG_MAX : LONG_MIN); } } return (n>0 ? (long)ival : -(long)ival);}
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85223", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14637/" ] }
85,272
Anyone can read the GoF book to learn what design patterns are and how to use them, but what is the process for figuring out when a design pattern solves a problem? Does the knowledge of the pattern drive the design, or is there a way to figure out how a pattern can be used to change a design? In other words, are there patterns for Patterns?
Design patterns are supposed to provide a structure in which problems can be solved. When solving a real problem, you have to consider many tiny variations of a solution to that problem to see whether any fits a design pattern. In particular, you will probably need to generalise your problem, or its solution, in order to make a design pattern fit. The answer is, it's an art. Knowing the design patterns is certainly an important step. One way to get used to this sort of thing is to study applications of design patterns, not just the patterns. Seeing many different applications of one pattern can help you over time to get better at mapping a task onto a pattern.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/85272", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7565/" ] }
85,275
I'm working on a game where I create a random map of provinces (a la Risk or Diplomacy). To create that map, I'm first generating a series of semi-random points, then figuring the Delaunay triangulations of those points. With that done, I am now looking to create a Voronoi diagram of the points to serve as a starting point for the province borders. My data at this point (no pun intended) consists of the original series of points and a collection of the Delaunay triangles. I've seen a number of ways to do this on the web, but most of them are tied up with how the Delaunay was derived. I'd love to find something that doesn't need to be integrated to the Delaunay, but can work based off the data alone. Failing that, I'm looking for something comprehensible to a relative geometry newbie, as opposed to optimal speed. Thanks!
The Voronoi diagram is just the dual graph of the Delaunay triangulation. So, the edges of the Voronoi diagram are along the perpendicular bisectors of the edges of the Delaunay triangulation, so compute those lines. Then, compute the vertices of the Voronoi diagram by finding the intersections of adjacent edges. Finally, the edges are then the subsets of the lines you computed which lie between the corresponding vertices. Note that the exact code depends on the internal representation you're using for the two diagrams.
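Concretely, the intersection of the perpendicular bisectors of a triangle's edges is its circumcenter, so each Delaunay triangle contributes one Voronoi vertex, and the Voronoi edges connect the circumcenters of triangles that share an edge. A small sketch of the circumcenter step (the Point struct is only for illustration; adapt it to whatever geometry types your game uses):

```csharp
struct Point
{
    public double X, Y;
    public Point(double x, double y) { X = x; Y = y; }
}

static class VoronoiSketch
{
    // The perpendicular bisectors of a triangle's edges meet at its circumcenter,
    // which is the Voronoi vertex dual to that Delaunay triangle.
    static Point Circumcenter(Point a, Point b, Point c)
    {
        double d = 2 * (a.X * (b.Y - c.Y) + b.X * (c.Y - a.Y) + c.X * (a.Y - b.Y));
        // d == 0 means the three points are collinear (a degenerate triangle).
        double a2 = a.X * a.X + a.Y * a.Y;
        double b2 = b.X * b.X + b.Y * b.Y;
        double c2 = c.X * c.X + c.Y * c.Y;
        double ux = (a2 * (b.Y - c.Y) + b2 * (c.Y - a.Y) + c2 * (a.Y - b.Y)) / d;
        double uy = (a2 * (c.X - b.X) + b2 * (a.X - c.X) + c2 * (b.X - a.X)) / d;
        return new Point(ux, uy);
    }

    static void Main()
    {
        Point v = Circumcenter(new Point(0, 0), new Point(4, 0), new Point(0, 3));
        System.Console.WriteLine(v.X + ", " + v.Y);   // prints 2, 1.5 for this right triangle
    }
}
```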
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85275", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8365/" ] }
85,282
In C# you can make a block inside of a method that is not attached to any other statement. public void TestMethod() { { string x = "test"; string y = x; { int z = 42; int zz = z; } } } This code compiles and runs just as if the braces inside the main method weren't there. Also notice the block inside of a block. Is there a scenario where this would be valuable? I haven't found any yet, but am curious to hear of other people's findings.
Scope and garbage collection: When you leave the unattached block, any variables declared in it go out of scope. That lets the garbage collector clean up those objects. Ray Hayes points out that the .NET garbage collector will not immediately collect the out-of-scope objects, so scoping is the main benefit.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3289/" ] }
85,283
I'm writing a financial C# application which receives messages from the network, translates them into different objects according to the message type, and finally applies the application's business logic to them. The point is that after the business logic is applied, I'm very sure I will never need this instance again. Rather than wait for the garbage collector to free them, I'd like to explicitly "delete" them. Is there a better way to do this in C#? Should I use a pool of objects to reuse the same set of instances, or is there a better strategy? The goal is to avoid the garbage collector using any CPU during a time-critical process.
Don't delete them right away. Calling the garbage collector for each object is a bad idea. Normally you really don't want to mess with the garbage collector at all, and even time critical processes are just race conditions waiting to happen if they're that sensitive. But if you know you'll have busy vs light load periods for your app, you might try a more general GC.Collect() when you reach a light period to encourage cleanup before the next busy period.
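If you do take the collect-during-a-quiet-period approach, it usually amounts to something like the following; the method name and the decision of when to call it are entirely up to your application:

```csharp
using System;

static class GcMaintenance
{
    // Illustrative only: after a burst of messages has been processed and the
    // feed goes quiet, pay for a full collection now rather than during the
    // next busy period.
    public static void OnQuietPeriod()
    {
        GC.Collect();                    // full, blocking collection
        GC.WaitForPendingFinalizers();   // let finalizers run while idle
        GC.Collect();                    // reclaim objects freed by those finalizers
    }
}
```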
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/85283", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8315/" ] }
85,353
What is the best (or as good as possible) general SVN ignore pattern to use? There are a number of different IDE, editor, compiler, plug-in, platform, etc. specific files and some file types that "overlap" (i.e. desirable for some types projects and not for others). There are however, a large number of file types that you just never want included in source control automatically regardless the specifics of your development environment. The answer to this question would serve as a good starting point for any project - only requiring them to add the few environment specific items they need. It could be adapted for other Version Control Systems (VCS) as well.
I'll add my own two cents to this question: I use the following SVN ignore pattern with TortoiseSVN and Subversion CLI for native C++, C#/VB.NET, and PERL projects on both Windows and Linux platforms. It works well for me! Formatted for copy and paste: *.o *.lo *.la #*# .*.rej *.rej .*~ *~ .#* .DS_Store thumbs.db Thumbs.db *.bak *.class *.exe *.dll *.mine *.obj *.ncb *.lib *.log *.idb *.pdb *.ilk *.msi* .res *.pch *.suo *.exp *.*~ *.~* ~*.* cvs CVS .CVS .cvs release Release debug Debug ignore Ignore bin Bin obj Obj *.csproj.user *.user *.generated.cs Formatted for readability:
*.o *.lo *.la #*# .*.rej *.rej
.*~ *~ .#* .DS_Store thumbs.db Thumbs.db *.bak *.class *.exe *.dll
*.mine *.obj *.ncb *.lib *.log *.idb *.pdb *.ilk *.msi* .res *.pch *.suo *.exp *.*~ *.~* ~*.* cvs CVS .CVS .cvs release Release debug Debug
ignore Ignore bin Bin obj Obj
*.csproj.user *.user
*.generated.cs
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/85353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2233/" ] }
85,373
In SQL Server, how do I "floor" a DATETIME to the second/minute/hour/day/year? Let's say that I have a date of 2008-09-17 12:56:53.430 , then the output of flooring should be: Year: 2008-01-01 00:00:00.000 Month: 2008-09-01 00:00:00.000 Day: 2008-09-17 00:00:00.000 Hour: 2008-09-17 12:00:00.000 Minute: 2008-09-17 12:56:00.000 Second: 2008-09-17 12:56:53.000
The key is to use DATEADD and DATEDIFF along with the appropriate SQL timespan enumeration.
declare @datetime datetime;
set @datetime = getdate();

select @datetime;
select dateadd(year,datediff(year,0,@datetime),0);
select dateadd(month,datediff(month,0,@datetime),0);
select dateadd(day,datediff(day,0,@datetime),0);
select dateadd(hour,datediff(hour,0,@datetime),0);
select dateadd(minute,datediff(minute,0,@datetime),0);
select dateadd(second,datediff(second,'2000-01-01',@datetime),'2000-01-01');
select dateadd(week,datediff(week,0,@datetime),-1); --Beginning of week is Sunday
select dateadd(week,datediff(week,0,@datetime),0); --Beginning of week is Monday
Note that when you are flooring by the second, you will often get an arithmetic overflow if you use 0. So pick a known value that is guaranteed to be lower than the datetime you are attempting to floor.
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/85373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1690/" ] }
85,451
Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy? for example:
start = time.clock()
... do something
elapsed = (time.clock() - start)
vs.
start = time.time()
... do something
elapsed = (time.time() - start)
As of 3.3, time.clock() is deprecated , and it's suggested to use time.process_time() or time.perf_counter() instead. Previously in 2.7, according to the time module docs : time.clock() On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms. On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond. Additionally, there is the timeit module for benchmarking code snippets.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/85451", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16148/" ] }
85,459
I have a series of PDFs named sequentially like so: 01_foo.pdf 02_bar.pdf 03_baz.pdf etc. Using Ruby, is it possible to combine these into one big PDF while keeping them in sequence? I don't mind installing any necessary gems to do the job. If this isn't possible in Ruby, how about another language? No commercial components, if possible. Update: Jason Navarrete's suggestion led to the perfect solution: Place the PDF files needing to be combined in a directory along with pdftk (or make sure pdftk is in your PATH), then run the following script:
pdfs = Dir["[0-9][0-9]_*"].sort.join(" ")
`pdftk #{pdfs} output combined.pdf`
Or I could even do it as a one-liner from the command-line: ruby -e '`pdftk #{Dir["[0-9][0-9]_*"].sort.join(" ")} output combined.pdf`' Great suggestion Jason, perfect solution, thanks. Give him an up-vote, people.
A Ruby-Talk post suggests using the pdftk toolkit to merge the PDFs. It should be relatively straightforward to call pdftk as an external process and have it handle the merging. PDF::Writer may be overkill because all you're looking to accomplish is a simple append.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1944/" ] }
85,479
Can someone give an example of a good time to actually use "unsafe" and "fixed" in C# code? I've played with it before, but never actually found a good use for it. Consider this code... fixed (byte* pSrc = src, pDst = dst) { //Code that copies the bytes in a loop} compared to simply using... Array.Copy(source, target, source.Length); The second is the code found in the .NET Framework, the first is part of the code copied from the Microsoft website, http://msdn.microsoft.com/en-us/library/28k1s2k6(VS.80).aspx . The built-in Array.Copy() is dramatically faster than using unsafe code. This might just be because the second is better written and the first is just an example, but in what kinds of situations would you really need to use unsafe/fixed code? Or is this poor web developer messing with something above his head?
It's useful for interop with unmanaged code. Any pointers passed to unmanaged functions need to be fixed (aka. pinned) to prevent the garbage collector from relocating the underlying memory. If you are using P/Invoke, then the default marshaller will pin objects for you. Sometimes it's necessary to perform custom marshalling, and sometimes it's necessary to pin an object for longer than the duration of a single P/Invoke call.
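To make that concrete, here is a hedged sketch; ProcessBuffer and its DLL name are hypothetical stand-ins for whatever unmanaged function you need to call, not a real API, and the project must be compiled with /unsafe.

using System.Runtime.InteropServices;

static class NativeInterop
{
    // Hypothetical native function, declared only to illustrate the pattern.
    [DllImport("somenativelib.dll")]
    private static extern unsafe void ProcessBuffer(byte* data, int length);

    public static unsafe void Send(byte[] payload)
    {
        // 'fixed' pins the array so the GC cannot move it while the
        // unmanaged code holds a raw pointer into it.
        fixed (byte* p = payload)
        {
            ProcessBuffer(p, payload.Length);
        }
    }   // the array is unpinned as soon as the fixed block exits
}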
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/85479", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
85,487
How do I perform a reverse DNS lookup, that is how do I resolve an IP address to its DNS hostname in Perl?
gethostbyaddr and similar calls. See http://perldoc.perl.org/functions/gethostbyaddr.html
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16432/" ] }
85,513
What does the option “convert to web application” do if I select it in visual studio? If I do convert my site to a web application what are the advantages? Can I go back?
Well, it converts your web site to a web application project. As for the advantages, here is some further reading: MSDN comparison -- Comparing Web Site Projects and Web Application Projects Webcast on ASP.NET -- Web Application Projects vs. Web Site Projects in Visual Studio 2008 "In this webcast, by request, we examine the differences between web application projects and web site projects in Microsoft Visual Studio 2008. We focus specifically on the reasons you would choose one over the other and explain how to make informed decisions when creating a Web solution " The primary difference (to me) between a web application project and a web site is how things gets compiled. In web sites each page has its code-behind compiled into a separate library, whereas in web applications all code-behind gets compiled into a single library. There are advantages and disadvantages to both, it really depends. It's also often a matter of opinion .
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4491/" ] }
85,553
MSDN says that you should use structs when you need lightweight objects. Are there any other scenarios when a struct is preferable over a class? Some people might have forgotten that: structs can have methods. structs cannot be inherited. I understand the technical differences between structs and classes, I just don't have a good feel for when to use a struct.
MSDN has the answer: Choosing Between Classes and Structures . Basically, that page gives you a 4-item checklist and says to use a class unless your type meets all of the criteria. Do not define a structure unless the type has all of the following characteristics: It logically represents a single value, similar to primitive types (integer, double, and so on). It has an instance size smaller than 16 bytes. It is immutable. It will not have to be boxed frequently.
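For illustration, a small hypothetical type that meets all four criteria (a single logical value, 8 bytes, immutable, and unlikely to be boxed in normal use):

public struct Celsius
{
    private readonly double degrees;   // 8 bytes of instance data

    public Celsius(double degrees)
    {
        this.degrees = degrees;
    }

    public double Degrees
    {
        get { return degrees; }
    }

    // Immutability: operations return a new value instead of mutating this one.
    public Celsius Add(double delta)
    {
        return new Celsius(degrees + delta);
    }
}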
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/85553", "https://Stackoverflow.com", "https://Stackoverflow.com/users/781/" ] }
85,569
I've seen there are a few of them. opencvdotnet , SharperCV , EmguCV , One on Code Project . Does anyone have any experience with any of these? I played around with the one on Code Project for a bit, but as soon as I tried to do anything complicated I got some nasty uncatchable exceptions (i.e. Msgbox exceptions). Cross platform (supports Mono) would be best.
I started out with opencvdotnet but it's not really actively developed any more. Further, support for the feature I needed (face detection) was patchy. I'm using EmguCV now: It wraps a much greater part of the API and the guy behind it is very responsive to suggestions and requests. The code is a joy to look at and is known to work on Mono. I've written up a quick getting-started guide on my blog.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/85569", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3798/" ] }
85,577
I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas?
You need ARP. Python's standard library doesn't include any code for that, so you either need to call an external program (your OS may have an 'arp' utility) or you need to build the packets yourself (possibly with a tool like Scapy).
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11760/" ] }
85,622
Most precompiled Windows binaries are made with the MSYS+gcc toolchain. It uses MSVCRT runtime, which is incompatible with Visual C++ 2005/2008. So, how to go about and compile Cairo 1.6.4 (or later) for Visual C++ only. Including dependencies (png,zlib,pixman).
Here are instructions for building Cairo/Cairomm with Visual C++. Required: Visual C++ 2008 Express SP1 (now includes SDK) MSYS 1.0 To use VC++ command line tools, a batch file 'vcvars32.bat' needs to be run. C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\vcvars32.bat ZLib Download (and extract) zlib123.zip from http://www.zlib.net/ cd zlib123 nmake /f win32/Makefile.msc dir # zlib.lib is the static library # # zdll.lib is the import library for zlib1.dll # zlib1.dll is the shared library libpng Download (and extract) lpng1231.zip from http://www.libpng.org/pub/png/libpng.html The VC++ 9.0 compiler gives loads of "this might be unsafe" warnings. Ignore them; this is MS security panic (the code is good). cd lpng1231\lpng1231 # for some reason this is two stories deep nmake /f ../../lpng1231.nmake ZLIB_PATH=../zlib123 dir # libpng.lib is the static library # # dll is not being created Pixman Pixman is part of Cairo, but a separate download. Download (and extract) pixman-0.12.0.tar.gz from http://www.cairographics.org/releases/ Use MSYS to untar via 'tar -xvzf pixman*.tar.gz' Both Pixman and Cairo have Makefiles for the Visual C++ command line compiler (cl), but they use Gnu makefiles and Unix-like tools (sed etc.). This means we have to run the make from within MSYS. Open a command prompt with VC++ command line tools enabled (try 'cl /?'). Turn that command prompt into an MSYS prompt by running 'C:\MSYS\1.0\MSYS.BAT'. DO NOT use the MSYS icon, because then your prompt will not know of VC++. You cannot run .bat files from MSYS. Check that the VC++ tools work from here: 'cl -?' Check that Gnu make also works: 'make -v'. Cool. cd (use /d/... instead of D:) cd pixman-0.12.0/pixman make -f Makefile.win32 This defaults to MMX and SSE2 optimizations, which require a newish x86 processor (Pentium 4 or Pentium M or above: http://fi.wikipedia.org/wiki/SSE2 ) There are quite a few warnings but it seems to succeed. ls release # pixman-1.lib (static lib required by Cairo) Stay in the VC++-spiced MSYS prompt to compile Cairo as well. cairo Download (and extract) cairo-1.6.4.tar.gz from http://www.cairographics.org/releases/ cd cd cairo-1.6.4 The Makefile.win32 here is almost good, but has the Pixman path hardwired. Use the modified 'Makefile-cairo.win32': make -f ../Makefile-cairo.win32 CFG=release \ PIXMAN_PATH=../../pixman-0.12.0 \ LIBPNG_PATH=../../lpng1231 \ ZLIB_PATH=../../zlib123 (Write everything on one line, ignoring the backslashes) It says "no rule to make 'src/cairo-features.h'". Use the manually prepared one (in Cairo > 1.6.4 there may be a 'src/cairo-features-win32.h' that you can simply rename): cp ../cairo-features.h src/ Retry the make command (arrow up remembers it). ls src/release # # cairo-static.lib cairomm (C++ API) Download (and extract) cairomm-1.6.4.tar.gz from http://www.cairographics.org/releases/ There is a Visual C++ 2005 Project that we can use (via open & upgrade) for 2008. cairomm-1.6.4\MSVC_Net2005\cairomm\cairomm.vcproj Changes that need to be done: Change active configuration to "Release" Cairomm-1.0 properties (with right click menu) C++/General/Additional Include Directories: ..\..\..\cairo-1.6.4\src (append to existing) Linker/General/Additional library directories: ..\..\..\cairo-1.6.4\src\release ..\..\..\lpng1231\lpng1231 ..\..\..\zlib123 Linker/Input/Additional dependencies: cairo-static.lib libpng.lib zlib.lib msimg32.lib Optimization: fast FPU code C++/Code generation/Floating point model Fast Right click on 'cairomm-1.0' and 'build'. There are some warnings.
dir cairomm-1.6.4\MSVC_Net2005\cairomm\Release # # cairomm-1.0.lib # cairomm-1.0.dll # cairomm.def
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85622", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14455/" ] }
85,649
How do I remove a USB drive using the Win32 API? I do a lot of work on embedded systems and on one of these I have to copy my programs on a USB stick and insert it into the target hardware. Since I mostly work on the console I don't like to use the mouse and click on the small task-bar icon hundred times a day. I'd love to write a little program to do exactly that so I can put it into my makefiles, but I haven't found any API call that does the same thing. Any ideas?
You can use the CM_Request_Device_Eject() function, among other possibilities. Consult the following projects and articles: DevEject: Straightforward. http://www.withopf.com/tools/deveject/ A useful CodeProject article: http://www.codeproject.com/KB/system/RemoveDriveByLetter.aspx
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15955/" ] }
85,675
I have two tables I would like to compare. One of the columns is type CLOB. I would like to do something like this:
select key, clob_value from source_table
minus
select key, clob_value from target_table
Unfortunately, Oracle can't perform minus operations on clobs. How can I do this?
The format is this: dbms_lob.compare( lob_1 IN BLOB, lob_2 IN BLOB, amount IN INTEGER := 18446744073709551615, offset_1 IN INTEGER := 1, offset_2 IN INTEGER := 1) RETURN INTEGER; If dbms_lob.compare(lob1, lob2) = 0, they are identical. Here's an example query based on your example: Select key, glob_value From source_table Left Join target_table On source_table.key = target_table.key Where target_table.glob_value is Null Or dbms_lob.compare(source_table.glob_value, target_table.glob_value) <> 0
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16484/" ] }
85,699
I'm building a group calendar application that needs to support recurring events, but all the solutions I've come up with to handle these events seem like a hack. I can limit how far ahead one can look, and then generate all the events at once. Or I can store the events as repeating and dynamically display them when one looks ahead on the calendar, but I'll have to convert them to a normal event if someone wants to change the details on a particular instance of the event. I'm sure there's a better way to do this, but I haven't found it yet. What's the best way to model recurring events, where you can change details of or delete particular event instances? (I'm using Ruby, but please don't let that constrain your answer. If there's a Ruby-specific library or something, though, that's good to know.)
I would use a 'link' concept for all future recurring events. They are dynamically displayed in the calendar and link back to a single reference object. When events have taken place the link is broken and the event becomes a standalone instance. If you attempt to edit a recurring event then prompt to change all future items (i.e. change the single linked reference) or change just that instance (in which case convert this to a standalone instance and then make the change). The latter case is slightly problematic as you need to keep track in your recurring list of all future events that were converted to a single instance. But, this is entirely do-able. So, in essence, have 2 classes of events - single instances and recurring events.
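A minimal sketch of that 'link' model (shown in C# since the asker said not to feel constrained by Ruby; all names here are made up):

using System;

// The single reference object that future occurrences link back to.
class RecurringEvent
{
    public int Id;
    public string Title;
    public string RecurrenceRule;     // e.g. an iCalendar-style RRULE string
    public DateTime SeriesStart;
}

// A concrete occurrence; it only needs to be stored once it has been
// edited or has taken place, otherwise it is generated on the fly.
class EventOccurrence
{
    public DateTime Date;
    public int? RecurringEventId;     // link to the series; null once detached
    public string TitleOverride;      // per-instance edits, if any

    public bool IsDetached { get { return RecurringEventId == null; } }

    // Editing a single instance breaks the link, as described above.
    public void Detach()
    {
        RecurringEventId = null;
    }
}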
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/85699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6262/" ] }
85,702
I want to have a "select-only" ComboBox that provides a list of items for the user to select from. Typing should be disabled in the text portion of the ComboBox control. My initial googling of this turned up an overly complex, misguided suggestion to capture the KeyPress event.
To make the text portion of a ComboBox non-editable, set the DropDownStyle property to "DropDownList". The ComboBox is now essentially select-only for the user. You can do this in the Visual Studio designer, or in C# like this: stateComboBox.DropDownStyle = ComboBoxStyle.DropDownList; Link to the documentation for the ComboBox DropDownStyle property on MSDN.
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/85702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3406/" ] }
85,761
I have an external variable coming in as a string and I would like to do a switch/case on it. How do I do that in xquery?
Starting with XQuery 1.1, use switch: http://www.w3.org/TR/xquery-11/#id-switch switch ($animal) case "Cow" return "Moo" case "Cat" return "Meow" case "Duck" return "Quack" default return "What's that odd noise?"
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/85761", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1681/" ] }
85,804
Is there a better way to forcefully disconnect all users from an Oracle 10g database schema than restarting the Oracle database services? We have several developers using SQL Developer connecting to the same schema on a single Oracle 10g server. The problem is that when we want to drop the schema to rebuild it, inevitably someone is still connected and we cannot drop the database schema or the user while someone is still connected. By the same token, we do not want to drop all connections to other schemas because other people may still be connected and testing with those schemas. Anyone know of a quick way to resolve this?
To find the sessions, as a DBA use select sid,serial# from v$session where username = '<your_schema>' If you want to be sure only to get the sessions that use SQL Developer, you can add and program = 'SQL Developer' . If you only want to kill sessions belonging to a specific developer, you can add a restriction on os_user Then kill them with alter system kill session '<sid>,<serial#>' (e.g. alter system kill session '39,1232' ) A query that produces ready-built kill-statements could be select 'alter system kill session ''' || sid || ',' || serial# || ''';' from v$session where username = '<your_schema>' This will return one kill statement per session for that user - something like: alter system kill session '375,64855'; alter system kill session '346,53146';
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/85804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15766/" ] }
85,815
How do you tell if a function in JavaScript is defined? I want to do something like this function something_cool(text, callback) { alert(text); if( callback != null ) callback();} But it gets me a callback is not a function error when callback is not defined.
typeof callback === "function"
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/85815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8351/" ] }
85,816
I've got just one page that I want to force to be accessed as an HTTPS page (PHP on Apache). How do I do this without making the whole directory require HTTPS? Or, if you submit a form to an HTTPS page from an HTTP page, does it send it by HTTPS instead of HTTP? Here is my example: http://www.example.com/some-page.php I want it to only be accessed through: https://www.example.com/some-page.php Sure, I can put all of the links to this page pointed at the HTTPS version, but that doesn't stop some fool from accessing it through HTTP on purpose... One thing I thought was putting a redirect in the header of the PHP file to check to be sure that they are accessing the HTTPS version: if($_SERVER["SCRIPT_URI"] == "http://www.example.com/some-page.php"){ header('Location: https://www.example.com/some-page.php');} But that can't be the right way, can it?
The way I've done it before is basically like what you wrote, but doesn't have any hardcoded values: if($_SERVER["HTTPS"] != "on"){ header("Location: https://" . $_SERVER["HTTP_HOST"] . $_SERVER["REQUEST_URI"]); exit();}
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/85816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
85,880
Currently I'm doing some unit tests which are executed from bash. Unit tests are initialized, executed and cleaned up in a bash script. This script usually contains init(), execute() and cleanup() functions. But they are not mandatory. I'd like to test if they are or are not defined. I did this previously by grepping and sedding the source, but it seemed wrong. Is there a more elegant way to do this? Edit: The following snippet works like a charm: fn_exists(){ LC_ALL=C type $1 | grep -q 'shell function'}
Like this: [[ $(type -t foo) == function ]] && echo "Foo exists" The built-in type command will tell you whether something is a function, built-in function, external command, or just not defined. Additional examples: $ LC_ALL=C type foobash: type: foo: not found$ LC_ALL=C type lsls is aliased to `ls --color=auto'$ which type$ LC_ALL=C type typetype is a shell builtin$ LC_ALL=C type -t rvmfunction$ if [ -n "$(LC_ALL=C type -t rvm)" ] && [ "$(LC_ALL=C type -t rvm)" = function ]; then echo rvm is a function; else echo rvm is NOT a function; firvm is a function
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/85880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9232/" ] }
85,892
I'm trying to determine if there's a way in Visual Basic 2008 (Express edition if that matters) to do inline collection initialization, a la JavaScript or Python: Dim oMapping As Dictionary(Of Integer, String) = {{1,"First"}, {2, "Second"}} I know Visual Basic 2008 supports array initialization like this, but I can't seem to get it to work for collections... Do I have the syntax wrong, or is it just not implemented?
Visual Basic 9.0 doesn't support this yet. However, Visual Basic 10.0 will .
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85892", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7542/" ] }
85,916
I'm looking for a Ruby's equivalent of Code Like a Pythonista: Idiomatic Python Desirable features: easy to read single document which covers all topics: tips, tricks, guidelines, caveats, and pitfalls size less than a book idioms should work out of the box for the standard distribution ( % sudo apt-get install ruby irb rdoc ) Please, put one tutorial per answer if possible, with an example code from the tutorial and its meaning. UPDATE: These are the most closest to the above description resources I've encountered: Ruby Idioms Ruby User's Guide
Ruby Idioms (originally from RubyGarden) is my usual reference for idioms. It's clearly organized and fairly complete. As the author says, these are from RubyGarden, which used to be really cool (thanks Wayback Machine ). But now seems to be offline .
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85916", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4279/" ] }
85,941
I get the following error when running my Visual Studio 2008 ASP.NET project (start without Debugging) on my XP Professional box: System.Web.HttpException: The current identity (machinename\ASPNET) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'. How can I resolve this?
Have you tried the aspnet_regiis.exe tool in the framework folder?
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/417/" ] }
85,978
For a given table 'foo', I need a query to generate a set of tables that have foreign keys that point to foo. I'm using Oracle 10G.
This should work (or something close): select table_namefrom all_constraintswhere constraint_type='R'and r_constraint_name in (select constraint_name from all_constraints where constraint_type in ('P','U') and table_name='<your table here>');
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/85978", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9940/" ] }
85,992
How do I enumerate the properties of a JavaScript object? I actually want to list all the defined variables and their values, but I've learned that defining a variable actually creates a property of the window object.
Simple enough: for(var propertyName in myObject) { // propertyName is what you want // you can get the value like this: myObject[propertyName]} Now, you will not get private variables this way because they are not available. EDIT: @bitwiseplatypus is correct that unless you use the hasOwnProperty() method, you will get properties that are inherited - however, I don't know why anyone familiar with object-oriented programming would expect anything less! Typically, someone that brings this up has been subjected to Douglas Crockford's warnings about this, which still confuse me a bit. Again, inheritance is a normal part of OO languages and is therefore part of JavaScript, notwithstanding it being prototypical. Now, that said, hasOwnProperty() is useful for filtering, but we don't need to sound a warning as if there is something dangerous in getting inherited properties. EDIT 2: @bitwiseplatypus brings up the situation that would occur should someone add properties/methods to your objects at a point in time later than when you originally wrote your objects (via its prototype) - while it is true that this might cause unexpected behavior, I personally don't see that as my problem entirely. Just a matter of opinion. Besides, what if I design things in such a way that I use prototypes during the construction of my objects and yet have code that iterates over the properties of the object and I want all inherited properties? I wouldn't use hasOwnProperty() . Then, let's say, someone adds new properties later. Is that my fault if things behave badly at that point? I don't think so. I think this is why jQuery, as an example, has specified ways of extending how it works (via jQuery.extend and jQuery.fn.extend ).
{ "score": 11, "source": [ "https://Stackoverflow.com/questions/85992", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4777/" ] }
85,994
If one were to use TiddlyWiki as a personal database for notes and code snippets, how would you go about keeping it in sync between multiple machines. Would a svn/cvs etc work. How would you handle merges?
One option is the up-and-comer DropBox . A free filesharing service that gives you 2GB free, and no limit to the number of computers you share on. Define a shared folder, put your tiddlywiki files in there, and then point the local editing to the shared drive. Any changes are automatically reflected. Note: I have no connections to DropBox other than the fact that I've been reading lots about it, and am trialing it for my personal use.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/85994", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13753/" ] }
85,996
I wrote myself a little downloading application so that I could easily grab a set of files from my server and put them all onto a new pc with a clean install of Windows, without actually going on the net. Unfortunately I'm having problems creating the folder I want to put them in and am unsure how to go about it. I want my program to download the apps to program files\any name here\ So basically I need a function that checks if a folder exists, and if it doesn't it creates it.
If Not System.IO.Directory.Exists(YourPath) Then System.IO.Directory.CreateDirectory(YourPath)End If
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/85996", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
86,018
If you are using HAML and SASS in your Rails application, then any templates you define in public/stylesheet/*.sass will be compiled into *.css stylesheets. From your code, you use stylesheet_link_tag to pull in the asset by name without having to worry about the extension. Many people dislike storing generated code or compiled code in version control, and it also stands to reason that the public/ directory shouldn't contain elements that you don't send to the browser. What is the best pattern to follow when laying out SASS resources in your Rails project?
I always version all stylesheets in "public/stylesheets/sass/*.sass" and set up an exclude filter for compiled ones: /public/stylesheets/*.css
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13472/" ] }
86,046
I'm wondering the best way to start a pthread that is a member of a C++ class? My own approach follows as an answer...
I usually use a static member function of the class, and use a pointer to the class as the void * parameter. That function can then either perform thread processing, or call another non-static member function with the class reference. That function can then reference all class members without awkward syntax.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86046", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10774/" ] }
86,049
How do I ignore files in Subversion? Also, how do I find files which are not under version control?
(This answer has been updated to match SVN 1.8 and 1.9's behaviour) You have 2 questions: Marking files as ignored: By "ignored file" I mean the file won't appear in lists even as "unversioned": your SVN client will pretend the file doesn't exist at all in the filesystem. Ignored files are specified by a "file pattern". The syntax and format of file patterns is explained in SVN's online documentation: http://svnbook.red-bean.com/nightly/en/svn.advanced.props.special.ignore.html "File Patterns in Subversion". Subversion, as of version 1.8 (June 2013) and later, supports 3 different ways of specifying file patterns. Here's a summary with examples: 1 - Runtime Configuration Area - global-ignores option: This is a client-side only setting, so your global-ignores list won't be shared by other users, and it applies to all repos you checkout onto your computer. This setting is defined in your Runtime Configuration Area file: Windows (file-based) - C:\Users\{you}\AppData\Roaming\Subversion\config Windows (registry-based) - Software\Tigris.org\Subversion\Config\Miscellany\global-ignores in both HKLM and HKCU . Linux/Unix - ~/.subversion/config 2 - The svn:ignore property, which is set on directories (not files): This is stored within the repo, so other users will have the same ignore files. Similar to how .gitignore works. svn:ignore is applied to directories and is non-recursive or inherited. Any file or immediate subdirectory of the parent directory that matches the File Pattern will be excluded. While SVN 1.8 adds the concept of "inherited properties", the svn:ignore property itself is ignored in non-immediate descendant directories: cd ~/myRepoRoot # Open an existing repo.echo "foo" > "ignoreThis.txt" # Create a file called "ignoreThis.txt".svn status # Check to see if the file is ignored or not.> ? ./ignoreThis.txt> 1 unversioned file # ...it is NOT currently ignored.svn propset svn:ignore "ignoreThis.txt" . # Apply the svn:ignore property to the "myRepoRoot" directory.svn status> 0 unversioned files # ...but now the file is ignored!cd subdirectory # now open a subdirectory.echo "foo" > "ignoreThis.txt" # create another file named "ignoreThis.txt".svn status> ? ./subdirectory/ignoreThis.txt # ...and is is NOT ignored!> 1 unversioned file (So the file ./subdirectory/ignoreThis is not ignored, even though " ignoreThis.txt " is applied on the . repo root). Therefore, to apply an ignore list recursively you must use svn propset svn:ignore <filePattern> . --recursive . This will create a copy of the property on every subdirectory. If the <filePattern> value is different in a child directory then the child's value completely overrides the parents, so there is no "additive" effect. So if you change the <filePattern> on the root . , then you must change it with --recursive to overwrite it on the child and descendant directories. I note that the command-line syntax is counter-intuitive. I started-off assuming that you would ignore a file in SVN by typing something like svn ignore pathToFileToIgnore.txt however this is not how SVN's ignore feature works. 3- The svn:global-ignores property. Requires SVN 1.8 (June 2013): This is similar to svn:ignore , except it makes use of SVN 1.8's "inherited properties" feature. Compare to svn:ignore , the file pattern is automatically applied in every descendant directory (not just immediate children). This means that is unnecessary to set svn:global-ignores with the --recursive flag, as inherited ignore file patterns are automatically applied as they're inherited. 
Running the same set of commands as in the previous example, but using svn:global-ignores instead: cd ~/myRepoRoot # Open an existing repoecho "foo" > "ignoreThis.txt" # Create a file called "ignoreThis.txt"svn status # Check to see if the file is ignored or not> ? ./ignoreThis.txt> 1 unversioned file # ...it is NOT currently ignoredsvn propset svn:global-ignores "ignoreThis.txt" .svn status> 0 unversioned files # ...but now the file is ignored!cd subdirectory # now open a subdirectoryecho "foo" > "ignoreThis.txt" # create another file named "ignoreThis.txt"svn status> 0 unversioned files # the file is ignored here too! For TortoiseSVN users: This whole arrangement was confusing for me, because TortoiseSVN's terminology (as used in their Windows Explorer menu system) was initially misleading to me - I was unsure what the significance of the Ignore menu's "Add recursively", "Add *" and "Add " options. I hope this post explains how the Ignore feature ties-in to the SVN Properties feature. That said, I suggest using the command-line to set ignored files so you get a feel for how it works instead of using the GUI, and only using the GUI to manipulate properties after you're comfortable with the command-line. Listing files that are ignored: The command svn status will hide ignored files (that is, files that match an RGA global-ignores pattern, or match an immediate parent directory's svn:ignore pattern or match any ancesor directory's svn:global-ignores pattern. Use the --no-ignore option to see those files listed. Ignored files have a status of I , then pipe the output to grep to only show lines starting with "I". The command is: svn status --no-ignore | grep "^I" For example: svn status> ? foo # An unversioned file> M modifiedFile.txt # A versioned file that has been modifiedsvn status --no-ignore> ? foo # An unversioned file> I ignoreThis.txt # A file matching an svn:ignore pattern> M modifiedFile.txt # A versioned file that has been modifiedsvn status --no-ignore | grep "^I"> I ignoreThis.txt # A file matching an svn:ignore pattern ta-da!
{ "score": 11, "source": [ "https://Stackoverflow.com/questions/86049", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2108/" ] }
86,090
So I like the OpenID idea. I support it on my site, and use it wherever it's possible (like here!). But I am not clear about one thing. A site that supports OpenID basically accepts any OpenID provider out there, right? How does that work with sites that want to reduce bot-signups? What's to stop a malicious OpenID provider from setting up unlimited bot IDs automatically? I have some ideas, and will post them as a possible answer, but I was wondering if anyone can see something obvious that I've missed?
You have confused two different things - identification and authorization. Just because you know who somebody is, it doesn't mean you have to automatically give them permission to do anything. Simon Willison covers this nicely in An OpenID is not an account! More discussion on whitelisting is available in Social whitelisting with OpenID .
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4913/" ] }
86,105
My web application has a login page that submits authentication credentials via an AJAX call. If the user enters the correct username and password, everything is fine, but if not, the following happens: The web server determines that although the request included a well-formed Authorization header, the credentials in the header do not successfully authenticate. The web server returns a 401 status code and includes one or more WWW-Authenticate headers listing the supported authentication types. The browser detects that the response to my call on the XMLHttpRequest object is a 401 and the response includes WWW-Authenticate headers. It then pops up an authentication dialog asking, again, for the username and password. This is all fine up until step 3. I don't want the dialog to pop up, I want want to handle the 401 response in my AJAX callback function. (For example, by displaying an error message on the login page.) I want the user to re-enter their username and password, of course, but I want them to see my friendly, reassuring login form, not the browser's ugly, default authentication dialog. Incidentally, I have no control over the server, so having it return a custom status code (i.e., something other than a 401) is not an option. Is there any way I can suppress the authentication dialog? In particular, can I suppress the Authentication Required dialog in Firefox 2 or later? Is there any way to suppress the Connect to [host] dialog in IE 6 and later? Edit Additional information from the author (Sept. 18): I should add that the real problem with the browser's authentication dialog popping up is that it give insufficient information to the user. The user has just entered a username and password via the form on the login page, he believes he has typed them both correctly, and he has clicked the submit button or hit enter. His expectation is that he will be taken to the next page or perhaps told that he has entered his information incorrectly and should try again. However, he is instead presented with an unexpected dialog box. The dialog makes no acknowledgment of the fact he just did enter a username and password. It does not clearly state that there was a problem and that he should try again. Instead, the dialog box presents the user with cryptic information like "The site says: ' [realm] '." Where [realm] is a short realm name that only a programmer could love. Web broswer designers take note: no one would ask how to suppress the authentication dialog if the dialog itself were simply more user-friendly. The entire reason that I am doing a login form is that our product management team rightly considers the browsers' authentication dialogs to be awful.
I encountered the same issue here, and the backend engineer at my company implemented a behavior that is apparently considered a good practice: when a call to a URL returns a 401 and the client has set the header X-Requested-With: XMLHttpRequest, the server drops the WWW-Authenticate header in its response. The side effect is that the default authentication popup does not appear. Make sure that your API call has the X-Requested-With header set to XMLHttpRequest. If it does, there is nothing left to do on the client side except changing the server behavior according to this good practice...
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/86105", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9897/" ] }
86,138
I need to get the default printer name. I'll be using C# but I suspect this is more of a framework question and isn't language specific.
The easiest way I found is to create a new PrinterSettings object. It starts with all default values, so you can check its PrinterName property to get the name of the default printer. PrinterSettings is in System.Drawing.dll in the namespace System.Drawing.Printing. PrinterSettings settings = new PrinterSettings();Console.WriteLine(settings.PrinterName); Alternatively, you could maybe use the static PrinterSettings.InstalledPrinters property to get a list of all printer names, then set the PrinterName property and check IsDefaultPrinter. I haven't tried this, but the documentation seems to suggest it won't work. Apparently IsDefaultPrinter is only true when PrinterName is not explicitly set.
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/86138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7176/" ] }
86,278
I have searched around, and it seems that this is a limitation in MS Access, so I'm wondering what creative solutions other have found to this puzzle. If you have a continuous form and you want a field to be a combo box of options that are specific to that row, Access fails to deliver; the combo box row source is only queried once at the beginning of the form, and thus show the wrong options for the rest of the form. The next step we all try, of course, is to use the onCurrent event to requery the combo box, which does in fact limit the options to the given row. However, at this point, Access goes nuts, and requeries all of the combo boxes, for every row, and the result is often that of disappearing and reappearing options in other rows, depending on whether they have chosen an option that is valid for the current record's row source. The only solution I have found is to just list all options available, all the time. Any creative answers out there? Edit Also, I should note that the reason for the combo box is to have a query as a lookup table, the real value needs to be hidden and stored, while the human readable version is displayed... multiple columns in the combo box row source. Thus, changing limit to list doesn't help, because id's that are not in the current row source query won't have a matching human readable part. In this particular case, continuous forms make a lot of sense, so please don't tell me it's the wrong solution. I'm asking for any creative answers.
I also hate Access, but you must play with the cards you are dealt. Continuous forms are a wonderful thing in Access, until you run into any sort of complexity, as is commonly the case, like in this instance. Here is what I would do when faced with this situation (and I have implemented similar workarounds before): Place an UNBOUND combobox on the form. Then place a BOUND textBox for the field you want to edit. Make sure the combobox is hidden (NOT invisible, just hidden) behind the textBox. In the OnCurrent event fill the combobox with the necessary data. Go ahead and "Limit to list" it too. In the OnEnter or OnClick event of the textBox give the combobox focus. This will bring the combobox to the forefront. When focus leaves the combobox it will hide itself once more. In the AfterUpdate event of the combobox set the value of the textbox equal to the value of the combobox. Depending on your situation there may be some other details to work out, but that should more or less accomplish your goal without adding too much complexity.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14253/" ] }
86,292
I would much prefer to do this without catching an exception in LoadXml() and using this results as part of my logic. Any ideas for a solution that doesn't involve manually parsing the xml myself? I think VB has a return value of false for this function instead of throwing an XmlException. Xml input is provided from the user. Thanks much! if (!loaded){ this.m_xTableStructure = new XmlDocument(); try { this.m_xTableStructure.LoadXml(input); loaded = true; } catch { loaded = false; }}
Just catch the exception. The small overhead from catching an exception is negligible compared to the cost of parsing the XML. If you want the function (for stylistic reasons, not for performance), implement it yourself:
public class MyXmlDocument : XmlDocument
{
    public bool TryParseXml(string xml)
    {
        try
        {
            LoadXml(xml);   // XmlDocument's own load method
            return true;
        }
        catch (XmlException)
        {
            return false;
        }
    }
}
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/86292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1551/" ] }
86,308
Many times we find ourselves working on a problem, only to figure out that the solution being created is far more complex than the problem requires. Are there controls, best practices, techniques, etc. that help you control over-complication in your workplace?
Getting someone new to look at it.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8509/" ] }
86,361
I really like the interface for Yahoo Pipes ( http://pipes.yahoo.com/pipes/ ) and would like to create a similar interface for a different problem. Are there any libraries that would allow me to create an interface with the same basic look and feel? I especially like how the pipes behave and how they are not just straight lines. Edit: The application would be web-based. I'm open to using Flash or Javascript.
WireIt is an open-source javascript library to create web wirable interfaces like Yahoo! Pipes for dataflow applications, visual programming languages or graphical modeling. Wireit uses the YUI library (2.6.0) for DOM and events manipulation, and excanvas for IE support of the canvas tag. It currently supports Firefox 1.5+, Safari 2.0+, IE 7.0, Opera 9+ and Chrome 0.2.x.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86361", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1227001/" ] }
86,402
Is my best be going to be a shell script which replaces symlinks with copies, or is there another way of telling Git to follow symlinks? PS: I know it's not very secure, but I only want to do it in a few specific cases.
NOTE: This advice is now out-dated as per comment since Git 1.6.1. Git used to behave this way, and no longer does. Git by default attempts to store symlinks instead of following them (for compactness, and it's generally what people want). However, I accidentally managed to get it to add files beyond the symlink when the symlink is a directory. I.e.: /foo/ /foo/baz /bar/foo --> /foo /bar/foo/baz by doing git add /bar/foo/baz it appeared to work when I tried it. That behavior was however unwanted by me at the time, so I can't give you information beyond that.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/86402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15368/" ] }
86,408
Let's say I have an aspx page with this calendar control: <asp:Calendar ID="Calendar1" runat="server" SelectedDate="" ></asp:Calendar> Is there anything I can put in for SelectedDate to make it use the current date by default, without having to use the code-behind?
If you are already doing databinding: <asp:Calendar ID="Calendar1" runat="server" SelectedDate="<%# DateTime.Today %>" /> Will do it. This does require that somewhere you are doing a Page.DataBind() call (or a databind call on a parent control). If you are not doing that and you absolutely do not want any codebehind on the page, then you'll have to create a usercontrol that contains a calendar control and sets its selecteddate.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86408", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3043/" ] }
86,413
What is the best way to create a fixed width file in C#. I have a bunch of fields with lengths to write out. Say 20,80.10,2 etc all left aligned. Is there an easy way to do this?
You can use string.Format to easily pad a value with spacese.g. string a = String.Format("|{0,5}|{1,5}|{2,5}", 1, 20, 300);string b = String.Format("|{0,-5}|{1,-5}|{2,-5}", 1, 20, 300);// 'a' will be equal to "| 1| 20| 300|"// 'b' will be equal to "|1 |20 |300 |"
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/86413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3208/" ] }
86,417
I have a custom (code-based) workflow, deployed in WSS via features in a .wsp file. The workflow is configured with a custom task content type (ie, the Workflow element contains a TaskListContentTypeId attribute). This content type's declaration contains a FormUrls element pointing to a custom task edit page. When the workflow attempts to create a task, the workflow throws this exception: Invalid field name. {17ca3a22-fdfe-46eb-99b5-9646baed3f16 This is the ID of the FormURN site column. I thought FormURN is only used for InfoPath forms, not regular aspx forms... Does anyone have any idea how to solve this, so I can create tasks in my workflow?
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/86417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5782/" ] }
86,426
Would it not make sense to support a set of languages (Java, Python, Ruby, etc.) by way of a standardized virtual machine hosted in the browser rather than requiring the use of a specialized language -- really, a specialized paradigm -- for client scripting only? To clarify the suggestion, a web page would contain byte code instead of any higher-level language like JavaScript. I understand the pragmatic reality that JavaScript is simply what we have to work with now due to evolutionary reasons, but I'm thinking more about the long term. With regard to backward compatibility, there's no reason that inline JavaScript could not be simultaneously supported for a period of time and of course JavaScript could be one of the languages supported by the browser virtual machine.
Well, yes. Certainly if we had a time machine, going back and ensuring a lot of the Javascript features were designed differently would be a major pastime (that, and ensuring the people who designed IE's CSS engine never went into IT). But it's not going to happen, and we're stuck with it now. I suspect, in time, it will become the "Machine language" for the web, with other better designed languages and APIs compile down to it (and cater for different runtime engine foibles). I don't think, however, any of these "better designed languages" will be Java, Python or Ruby. Javascript is, despite the ability to be used elsewhere, a Web application scripting language. Given that use case, we can do better than any of those languages.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/86426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3126/" ] }
86,428
I would like to reload an <iframe> using JavaScript. The best way I found until now was set the iframe’s src attribute to itself, but this isn’t very clean. Any ideas?
document.getElementById('some_frame_id').contentWindow.location.reload(); be careful, in Firefox, window.frames[] cannot be indexed by id, but by name or index
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/86428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8276/" ] }
86,458
Does a free .NET library exist with which I can upload a file to a SFTP (SSH FTP) server, which throws exceptions on problems with the upload and allows the monitoring of its progress?
Maybe you can script/control winscp ? Update: winscp now has a .NET library available as a nuget package that supports SFTP, SCP, and FTPS
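For reference, a rough sketch against WinSCP's .NET assembly (property and method names as I recall them from its documentation, so double-check against the version you install; host, credentials and paths below are placeholders):

using System;
using WinSCP;

class SftpUpload
{
    static void Main()
    {
        var options = new SessionOptions
        {
            Protocol = Protocol.Sftp,
            HostName = "example.com",
            UserName = "user",
            Password = "password",
            SshHostKeyFingerprint = "ssh-rsa 2048 xx:xx:xx:..."   // placeholder
        };

        using (var session = new Session())
        {
            // Progress monitoring via an event.
            session.FileTransferProgress +=
                (sender, e) => Console.WriteLine("{0:P0}", e.OverallProgress);

            session.Open(options);

            // Check() throws if the transfer failed, which covers the
            // "throws exceptions on problems" requirement.
            session.PutFiles(@"C:\local\file.bin", "/remote/path/").Check();
        }
    }
}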
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86458", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1595/" ] }
86,477
In JavaScript: encodeURIComponent("©√") == "%C2%A9%E2%88%9A" Is there an equivalent for C# applications? For escaping HTML characters I used: txtOut.Text = Regex.Replace(txtIn.Text, @"[\u0080-\uFFFF]", m => @"&#" + ((int)m.Value[0]).ToString() + ";"); But I'm not sure how to convert the match to the correct hexadecimal format that JS uses. For example this code: txtOut.Text = Regex.Replace(txtIn.Text, @"[\u0080-\uFFFF]", m => @"%" + String.Format("{0:x}", ((int)m.Value[0]))); Returns " %a9%221a" for "©√" instead of "%C2%A9%E2%88%9A" . It looks like I need to split the string up into bytes or something. Edit: This is for a windows app, the only items available in System.Web are: AspNetHostingPermission , AspNetHostingPermissionAttribute , and AspNetHostingPermissionLevel .
Uri.EscapeDataString or HttpUtility.UrlEncode is the correct way to escape a string meant to be part of a URL. Take for example the string "Stack Overflow": HttpUtility.UrlEncode("Stack Overflow") --> "Stack+Overflow" Uri.EscapeUriString("Stack Overflow") --> "Stack%20Overflow" Uri.EscapeDataString("Stack + Overflow") --> also encodes "+" to "%2B" ----> "Stack%20%2B%20Overflow" Only the last is correct when used as an actual part of the URL (as opposed to the value of one of the query string parameters).
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/86477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1414/" ] }
86,513
The eval function is a powerful and easy way to dynamically generate code, so what are the caveats?
Improper use of eval opens up yourcode for injection attacks Debugging can be more challenging(no line numbers, etc.) eval'd code executes slower (no opportunity to compile/cache eval'd code) Edit: As @Jeff Walden points out in comments, #3 is less true today than it was in 2008. However, while some caching of compiled scripts may happen this will only be limited to scripts that are eval'd repeated with no modification. A more likely scenario is that you are eval'ing scripts that have undergone slight modification each time and as such could not be cached. Let's just say that SOME eval'd code executes more slowly.
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/86513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5471/" ] }
86,526
I have an Ant script that performs a copy operation using the 'copy' task . It was written for Windows, and has a hardcoded C:\ path as the 'todir' argument. I see the 'exec' task has an OS argument, is there a similar way to branch a copy based on OS?
I would recommend putting the path in a property, then setting the property conditionally based on the current OS. <condition property="foo.path" value="C:\Foo\Dir"> <os family="windows"/></condition><condition property="foo.path" value="/home/foo/dir"> <os family="unix"/></condition><fail unless="foo.path">No foo.path set for this OS!</fail> As a side benefit, once it is in a property you can override it without editing the Ant script.
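The copy task from the question can then just reference the property (directory names here are invented):

<copy todir="${foo.path}">
    <fileset dir="build/output"/>
</copy>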
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/86526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/99021/" ] }
86,550
Subversion has a superb client on Windows (Tortoise, of course). Everything I've tried on Linux just - well - sucks in comparison....
Disclaimer: A long long time ago I was one of the developers for RabbitVCS (previously known as NautilusSvn). If you use Nautilus then you might be interested in RabbitVCS (mentioned earlier by Trevor Bramble). It's an unadulterated clone of TortoiseSVN for Nautilus written in Python. While there's still a lot of improvement to be made (especially in the area of performance) some people seem to be quite satisfied with it. The name is quite fitting for the project, because the story it refers to quite accurately depicts the development pace (meaning long naps). If you do choose to start using RabbitVCS as your version control client, you're probably going to have to get your hands dirty.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/86550", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
86,561
Is there any difference to the following code: class Foo { inline int SomeFunc() { return 42; } int AnotherFunc() { return 42; }}; Will both functions gets inlined? Does inline actually make any difference? Are there any rules on when you should or shouldn't inline code? I often use the AnotherFunc syntax (accessors for example) but I rarely specify inline directly.
Both forms are treated in exactly the same way: inline is implicit for member functions whose bodies are defined inside the class definition, so the explicit keyword adds nothing there. Either way, inline is only a hint; the compiler is free to decide whether the call is actually inlined.
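The keyword only starts to matter when the body is moved out of the class definition, for example to keep the definition in a header without violating the one-definition rule:

class Foo
{
    int SomeFunc(); // declaration only
};

inline int Foo::SomeFunc() // 'inline' needed here if this definition stays in the header
{
    return 42;
}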
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9236/" ] }
86,562
What is "missing" in the Visual Studio 2008 Express Editions? In particular, what functionality is not available? what restrictions are there on its use?
The major areas where Visual Studio Express lacks features compared to Visual Studio Professional: No add-ins/macros Some Win32 tools missing No Team Explorer support Limited refactoring support Debugging is much more limited (particularly problematic for server development is no remote debugging) Lack of support for setup projects No report creation tools No Office development support No mobile platform support Limited set of designers Limited set of database tools No code profiling or test framework support No MFC/ATL support No support for compiling C++ to 64-bit images (workaround is to install Windows SDK which is free) NOTE: it is often said that the Express EULA does not permit commercial development - that is not true ( Visual Studio Express FAQ Item 7 )
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/86562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6445/" ] }
86,582
Edit: From another question I provided an answer that has links to a lot of questions/answers about singletons: More info about singletons here: So I have read the thread Singletons: good design or a crutch? And the argument still rages. I see Singletons as a Design Pattern (good and bad). The problem with Singleton is not the Pattern but rather the users (sorry everybody). Everybody and their father thinks they can implement one correctly (and from the many interviews I have done, most people can't). Also, because everybody thinks they can implement a correct Singleton, they abuse the Pattern and use it in situations that are not appropriate (replacing global variables with Singletons!). So the main questions that need to be answered are:

- When should you use a Singleton?
- How do you implement a Singleton correctly?

My hope for this article is that we can collect together in a single place (rather than having to google and search multiple sites) an authoritative source of when (and then how) to use a Singleton correctly. Also appropriate would be a list of Anti-Usages and common bad implementations, explaining why they fail to work, and for good implementations their weaknesses. So to get the ball rolling: I will hold my hand up and say this is what I use, but it probably has problems. I like Scott Meyers' handling of the subject in his "Effective C++" books. Good situations to use Singletons (not many):

- Logging frameworks
- Thread recycling pools

/*
 * C++ Singleton
 * Limitation: Single Threaded Design
 * See: http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
 *      for problems associated with locking in multi-threaded applications
 *
 * Limitation:
 * If you use this Singleton (A) within a destructor of another Singleton (B),
 * this Singleton (A) must be fully constructed before the constructor of (B)
 * is called.
 */
class MySingleton
{
    private:
        // Private Constructor
        MySingleton();
        // Stop the compiler from generating copy methods for the object
        MySingleton(MySingleton const& copy);            // Not Implemented
        MySingleton& operator=(MySingleton const& copy); // Not Implemented

    public:
        static MySingleton& getInstance()
        {
            // The only instance
            // Guaranteed to be lazy initialized
            // Guaranteed that it will be destroyed correctly
            static MySingleton instance;
            return instance;
        }
};

OK. Let's get some criticism and other implementations together. :-)
Answer: Use a Singleton if:

- You need to have one and only one object of a type in the system

Do not use a Singleton if:

- You want to save memory
- You want to try something new
- You want to show off how much you know
- Because everyone else is doing it (see "cargo cult programmer" on Wikipedia)
- In user interface widgets
- It is supposed to be a cache
- In strings
- In Sessions
- I can go all day long

How to create the best singleton:

- The smaller, the better. I am a minimalist.
- Make sure it is thread safe.
- Make sure it is never null.
- Make sure it is created only once.
- Lazy or system initialization? Up to your requirements.
- Sometimes the OS or the JVM creates singletons for you (e.g. in Java every class definition is a singleton).
- Provide a destructor or somehow figure out how to dispose of resources.
- Use little memory.
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/86582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14065/" ] }
86,636
Here is a problem I've struggled with ever since I first started learning object-oriented programming: how should one implement a logger in "proper" OOP code? By this, I mean an object that has a method that we want every other object in the code to be able to access; this method would output to console/file/whatever, which we would use for logging--hence, this object would be the logger object. We don't want to establish the logger object as a global variable, because global variables are bad, right? But we also don't want to have the pass the logger object in the parameters of every single method we call in every single object. In college, when I brought this up to the professor, he couldn't actually give me an answer. I realize that there are actually packages (for say, Java) that might implement this functionality. What I am ultimately looking for, though, is the knowledge of how to properly and in the OOP way implement this myself.
You do want to establish the logger as a global variable, because global variables are not bad. At least, they aren't inherently bad. A logger is a great example of the proper use of a globally accessible object. Read about the Singleton design pattern if you want more information.
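A bare-bones sketch of what that looks like (class and method names are invented for illustration):

public final class AppLog {
    private static final AppLog INSTANCE = new AppLog();

    private AppLog() { }                      // no construction from outside

    public static AppLog get() { return INSTANCE; }

    public void log(String message) {
        System.out.println(message);          // swap in file/console/whatever here
    }
}

// any object can then log without the logger being passed around:
AppLog.get().log("something happened");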
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16633/" ] }
86,653
I would like my JSON output in Ruby on Rails to be "pretty" or nicely formatted. Right now, I call to_json and my JSON is all on one line. At times this can be difficult to see if there is a problem in the JSON output stream. Is there way to configure to make my JSON "pretty" or nicely formatted in Rails?
Use the pretty_generate() function, built into later versions of JSON. For example:

require 'json'

my_object = { :array => [1, 2, 3, { :sample => "hash" }], :foo => "bar" }
puts JSON.pretty_generate(my_object)

Which gets you:

{
  "array": [
    1,
    2,
    3,
    {
      "sample": "hash"
    }
  ],
  "foo": "bar"
}
{ "score": 11, "source": [ "https://Stackoverflow.com/questions/86653", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10333/" ] }
86,685
I am having a really hard time attempting to debug LINQ to SQL and submitting changes. I have been using http://weblogs.asp.net/scottgu/archive/2007/07/31/linq-to-sql-debug-visualizer.aspx , which works great for debugging simple queries. I'm working in the DataContext Class for my project with the following snippet from my application: JobMaster newJobToCreate = new JobMaster();newJobToCreate.JobID = 9999newJobToCreate.ProjectID = "New Project";this.UpdateJobMaster(newJobToCreate);this.SubmitChanges(); I will catch some very odd exceptions when I run this.SubmitChanges; Index was outside the bounds of the array. The stack trace goes places I cannot step into: at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager`3.TryCreateKeyFromValues(Object[] values, MultiKey`2& k) at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache`2.Find(Object[] keyValues) at System.Data.Linq.IdentityManager.StandardIdentityManager.Find(MetaType type, Object[] keyValues) at System.Data.Linq.CommonDataServices.GetCachedObject(MetaType type, Object[] keyValues) at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance) at System.Data.Linq.ChangeProcessor.BuildEdgeMaps() at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges() at JobTrakDataContext.CreateNewJob(NewJob job, String userName) in D:\JobTrakDataContext.cs:line 1119 Does anyone have any tools or techniques they use? Am I missing something simple? EDIT :I've setup .net debugging using Slace's suggestion, however the .net 3.5 code is not yet available: http://referencesource.microsoft.com/netframework.aspx EDIT2 :I've changed to InsertOnSubmit as per sirrocco's suggestion, still getting the same error. EDIT3: I've implemented Sam's suggestions trying to log the SQL generated and to catch the ChangeExceptoinException. These suggestions do not shed any more light, I'm never actually getting to generate SQL when my exception is being thrown. EDIT4: I found an answer that works for me below. Its just a theory but it has fixed my current issue.
I always found useful to know exactly what changes are being sent to the DataContext in the SubmitChanges() method. I use the DataContext.GetChangeSet() method, it returns a ChangeSet object instance that holds 3 read-only IList's of objects which have either been added, modified, or removed. You can place a breakpoint just before the SubmitChanges method call, and add a Watch (or Quick Watch) containing: ctx.GetChangeSet(); Where ctx is the current instance of your DataContext, and then you'll be able to track all the changes that will be effective on the SubmitChanges call.
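For example, dumped just before the call that throws (property names as in the released .NET 3.5 ChangeSet type; check your version):

ChangeSet changes = ctx.GetChangeSet();
Console.WriteLine("Inserts: {0}, Updates: {1}, Deletes: {2}",
    changes.Inserts.Count, changes.Updates.Count, changes.Deletes.Count);

// LINQ to SQL can also echo the SQL it is about to execute:
ctx.Log = Console.Out;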
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7561/" ] }
86,766
Often I find myself interacting with files in some way but after writing the code I'm always uncertain how robust it actually is. The problem is that I'm not entirely sure how file related operations can fail and, therefore, the best way to handle exceptions. The simple solution would seem to be just to catch any IOExceptions thrown by the code and give the user an "Inaccessible file" error message, but is it possible to get a bit more fine-grained error messages? Is there a way to determine the difference between such errors as a file being locked by another program and the data being unreadable due to a hardware error? Given the following C# code, how would you handle errors in a user friendly (as informative as possible) way? public class IO{ public List<string> ReadFile(string path) { FileInfo file = new FileInfo(path); if (!file.Exists) { throw new FileNotFoundException(); } StreamReader reader = file.OpenText(); List<string> text = new List<string>(); while (!reader.EndOfStream) { text.Add(reader.ReadLine()); } reader.Close(); reader.Dispose(); return text; } public void WriteFile(List<string> text, string path) { FileInfo file = new FileInfo(path); if (!file.Exists) { throw new FileNotFoundException(); } StreamWriter writer = file.CreateText(); foreach(string line in text) { writer.WriteLine(line); } writer.Flush(); writer.Close(); writer.Dispose(); }}
...but is it possible to get a bit more fine-grained error messages. Yes. Go ahead and catch IOException, and use the Exception.ToString() method to get a relatively relevant error message to display. Note that the exceptions generated by the .NET Framework will supply these useful strings, but if you are going to throw your own exception, you must remember to plug that string into the Exception's constructor, like:

throw new FileNotFoundException("File not found");

Also, absolutely, as per Scott Dorman, use that using statement. The thing to notice, though, is that the using statement doesn't actually catch anything, which is the way it ought to be. Your test to see if the file exists, for instance, will introduce a race condition that may be rather vexing. It doesn't really do you any good to have it in there. So, now, for the reader we have:

try
{
    using (StreamReader reader = file.OpenText())
    {
        // Your processing code here
    }
}
catch (IOException e)
{
    UI.AlertUserSomehow(e.ToString());
}

In short, for basic file operations:
1. Use using.
2. Wrap the using statement or function in a try/catch that catches IOException.
3. Use Exception.ToString() in your catch to get a useful error message.
4. Don't try to detect exceptional file issues yourself. Let .NET do the throwing for you.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4055/" ] }
86,780
Say I have two strings, String s1 = "AbBaCca";String s2 = "bac"; I want to perform a check returning that s2 is contained within s1 . I can do this with: return s1.contains(s2); I am pretty sure that contains() is case sensitive, however I can't determine this for sure from reading the documentation. If it is then I suppose my best method would be something like: return s1.toLowerCase().contains(s2.toLowerCase()); All this aside, is there another (possibly better) way to accomplish this without caring about case-sensitivity?
Yes, contains is case sensitive. You can use java.util.regex.Pattern with the CASE_INSENSITIVE flag for case insensitive matching: Pattern.compile(Pattern.quote(wantedStr), Pattern.CASE_INSENSITIVE).matcher(source).find(); EDIT: If s2 contains regex special characters (of which there are many) it's important to quote it first. I've corrected my answer since it is the first one people will see, but vote up Matt Quail's since he pointed this out.
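If you want to avoid both the regex machinery and the extra strings created by toLowerCase(), one commonly used alternative is String.regionMatches with its ignoreCase flag (a standard java.lang.String method):

static boolean containsIgnoreCase(String src, String what) {
    final int length = what.length();
    if (length == 0) {
        return true; // the empty string is contained everywhere
    }
    for (int i = src.length() - length; i >= 0; i--) {
        if (src.regionMatches(true, i, what, 0, length)) {
            return true;
        }
    }
    return false;
}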
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/86780", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2628/" ] }
86,793
If a user select all items in a .NET 2.0 ListView, the ListView will fire a SelectedIndexChanged event for every item, rather than firing an event to indicate that the selection has changed. If the user then clicks to select just one item in the list, the ListView will fire a SelectedIndexChanged event for every item that is getting unselected, and then an SelectedIndexChanged event for the single newly selected item, rather than firing an event to indicate that the selection has changed. If you have code in the SelectedIndexChanged event handler, the program will become pretty unresponsive when you begin to have a few hundred/thousand items in the list. I've thought about dwell timers , etc. But does anyone have a good solution to avoid thousands of needless ListView. SelectedIndexChange events, when really one event will do?
Good solution from Ian. I took that and made it into a reusable class, making sure to dispose of the timer properly. I also reduced the interval to get a more responsive app. This control also doublebuffers to reduce flicker. public class DoublebufferedListView : System.Windows.Forms.ListView { private Timer m_changeDelayTimer = null; public DoublebufferedListView() : base() { // Set common properties for our listviews if (!SystemInformation.TerminalServerSession) { DoubleBuffered = true; SetStyle(ControlStyles.ResizeRedraw, true); } } /// <summary> /// Make sure to properly dispose of the timer /// </summary> /// <param name="disposing"></param> protected override void Dispose(bool disposing) { if (disposing && m_changeDelayTimer != null) { m_changeDelayTimer.Tick -= ChangeDelayTimerTick; m_changeDelayTimer.Dispose(); } base.Dispose(disposing); } /// <summary> /// Hack to avoid lots of unnecessary change events by marshaling with a timer: /// http://stackoverflow.com/questions/86793/how-to-avoid-thousands-of-needless-listview-selectedindexchanged-events /// </summary> /// <param name="e"></param> protected override void OnSelectedIndexChanged(EventArgs e) { if (m_changeDelayTimer == null) { m_changeDelayTimer = new Timer(); m_changeDelayTimer.Tick += ChangeDelayTimerTick; m_changeDelayTimer.Interval = 40; } // When a new SelectedIndexChanged event arrives, disable, then enable the // timer, effectively resetting it, so that after the last one in a batch // arrives, there is at least 40 ms before we react, plenty of time // to wait any other selection events in the same batch. m_changeDelayTimer.Enabled = false; m_changeDelayTimer.Enabled = true; } private void ChangeDelayTimerTick(object sender, EventArgs e) { m_changeDelayTimer.Enabled = false; base.OnSelectedIndexChanged(new EventArgs()); } } Do let me know if this can be improved.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12597/" ] }
86,824
I'm getting a ConnectException: Connection timed out with some frequency from my code. The URL I am trying to hit is up. The same code works for some users, but not others. It seems like once one user starts to get this exception they continue to get the exception. Here is the stack trace: java.net.ConnectException: Connection timed outCaused by: java.net.ConnectException: Connection timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.Socket.connect(Socket.java:516) at java.net.Socket.connect(Socket.java:466) at sun.net.NetworkClient.doConnect(NetworkClient.java:157) at sun.net.www.http.HttpClient.openServer(HttpClient.java:365) at sun.net.www.http.HttpClient.openServer(HttpClient.java:477) at sun.net.www.http.HttpClient.<init>(HttpClient.java:214) at sun.net.www.http.HttpClient.New(HttpClient.java:287) at sun.net.www.http.HttpClient.New(HttpClient.java:299) at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:796) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:748) at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:673) at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:840) Here is a snippet from my code: URLConnection urlConnection = null;OutputStream outputStream = null;OutputStreamWriter outputStreamWriter = null;InputStream inputStream = null;try { URL url = new URL(urlBase); urlConnection = url.openConnection(); urlConnection.setDoOutput(true); outputStream = urlConnection.getOutputStream(); // exception occurs on this line outputStreamWriter = new OutputStreamWriter(outputStream); outputStreamWriter.write(urlString); outputStreamWriter.flush(); inputStream = urlConnection.getInputStream(); String response = IOUtils.toString(inputStream); return processResponse(urlString, urlBase, response);} catch (IOException e) { throw new Exception("Error querying url: " + urlString, e);} finally { IoUtil.close(inputStream); IoUtil.close(outputStreamWriter); IoUtil.close(outputStream);}
Connection timeouts (assuming a local network and several client machines) typically result from:

a) some kind of firewall on the way that simply eats the packets without telling the sender things like "No Route to host"
b) packet loss due to wrong network configuration or line overload
c) too many requests overloading the server
d) a small number of simultaneously available threads/processes on the server, which leads to all of them being taken. This happens especially with requests that take a long time to run and may combine with c).

Hope this helps.
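One small thing that makes causes like these easier to diagnose is failing fast instead of waiting for the OS-level default; for example (timeout values are arbitrary):

URLConnection urlConnection = url.openConnection();
urlConnection.setConnectTimeout(10000); // ms to establish the TCP connection
urlConnection.setReadTimeout(10000);    // ms to wait for data once connected
// a java.net.SocketTimeoutException is then thrown when either limit is hit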
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/86824", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16677/" ] }
86,878
I am having problem that even though I specify the level to ERROR in the root tag, the specified appender logs all levels (debug, info, warn) to the file regardless the settings. I am not a Log4j expert so any help is appreciated. I have checked the classpath for log4j.properties (there is none) except the log4j.xml. Here is the log4j.xml file: <?xml version="1.0" encoding="UTF-8" ?><!DOCTYPE log4j:configuration SYSTEM "log4j.dtd"><log4j:configuration xmlns:log4j='http://jakarta.apache.org/log4j/'> <!-- ============================== --> <!-- Append messages to the console --> <!-- ============================== --> <appender name="console" class="org.apache.log4j.ConsoleAppender"> <param name="Target" value="System.out" /> <layout class="org.apache.log4j.PatternLayout"> <!-- The default pattern: Date Priority [Category] Message\n --> <param name="ConversionPattern" value="[AC - %5p] [%d{ISO8601}] [%t] [%c{1} - %L] %m%n" /> </layout> </appender> <appender name="logfile" class="org.apache.log4j.RollingFileAppender"> <param name="File" value="./logs/server.log" /> <param name="MaxFileSize" value="1000KB" /> <param name="MaxBackupIndex" value="2" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="[AC - %-5p] {%d{dd.MM.yyyy - HH.mm.ss}} %m%n" /> </layout> </appender> <appender name="payloadAppender" class="org.apache.log4j.RollingFileAppender"> <param name="File" value="./logs/payload.log" /> <param name="MaxFileSize" value="1000KB" /> <param name="MaxBackupIndex" value="10" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="[AC - %-5p] {%d{dd.MM.yyyy - HH.mm.ss}} %m%n" /> </layout> </appender> <appender name="errorLog" class="org.apache.log4j.RollingFileAppender"> <param name="File" value="./logs/error.log" /> <param name="MaxFileSize" value="1000KB" /> <param name="MaxBackupIndex" value="10" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="[AC - %-5p] {%d{dd.MM.yyyy - HH.mm.ss}} %m%n" /> </layout> </appender> <appender name="traceLog" class="org.apache.log4j.RollingFileAppender"> <param name="File" value="./logs/trace.log" /> <param name="MaxFileSize" value="1000KB" /> <param name="MaxBackupIndex" value="20" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="[AccessControl - %-5p] {%t: %d{dd.MM.yyyy - HH.mm.ss,SSS}} %m%n" /> </layout> </appender> <appender name="traceSocketAppender" class="org.apache.log4j.net.SocketAppender"> <param name="remoteHost" value="localhost" /> <param name="port" value="4445" /> <param name="locationInfo" value="true" /> </appender> <logger name="TraceLogger"> <level value="trace" /> <!-- Set level to trace to activate tracing --> <appender-ref ref="traceLog" /> </logger> <logger name="org.springframework.ws.server.endpoint.interceptor"> <level value="DEBUG" /> <appender-ref ref="payloadAppender" /> </logger> <root> <level value="error" /> <appender-ref ref="errorLog" /> </root></log4j:configuration> If I replace the root with another logger, then nothing gets logged at all to the specified appender. <logger name="com.mydomain.logic"> <level value="error" /> <appender-ref ref="errorLog" /></logger>
The root logger resides at the top of the logger hierarchy. It is exceptional in three ways: it always exists, its level cannot be set to null it cannot be retrieved by name. The rootLogger is the father of all appenders. Each enabled logging request for a given logger will be forwarded to all the appenders in that logger as well as the appenders higher in the hierarchy (including rootLogger) For example, if the console appender is added to the root logger , then all enabled logging requests will at least print on the console. If in addition a file appender is added to a logger, say L , then enabled logging requests for L and L's children will print on a file and on the console . It is possible to override this default behavior so that appender accumulation is no longer additive by setting the additivity flag to false . From the log4j manual To sum up: If you want not to propagate a logging event to the parents loggers (say rootLogger) then add the additivity flag to false in those loggers. In your case: <logger name="org.springframework.ws.server.endpoint.interceptor" additivity="false"> <level value="DEBUG" /> <appender-ref ref="payloadAppender" /></logger> In standard log4j config style (which I prefer to XML): log4j.logger.org.springframework.ws.server.endpoint.interceptor = INFO, payloadAppenderlog4j.additivity.org.springframework.ws.server.endpoint.interceptor = false Hope this helps.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/86878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15045/" ] }
86,901
I would like a panel in GWT to fill the page without actually having to set the size. Is there a way to do this? Currently I have the following: public class Main implements EntryPoint{ public void onModuleLoad() { HorizontalSplitPanel split = new HorizontalSplitPanel(); //split.setSize("250px", "500px"); split.setSplitPosition("30%"); DecoratorPanel dp = new DecoratorPanel(); dp.setWidget(split); RootPanel.get().add(dp); }} With the previous code snippet, nothing shows up. Is there a method call I am missing? Thanks. UPDATE Sep 17 '08 at 20:15 I put some buttons (explicitly set their size) on each side and that still doesn't work. I'm really surprised there isn't like a FillLayout class or a setFillLayout method or setDockStyle(DockStyle.Fill) or something like that. Maybe it's not possible? But for as popular as GWT is, I would think it would be possible. UPDATE Sep 18 '08 at 14:38 I have tried setting the RootPanel width and height to 100% and that still didn't work. Thanks for the suggestion though, that seemed like it maybe was going to work. Any other suggestions??
Google has answered the main part of your question in one of their FAQs: http://code.google.com/webtoolkit/doc/1.6/FAQ_UI.html#How_do_I_create_an_app_that_fills_the_page_vertically_when_the_b The primary point is that you can't set height to 100%, you must do something like this: final VerticalPanel vp = new VerticalPanel();vp.add(mainPanel);vp.setWidth("100%");vp.setHeight(Window.getClientHeight() + "px");Window.addResizeHandler(new ResizeHandler() { public void onResize(ResizeEvent event) { int height = event.getHeight(); vp.setHeight(height + "px"); }});RootPanel.get().add(vp);
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10333/" ] }
86,963
More than once I've lost work by accidentally killing a temporary buffer in Emacs. Can I set up Emacs to give me a warning when I kill a buffer not associated with a file?
Make a function that will ask you whether you're sure when the buffer has been edited and is not associated with a file. Then add that function to the list kill-buffer-query-functions . Looking at the documentation for Buffer File Name you understand: a buffer is not visiting a file if and only if the variable buffer-file-name is nil Use that insight to write the function: (defun maybe-kill-buffer () (if (and (not buffer-file-name) (buffer-modified-p)) ;; buffer is not visiting a file (y-or-n-p "This buffer is not visiting a file but has been edited. Kill it anyway? ") t)) And then add the function to the hook like so: (add-to-list 'kill-buffer-query-functions 'maybe-kill-buffer)
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/86963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/207/" ] }
87,096
I really hate using STL containers because they make the debug version of my code run really slowly. What do other people use instead of STL that has reasonable performance for debug builds? I'm a game programmer and this has been a problem on many of the projects I've worked on. It's pretty hard to get 60 fps when you use STL container for everything. I use MSVC for most of my work.
EASTL is a possibility, but still not perfect. Paul Pedriana of Electronic Arts did an investigation of various STL implementations with respect to performance in game applications, the summary of which is found here: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2271.html Some of these adjustments are being reviewed for inclusion in the C++ standard. And note, even EASTL doesn't optimize for the non-optimized case. I had an Excel file with some timings a while back but I think I've lost it; for access it was something like:

            debug    release
STL         100      10
EASTL       10       3
array[i]    3        1

The most success I've had was rolling my own containers. You can get those down to near array[x] performance.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/87096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16679/" ] }
87,101
What is a good way to select all or select no items in a listview without using: foreach (ListViewItem item in listView1.Items){ item.Selected = true;} or foreach (ListViewItem item in listView1.Items){ item.Selected = false;} I know the underlying Win32 listview common control supports LVM_SETITEMSTATE message which you can use to set the selected state, and by passing -1 as the index it will apply to all items. I'd rather not be PInvoking messages to the control that happens to be behind the .NET Listview control (I don't want to be a bad developer and rely on undocumented behavior - for when they change it to a fully managed ListView class) Bump Pseudo Masochist has the SelectNone case: ListView1.SelectedItems.Clear(); Now just need the SelectAll code
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/87101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12597/" ] }
87,107
I've set up a new .NET 2.0 website on IIS 7 under Windows Server 2008, and when browsing to a page it gives me a 404.17 error, claiming that the file (default.aspx in this case) appears to be a script but is being handled by the static file handler. It SOUNDS like the module mappings for ASP.NET got messed up, but they look fine in the configuration. Does anyone have a suggestion for correcting this error?
I had this problem on IIS6 one time when somehow the ASP.NET ISAPI stuff was broke. Running %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i to recreate the settings took care of it.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/87107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16729/" ] }
87,200
I have finally started messing around with creating some apps that work with RESTful web interfaces; however, I am concerned that I am hammering their servers every time I hit F5 to run a series of tests. Basically, I need to get a series of web responses so I can test that I am parsing the varying responses correctly. Rather than hit their servers every time, I thought I could do this once, save the XML, and then work locally. However, I don't see how I can "mock" a WebResponse, since (AFAIK) they can only be instantiated by WebRequest.GetResponse. How do you guys go about mocking this sort of thing? Do you? I just really don't like the fact that I am hammering their servers :S I don't want to change the code too much, but I expect there is an elegant way of doing this. Update (following accept): Will's answer was the slap in the face I needed; I knew I was missing a fundamental point! Create an interface that will return a proxy object which represents the XML. Implement the interface twice: one implementation that uses WebRequest, the other that returns static "responses". The interface implementation then either instantiates the return type based on the response, or from the static XML. You can then pass the required class to the service layer when testing or in production. Once I have the code knocked up, I'll paste some samples.
I found this question while looking to do exactly the same thing. Couldn't find an answer anywhere, but after a bit more digging found that the .Net Framework has built in support for this. You can register a factory object with WebRequest.RegisterPrefix which WebRequest.Create will call when using that prefix (or url). The factory object must implement IWebRequestCreate which has a single method Create which returns a WebRequest . Here you can return your mock WebRequest . I've put some sample code up at http://blog.salamandersoft.co.uk/index.php/2009/10/how-to-mock-httpwebrequest-when-unit-testing/
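A minimal sketch of that factory (type and prefix names are invented):

using System.Net;

public class CannedWebRequestCreator : IWebRequestCreate
{
    // whatever canned WebRequest (and hence WebResponse) the test wants handed back
    public static WebRequest NextRequest { get; set; }

    public WebRequest Create(Uri uri)
    {
        return NextRequest;
    }
}

// in test setup:
WebRequest.RegisterPrefix("test", new CannedWebRequestCreator());
// code under test that calls WebRequest.Create("test://service/resource")
// now receives the canned request instead of hitting the network.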
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/87200", "https://Stackoverflow.com", "https://Stackoverflow.com/users/832/" ] }
87,220
How does gcc implement stack unwinding for C++ exceptions on Linux? In particular, how does it know which destructors to call when unwinding a frame (i.e., what kind of information is stored, and where is it stored)?
See section 6.2 of the x86_64 ABI . This details the interface but not a lot of the underlying data. This is also independent of C++ and could conceivably be used for other purposes as well. There are primarily two sections of the ELF binary as emitted by gcc which are of interest for exception handling. They are .eh_frame and .gcc_except_table . .eh_frame follows the DWARF format (the debugging format that primarily comes into play when you're using gdb). It has exactly the same format as the .debug_frame section emitted when compiling with -g . Essentially, it contains the information necessary to pop back to the state of the machine registers and the stack at any point higher up the call stack. See the Dwarf Standard at dwarfstd.org for more information on this. .gcc_except_table contains information about the exception handling "landing pads" the locations of handlers. This is necessary so as to know when to stop unwinding. Unfortunately this section is not well documented. The only snippets of information I have been able to glean come from the gcc mailing list. See particularly this post The remaining piece of information is then what actual code interprets the information found in these data sections. The relevant code lives in libstdc++ and libgcc. I cannot remember at the moment which pieces live in which. The interpreter for the DWARF call frame information can be found in the gcc source code in the file gcc/unwind-dw.c
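If you want to poke at those sections on a given binary, binutils can show them (flag spellings as I recall them; check --help on your readelf/objdump version):

# list the two sections
readelf -S ./a.out | grep -E 'eh_frame|gcc_except_table'

# decode the DWARF call frame information (CIEs/FDEs) in .eh_frame
readelf --debug-dump=frames ./a.out

# raw dump of the landing-pad tables
objdump -s -j .gcc_except_table ./a.out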
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/87220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16480/" ] }
87,222
How do I represent CRLF using hex in C#?
Since no one has actually given the answer requested, here it is: "\x0d\x0a"
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/87222", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
87,299
I want to ditch my current editor. I feel I need something else, something that does not expose my hands to the risk of RSI. I need to see why I should change editors. And it would be nice to believe that I will still be coding when I'm 80 years old. All the big guys out there are using Vim. The only Emacs guy I know is RMS. Paul Graham is a Vi dude.
. (dot) - repeats the last editing action. Really handy when you need to perform a few similar edits.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/87299", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15054/" ] }
87,304
What's a good algorithm for calculating frames per second in a game? I want to show it as a number in the corner of the screen. If I just look at how long it took to render the last frame the number changes too fast. Bonus points if your answer updates each frame and doesn't converge differently when the frame rate is increasing vs decreasing.
You need a smoothed average; the easiest way is to take the current answer (the time to draw the last frame) and combine it with the previous answer, e.g.:

float smoothing = 0.9; // larger = more smoothing
measurement = (measurement * smoothing) + (current * (1.0 - smoothing));

By adjusting the 0.9 / 0.1 ratio you can change the 'time constant' - that is, how quickly the number responds to changes. A larger fraction in favour of the old answer gives a slower, smoother change; a larger fraction in favour of the new answer gives a quicker-changing value. Obviously the two factors must add to one!
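Wired into a frame loop it might look like this (a sketch; dt is the last frame's duration in seconds, from whatever timer you use):

float smoothedDt = 1.0f / 60.0f;              // seed with something plausible

void OnFrameRendered(float dt)
{
    const float smoothing = 0.9f;             // larger = smoother, slower to react
    smoothedDt = smoothedDt * smoothing + dt * (1.0f - smoothing);
    float fps = 1.0f / smoothedDt;
    // draw 'fps' in the corner of the screen
}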
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/87304", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16679/" ] }
87,350
Any recommendations on grep tools for Windows? Ideally ones that could leverage 64-bit OS. I'm aware of Cygwin , of course, and have also found PowerGREP , but I'm wondering if there are any hidden gems out there?
Based on recommendations in the comments, I've started using grepWin and it's fantastic and free. (I'm still a fan of PowerGREP, but I don't use it anymore.) I know you already mentioned it, but PowerGREP is awesome. Some of my favorite features are:

- Right-click on a folder to run PowerGREP on it
- Use regular expressions or literal text
- Specify wildcards for files to include & exclude
- Search & replace
- Preview mode is nice because you can make sure you're replacing what you intend to.

Now I realize that the other grep tools can do all of the above. It's just that PowerGREP packages all of the functionality into a very easy-to-use GUI. From the same wonderful folks who brought you RegexBuddy and who I have no affiliation with beyond loving their stuff. (It should be noted that RegexBuddy includes a basic version of grep (for Windows) itself and it costs a lot less than PowerGREP.)

Additional solutions:

Existing Windows commands
- FINDSTR
- Select-String in PowerShell

Linux command implementations on Windows
- Cygwin
- Cash

Grep tools with a graphical interface
- AstroGrep
- BareGrep
- GrepWin

Additional Grep tools
- dnGrep
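For the built-in options in that list, usage looks roughly like this (the pattern and paths are just examples):

REM cmd.exe: findstr ships with Windows (/s = recurse, /i = ignore case, /n = line numbers)
findstr /s /i /n "TODO" *.cs

# PowerShell: Select-String is the closest built-in equivalent to grep
Select-String -Path .\src\*.cs -Pattern "TODO"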
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/87350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1690/" ] }